Design and Research of Oscillator Circuit with Soft Starting in Power Management Chip

In this paper, an oscillator circuit with a soft-start function is designed for DC-DC converters, and its working principle is analyzed. Based on the NUVOTON 0.35 μm BCD technology, the circuit is fully simulated and optimized using Cadence software. Under the typical conditions of 5 V and 27°C, the typical oscillation frequency of the oscillator is 800 kHz and the duty ratio is 20%; the expected specifications are met and the circuit function is realized. In addition, a temperature-independent current bias is designed to charge and discharge a capacitor in the oscillator, so that the oscillator frequency is only weakly affected by temperature. Over the simulated range of -40°C to 140°C, the oscillator operates stably and its frequency variation is 2.95%.

Introduction

At present, power supply systems are indispensable in both national defense equipment and civil equipment. The DC-DC chip is widely used in all kinds of electronic equipment due to its small size and high efficiency [1], and is an indispensable type of power supply for today's rapidly developing electronics industry. The oscillator is an important part of the switching power supply chip [2]. As the core module of the DC-DC switching power converter chip, the oscillator's performance has a direct impact on the power supply chip; in particular, the oscillator must produce a clock signal that does not vary substantially with temperature. When the circuit has just started, the steady state of the circuit has not yet been established, so the logic control circuit may output a high level for a long time, turning on the high-voltage power transistor at full duty ratio. The chip then outputs a large pulse current that exceeds what the load can bear, easily causing load damage. To avoid this start-up surge current, this paper designs an oscillator circuit with a soft start, based on a constant-current-source charge-discharge relaxation oscillator structure with double comparators [3].
Soft start circuit design

The soft start circuit designed in this paper is shown in Figure 1. It is composed of current-limiting MOS transistors Mb1~Mbn, soft-start capacitor C, a comparator, transmission gates, an inverter, and a Rise_pul block. VREF is connected to the 2.1 V upper threshold voltage, TG denotes a transmission gate, Rise_pul is a circuit that outputs a short pulse on the rising edge of its input, and M1-M10 form the comparator. IBIAS1 is the soft-start current bias. This current is scaled down by the Ma and Mb1~Mbn current mirrors to generate a soft-start current I at the nA level [4]. A large capacitor is charged by this small current, so the capacitor voltage rises slowly. The upper threshold voltage of the oscillator is replaced by the capacitor voltage to realize the soft-start function of the system [5]. The soft-start time t can be expressed as:

t = C * VREF / I    (1)

The specific working principle is as follows. The capacitor C starts charging from 0 V and its voltage rises slowly; while it remains below VREF, the comparator outputs a high level, TG2 is switched on, and VH outputs the capacitor voltage. When the capacitor voltage rises above VREF, the comparator output goes low, TG1 turns on, and VH outputs VREF. At the moment the comparator switches, Rise_pul outputs a short pulse which, after an inverter, turns on transistor M14 for a very short time; the capacitor is then charged rapidly, which prevents the comparator output from flipping back.
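As a quick check of Formula (1), which is simply constant-current charging of C from 0 V up to VREF, a minimal sketch in Python using the design values quoted in the simulation section below:

```python
# Soft-start timing check: t = C * VREF / I (Formula (1)).
# Values are the design values quoted in the text.
C = 4.1e-12   # soft-start capacitor, 4.1 pF
VREF = 2.1    # upper threshold voltage, V
I = 2e-9      # soft-start charging current, 2 nA

t = C * VREF / I
print(f"soft-start time = {t * 1e3:.2f} ms")  # ~4.31 ms, close to the 4 ms target
```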
Temperature-independent bias current design

In integrated-circuit applications, the operating temperature often ranges from -40°C to 140°C. Therefore, to provide a stable and ideal current source for the circuit or system, the influence of temperature on the reference current source should be minimized [6]. The current bias circuit designed in this paper is shown in Figure 2. The operational amplifier AMP, NMOS transistor M1, and resistor R1 form a voltage-to-current converter. Since (W2/L2):(W3/L3) = 1:1, the mirrored current I2 = I1. VREF is a 1.2 V voltage generated by a band-gap reference, so I1 = VREF/R1. R1 and R2 are poly resistors with negative temperature coefficients, so I1 is a positive-temperature-coefficient current.

Oscillator circuit design

In integrated circuits, the relaxation oscillator is a nonlinear electronic oscillator circuit that produces a non-sinusoidal repetitive output signal [7]. Figure 3 shows the oscillator circuit designed in this paper, which is composed of an energy-storage capacitor, comparators, inverters, an RS flip-flop, and a D flip-flop. Ibias connects to the temperature-independent current bias, COMP1's inverting-input voltage VH is the soft-start voltage, COMP2's non-inverting-input voltage VL is 0.4 V, the 20_out pin outputs a square wave with 20% duty cycle, and the half_out pin outputs a square wave with 50% duty cycle.

In the initial state, the voltage on the upper plate of capacitor C1 is 0, and C1 is charged by current I1. When the capacitor voltage exceeds the lower limit voltage VL of COMP2, COMP2 outputs a low level and COMP1 also outputs a low level; inv1 and inv2 both output a high level, and the RS flip-flop holds its low output. When the capacitor voltage exceeds the upper limit voltage VH of COMP1, COMP1 flips to a high level while COMP2 continues to output a low level; inv1 outputs a low level, inv2 outputs a high level, and the RS flip-flop is set to "1", outputting a high level. When the RS flip-flop outputs a high level, NMOS transistor M4 turns on and PMOS transistor M3 turns off, so capacitor C1 starts to discharge. When the capacitor voltage falls below VH, COMP1 returns to a low level, COMP2 continues to output a low level, inv1 and inv2 both output a high level, and the RS flip-flop output is unchanged. When the capacitor voltage continues to drop to VL, COMP2 flips to a high level, COMP1 continues to output a low level, inv1 outputs a high level, inv2 outputs a low level, and the RS flip-flop is reset to "0", outputting a low level. NMOS transistor M4 turns off and PMOS transistor M3 turns on; the capacitor charges again and the next cycle begins.

The oscillator period is determined by the capacitor charge and discharge times. Let the charging time be Δt1 and the discharge time be Δt2, with charge current I1 and discharge current I2 [8]. The charging time is:

Δt1 = C1 * ΔV / I1

The discharge time is:

Δt2 = C1 * ΔV / I2

The period of the oscillator is therefore:

T = Δt1 + Δt2 = C1 * ΔV * (1/I1 + 1/I2)

where ΔV = VH - VL. From the period expression, the frequency of the oscillator is:

f = 1/T = I1 * I2 / (C1 * ΔV * (I1 + I2))

In conclusion, the expected square-wave signal can be obtained over the whole working cycle by adjusting the capacitance of C1 and the width-to-length ratios of M1-M5.
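To illustrate these relations, a minimal sketch that reproduces the reported 800 kHz frequency and 20% duty cycle. Note that VH and VL are taken from the text, whereas C1, I1, and I2 are not quoted in the paper; the values below are illustrative assumptions chosen for the example:

```python
# Relaxation-oscillator timing from the formulas above. VH and VL come
# from the text; C1, I1, and I2 are NOT given in the paper: they are
# illustrative values chosen to reproduce the reported 800 kHz / 20% duty.
C1 = 1e-12          # timing capacitor (assumed), 1 pF
I1 = 1.7e-6         # charge current (assumed), 1.7 uA
I2 = 6.8e-6         # discharge current (assumed), 6.8 uA
VH, VL = 2.1, 0.4   # comparator thresholds, V
dV = VH - VL

dt1 = C1 * dV / I1   # charging time, 1.0 us
dt2 = C1 * dV / I2   # discharge time, 0.25 us
T = dt1 + dt2        # period, 1.25 us
f = 1.0 / T          # 800 kHz
duty = dt2 / T       # 20% (the RS flip-flop output is high while C1 discharges)
print(f"f = {f / 1e3:.0f} kHz, duty = {duty:.0%}")
```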
Soft start circuit simulation

Simulation conditions: power supply voltage of 5 V, temperature of 27°C, and TT process corner. The designed soft-start time is t = 4 ms. From Formula (1), the soft-start capacitor C is designed to be 4.1 pF, the soft-start current to be 2 nA, and the upper threshold voltage of the oscillator VH to be 2.1 V, with an initial capacitor voltage of 0 V. Figure 4 shows the simulation results of the soft start circuit. As can be seen from Figure 4, the initial voltage on the upper plate of the soft-start capacitor C is 0 V. After the circuit starts, the upper-plate voltage rises slowly and reaches the oscillator's upper threshold voltage of 2.1 V after 4 ms, after which it is replaced by the 2.1 V reference voltage. The upper threshold voltage of the oscillator then no longer changes, and the soft-start function is complete.

Simulation of reference current independent of temperature

Simulation conditions: power supply voltage of 5 V, TT process corner, and the reference current swept from -40°C to 140°C. The specific simulation results are shown in Figure 5.

Oscillator circuit simulation

Transient simulation was carried out on the oscillator under a supply voltage of 5 V, TT process corner, and a temperature of 27°C; the results are shown in Figure 6. The oscillator output amplitude is 5 V, the oscillation period is 1.25 μs, the duty ratio is 20%, and the oscillation frequency is 800 kHz. Table 1 shows the oscillation frequency of the oscillator over the temperature range of -40°C to 140°C; performance is stable, with a frequency offset of 2.95%, which is better than the frequency offset reported in [9].

Conclusion

An oscillator circuit suitable for a DC-DC switching power converter chip is designed, using a constant current source to charge and discharge the oscillation capacitor in a double-comparator oscillator. A soft start circuit is also applied to avoid start-up surge current [10]. The simulation results show that the oscillator output frequency is only weakly affected by temperature, varying by just 2.95% over the range of -40°C to +140°C. The circuit's performance fully meets the requirements for use in DC-DC switching power supply design.

Figure 4. The soft start output signal.
Figure 5. Reference current as a function of temperature.
Figure 6. Transient simulation of the oscillator.
Table 1. Results of oscillator frequency variation with temperature.
Simulating photon scattering effects in structurally detailed ventricular models using a Monte Carlo approach

Light scattering during optical imaging of electrical activation within the heart is known to significantly distort the optically-recorded action potential (AP) upstroke, as well as affecting the magnitude of the measured response of ventricular tissue to strong electric shocks. Modeling approaches based on the photon diffusion equation have recently been instrumental in quantifying and helping to understand the origin of the resulting distortion. However, they are unable to faithfully represent regions of non-scattering media, such as small cavities within the myocardium which are filled with perfusate during experiments. Stochastic Monte Carlo (MC) approaches allow simulation and tracking of individual photon "packets" as they propagate through tissue with differing scattering properties. Here, we present a novel application of the MC method of photon scattering simulation, applied for the first time to the simulation of cardiac optical mapping signals within unstructured, tetrahedral, finite element computational ventricular models. The method faithfully allows simulation of optical signals over highly-detailed, anatomically-complex MR-based models, including representations of fine-scale anatomy and intramural cavities. We show that the optical action potential upstroke is more prolonged close to large subepicardial vessels than further away from vessels, at times having a distinct "humped" morphology.
Furthermore, we uncover a novel mechanism by which photon scattering effects around vessel cavities interact with "virtual-electrode" regions of strongly de-/hyper-polarized tissue surrounding cavities during shocks, significantly reducing the apparent optically-measured epicardial polarization. We therefore demonstrate the importance of this novel optical mapping simulation approach, along with highly anatomically-detailed models, for fully investigating electrophysiological phenomena driven by fine-scale structural heterogeneity.

INTRODUCTION

Cardiac optical mapping provides high-resolution spatiotemporal recordings of electrophysiological activity from the surface of myocardial tissue (Efimov et al., 2004). The method utilizes specialized membrane-bound fluorescent dyes which, upon illumination at the correct wavelength, transduce local changes in transmembrane potential into changes in fluorescent emission that can be recorded by optical detectors. However, light is known to be highly scattering and relatively weakly absorbed in cardiac tissue at both the excitation and emission wavelengths of the voltage-sensitive fluorescent dyes used. Illuminating light penetrates relatively deeply (a few millimeters) into the tissue, and the subsequently emitted fluorescence also scatters before escaping the surface to be detected. Consequently, the measured fluorescent signals are thought to contain information regarding the electrophysiological state not just of the tissue surface, but of tissue within a three-dimensional "scattering volume" beneath the surface recording site. The resulting averaging of electrical states within a subsurface volume is known to give rise to distortion in recorded fluorescent signals relative to measurements obtained with micro-electrodes, as well as relative to those predicted by computational models. Such distortion effects include the prolongation of the optically-recorded action potential upstroke duration (Girouard et al., 1996; Gray, 1999; Hyatt et al., 2003, 2005; Bishop et al., 2006a), the modulation of surface-recorded shock-end polarization levels following strong extracellular shocks (Janks and Roth, 2002; Bishop et al., 2006b, 2007), as well as the appearance of "dual-humped" action potentials from recording sites above intramural reentry (Efimov et al., 1999, 2000; Bray and Wikswo, 2003; Bishop et al., 2007).

In recent decades, computational models of cardiac electrophysiological dynamics have been instrumental in enriching our understanding of a variety of physiological and pathological cardiac phenomena. Optical mapping measurements provide a key component in fully utilizing the predictions made from such models, facilitating an important form of experimental comparison and model validation. Using models directly alongside experimental measurements often also allows a more detailed mechanistic understanding of particular experimental findings. However, the presence of distortion effects due to photon scattering in optical mapping has the potential to render such direct comparison with models problematic, limiting the use of optical recordings to validate simulations of electrical activity and compromising the interpretation of experimental mapping data. To combat these issues, combined electrophysiological and optical simulation models have been developed to synthesize the fluorescent signals recorded in optical mapping experiments.
The use of such combined models has provided important post-processing tools which can be applied to "raw" electrophysiological simulation data derived from standard mono-/bidomain simulations. As they intrinsically simulate the presence of the distortion artifact, they consequently provide a much closer comparison with experimental optical mapping recordings, improving the validation of simulation results. Furthermore, their ability to simulate each stage of the optical mapping process facilitates a much more complete mechanistic understanding of the origin of fluorescent signal distortion, suggesting ways in which novel, model-guided interpretation of the recorded signal may provide invaluable information regarding intramural electrophysiological activity (Hyatt et al., 2003, 2005; Bishop et al., 2007; Walton et al., 2010).

By far the most common models used to simulate the fluorescent optical signals have been continuum models based on the photon diffusion equation, valid because light is relatively highly scattering in biological tissue at the wavelengths used. Both analytical (Hyatt et al., 2003) and numerical (finite element) (Bishop et al., 2006a) solution methods for the photon diffusion equation have been used successfully; the former has been largely restricted to modeling regular (plane/slab) geometries, whilst the latter has been able to simulate fluorescent signals from anatomically-realistic ventricular models. However, despite providing important insight into the three-dimensional interaction between light scattering and the complex electrical activity patterns that underlie paced rhythms (Hyatt et al., 2005; Bishop et al., 2006a), as well as activity following strong electrical shocks and during episodes of arrhythmia (Bishop et al., 2007, 2011a), the geometrical models used in these studies have all represented the myocardial wall as solid, compact myocardium.

In recent years, highly-detailed ventricular models have been constructed from high-resolution (up to 25 μm) MR data (Bishop et al., 2010b). These models contain a wealth of fine-scale anatomical features, such as the coronary vasculature and intramural extracellular cleft spaces, in addition to endocardial structures such as papillary muscles and trabeculations, and have been used to demonstrate the potential importance of these features in mechanisms of arrhythmia and electrotherapy (Bishop et al., 2010a,b, 2012; Rantner et al., 2013). During a typical optical mapping experiment, intramural cavities (vessels, clefts) become filled with transparent saline solution. These cavities thus represent regions of non-scattering optical media in which photon diffusion does not hold; the photon diffusion equation therefore cannot be applied, preventing the use of these latest high-resolution anatomical models with the previous optical simulation methods. The increasing use of such detailed anatomical models in computational cardiac electrophysiology thus hastens the need to develop an optical simulation tool capable of simulating light transport through both scattering and non-scattering media alike. Such a tool will make it possible to validate the predictions made by this new generation of models and provide further insight into experimental measurements which aim to probe the role of fine-scale anatomy in various aspects of cardiac function.
The above requirements suggest the use of a discrete stochastic modeling approach, such as the Monte Carlo (MC) method, which is able to simulate the movement of photons throughout a medium regardless of its optical properties. The MC method models the propagation of individual photons, or packets of photons, simulating individual scattering, absorption, reflection, and refraction processes. MC methods have been used extensively and with much success in a wide variety of biomedical optics applications, serving as the "gold standard" against which other methods (such as the photon diffusion method) are compared, due to the accuracy of their predictions combined with the wealth of information which goes into, and which can be extracted from, each simulation (Wang et al., 1995; Okada et al., 1996; Jacques, 1998). As each photon interaction event occurs stochastically, each photon trajectory is unique, and thus large numbers of photons need to be simulated to obtain good statistics, leading to high computational expense (Arridge, 1993; Wang et al., 1995).

Compared to photon diffusion approaches, MC methods have been relatively sparsely applied to the simulation of optical mapping signals. As individual photon trajectories are simulated, the explicit origin of each photon recorded by the optical detector is known. MC methods have thus largely been used to provide information regarding the spatial distribution of tissue contributing to the recorded optical mapping signal (i.e., the scattering volume) (Ding et al., 2001) and how this might be influenced by different illumination strategies (Ramshesh and Knisley, 2003), pixel sizes, and optical detection setups. Such studies have also been exclusively restricted to highly simplified geometric models, usually representing regular slabs of (solid) cardiac tissue, due to the requirement of computing the photon packet's location within the tissue very rapidly and efficiently, which is trivial in regular, structured grids. However, in unstructured meshes, such as those necessarily used to construct the latest anatomically-detailed models, tracking a packet's position, and additionally checking for potential boundary interactions, in a highly-optimized computational manner becomes a significant challenge. Recently, an MC model of photon propagation within unstructured tetrahedral meshes has been proposed, in which the interaction between a photon packet and the triangular faces of tetrahedral elements is computed recursively and rapidly (Shen and Wang, 2010); it has been applied to model light propagation within a whole-body mouse mesh, including organs.

The goal of this study is to present a novel application of the MC method of photon scattering simulation within unstructured meshes, proposed by Shen and Wang (2010), applied for the first time to the simulation of cardiac voltage-sensitive fluorescent optical mapping signals. The developed approach is used to understand the origin of recorded fluorescent signals within highly anatomically-detailed ventricular models, and specifically to understand how the presence of large blood vessel cavities close to the epicardial recording surface may significantly distort the recorded optical signal during pacing and following the application of strong extracellular shocks.
GEOMETRICAL FINITE ELEMENT COMPUTATIONAL MODELS OF CARDIAC TISSUE

High-resolution, unstructured tetrahedral finite element meshes representing cardiac left-ventricular (LV) wedge preparations were used throughout to simulate optical mapping signal synthesis. The models included three geometrically-simplistic cuboid models (both with and without the presence of a blood vessel) in addition to an anatomically-detailed model derived from high-resolution (25 μm) rabbit MR data (Bishop et al., 2010a,b). The first simple cuboid model represented compact myocardial tissue with no cavities, of dimension 4 × 4 × 2 mm in the x-, y-, and z-directions, respectively. The other two cuboid models, of the same overall dimensions, contained smooth cylindrical cavities of diameter 350 and 800 μm, representing a blood vessel passing through the intramural ventricular wall in the global apex-base direction at respective depths of 100 and 200 μm beneath the epicardial surface (to the cavity edge). Figure 1A depicts the geometrically-simple models, in addition to an example MR image (Bishop et al., 2010b) of vessels used to guide the choice of cavity size and depth. The anatomically-detailed MR-derived model (Bishop et al., 2010a) was based on an LV wedge preparation of height 6 mm in the apex-base direction and represented approximately one quarter of the LV free wall, as shown in Figure 1B. The high level of anatomical detail present in the MR images was successfully carried over to the model, representing fine-scale intramural structures such as blood vessels and extracellular clefts in intricate detail.

The meshing software Tarantula (CAE Solutions, Austria) was used to create the meshes directly from segmented binary voxel image stacks. For the geometrically-simple models, binary image stacks were created manually in Matlab. In the case of the MR-derived model, processing and segmentation of the MR data were performed to generate the binary mask, as described in detail in Bishop et al. (2010a,b). Meshes had a mean element discretization of approximately 50 μm within cardiac tissue. Transversely-rotational fiber orientation was assigned to the models, rotating ±60° between the epi- and endocardial surfaces (see the sketch below); a previously described algorithm based on a Laplace-Dirichlet approach (Bishop et al., 2010a) was used to assign the smooth negotiation of cardiac fibers around intramural cavities, informed by histology (Gibb et al., 2009). The electrically-insulating effects of the connective tissue surrounding blood vessel walls were represented by tagging elements around vessel cavities in the meshes with reduced electrical conductivity values derived directly from experiment (Bishop et al., 2010a). In addition to representing the myocardial tissue, the meshes also contained an unstructured finite element representation of the perfusing bath, including the bath contained within all intramural cavities (blood vessels and extracellular cleft spaces) and that surrounding the tissue on all sides, as would be the case for a perfused optical mapping preparation. For the simple cuboid models, a surrounding bath of width 100 μm (approximately two element widths) was modeled. For the LV wedge model, a surrounding bath of width 100 μm was defined on all cut faces, with the width on the epi- and endocardial surfaces determined by the geometry of the bordering MR data.
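As an illustration of the transversely-rotational fiber assignment, a minimal sketch assuming a linear variation of the helix angle with the normalized transmural coordinate produced by a Laplace-Dirichlet solve (0 at the epicardium, 1 at the endocardium); the function and the linear profile are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Illustrative sketch (not the authors' code): assign a fiber helix angle
# rotating linearly from +60 deg at the epicardium to -60 deg at the
# endocardium, using the normalized transmural coordinate from a
# Laplace-Dirichlet solve (0 = epi, 1 = endo).
def fiber_helix_angle(transmural: np.ndarray) -> np.ndarray:
    """Fiber helix angle in degrees at each element centroid."""
    return 60.0 - 120.0 * np.asarray(transmural)

print(fiber_helix_angle([0.0, 0.5, 1.0]))  # [ 60.   0. -60.]
```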
SIMULATION OF ELECTRICAL ACTIVITY

Electrical activation within the ventricular models was simulated using a monodomain representation in the Cardiac Arrhythmia Research Package (CARP), the specifics of the numerical schemes of which have been described extensively elsewhere (Vigmond et al., 2003). Experimentally-derived conductivities were assigned along the fiber and cross-fiber directions within the intracellular and extracellular domains (Clerc, 1976). Bath conductivity was set to 1.0 S/m, with vessel lumen wall conductivity 0.01 S/m. Cell membrane dynamics within the myocardial tissue were represented by the recent Mahajan-Shiferaw rabbit ventricular cell model (Mahajan et al., 2008).

[Figure 1 caption: (A) The geometrically-simple cuboid models, with representations of small (center) and large (right) sub-epicardial intramural blood vessels, informed from high-resolution rabbit MR data (Bishop et al., 2010b). (B) High-resolution MR-derived LV wedge preparation (Bishop et al., 2010a).]

Simulations of electrophysiological dynamics were performed as the first step in the pipeline, prior to the optical mapping photon scattering simulation, providing V_m values at 1 ms (or finer) discretization across all finite element nodes within the models. Two stimulation protocols were used in both the simplistic and anatomically-detailed wedge models, to initiate wavefront propagation circumferentially (approximately parallel to the epicardial recording surface, following stimulation of a transmural cut face) and transmurally (approximately toward the epicardial recording surface, following stimulation of the endocardium). Strong extracellular S2 shocks were also applied to the simplified cuboid models via plate electrodes in the yz-plane, located at the extremities of the extracellular bath in the x-direction. Shock waveforms were square, monophasic, and of 5 ms duration. Shock strengths of 10, 20, and 40 V were applied to diastolic tissue. Analysis was performed on V_m distributions at shock-end.

BASIC PHOTON TRANSPORT SIMULATION USING MC

The fundamental algorithm used to simulate the step-by-step propagation and interaction of photons through cardiac tissue is based on that of Wang et al. (1995). The algorithm describes the transport of photons through multi-layered biological tissue within a structured, regular domain, discretized into equal cubic optical elements in which physical quantities are stored.

Photon propagation

Briefly, photons are propagated through the tissue in packets, each of which has an associated packet weight, W. At any time, the photon packet's position is described by Cartesian coordinates x, y, z. Its current direction of movement is defined by two angles, the deflection angle θ and the azimuthal angle ψ, from which the directional cosines μ_x, μ_y, and μ_z may be defined:

r̂ = μ_x x̂ + μ_y ŷ + μ_z ẑ

where r̂ represents the current direction of propagation and x̂, ŷ, and ẑ are Cartesian unit vectors. The photon packets move in successive free-fly steps, during which neither absorption nor scattering occurs. The size of each step, s, depends upon the local optical properties, specifically the tissue absorption and scattering coefficients (μ_a and μ_s, respectively), and is obtained by sampling from a probability distribution:

s = -ln(ξ)/μ_t

where ξ is a uniformly distributed random variable, 0 < ξ ≤ 1, and μ_t is the optical interaction coefficient, defined as μ_t = μ_a + μ_s.
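A minimal sketch of the free-fly step sampling and directional-cosine bookkeeping just described; the function names are ours:

```python
import math
import random

# Sample a free-fly step s = -ln(xi) / mu_t, with xi uniform on (0, 1].
def sample_step(mu_a: float, mu_s: float) -> float:
    mu_t = mu_a + mu_s             # optical interaction coefficient
    xi = random.random() or 1e-12  # guard against xi == 0
    return -math.log(xi) / mu_t

# Directional cosines (mu_x, mu_y, mu_z) from the deflection angle theta
# and the azimuthal angle psi.
def direction_cosines(theta: float, psi: float):
    sin_t = math.sin(theta)
    return sin_t * math.cos(psi), sin_t * math.sin(psi), math.cos(theta)
```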
The photon packet is then advanced forward by s into its new position (x′, y′, z′), so long as interaction with a boundary does not occur. Once at its new location, the photon packet interacts with the tissue by first depositing a proportion of its weight,

ΔW = W μ_a/μ_t,

via absorption into the current optical element in which it resides. It is important to note that determining the optical element in which a photon packet resides after moving to its new location (x′, y′, z′) is relatively trivial in regular geometries (cuboid/slab, etc.), through knowledge of the resolution of the optical grid. Such a method must be highly computationally efficient, as it must be evaluated numerous times along each individual photon packet trajectory, for each of the many millions of packets launched. Following absorption, the remaining packet weight is scattered into a new direction, which depends upon the current direction of travel and the optical anisotropy of scattering of the material, g, and is determined by the Henyey-Greenstein function (Wang et al., 1995). Note that a value of g = 0 represents isotropic scattering, whereas g = 1 or g = -1 represents forward or back scattering, respectively. The scattering function also includes additional stochasticity through the use of further random numbers. Photon propagation continues, absorbing and scattering between steps, until the packet weight falls below a certain threshold, at which point it is either terminated and a new packet initiated, or else given the chance to continue propagating, in keeping with the conservation of energy (Wang et al., 1995).

Interaction with boundaries

During its journey, the photon packet may attempt to cross and interact with a boundary separating two regions with different optical properties. This boundary could, if propagating within an inhomogeneous medium, represent an internal boundary between different regions of the tissue, or else an external boundary at the edge of the tissue domain. When such an event occurs, the photon packet may be internally reflected or transmitted through into the adjoining medium. Snell's law is used to derive the angle of transmittance α_t from the known angle of incidence α_i and, along with Fresnel's formula, the internal reflectance R is calculated to determine the probability of reflection. If reflected or transmitted, the photon packet continues by moving the remaining distance of its initial step size. For external boundaries, upon transmission the packet may be tracked through the surrounding media until it potentially passes back into the tissue medium or interacts with the detection device, or alternatively it is killed and another packet launched. In a regular, structured domain, keeping track of these interactions is relatively straightforward, as the definition of surfaces is usually simpler (and regular), with relatively few planar surface definitions within the model, allowing fast and efficient checks to be performed which do not hinder the computational performance of the scattering algorithm. However, in a highly unstructured environment, such as that represented by the finite element LV wedge model of Figure 1B, this process is significantly more complex. Due to its unstructured nature, each triangle defining the interface between different media represents its own (often unique) planar surface. Thus, performing intersection checks on such a large number of surfaces for each photon trajectory at each step rapidly becomes computationally intractable.
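Before moving to the unstructured-mesh treatment, a minimal sketch of the interaction ("drop" and "spin") and Russian-roulette steps described above, following the scheme of Wang et al. (1995); variable names and the weight threshold are ours:

```python
import math
import random

# One interaction ("drop" then "spin") for a packet of weight W.
def interact(W, mu_a, mu_s, g):
    mu_t = mu_a + mu_s
    W -= W * mu_a / mu_t            # drop: weight absorbed into current element

    # spin: sample cos(theta) from the Henyey-Greenstein phase function
    xi = random.random()
    if g == 0.0:
        cos_theta = 2.0 * xi - 1.0  # isotropic scattering
    else:
        tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
        cos_theta = (1.0 + g * g - tmp * tmp) / (2.0 * g)
    psi = 2.0 * math.pi * random.random()  # azimuth, uniform on [0, 2*pi)
    return W, cos_theta, psi

def roulette(W, threshold=1e-4, m=10):
    """Terminate a low-weight packet with probability 1 - 1/m; survivors
    are boosted by m so that energy is conserved on average."""
    if W >= threshold:
        return W
    return W * m if random.random() <= 1.0 / m else 0.0
```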
PHOTON TRANSPORT SIMULATION IN UNSTRUCTURED TETRAHEDRAL FINITE ELEMENT MODELS

The main algorithm used to simulate the propagation and interaction of photons through the finite element meshes representing cardiac tissue, during both the illumination and fluorescent emission processes, was based upon that of Shen and Wang (2010), which describes a tetrahedron-based inhomogeneous MC optical simulator built upon that of Wang et al. (1995). To model the interaction of photons with, and transport through, the perfusing bath, in addition to other specific features of the optical mapping system, adaptations to this method were necessary, as described below.

Face-boundary interaction checking

The algorithm of Shen and Wang (2010) introduces a fast and efficient procedure for determining photon-triangle interactions recursively and rapidly within an inhomogeneous medium defined by tetrahedral elements. In a domain defined by an unstructured tetrahedral finite element mesh, a photon packet at any time resides within a tetrahedral element having four triangular faces. During each photon packet step, a rapid test can be performed to assess whether the photon trajectory intersects one of these four boundaries. For each triangular boundary, an inward-pointing surface-normal vector can be found, defined as pointing toward the centroid of the tetrahedron, as shown in Figure 2. In our implementation, the distance a_j from the starting position (x, y, z) to the plane of each triangular face j, along the direction of photon propagation, is calculated. If any a_j < s and a_j is positive, then an interaction has occurred. If no interaction has occurred, the photon simply moves to its new position (x′, y′, z′), which must also reside within the same tetrahedron, as the packet has not exited the element. If an interaction does occur, the packet is moved to the first intersection point (lowest positive a_j). Then, if the bordering tetrahedron sharing this triangular face is part of the same medium, the packet continues to propagate into this new tetrahedron, again checking each of the four faces for possible intersections along the remaining step size of its trajectory, s - a_min. If the neighboring element is part of a different medium, then reflection or transmission occurs in a similar manner to that described above for regular media. This process continues until the photon packet has moved its entire distance s during this particular step, with absorption and scattering occurring within the element in which the photon packet resides at the end of the step. The procedure is then repeated for new steps until the photon packet is terminated, as described above. The power of this method is that the element in which the packet resides is continually tracked as it propagates through the tissue, making it easy to assess when a packet attempts to cross a boundary between two different optical media. Furthermore, for each sub-step of the packet's main free-fly step (as it moves from element to element), only four intersection checks need be performed. Full details of the fundamental algorithm can be found in Shen and Wang (2010). A (2D) schematic of the boundary interaction method is shown in Figure 2.

[Figure 2 caption: 2D schematic of the triangular face boundary interaction algorithm for a photon packet attempting to propagate through a boundary between two tissue elements (left) or between a tissue element and a bath element (right). s represents the total step size of the packet, attempting to move between its initial position at (x, y, z) and its final position at (x′, y′, z′). Moving along this trajectory, it encounters a boundary face (face 1) at a distance a_1 from (x, y, z); note that the distances to the other faces 2 and 3 (a_2 and a_3) are negative. The site of interaction is shown by a thin green line. In the case of two tissue elements (left), the packet's path is undisturbed and it continues to move its remaining step size (s - a_1) in the adjacent element into which it passes. In the case of the boundary between tissue and bath (right), the photon packet experiences either reflection back into the tissue element, or transmission into the bath, subject to refraction.]
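A minimal sketch of the per-face test just described, under the assumption that an inward-pointing unit normal n[j] and a vertex p[j] are precomputed for each face of the current element; names are ours:

```python
import numpy as np

# Four-face intersection test: pos and direction describe the packet, s is
# its remaining step size. Returns the first face crossed (or None) and
# the distance to it.
def first_crossed_face(pos, direction, n, p, s):
    pos, direction = np.asarray(pos), np.asarray(direction)
    hit_face, hit_dist = None, s
    for j in range(4):
        denom = float(np.dot(direction, n[j]))
        if denom >= 0.0:
            continue                 # moving away from (or parallel to) face j
        a_j = float(np.dot(p[j] - pos, n[j])) / denom
        if 0.0 < a_j < hit_dist:     # nearest positive crossing within the step
            hit_face, hit_dist = j, a_j
    return hit_face, hit_dist        # (None, s) if the packet stays inside
```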
Element connectivity

The speed of the above algorithm can be significantly improved by utilizing a series of look-up tables, as applied extensively in our implementation. The most important of these is a table detailing the element face-to-face connectivity, which has one row per element and four columns (one for each face). For each tetrahedral face of each element, the table lists the element number of the neighboring tetrahedron which directly shares that face, i.e., the element into which the photon packet will move if it passes through this particular face. If a face forms part of the exterior boundary of the entire domain, there is no neighboring element into which to pass, and a flag is stored acknowledging this. Such a look-up table allows the trajectory of a photon packet to be traced very quickly as it passes from element to element through the domain. An additional look-up table can also be constructed which specifically identifies external surface boundaries. This table has the same dimensions as the element face-to-face connectivity table, but instead specifies which faces represent boundaries between different optical domains, including exterior faces.
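A minimal sketch of how such a face-to-face connectivity table could be built from a tetrahedral element list (each element given as four node indices); the construction and the -1 exterior flag are our illustrative choices:

```python
from collections import defaultdict

# Build the element face-to-face connectivity look-up table. Each interior
# face is shared by exactly two elements; exterior faces are flagged -1.
def build_face_connectivity(elements):
    face_map = defaultdict(list)  # sorted face nodes -> [(element, local face)]
    for e, nodes in enumerate(elements):
        for f in range(4):
            face = tuple(sorted(nodes[:f] + nodes[f + 1:]))  # drop one node
            face_map[face].append((e, f))

    neighbor = [[-1] * 4 for _ in range(len(elements))]
    for shared in face_map.values():
        if len(shared) == 2:          # interior face: link the two elements
            (e0, f0), (e1, f1) = shared
            neighbor[e0][f0] = e1
            neighbor[e1][f1] = e0
    return neighbor
```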
Adaptation for photon movement within the bath

One important modification to the above algorithm is required when simulating optical mapping signals. During optical mapping experiments, the cardiac tissue is continually perfused by saline solution. In some cases, the entire preparation sits submerged within an extensive bath of dimensions much larger than the preparation itself (Bishop et al., 2007, 2011b). In a simple first approximation, the domain of interest is composed of just two optical media types: myocardial tissue and saline solution (bath). When photons exit the myocardial tissue, they therefore enter the bath. Although photons both scatter and absorb readily in the myocardial tissue, this is not the case in the bath, where the interaction rate is significantly lower and photons travel a significantly larger free-fly step before absorbing or scattering. In an optical mapping set-up, the distances that photons may travel within the bath medium are still relatively small compared to this large interaction distance. Therefore, in our specific case of simulating optical mapping signals, we assume that photons move freely throughout the bath medium in straight-line trajectories, undergoing neither absorption nor scattering. Thus, when a photon exits the myocardial tissue into the bath, its trajectory is traced in a straight line until it either passes out of the domain entirely (as would usually be the case for photons exiting the epicardial surface) or intersects myocardial tissue once again (as happens when it exits into a cavity). In the latter case, the same boundary interaction method as above is followed, with the packet either being transmitted into the myocardium once again or reflected back into the bath. During its propagation through the bath medium, the packet loses no weight (no absorption) and its current step size is not reduced; the remaining step length and weight that it had on leaving the tissue are carried over when it re-enters.

SIMULATION OF FLUORESCENT OPTICAL MAPPING SIGNALS

The above photon transport algorithm is used to simulate both the process of excitation illumination and that of (voltage-sensitive) fluorescent emission, using optical properties (μ_a, μ_s, g) obtained at the wavelengths specific to illumination and emission (Ding et al., 2001); together these form the foundations for producing the signal recorded during cardiac optical mapping.

Simulating illumination

To simulate uniform illumination of the tissue surface by an external source, packets are made incident upon the external tissue boundary at an illumination angle θ_illum, usually taken as 0 to replicate illuminating light normal to the tissue surface (Ding et al., 2001; Bishop et al., 2009). As the photon packets propagate through the tissue, they deposit weight which is logged in the optical elements, thereby representing the photon density (photons per volume) due to illumination throughout the tissue. Optical parameters of cardiac tissue measured at the typical illumination wavelength (488 nm) of the commonly-used voltage-sensitive dye di-4-ANEPPS are μ_a = 0.52, μ_s = 23.0, and g = 0.94 (Ding et al., 2001).

Simulating fluorescent emission

In the case of fluorescent emission, the fluorescent photons originate from dye molecules within the tissue itself: the more photons a region of tissue receives during illumination, the more fluorescent dye molecules are excited and fluoresce. To simulate this process, the total number of fluorescent photon packets emitted from each optical element within the tissue is made directly proportional to the excitation photon density at that element. In contrast to the process of illumination, fluorescent photon packets are emitted at randomly distributed angles (isotropic emission) from their point source within the tissue. Optical parameters of cardiac tissue measured at the typical emission wavelength (669 nm) of di-4-ANEPPS are μ_a = 0.1, μ_s = 21.8, and g = 0.96 (Ding et al., 2001).

Simulating optical detection

If a photon packet exits the tissue surface within an area from which signals are being recorded, it deposits its total weight at the time of exit into the recorded fluorescent signal for that particular region, or "pixel," of imaged tissue. Successive photon packets exiting the same region build up the total signal recorded from that pixel, with the total recorded intensity representing the final accumulated packet weight. Furthermore, each fluorescent photon packet exiting from a specific region of the tissue surface carries information regarding its point of origin within the tissue. With this information, along with the weight of the exiting packet, a distribution can be built up showing the relative fraction of recorded fluorescence originating from a given volume of tissue, termed the scattering or interrogation volume (Ding et al., 2001; Bishop et al., 2006a, 2009), which provides essential information regarding the origin of optical mapping signals under different circumstances. In this study, photons exiting the tissue at all angles were recorded as "detected."

Simulating voltage-sensitive fluorescent emission

The computed scattering volume for each detection site quantifies the relative fraction of recorded fluorescence originating from different regions of tissue beneath the detection pixel. This information is then convolved with the calculated distribution of V_m at corresponding points throughout the tissue to simulate voltage-sensitive fluorescent emission:

V_opt = Σ_e w_e V_m,e

where w_e is the scattering-volume contribution of optical element e and V_opt represents the total signal intensity collected at a given optical detection site. Note that V_m can be normalized and scaled such that the total recorded fluorescence faithfully replicates the experimental scenario of an approximately 10% change upon a background of fluorescence. Distributions of V_m at sequential output time intervals (0.2-1.0 ms, for example) can thus be used to create a time-varying simulated optical signal.
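A minimal sketch of this convolution, written here as the scattering-volume-weighted average of V_m over all optical elements (normalizing by the total weight is our choice; any overall scaling is absorbed by the normalization described above):

```python
import numpy as np

# V_opt for one pixel: scattering-volume-weighted average of V_m over all
# optical elements, at every output time. w: (n_elem,) weights for the
# pixel; Vm: (n_times, n_elem) snapshots. Names are ours.
def optical_signal(w: np.ndarray, Vm: np.ndarray) -> np.ndarray:
    return Vm @ w / w.sum()
```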
DATA ANALYSIS

The total accumulated weight of photon packets exiting pixels of differing square edge lengths (160, 320, 640 μm) at chosen locations on the epicardial surface (Figure 1A) was recorded as the total optical signal, V_opt. Action potential upstroke durations were defined as the time interval between 10 and 90% depolarization of the simulated V_opt signals, with V_opt action potentials normalized between the resting and maximum depolarized values. In addition to the total optical signal recorded from a pixel, scattering volumes were constructed for each pixel during fluorescent emission, detailing the relative contribution of each optical tissue element in the domain to the total signal recorded from that pixel. Here, normalized scattering volumes are plotted over the meshes, whereby the relative contribution from each optical element is scaled to the maximal contribution value within the mesh.

In tissue in which scattering dominates over absorption, the photon diffusion equation is valid (Arridge, 1993; Jacques, 1998). For uniform illumination over the surface of a semi-infinite plane or slab, diffusion theory gives the decay of photon density Φ with depth into the tissue as

Φ(z) = Φ_0 exp(-z/δ)    (5)

where z is the depth into the tissue, Φ_0 is the photon density at the surface, and δ is the penetration depth, given by

δ = √(D/μ_a)

where D is the diffusion coefficient, equal to D = 1/(3(μ_a + μ_s(1 - g))). We note here that the analytic solution of the photon diffusion equation for a slab of finite thickness actually has a more complex form than that shown above (Bernus et al., 2005). Although in this study we use a slab of finite thickness, we refer to this solution for the simpler case of a semi-infinite slab as an approximation, which may be expected to be good for slabs that are relatively thick (4 mm) with respect to the penetration depth of the illuminating light (δ_illum = 0.59 mm is used here).
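A quick numerical check of these expressions using the 488 nm illumination parameters quoted above (units assumed to be mm^-1, consistent with δ being expressed in mm):

```python
import math

# Diffusion-theory penetration depth: delta = sqrt(D / mu_a),
# with D = 1 / (3 * (mu_a + mu_s * (1 - g))).
mu_a, mu_s, g = 0.52, 23.0, 0.94   # illumination (488 nm) parameters

D = 1.0 / (3.0 * (mu_a + mu_s * (1.0 - g)))
delta = math.sqrt(D / mu_a)
print(f"delta = {delta:.2f} mm")   # ~0.58 mm, matching the theoretical value quoted below
```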
ILLUMINATION

Excitation illumination was simulated using the MC algorithm described above in Section 2.5.1. 50,000 photon packets were made incident from every epicardial surface triangle in the simple cuboid models, and the total accumulated photon weight deposited within the tissue was logged. Figure 3A shows the total deposited photon density within the models. Here, we clearly see the attenuation of photon density with depth away from the illuminated epicardial surface in all models. However, in regions around the subepicardial vessels in the small and large vessel models, the distribution of photon density is distorted, as emphasized by the corresponding highlighted regions. Photon density around the side of the vessel cavity distal to the epicardium is higher than at a similar depth in the no vessel model. This difference is emphasized in Figure 3B, which plots the profile of illumination photon density with depth beneath the surface for all models. For example, the photon density on the distal side of the cavity in the large vessel model (3.4 × 10^7) is over twice as large as at the corresponding depth (1000 μm) in the no vessel model (1.6 × 10^7). In the small vessel model, the difference is less significant but still over 30% (4.9 × 10^7 in the vessel model, compared to 3.7 × 10^7 in the no vessel model, at a depth of 450 μm).

The overall profiles of illumination photon density with depth, shown in Figure 3, are fundamentally of a mono-exponential form with a small subsurface peak, as expected (Hyatt et al., 2008; Bishop et al., 2009). Analysis of the profile of the no vessel model for this specific cuboid geometry demonstrated that it decayed marginally more rapidly than a simple exponential, due to photon escape from the surfaces at z = ±2 mm, reflecting the fact that the mono-exponential form of Equation 5 is only an approximation for a slab of finite thickness (Bernus et al., 2005).

To validate that our model behaves in approximately the expected manner upon variation of the optical absorption and scattering parameters, as predicted from diffusion theory for a semi-infinite slab, we used an additional solid (no vessel) model of larger dimensions (4 × 4 × 4 mm) and simulated illumination in a similar manner to the above. Figure 4A shows the illumination photon density plotted against depth in this case, with Figure 4B showing the corresponding log-plot. As can be seen from Figure 4B, the decay of photon density with depth within this thicker slab was very close to mono-exponential for depths > 1000 μm, giving a penetration depth of δ = 0.60 mm, compared to the theoretical value of 0.58 mm predicted from the diffusion theory approximation for a semi-infinite slab. Figures 4C,D show the variation in the δ values derived from the approximate fitted mono-exponential for our MC model, and those predicted from diffusion theory for a semi-infinite slab (derived directly from Equation 5), as the optical parameters μ_a and g are varied to change the optical absorption and scattering properties of the tissue, respectively. Both plots demonstrate that our MC model compares well with the change in penetration depth δ predicted from diffusion theory as μ_a and g are varied.
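The δ = 0.60 mm fit above can be reproduced with a simple log-linear regression over the deep, mono-exponential portion of the simulated depth profile; a sketch, with `depth` (in mm) and `density` assumed to be arrays output by the illumination simulation:

```python
import numpy as np

# Fit log(density) ~ -z/delta + const over the deep tail (> 1 mm), as in
# the validation above.
def fit_penetration_depth(depth: np.ndarray, density: np.ndarray) -> float:
    tail = depth > 1.0
    slope, _intercept = np.polyfit(depth[tail], np.log(density[tail]), 1)
    return -1.0 / slope   # density ~ exp(-z / delta)
```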
Despite the mono-exponential solution to the diffusion equation being valid only for semi-infinite slabs, the approximation in the case of the 4 mm cube used here is expected to be good, and thus the plotted variation of δ with μ_a and g provides a good indication that our model behaves as expected upon parameter variation.

FLUORESCENT EMISSION

Following uniform epicardial illumination, fluorescent emission was simulated as described in Section 2.5.2. Figure 5 shows the normalized scattering volumes (defined in Section 2.6) for pixels of edge length 160, 320, and 640 μm for each model. Firstly, Figure 5 shows that larger pixel dimensions collect photons from a more widely distributed spatial region beneath the recording surface. More importantly, though, the presence of the cavity in the small and large vessel models significantly distorts the dimensions of the scattering volume, making it extend deeper into the tissue relative to the no vessel case; this is more apparent as the pixel size increases and for the larger vessel. The effect is most noticeable in the right-hand panel for the 640 μm pixel, where the maximum color-bar scale has been adjusted to 20% of the maximum intensity to more clearly highlight contributions from deeper tissue regions. The deeper penetration of the scattering volumes seen in the vessel models is further highlighted in Figure 6A, which shows depth profiles of the normalized scattering volume contributions for the 640 μm pixel, with Figure 6B showing a corresponding log-plot to emphasize the differences. Although the contributions to the total recorded signal from these regions are relatively minor, the value at the cavity edge distal to the epicardium in the large vessel model is still almost 3-fold larger than at the corresponding location (1000 μm depth) in the no vessel model (0.059 vs. 0.022 of the normalized scattering volume intensity). In the small vessel model it is 0.281 (at 450 μm depth), compared to 0.215 in the no vessel model.

VOLTAGE-SENSITIVE FLUORESCENT EMISSION

The functional consequence of these differences in scattering volumes near cavities was assessed by using them to simulate voltage-sensitive fluorescent signals, as described in Section 2.5.4. Voltage-sensitive fluorescent signals were simulated during wavefront propagation through the simple cuboid models in both the circumferential and transmural propagation directions, as described in Section 2.2. Figure 7 shows simulated optical signal V_opt upstrokes in the simple cuboid models for both circumferential (top) and transmural (bottom) electrical propagation, for each pixel size. In the case of circumferential propagation, the action potential upstroke morphologies are largely similar between models, all showing the well-known prolongation with respect to the raw electrical V_m upstroke (≈1-2 ms) whilst remaining approximately symmetrical. However, the vessel models show a more significantly prolonged upstroke for all pixel sizes, with the large vessel upstrokes longer than the small vessel upstrokes; for example, upstrokes of 5.70 ms (large vessel), 5.11 ms (small vessel), and 4.74 ms (no vessel) are seen for the 640 μm pixel, computed as the 10-90% interval (see the sketch below). Note that little difference in upstroke duration was seen between pixel sizes.
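Upstroke durations such as these follow directly from the 10-90% definition given in the Data Analysis section; a minimal sketch, assuming `v` is a normalized optical upstroke sampled at times `t` and rising monotonically over the samples given:

```python
import numpy as np

# 10-90% upstroke duration of a normalized optical action potential.
# `v` must rise monotonically from 0 to 1 over the samples provided
# (np.interp requires increasing x-coordinates).
def upstroke_duration(t: np.ndarray, v: np.ndarray) -> float:
    t10 = np.interp(0.1, v, t)   # time of 10% depolarization
    t90 = np.interp(0.9, v, t)   # time of 90% depolarization
    return t90 - t10
```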
In the case of transmural propagation, more significant differences in upstroke morphology are seen between the models. All models show the well-known asymmetrical prolongation for wavefront propagation toward the recording site, with the lower part of the upstroke prolonged and the upper part less so (Hyatt et al., 2003, 2005; Bishop et al., 2006a). However, the vessel models, particularly the large vessel model, show a significant "hump" in the upstroke profile at low polarization levels (0.1-0.4) for all pixel sizes. In addition, the vessel models again show a greater overall upstroke prolongation than the no vessel model: for example, 5.08 ms (large vessel), 4.55 ms (small vessel), and 4.24 ms (no vessel) for the 640 μm pixel.

Finally, we demonstrate the applicability of our method to simulating fluorescent voltage-sensitive signals from the anatomically-detailed wedge model. Figure 8A shows the spatial distributions of two different scattering volumes corresponding to pixels (edge length 640 μm) on the epicardial surface of the wedge model, both close to (right) and distant from (left) a large subepicardial vessel. As in the simple models of Figure 5, the scattering volume associated with the pixel above the large subepicardial vessel is distorted with respect to the volume above the compact tissue, extending more deeply into the myocardium. Figure 8B shows the V_opt action potential upstrokes following simulation of voltage-sensitive fluorescent emission using these scattering volumes, along with spatial distributions of V_m following circumferential and transmural pacing. Again, as in the simple models of Figure 7, the upstrokes recorded close to the large vessel are prolonged to a greater degree than those recorded away from the vessel: 4.50/5.31 ms close to the vessel versus 3.71/4.50 ms distant from the vessel, for circumferential and transmural pacing, respectively. Furthermore, the distortion in upstroke morphology following transmural pacing, although apparent, is less evident than in the simplified cuboid models. Note that, as upstrokes are recorded from slightly different spatial locations, slight differences in overall activation times are also apparent. In the case of circumferential pacing, the foot of the action potential is also slightly less prolonged than in the corresponding no vessel case of Figure 7, due to the pixel's close proximity to the pacing site, which also causes the wavefront to be less curved as it passes under the recording pixel.

SIMULATION OF SHOCK-END VIRTUAL-ELECTRODE MEASUREMENTS

V_m distributions at the end of extracellular shocks applied to the simple cuboid models (as described in Section 2.2) were used, along with the scattering volumes shown in Figure 5, to compute simulated optical shock-end signals. Figure 9A shows the intramural V_m shock-end distributions in each of the models for a shock strength of 20 V. As expected, following the shock, the electrophysiological simulation shows strong V_m polarization on the surfaces, with the epicardial surface (closest to the cathode) strongly depolarized (>100 mV) and the endocardial surface (nearest the anode) strongly hyperpolarized (<-150 mV). However, for strong shock strengths, there exists a complex distribution of polarization levels around the vessel cavities, with the side of the vessel cavity distal to the epicardium becoming strongly hyperpolarized and the side proximal to the epicardium depolarized.
Figure 9B shows the corresponding simulated optical signals for shock-strengths between 10 and 40 V for each of the models, corresponding to a pixel of edge-length 320 μm. [Figure 9B legend: shock-strengths of 10, 20, and 40 V for each of the no vessel (red squares), small vessel (blue circles), and large vessel (green triangles) models; optical signals are normalized with respect to the action potential amplitude following pacing.] The Figure shows that the optical signal predicted by the no vessel model increases, as expected, with applied shock-strength, experiencing polarization levels of 143% action potential amplitude at 10 V, and reaching 160% at 40 V. For weak shocks, the small vessel model predicts similar shock-end polarization levels to the no vessel model (143% action potential amplitude), with the large vessel model predicting larger polarizations of 160%. However, intriguingly, in contrast to the no vessel model, the signals recorded from surface locations above vessels show the opposite trend, decreasing in magnitude as shock-strength increases. At a shock-strength of 40 V, the large vessel model has decreased to a polarization level of 145% action potential amplitude, whereas the small vessel model has decreased further to just 120%. UTILITY OF MODELING METHODOLOGY We have introduced a novel method for the simulation of voltage-sensitive cardiac optical mapping signals using an adapted MC model of light transport. Our method presented here overcomes the previous limitations of continuum diffusion-based models, allowing photon propagation to be simulated through scattering media (tissue) and non-scattering media (saline-filled cavities) alike. Importantly, the model can therefore be applied to simulate fluorescent signals from the latest high-resolution anatomically-detailed computational geometries, including the presence of fine-scaled anatomical complexity such as intramural blood vessel cavities and extracellular cleft spaces (Bishop et al., 2010b), which was not possible using diffusion-based models. This has allowed us to investigate the intricate interaction of fluorescent signal distortion due to light penetration and photon scattering in the vicinity of such non-scattering intramural cavities, such as vessels, which has key relevance to combined modeling and experimental optical mapping studies. Firstly, it will provide an essential tool to facilitate a closer, and essential, validation of the predictions made from computational simulation results using these latest highly-detailed MR-based models with optical mapping recordings which, until now, has not been possible. Secondly, it will provide important insight into the underlying mechanisms of fluorescent signal distortion and the role of fine-scale structures, which may be of significant importance in facilitating a better interpretation of optical mapping signals from high-resolution imaging systems (Bub et al., 2010; Kelly et al., 2013). IMPORTANCE OF LIGHT INTERACTION WITH FINE-SCALE FEATURES Using our model, we have demonstrated the differences in fluorescent signals recorded close to subepicardial cavities and those from above regions of compact myocardium, which can, in certain circumstances, be significant.
These differences (shown here as differences in optical scattering volumes, action potential upstrokes and shock-end polarization recordings) may, to a degree, explain the pixel-by-pixel heterogeneity in optically-recorded electrophysiological metrics, such as action potential upstrokes and durations, frequently seen in experimental recordings. Furthermore, the models used here have been derived from, or based on, rabbit MR data. However, the relative degree of signal distortion due to intramural structures from larger species (such as pig, canine, or even human samples frequently used in optical mapping experiments) may be even more significant, due to the relatively larger sizes of the cavities and anatomy involved, whilst having similar optical properties for absorption and scattering. Such distortion effects could also have even more relevance in the use of optical dyes with longer excitation and/or emission wavelengths, in which cardiac tissue is less absorbing and where signals are known to be collected from much larger scattering volumes (Walton et al., 2010). We note here that extracellular cleft spaces were not represented in our simplified slab models, but were only present in the detailed MR-derived wedge model, and consequently a detailed analysis of their specific effects was not performed. However, although we believe that large extracellular cleft spaces (containing large regions of non-scattering media) have the potential to affect the optical signal in a similar manner to the vessel cavities investigated in detail in this study, the large extracellular clefts witnessed in the MR data tended not to be located near the epicardial surface, but were found more intramurally within the tissue. Thus, we suggest that extracellular cleft spaces play a lesser role in optical signal distortion. INTERPRETATION OF UPSTROKE MEASUREMENTS The overall prolongation of the simulated optical action potential upstroke was similar to that reported experimentally (Girouard et al., 1996; Gray, 1999; Hyatt et al., 2005) and in other simulation studies using both diffusion and MC methods (Hyatt et al., 2003, 2005, 2008; Bishop et al., 2006a, 2009). In addition, the well-acknowledged difference in optical upstroke morphology for wavefronts propagating toward (transmural propagation) compared to parallel to (circumferential propagation) the recording surface was also observed (Hyatt et al., 2003, 2005; Bishop et al., 2006a). However, our model allowed us to demonstrate important differences in both overall upstroke duration and upstroke morphology when recording signals in the vicinity of subsurface cavities, with upstroke duration being increased and having a noticeable "hump" at low polarization levels (transmural propagation), relative to recordings above compact tissue. Such differences may be explained by the important differences in scattering volumes highlighted in Figures 5, 6, 8, demonstrating how signals collected from above subsurface cavities contain a higher proportion of their intensity from deeper intramural depths, beneath the cavity itself. In the case of transmural propagation, this leads to earlier detection of the wavefront, causing the early "hump" in the upstroke. In the case of circumferential propagation, the nature of the intramural fiber architecture causes the wavefront to be curved (concave), with intramural layers leading epi-/endocardial surface regions (Figure 7).
Thus, collection of relatively more signal from intramural depths above a cavity leads to an earlier detection of the wavefront in these regions and thus a more prolonged upstroke duration. Many optical mapping studies in recent years have used careful measurements of both upstroke durations and morphologies from the epicardial surface to infer detailed information regarding localized subsurface wavefront direction (Hyatt et al., 2003, 2005) and relative electrotonic loading (Kelly et al., 2013). Although in this study we highlighted the significantly different effects on the upstroke due to the interaction of the wavefront with large sub-epicardial cavities for different overall global wavefront propagation directions (toward and parallel to the recording surface), the interaction of localized subsurface wavefront orientations with cavities may also represent an important consideration. The findings from this study therefore suggest careful interpretation of such optical recordings, in light of the exact surface location from which they are taken with respect to subsurface intramural cardiac anatomy. INTERPRETATION OF SHOCK-END FLUORESCENT MEASUREMENTS The collection of a significant fraction of the total fluorescent signal from a subsurface scattering volume of tissue beneath the surface recording site has previously been suggested to underlie the apparent reduction in optically-recorded "surface" fluorescent signals following strong extracellular shocks, relative to the magnitudes predicted by computational bidomain simulations (Janks and Roth, 2002; Bishop et al., 2006b, 2007). Such electrophysiological simulations show that polarization levels decrease rapidly into the tissue depth over length-scales of the order of a length constant, with intramural tissue correspondingly being of significantly weaker polarization levels than the strongly polarized external tissue surfaces (as shown in Figure 9A). Collecting signals from within the scattering volume has an averaging effect that modulates the recorded optical signal due to the inclusion of the more weakly-polarized intramural tissue. When combined with optical mapping signal synthesis models, simulations show a decrease in epicardial shock-end values, significantly reducing the disparity between simulations and experiments; however, the simulated values still consistently over-estimated those obtained experimentally (Bishop et al., 2007). The reduction in shock-end optical polarization levels seen in recordings above intramural vessel cavities uncovered in this study, relative to the stronger polarizations predicted above compact myocardial tissue, may go some way to explaining this previous disparity. Recently, detailed modeling (Bishop et al., 2010a, 2012; Luther et al., 2012) [and experiments (Fast et al., 1998; Fast, 2002)] has also shown that intramural cavities can induce the formation of "virtual-electrodes" during strong extracellular shocks, with opposite sides of the cavities becoming de-/hyper-polarized due to a movement of current out of, and back into, the intracellular domain as it traverses the cavity. The findings from our study (Figure 9) suggest that, as the complex distribution of polarization levels surrounding the cavities of subsurface vessels lies within the scattering volume, they make a significant contribution to the surface-recorded optical signal.
Here, the reduced conductivity of the vessel lumen wall (Bishop et al., 2010a) leads to the epicardial side of the cavity becoming strongly depolarized with the distal mid-myocardial side becoming strongly hyperpolarized (Bishop et al., 2012). As shock strength increases, the virtual-electrode pattern around the vessel cavities becomes stronger and more wide-spread, more so for the larger vessel. This is more noticeable for the hyperpolarized tissue on the distal mid-myocardial side of the cavity, as there is less tissue on the epicardial side of the vessel, which is already strongly depolarized by the shock anyway, and so the amount of depolarized tissue in this region saturates. Due to the fact that the scattering volume extends beyond the distal side of the cavity (seen in Figure 5), these strongly hyperpolarized regions contribute to the total collected optical signal, reducing its magnitude with respect to recordings above compact myocardium where the scattering volume only samples from tissue which is either depolarized or of mid-polarization levels. As the subepicardial tissue is strongly depolarized by the shock anyway, the additional effect of the virtual-electrode from the vessel in this region does not have a major impact on the total recorded fluorescent signal (although it does appear to slightly increase the polarization level for the large vessel, evident at the weaker shock strengths). Thus, for stronger shocks, a larger amount of more hyperpolarized tissue on the distal side of the cavity contributes to the signal, causing optical polarization levels above the cavity to decrease with shock strength. The lack of this cavity-driven effect in the absence of vessels causes the expected increase in simulated polarization levels with shock strength. STUDY LIMITATIONS A potential limitation of our study is the relatively thin nature of the slab, particularly in the z-direction, and the resulting effect photon interactions with boundaries may have on the results presented here, which would not be expected to be present in the whole-ventricle in vivo case. We have performed additional simulations using the larger no vessel model used in Section 3.1 and in Figure 4, of twice the thickness in the z-direction (dimension 4 × 4 × 4 mm), to investigate potential changes in the scattering volume due to boundary interactions. Our simulations showed no discernible differences in the spatial scattering volume within the model. More precisely, at the quoted depths of 450 and 1000 μm in Section 3.3, the values of the normalized intensity contributions to the scattering volume were 0.219 and 0.026, respectively, compared to the quoted values for the standard thinner no vessel model of 0.215 and 0.022, respectively. We therefore believe that the relatively small size of our models and the relatively close proximity of boundaries has not unduly affected the major findings of this study. CONCLUSIONS This study has presented a novel application of a MC photon scattering model to simulate, for the first time, cardiac optical mapping signals over anatomically-complex, unstructured, tetrahedral, finite element computational models, including representations of fine-scale anatomy and intramural cavities.
This novel approach was used to demonstrate significant differences in optical action potential upstrokes (durations and morphologies) recorded above subepicardial vessels compared to those recorded above compact myocardial tissue, due to differences in subsurface optical signal collection volumes. Such differences were also responsible for significant reductions in the apparent optically-measured epicardial polarization above vessel cavities, compared to tissue away from cavities. We have therefore demonstrated the importance of this novel optical mapping simulation approach, along with highly anatomically-detailed models, to fully investigate electrophysiological phenomena driven by fine-scale structural heterogeneity and to understand its recording by experimental imaging techniques.
12,926.2
2014-08-07T00:00:00.000
[ "Physics" ]
bSiteFinder, an improved protein-binding sites prediction server based on structural alignment: more accurate and less time-consuming Motivation Protein-binding sites prediction lays a foundation for functional annotation of protein and structure-based drug design. As the number of available protein structures increases, structural alignment based algorithm becomes the dominant approach for protein-binding sites prediction. However, the present algorithms underutilize the ever increasing numbers of three-dimensional protein–ligand complex structures (bound protein), and it could be improved on the process of alignment, selection of templates and clustering of template. Herein, we built so far the largest database of bound templates with stringent quality control. And on this basis, bSiteFinder as a protein-binding sites prediction server was developed. Results By introducing Homology Indexing, Chain Length Indexing, Stability of Complex and Optimized Multiple-Templates Clustering into our algorithm, the efficiency of our server has been significantly improved. Further, the accuracy was approximately 2–10 % higher than that of other algorithms for the test with either bound dataset or unbound dataset. For 210 bound dataset, bSiteFinder achieved high accuracies up to 94.8 % (MCC 0.95). For another 48 bound/unbound dataset, bSiteFinder achieved high accuracies up to 93.8 % for bound proteins (MCC 0.95) and 85.4 % for unbound proteins (MCC 0.72). Our bSiteFinder server is freely available at http://binfo.shmtu.edu.cn/bsitefinder/, and the source code is provided at the methods page. Conclusion An online bSiteFinder server is freely available at http://binfo.shmtu.edu.cn/bsitefinder/. Our work lays a foundation for functional annotation of protein and structure-based drug design. With ever increasing numbers of three-dimensional protein–ligand complex structures, our server should be more accurate and less time-consuming.Graphical Abstract bSiteFinder (http://binfo.shmtu.edu.cn/bsitefinder/) as a protein-binding sites prediction server was developed based on the largest database of bound templates so far with stringent quality control. By introducing Homology Indexing, Chain Length Indexing, Stability of Complex and Optimized Multiple-Templates Clustering into our algorithm, the efficiency of our server have been significantly improved. What’s more, the accuracy was approximately 2–10 % higher than that of other algorithms for the test with either bound dataset or unbound dataset Background Most biological processes involve the interaction of ligands with proteins. Functional characterization of ligand-binding sites of proteins is a key issue in understanding those biological processes [1][2][3][4]. In addition, identifying the location of protein-binding sites is a vital first step in structure-based drug design [5][6][7][8]. However, functional characterization of proteins through experimental method is a labor intensive and time-consuming process. A computational tool to predict the functional binding sites in a protein is therefore of practical importance. To date, a variety of computational methods have been developed for protein-binding sites prediction, which can be divided into four categories: geometry based methods [9][10][11][12][13][14], energy based methods [15,16], alignment based methods [17][18][19][20] and other miscellaneous methods [21][22][23]. Alignment based methods can be further divided into sequence alignment based and structural alignment based methods. 
Recently, increasing structural genomics projects have led to the exponential growth of the number of available protein structures. As a consequence, structural alignment based methods exceeded other methods due to its more efficient and more accurate performance. In 1996, Lichtarge et al. [17] developed the first structural alignment based algorithm for protein-binding sites prediction, entitled evolutionary trace method (ET method). It is based on the extraction of functionally important residues from sequence conservation patterns in homologous proteins, and on their mapping onto the protein surface to generate clusters identifying functional interfaces. In 2007, Brylinski and Skolnick developed a popular structural alignment method called FINDSITE [18]. For a given target sequence, FINDSITE identifies ligand-bound template structures from a set of distantly homologous proteins recognized by the PROS-PECTOR_3 threading approach and superposes them onto the target's structure using the TM-align structural alignment algorithm. Binding pockets are identified by the spatial clustering of the center of mass of templatebound ligands that are subsequently ranked by the number of binding ligands. In 2009, Oh et al. [24] developed LEE, a two-stage template-based ligand binding site prediction method, where templates are used first for protein 3D modeling and then for binding site prediction by structural clustering of ligand-containing templates to the predicted 3D model. Later in 2010, Wass et al. [25] described a new method called 3DligandSite. Structures similar to the query are identified by using MAM-MOTH [26] against a library of protein structures with bound ligands. The structural based alignment of the similar structures and the query superposes ligands onto the query structures. After filtering, the top 25 ligands are retained for analysis and further clustering. In 2012, another comparative approach called COFACTOR was proposed by Zhang group [19]. COFACTOR recognizes functional sites of protein-ligand interactions using lowresolution protein structural models, based on a globalto-local sequence and structural comparison algorithm. The major advantage of COFACTOR over the existing methods is the optimal combination of global and local structural comparisons for identifying protein-binding sites. But, the global comparison can be distracted by structural variations in the regions far away from the binding pockets; meanwhile the local comparison has a high false positive rate since the number of residues involved is too small. Later in 2013, Zhang group published another structural alignment based algorithm, TM-SITE [20]. Different from COFACTOR, TM-SITE compares the structures of a subsequence from the first binding residue to the last binding residue (called SSFL) on the query and template proteins, which solve the problems of global-to-local structural comparison algorithm. These methods provide us valuable choices to predict the binding sites. However, their performance needs to be improved for lack of accuracy or time-efficiency or both since the structural information of protein-ligand complexes (bound protein) are underutilized. Herein, we built so far the largest database of bound templates with stringent quality control. And on this basis, Stability of Complex as a new criterion and Optimized Multiple-Templates Clustering algorithm are introduced to improve the accuracy. 
Meanwhile, Homology Indexing and Chain Length Indexing are used to accelerate the efficiency of the structural alignment. Finally, we presented a user friendly protein-binding sites prediction web server (bSiteFinder), at http://binfo. shmtu.edu.cn/bsitefinder/. Rules of five The protein data in PDB database are filtered through the rules below: 1. The macromolecule type is protein, no DNA and RNA. 2. Experiment method is set to X-ray. 3. X-ray resolution is between 0 and 3.0. 4. Has free ligands = yes. 5. Sequence length is over 20. Number of ligand atoms In the process of building databases, which database a protein finally falls into depends on whether it contains ligands and whether these ligands have enough atoms. For this reason, ligands identification, which is judged by the rules mentioned below, plays a key role. Every HETATM residue is recognized through HET records from the header of PDB files. Notably, some of the residues are modified on normal chains, which are not counted as true ligands because of their present in the MODRES records. Hence, the selected ligands only come from HET records excluding MODRES ones. Water molecule is included in HETATM but not regarded as a ligand. Analyzing the data, we define that a ligand should possess 6 or more atoms as a basic rule to identify a ligand. Stability of Complex The binding site check criterion is using as the standard of judging the bound structure's stability. Only if any one of atoms of the ligand has a distance within 4 Å from the geometry center of the calculated binding site, the structure of complex is considered to be stable. Homology Indexing Homology Indexing is implemented by using SCOPe, version 2.03 [27]. First, a four-digit classification number is searched based on PDB ID and CHAIN ID of the query chain. After that, all the protein chains with the same classification number are obtained and used to constitute the template database for subsequent structural alignment. Chain Length Indexing Only the chains, which have length difference with query chain less than 30 %, are used as candidates for subsequent structural alignment. Structural alignment The structural alignment between query and templates in bSiteFinder is implemented by using Combinatorial Extension (CE) algorithm, which is provided by Biojava [28]. Different from traditional dynamic programming algorithm and Monte Carlo algorithm, CE algorithm defines continuous residues in the sequence as aligned fragment pairs (AFPs), which is used in local alignment between query and template. Finally, the optimized alignment results are obtained by expanding or abandoning the local AFPs. Optimized Multiple-Templates Clustering After structural alignment, template will be mapped to query. Then, the templates which meet the requirement of Stability of Complex are ranked according to the similarity with query chain, and ligands of the top 20 templates at most will be picked out. After 20 times of structural alignments, all the ligands in templates will be mapped to the query. Further, these ligands are clustered into different clusters. The number of ligand geometric centers, which have a distance less than 3 Å from the certain ligand geometric center, is counted for each ligand. After that, the ligand with the largest number is defined as the center of the Top1 binding site (Fig. 1). Then, this ligand and all the other ligands within 3 Å are removed for searching the centers of the Top2 and Top3 binding site in the same way. 
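As a concrete illustration of the two geometric rules just described, the sketch below implements the Stability of Complex check (at least one ligand atom within 4 Å of the binding-site center) and the Optimized Multiple-Templates Clustering of mapped ligand centers (3 Å neighbor counting, iterated for the Top1 to Top3 sites). The thresholds are the ones quoted in the text; the function and variable names are hypothetical, and this is only a sketch, not the server's actual implementation.

```python
import numpy as np

def is_stable_complex(ligand_atoms, site_center, cutoff=4.0):
    """Stability of Complex: the bound structure is considered stable if at least
    one ligand atom lies within `cutoff` Angstroms of the geometric center of the
    calculated binding site."""
    d = np.linalg.norm(np.asarray(ligand_atoms, dtype=float)
                       - np.asarray(site_center, dtype=float), axis=1)
    return bool((d <= cutoff).any())

def cluster_binding_sites(ligand_centers, radius=3.0, n_sites=3):
    """Optimized Multiple-Templates Clustering: rank mapped ligand geometric centers
    by how many centers fall within `radius` Angstroms, take the best one as the
    Top1 site, remove it and its neighbours, and repeat for Top2 and Top3."""
    remaining = [np.asarray(c, dtype=float) for c in ligand_centers]
    sites = []
    for _ in range(n_sites):
        if not remaining:
            break
        counts = [sum(np.linalg.norm(c - other) <= radius for other in remaining)
                  for c in remaining]
        best = int(np.argmax(counts))
        center = remaining[best]
        sites.append((center, counts[best]))
        # discard the chosen center and everything within the cluster radius, then repeat
        remaining = [c for c in remaining if np.linalg.norm(c - center) > radius]
    return sites
```

The key design point, as stated above, is that no cluster number has to be chosen in advance: the 3 Å radius alone determines how the mapped ligands group.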
Detection of binding sites On the condition that a protein chain has ligands, we define all residues within 8 Å of the ligands as the components of the binding site. On the condition that the binding site is detected by structural alignment with templates, all residues within 10 Å of the mapped ligands are defined as the components of the binding site. It should be noted that if a bound protein's stability did not pass the evaluation of Stability of Complex, the bound protein would be treated as an unbound protein with its original ligands removed. Test and evaluation methods For comparison with other binding site prediction algorithms, two widely adopted datasets from LIGSITEcsc [29] were used for testing our algorithm with the same criteria for evaluating the accuracy of binding site prediction. The first test set contained 210 proteins with ligands (bound dataset). At the suggestion of RCSB, protein 1B6N was replaced by 1Z1H. [Fig. 1 caption: Workflow of Optimized Multiple-Templates Clustering. Template (b) is mapped to query (a) by structural alignment to form the query-template complex (c). The template chain is then removed and the ligand retained (d). After 20 structural alignments, the ligands in the templates are mapped to the query (e). The number of ligand geometric centers within 3 Å of a given ligand geometric center is counted for each ligand (f), and the ligand with the largest count is defined as the center of the Top1 binding site (g).] The second test set contained 48 proteins with/without ligands (bound/unbound dataset). Here, the accuracy and the Matthews Correlation Coefficient (MCC) [30] were both used to evaluate our algorithm. Accuracy A widely accepted verification method [13] was used. For a bound protein, if the protein-ligand stability has passed the evaluation of Stability of Complex, the accuracy is 100 %. If the protein-ligand stability did not pass the evaluation of Stability of Complex, the original ligands of the bound protein are removed; in this situation the bound protein is regarded as an unbound protein and may have a lower accuracy. For unbound proteins, if the geometric center of a binding site lies within 4 Å of any atom of the predicted ligands, this binding site is regarded as a correctly predicted binding site. Otherwise, this binding site is regarded as an incorrectly predicted binding site. MCC Another evaluation index, MCC, was also used to evaluate the accuracy of binding site prediction. For each protein chain, all the residues were divided into four categories: TP: correctly predicted binding site residues; TN: correctly predicted non-binding site residues; FP: residues incorrectly predicted as binding site residues; and FN: residues incorrectly predicted as non-binding site residues. MCC scores are defined in the standard way (the formula is given below). For bound proteins that passed the evaluation of Stability of Complex, the MCC is 1. Otherwise, the bound proteins were regarded as unbound proteins and the MCC would be lower than 1. For unbound proteins, the structural alignment between query and template is used to map the ligands in bound proteins onto the unbound proteins. Then, the mapped pseudo-ligands were used to detect the binding site as described in "Detection of Binding Sites". To evaluate our methods, we divided the residues of query chains into residues of the predicted binding site (Res-BS-Pre) and residues of the predicted non-binding site (Res-NBS-Pre).
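For reference, the MCC formula itself is not shown above; the standard definition, consistent with the TP, TN, FP and FN categories just introduced, is

$\mathrm{MCC} = \dfrac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}$,

which ranges from −1 to +1, with +1 corresponding to perfect agreement between predicted and experimental binding-site residues.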
At the same time, we also define residues of the experimental binding site as Res-BS-Exp and residues of the experimental non-binding site as Res-NBS-Exp according to the original ligands of the query chains. TP, TN, FP and FN are then obtained by comparing the predicted sets (Res-BS-Pre, Res-NBS-Pre) with the experimental sets (Res-BS-Exp, Res-NBS-Exp). Create template database Our algorithm aims to maximize the information contained in bound proteins. Herein, we built so far the largest database of bound templates from the PDB database with stringent quality control. Figure 2 shows the workflow of creating the template database, which includes four steps. Workflow of binding sites detection When a query protein is submitted by a user for binding site prediction, it is first divided into chains. After that, the prediction is done for each chain. Figure 3 shows the workflow of binding sites detection. Each protein chain is processed by the following steps: 1. Binding sites prediction of high-quality bound protein (Part 1). Detection of Binding Sites is employed for binding site detection when the protein chain meets the requirements of Number of Ligand Atoms and Stability of Complex. Otherwise, enter the following process. 2. Binding sites prediction of unbound protein with bound templates of the same Homology Indexing (Part 2). If the query chain has a four-digit classification number in SCOPe and has bound templates with the same Homology Indexing in the template database, the binding site of this query chain is detected by the following procedure. First, structural alignments between the query chain and the templates are done, and the top 20 bound templates most similar to the query are selected. The locations of ligands are detected by mapping the ligands in the templates onto the query, and the binding sites are then optimized using the newly developed Optimized Multiple-Templates Clustering method. Finally, Detection of Binding Sites is employed for binding site detection. Otherwise, enter the following process. 3. Binding sites prediction of unbound protein with bound templates selected by Chain Length Indexing (Part 3). If the query chain has no satisfactory homologous bound template, the binding site of this query chain is detected by the following procedure. Chain Length Indexing is employed to search the template database for bound templates that differ from the query chain by less than 30 % in length. The process then proceeds as described above (Part 2 of "Workflow of binding sites detection") with the top 20 most similar bound templates. Any protein chain submitted into our system can receive binding site results via efficient computation. Performance of our algorithm and its comparison with others Two widely adopted datasets, the 210 bound dataset and the 48 bound/unbound dataset [29], were used for testing our algorithm, and the results are shown in Tables 1 and 2. The accuracy of our algorithm is approximately 2-10 % higher than that of other algorithms for the test with either bound or unbound datasets.
In addition, with size of the dataset increased, our algorithm exhibited even more advantage over others regarding accuracy (The accuracy differences between our algorithm and the second highest algorithm in the Top1 increase from 2.4 % with 48 unbound dataset to 11.8 % with 210 unbound dataset). For bound chain (such as PDB ID: 5p2p, CHAIN ID: A), the binding site is composed of residues within 8 Å from the ligand (Fig. 4a). For unbound chain (such as PDB ID: 3p2p, CHAIN ID: A), unlike bound chain, the binding site is detected with the aid of templates (PDB ID: 1oxr, CHAIN ID: A). First, the ligand in template is mapped to unbound chain. Then the binding site is composed of residues within 10 Å from the ligand (Fig. 4b). See Method part for details. Table 3 shows the alignment frequency between templates and the query from the 48 unbound dataset after Homology Indexing is used. Without Homology Indexing, 48 unbound dataset should be aligned with each of chains in template database, which means that there are 48 × 101,315 time-consuming structural alignments needed to be done. But, with the Homology Indexing introduced, it can be reduced to 25,127 structural alignments, which only account for only 0.5 % of that without Homology Indexing. It's worth noting that alignment frequencies, in Table 3, reach hundreds or even thousands in practical, which may be due to the uneven distribution of different protein families in template database at present. Stability of Complex Examining the bound chain structures in PDB database, it is observed that ligands do not always have a stable binding with protein chains at binding site, such as PDB ID: 2j22, CHAIN ID: A (Fig. 5). For this kind of bound structures, binding sites could not be computed directly based on their ligands. Thus, Stability of Complex is introduced into our algorithm to avoid these situations. Looking for similar templates by structural alignments is needed for unbound chains which have no ligands to compute the binding site. In the process of structural alignment and ligand mapping successively, ligand in template may not have a stable bind with unbound chain (Fig. 6a, b). Likewise, Stability of Complex is employed here to decide whether ligand from template and unbound chain can form a new stable bound structure. Similarly, Stability of Complex is introduced to build a template database (see details in Fig. 2), which reduced the number of bound structures from 117,823 to 101,315 with 14 % structures removed. Not only improved the quality of template database, this operation also reduced the number of time-consuming structural alignments. An Optimized Multiple-Templates Clustering method Similar to FINDSITE [31], 3DLigandSite [25] and COFACTER [19], the prediction accuracy of our algorithm is improved by Optimized Multiple-Templates Clustering. However, in other works, the cluster number is required in previous algorithms, which actually could not be obtained before computing. In addition, the distances between ligands in each cluster have no reasonable physical meaning. In our algorithm, this deficiency is overcome by defining a new constraint, which restrict that the distances between geometric centers of all the ligands (for one binding site) in the same cluster should be less than a certain threshold (cluster radius). Ligands in multiple templates could be clustered automatically following the constraint with reasonable physical meaning, and there has no need to estimate cluster number before clustering. 
Considering the spatial complexity of bound structures, the cluster radius to be used was optimized on the test set. For the 48 unbound dataset, the threshold was varied from 1.0 to 8.0 Å to compute the accuracy of the Top1 and Top3 predictions. Table 6 shows the accuracy computed with different cluster radii; the Top1 accuracies range from 72.3 to 85.4 %. It is worth noting that the accuracy of our algorithm with any cluster radius is higher than that of other algorithms (Tables 2, 6). The results in Table 6 indicate that the Top1 and Top3 predictions have the highest accuracy on the 48 unbound dataset when the cluster radius is set to 3.0 Å. Thus, 3.0 Å is set as the default parameter of Optimized Multiple-Templates Clustering in bSiteFinder. [Fig. 6 caption: a Unbound chain (PDB ID: 1bbs, CHAIN ID: A, blue) and a related appropriate template (PDB ID: 1hrn, CHAIN ID: B, yellow). After mapping the ligand (03D, red) in the template to the unbound chain, a new stable bound structure is formed through tight binding between the ligand and the unbound chain. The top 20 templates (at most), ranked according to similarity, are subsequently clustered. b Unbound chain (PDB ID: 1bbs, CHAIN ID: A, blue) and a related template (PDB ID: 3g6z, CHAIN ID: A, yellow). After mapping the ligand (NAG, red) in the template to the unbound chain, a new stable bound structure could not be formed, because the template has more residues in close contact with the ligand (see the red circle) than the unbound chain.] Conclusions bSiteFinder, a protein-binding sites prediction server, was developed based on the largest database of bound templates so far, with stringent quality control. Each protein chain submitted is processed by the following steps: (1) binding sites prediction of a high-quality bound protein; (2) binding sites prediction of an unbound protein with bound templates of the same Homology Indexing; (3) binding sites prediction of an unbound protein with bound templates selected by Chain Length Indexing. Any protein chain submitted can receive binding site results via efficient computation. By introducing Homology Indexing, Chain Length Indexing, Stability of Complex and Optimized Multiple-Templates Clustering into our algorithm, the efficiency of our server has been significantly improved. Moreover, the accuracy was approximately 2-10 % higher than that of other algorithms for the test with either the bound dataset or the unbound dataset. For the 210 bound dataset, bSiteFinder achieved accuracies up to 94.8 % (MCC 0.95). For the 48 bound/unbound dataset, bSiteFinder achieved accuracies up to 93.8 % for bound proteins (MCC 0.95) and 85.4 % for unbound proteins (MCC 0.72). An online bSiteFinder server is freely available at http://binfo.shmtu.edu.cn/bsitefinder/, and the source code is provided at the methods page. Our work lays a foundation for functional annotation of protein and structure-based drug design. With ever increasing numbers of three-dimensional protein-ligand complex structures, our server should become more accurate and less time-consuming.
5,152
2016-07-11T00:00:00.000
[ "Computer Science", "Biology" ]
Sr–fresnoite determined from synchrotron X-ray powder diffraction data The fresnoite-type compound Sr2TiO(Si2O7), distrontium oxidotitanium disilicate, has been prepared by high-temperature solid-state synthesis. The results of a Rietveld refinement study, based on high-resolution synchrotron X-ray powder diffraction data, show that the title compound crystallizes in the space group P4bm and adopts the structure of other fresnoite-type mineral samples with general formula A2TiO(Si2O7) (A = alkaline earth metal cation). The structure consists of titanosilicate layers composed of corner-sharing SiO4 tetrahedra (forming Si2O7 disilicate units) and TiO5 square-based pyramids. These layers extend parallel to the ab plane and are stacked along the c axis. Layers of distorted SrO6 octahedra lie between the titanosilicate layers. The Sr2+ ion, the SiO4 tetrahedron and the bridging O atom of the disilicate unit are located on mirror planes whereas the TiO5 square-based pyramid is located on a fourfold rotation axis. The fresnoite-type compound Sr 2 TiO(Si 2 O 7 ), distrontium oxidotitanium disilicate, has been prepared by high-temperature solid-state synthesis. The results of a Rietveld refinement study, based on high-resolution synchrotron X-ray powder diffraction data, show that the title compound crystallizes in the space group P4bm and adopts the structure of other fresnoite-type mineral samples with general formula A 2 TiO(Si 2 O 7 ) (A = alkaline earth metal cation). The structure consists of titanosilicate layers composed of corner-sharing SiO 4 tetrahedra (forming Si 2 O 7 disilicate units) and TiO 5 square-based pyramids. These layers extend parallel to the ab plane and are stacked along the c axis. Layers of distorted SrO 6 octahedra lie between the titanosilicate layers. The Sr 2+ ion, the SiO 4 tetrahedron and the bridging O atom of the disilicate unit are located on mirror planes whereas the TiO 5 square-based pyramid is located on a fourfold rotation axis. Experimental A synthetic sample of Sr-fresnoite was made by melting a stoichiometric mixture of SrCO 3 , TiO 2 and SiO 2 to form a glass. This glass was then quenched to 293 K, reground and then heated for 7 days at 1323 K. A small amount of CeO 2 (NIST SRM 674a) standard was added to this powdered sample to act as an internal standard. Refinement The powdered sample was loaded into a 0.7 mm diameter quartz capillary, prior to synchrotron X-ray powder diffraction data collection using the P02.1 high resolution powder diffraction beamline at the PETRA-III synchrotron. The beam on the sample was 0.8 mm wide and 1.27 mm high. Powder diffraction data were collected using a PerkinElmer XRD 1621 flat panel image plate detector, which was approximately 1.4 m from the sample. One powder diffraction dataset was collected at 293 K out to approx. 11.9°/2θ, the data collection time was 30 s. Powder diffraction data were converted to a list of 2θ and intensity using FIT2D (Hammersley et al., 1996, Hammersley, 1997. Powder diffraction data in the range 1-11.7°/2θ were used for the Rietveld refinement. Data below 1°/2θ were excluded due to scatter from the beam stop and as there were no Bragg reflections in this region. Data above 11.7°/2θ were excluded as this corresponded to the edge of the image plate detector where the Bragg peaks were weaker. The main Bragg reflections of the powder diffraction pattern could be indexed in space group P4bm with similar lattice parameters to those of PDF card 39-228 (ICDD, 1989). 
The unit cell of the incommensurately modulated structure (Höche et al., 2002) corresponds to a doubled c axis compared to that given on the PDF card. The doubled c axis does not match with some of the low-angle Bragg reflections for the Sr 2 TiO(Si 2 O 7 ) sample used in the present study, therefore this incommensurate structure was not used for Rietveld refinement. Bragg reflections for three impurity phases could also be identified in the powder diffraction data. SrTiO 3 and SrSiO 3 were formed as by-products during preparation. Initial lattice parameters for the three Sr-containing phases were refined using local software. The CeO 2 (NIST SRM 674a) standard was used to calibrate the sample to detector distance. The CeO 2 lattice parameter was fixed at 5.4111 Å so as to calibrate the wavelength as 0.207549 Å. The P4bm crystal structure of the mineral fresnoite (Ba 2 TiO(Si 2 O 7 ); Ochi, 2006) was used as a starting model for the Rietveld refinement (Rietveld, 1969) of the structure of Sr 2 TiO(Si 2 O 7 ). The crystal structures of SrSiO 3 (Machida et al., 1982), SrTiO 3 (Mitchell et al., 2000) and CeO 2 (Goldschmidt & Thomassen, 1923) were used for the impurity phases in the refinement. Isotropic atomic displacement parameters were used for all phases. For the Sr 2 TiO(Si 2 O 7 ) phase the Si-O and Ti-O distances in the SiO 4 and TiO 5 polyhedra were soft-constrained to those for Ba 2 TiO(Si 2 O 7 ) (Ochi, 2006). Computing details Data collection: local software; cell refinement: local software; data reduction: local software; program(s) used to solve structure: coordinates taken from a related compound; program(s) used to refine structure: FULLPROF (Rodriguez-Carvajal, 2001); molecular graphics: VESTA (Momma & Izumi, 2008); software used to prepare material for publication:
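The wavelength calibration described in the Refinement section amounts to applying Bragg's law to the CeO2 internal standard with its lattice parameter fixed at 5.4111 Å. The short sketch below illustrates that calculation; the reflection list is simply the standard set of low-order allowed fluorite (Fm-3m) reflections, and the script is an illustrative reconstruction rather than the local software actually used.

```python
import numpy as np

A_CEO2 = 5.4111        # CeO2 lattice parameter fixed in the refinement (Angstroms)
WAVELENGTH = 0.207549  # calibrated X-ray wavelength (Angstroms)

# Low-order allowed reflections of the fluorite structure: h, k, l all even or all odd
HKLS = [(1, 1, 1), (2, 0, 0), (2, 2, 0), (3, 1, 1), (2, 2, 2), (4, 0, 0)]

def two_theta_deg(hkl, a=A_CEO2, wavelength=WAVELENGTH):
    """Expected 2-theta (degrees) of a cubic reflection from Bragg's law."""
    h, k, l = hkl
    d = a / np.sqrt(h**2 + k**2 + l**2)         # d-spacing of a cubic lattice
    theta = np.arcsin(wavelength / (2.0 * d))   # lambda = 2 d sin(theta)
    return float(np.degrees(2.0 * theta))

for hkl in HKLS:
    print(hkl, f"{two_theta_deg(hkl):6.3f} deg")
```

With these numbers the CeO2 (111) reflection is expected near 2θ ≈ 3.8°, comfortably inside the 1-11.7° range used for the Rietveld refinement.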
1,208.2
2012-12-05T00:00:00.000
[ "Materials Science" ]
Instanton effects on CP-violating gluonic correlators In order to better understand the role played by instantons behind nonperturbative dynamics, we investigate the instanton contributions to the gluonic two point correlation functions in the SU(2) YM theory. Pseudoscalar-scalar gluonic correlation functions are calculated on the lattice at various temperatures and compared with the instanton calculus. We discuss how the instanton effects emerge or disappear with temperature and try to provide the interpretation behind it. Introduction θ-vacuum of QCD is the sum of degenerate vacua labeled by the integer winding number which, through the instanton effect, have transition between each other.In each transition, the difference of winding numbers in between two states is the topological charge.This superposition allows a phase parameter θ, which appears only in non-perturbative contributions.In the high temperature above the critical temperature T c , the susceptibility in terms of θ, so called the topological susceptibility χ t , is well described by the instanton calculus. The instanton effect, for instance, is involved in the strong CP problem.In the path integral, the non-zero θ comes into the CP-violating term of the QCD Lagrangian, θF F. From the observation of the electric dipole moments of nucleon, the vacuum angle θ is bounded by extremely small value.The fact that there is no reason in the Standard Model to have θ ≃ 0 in nature is a long standing puzzle, namely the strong CP problem.Two candidates as a solution for this problem are related to the instanton.One is the Peccei-Quinn (PQ) mechanism [1][2][3], which introduces the axion field a(x) to make the QCD vacuum conserve CP symmetry.The smallness of θ parameter is explained as the coefficients of F F term in SM θ = θ + a(x)/ f a feel the axion potential χ t cos θ stemming from by the instanton effects followed by θ = − ⟨a⟩ / f a with the PQ scale f a much higher than the EW scale.The other solution is the case of the bare u quark mass m u being zero, then the θ parameter is unphysical.The case in which the additive mass shift of O(m d m s /Λ QCD ) to the u quark mass comes from the 't Hooft vertex originated from instanton effects.The ambiguity of the non-perturbative u quark mass still leaves room for the statement m u = 0. 
[4] In spite of the importance of the role of the instanton, the understanding of the instanton contribution from analytic calculation within quantum field theory is quite difficult except for particular situations.The instanton picture is valid only at high temperature and may not apply to lower temperature, on a fortiori to zero temperature.Let us briefly introduce the situation of the instanton calculus based on quantum field theory.Due to the tunneling nature between degenerate classical vacua with distinct winding number, the instanton solution is the classical solution of the SU(N) Yang-Mills action with topological charge Q.This solution is known as the BPST instanton solution at zero temperature, and HS caloron solution for trivial holonomy and KvBLL caloron solution for non-trivial holonomy at finite temperature.At zero temperature, the instanton calculus of observables is ill-defined due to the IR divergence which essentially stems from the asymptotic freedom of QCD.While, at finite temperature, the temperature effect introduces cutoff for the long range interaction at scale x ∼ 1/T due to the Debye screening, then all observables are IR finite.Especially, at very high temperature the interaction between instantons can be neglected and the dilute instanton gas approximation (DIGA) works well.The temperature dependence of the topological susceptibility χ t calculated by DIGA is checked by the quenched lattice simulation.[5] However, at somewhat lower temperature, the interaction between instantons cannot be neglected and DIGA becomes invalid. Our interests on the instanton are following three points.The first point is whether the instanton picture makes sense in in the local observables such as the correlator as well as a global observable such as the topological susceptibility.The second point is, in the finite temperature regime, from which temperature the instanton picture makes sense.Although the instanton picture of the topological susceptibility is established at high temperature in pure Yang-Mills theory, there is still ambiguity about the temperature where the dilute instanton gas approximation starts working.The third point is that if we consider the observable which has IR finite instanton contribution at zero temperature, such observable may comply with the instanton picture even at zero temperature. The SU(2) Yang-Mills theory is interesting in the last context.The gluonic 2-point-correlation function ⟩ is IR finite [6] even at zero temperature, due to the higher power of the instanton size suppression coming from the field strength in the instanton background.There may be still room for the instanton to play an important role at zero temperature, especially in this class of observables.However, we don't know how the instanton effects show up in the correlator calculated through the lattice simulation even at higher temperature where the instanton picture works in other global observables. In this work we firstly investigate the case of finite temperature focusing on the CP-violating (CPV) gluonic correlator, where we can examine the first and the second point proposed above.In order to prove the validity of the instanton picture from the lattice simulation, it is easy to consider the observables which are dominated by the non-perturbative instanton contribution rather than the perturbative contribution in terms of the order counting of the strong coupling and the operator product expansion.This is why we choose the CPV gluonic correlator. 
Instanton calculus We will briefly outline the calculation of the thermal instanton contribution to the CPV 2pt-gluon correlation function. Due to the self-duality of the instanton solution, F_μν = F̃_μν, the three two-point combinations of the dimension-four operators FF and FF̃ have the same x-dependence. In the following, the CPV correlator will be denoted as ⟨sq(x)⟩, using the action density s(x) ∝ FF(x) and the topological charge density q(x) ∝ FF̃(x). At zero temperature, the leading instanton contribution to this function was derived by Dine et al. [6]; in that expression, the subscript θ denotes the average in the θ-vacuum and b is the beta-function coefficient in SU(2) Yang-Mills theory, namely b = 22/3. At finite temperature we first calculated the leading instanton contribution. Since the temperature generates a cutoff for the interaction at distance x ∼ 1/T, the x-dependence of the correlator undergoes a non-trivial change because of the temperature effect. Naively, the leading instanton contribution to the correlator acquires an additional factor γ(x), which should be determined by the direct calculation. The thermal instanton density in pure Yang-Mills theory at finite temperature is defined through the partition function of the topological charge Q = 0 and Q = 1 sectors in the semi-classical approximation. The quantum (thermal) fluctuations enter the distribution n(ρ, T) of the size ρ and the position z = (z⃗, τ_0) of the instanton; in its expression, N is the number of colors and µ denotes the renormalization scale. We use a 1-loop result for the instanton density, so β = 11N/3. The full formulae are given in our previous paper [5]. Note that the thermal fluctuations introduce an IR cutoff on the instanton size ρ, with cutoff scale ρ_cut ∼ 1/(πT). The leading instanton contribution to the gluon 2-pt correlator is calculated using the background field method with an instanton background. In the semi-classical approximation, the gluon correlator in the instanton background is written using the thermal instanton density and the field strength tensor F_Inst.,µν(x) constructed from the thermal instanton solution A_µ(x). Thus, the contribution from one instanton requires integrating the product of the field strength tensors with the instanton density over the instanton size ρ and the instanton position z in the whole region. The thermal instanton solution with trivial holonomy is known as the HS caloron; in its explicit form, τ_a are the Pauli matrices. This solution is in the so-called singular gauge, which has a pole at the instanton position, namely R = 0. However, the calculation of the correlator should be done in the regular gauge. This singularity can be removed by a periodic singular gauge transformation. After the transformation, we obtain the thermal instanton potential in the regular gauge. The regularity of this function is checked as follows. In the vicinity of the instanton position z, a simple approximation of the solution is valid, from which the transformed gauge field is seen to behave regularly near the instanton position. Substituting the regular instanton solution A′_µ(x) into the integrand of Eq.
( 1), we calculated it numerically and eventually obtain figure.1.The horizontal axis denotes log(2πT |⃗ x|) and the vertical axis denotes log , where C is unimportant constant.The behavior of gluon correlator is x −6 if 2πT x ≫ 1 and constant if 2πT x ≪ 1.The shape of curve is same in any temperature because the resulting formula is a function of not x but 2πT x.If we choose the reference temperature as T c , namely this plot is shifted by ln(T/T c ) in the horizontal direction.We will compare this plot with the CP-violating 2pt-gluon correlator calculated by lattice simulation, in which the region around log(2πT |⃗ x|) ∼ 1 will be compared in Sec. 5.In this comparison, we will use the horizontal axis as ln(2πT c x). Simulation setup The gauge configurations are generated on two lattices, using the Wilson gauge action with N c = 2 and Hybrid Monte Carlo algorithm with the Omelyan integrator.The lattice size is 24 3 ×6 and 32 3 ×8.The code is implemented by modifying the Bridge++ code set [7] so that SU(2) simulation can be performed.The physical temperatures, T/T c , listed in table. 1 are derived by using non-perturbative beta function given by Engels et al. [8].In order to reduce the residual statistical correlations of configurations, each configuration has separations of ten trajectories.The numbers of configuration listed in table. 1 count out the configurations which have topological charge Q = ±1, because they are used for the statistical analysis of the CPV gluonic correlator in the fixed topological charge sector. Gradient flow with large flow time The gradient flow in Yang-Mills theory [9,10] is an evolution of gauge field in terms of the diffusion equation along the fictitious time t, so called flow time.In continuum theory the flow equation is where A µ (x) is the gauge field in 4d Euclidean Yang-Mills theory and S YM is the Yang-Mills action.This procedure makes the field configuration smoothed with smearing radius x ∼ √ 8t, which corresponds to the diffusion length.Interestingly, the flow equation above does not modify the classical configuration especially the instanton, because the force becomes zero for stationary solutions.Then, the large flow time configuration approaches some classical solutions.In the lattice simulation at very high temperature, if the configurations generated via HMC algorithm have non-trivial topological charge, the configurations are considered as classical instanton configuration with quantum fluctuations.The flow extracts the classical instanton configuration without changing the instanton size and position.Thus even though the quantum fluctuation around the instanton solution is removed by the large flow, the instanton size distribution of flowed configurations still preserves the information of the instanton density n(ρ, T ) of the original gauge configurations.Although this reasoning only applies for very high temperature, we extrapolate this method for lower temperature in order to investigate from which temperature the instanton picture makes sense. 
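The flow equation referred to above is not rendered in the text; assuming the standard Yang-Mills gradient flow is meant, it reads

$\partial_t B_\mu(t,x) = D_\nu G_{\nu\mu}(t,x) = -\,g_0^2\,\frac{\delta S_{\mathrm{YM}}[B]}{\delta B_\mu(t,x)}, \qquad B_\mu(0,x) = A_\mu(x),$

where $B_\mu$ is the flowed field, $G_{\nu\mu}$ its field strength and $D_\nu$ the corresponding covariant derivative. To leading order this is a diffusion equation that smooths the gauge field over a radius $\sqrt{8t}$, consistent with the smearing radius quoted above.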
Figure .2 shows the transition of the topological charge during the Wilson flow in the region t/a 2 ∈ [20, 160] in the left column, and the distribution of the topological charge in four flow times, t/a 2 = 20, 40, 60, 80 in the right column.The upper row shows results of lower temperature T/T c ≃ 0.72 and the lower row shows ones of upper temperature T/T c ≃ 1.9.Even when the flow time is above t/a 2 = 80, some configuration are not stable in terms of the topological charge.This would occur due to the discretization error of the Wilson gauge action used in HMC and the Wilson flow.In the simulation of 2pt-gluon correlator shown in Sec. 5, we used the flowed configuration with t/a 2 = 80 to measure the topological charge and the CP-violating gluon correlator.The number of configuration with |Q| = 1 listed in table. 1 is determined by the topological charge at t/a 2 = 80 as well. CP-violating gluonic correlator at fixed topological charge sector Using the configuration generated as described in Sec. 3 and evolved by the Wilson flow with large flow time described in Sec. 4, the result of the lattice simulation is compared with the instanton calculus described in Sec. 2. In order to pay attention to the fact that the temperature effect introduces suppression for the long range interaction longer than scale x ∼ 1/T , we choose the horizontal axis as dimension-less ξ c ≡ ln(2πT c x).In this case the horizontal position where cutoff appears differs by temperature with horizontal shift ln(T/T c ).As a result of numerical integration we have the CPV correlator in one instanton background as shown in figure.On the other hand, the CPV correlator calculated via the lattice simulation is denoted as |Q|=1 (2πT c ) 8 , where the ensemble average of the correlator is calculated using configurations with non-trivial topological charge |Q| = 1.In this average, the sign of the correlator of configurations with Q = −1 is flipped.Here, the ensemble average of CPV correlator is zero when using all configuration generated by HMC algorithm or with Q = 0. Note that the result of the fixed topological charge sector |Q| = 1 which is calculated via the lattice simulation is not exactly equal to one of the I = 1.The unit topological charge denotes the sum of all sectors which has n + instanton and n − instanton satisfying n + −n − = 1.In the finite volume simulation, it is conceivable that configurations with Q = 1 generated by HMC algorithm contain up to several number of instanton or anti-instanton due to the limited box size.Below we assume that most of the configurations with |Q| = 1 contain only one (anti-)instanton in the box and we can approximate the observables given by the lattice simulation as ⟨O⟩ |Q|=1 ≈ ⟨O⟩ I=1 .We will compare the observables analytically calculated in the one instanton background, ⟨O⟩ I=1 , and one with |Q| = 1 numerically calculated using the lattice simulation, ⟨O⟩ |Q|=1 . 
We then compared them in two ways. The first fit adopts two parameters, A and B, for the overall factor and an additive constant term, respectively. In our analysis we are not interested in the overall factor of the instanton density, which depends on the renormalization scale. The constant term B accounts for the contribution of the disconnected part of the 2pt-function and the finite-volume effect [11]; the fit function thus takes the form Π^fit(ξ_c) = A Π^Inst._{I=1}(ξ_c) + B, referred to as Eq. (1) below. The second fit introduces three parameters: the overall factor A, the constant term B, and an effective temperature r = T′/T_c. In addition to the degrees of freedom of the first fit, we introduce the effective temperature T′, evaluating the instanton curve at T′ instead of T, i.e. Π^fit(ξ_c) = A Π^Inst._{I=1}(ξ_c; T′) + B (Eq. (2)), in order to account for the possibility that the cutoff temperature of the lattice correlator differs from what the instanton calculus predicts. Figure 3 shows the CP-violating gluonic 2pt-function calculated with the lattice simulation (blue line with 1σ errors) and with the instanton calculus (red dotted line). In Figures 3(a) and (b) the lattice results at T/T_c ≃ 0.72 and T/T_c = 1.857 are fitted using Eq. (1), respectively. In the high-temperature range the correlator is well fitted by these two parameters, so the instanton picture is valid in the high-temperature regime, as expected. At low temperature, however, the fit is poor: the correlator obtained from the lattice simulation shows a larger correlation at large distance than the instanton prediction. The position where the power-law fall-off in x sets in reflects the typical instanton size in the ensemble; the instanton profile produced by the lattice simulation behaves like the one predicted by the instanton calculus at a temperature lower than the input temperature. In Figures 3(c) and (d), the lattice results at T/T_c ≃ 0.72 and T/T_c = 1.857 are fitted using Eq. (2), respectively. We obtain r ≃ 0.37 ± 0.021 and 1.827 ± 0.023 for the lattice simulations at T/T_c ≃ 0.72 and 1.9, respectively, where the statistical errors are determined by the jackknife method. At the low temperature T/T_c = 0.72, the instanton size distribution obtained from the lattice simulation thus behaves as if it had effective temperature T/T_c ≃ 0.37.
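A two-parameter fit of this kind, with jackknife errors, can be sketched as follows; `pi_inst` stands for a hypothetical pre-tabulated one-instanton correlator and the "lattice data" are synthetic, so this is a minimal sketch of the procedure rather than the authors' analysis code.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical tabulated data: one-instanton prediction and per-configuration
# lattice correlators on a common grid of xi_c = ln(2*pi*T_c*x).
rng = np.random.default_rng(1)
xi = np.linspace(0.0, 2.0, 30)
pi_inst = np.exp(-6.0 * xi)                    # stand-in for Pi^Inst_{I=1}(xi_c)
samples = 2.0 * pi_inst + 0.1 + 0.02 * rng.normal(size=(100, xi.size))
pi_lat = samples.mean(axis=0)

def model(x, a, b):
    # Eq. (1)-type fit: overall factor A times the instanton curve, plus B.
    return a * np.interp(x, xi, pi_inst) + b

popt, _ = curve_fit(model, xi, pi_lat, p0=(1.0, 0.0))

# Jackknife over configurations: refit with one sample left out each time.
jk = []
for i in range(samples.shape[0]):
    mean_i = np.delete(samples, i, axis=0).mean(axis=0)
    jk.append(curve_fit(model, xi, mean_i, p0=popt)[0])
jk = np.array(jk)
n = samples.shape[0]
err = np.sqrt((n - 1) * jk.var(axis=0))        # jackknife error formula
print("A, B =", popt, "+/-", err)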
Conclusion and outlook
The instanton effect is involved in the solution of the strong CP problem in the Standard Model, but existing knowledge about the instanton picture contains ambiguities at both zero and finite temperature. We try to clarify the role of the instanton picture behind the non-perturbative dynamics via lattice simulations, focusing on the CPV gluon 2pt-function. Since at very high temperature the instanton size distribution of the configurations generated by HMC is unchanged under the gradient flow, configurations after a large flow still carry the information on the instanton size distribution of the original configurations. We measure the typical size, or cutoff, determined by the temperature scale ∼ 1/T by comparing the analytic correlator with the numerical one. In the high-temperature region we found that the instanton picture describes the CPV correlator calculated by the lattice simulation. In the low-temperature region we found that large instantons contribute more than expected, as if the instantons were distributed at a lower effective temperature. As future work, we will investigate from which temperature the DIGA prediction is valid and how the CPV correlator behaves in the low-temperature region, using more data points. We will also carry out simulations with the Symanzik-improved action in the HMC and the gradient flow, which reduces the discretization errors present in our current results.

Figure 2. The result of the gradient flow with large flow time.
Figure: Π^Inst._{I=1}(ξ_c) = ⟨sq(ξ_c)⟩^Inst._{I=1}/(C^Inst.(2πT_c)⁸) as a function of ξ_c, where C^Inst. is an unimportant constant and the subscript I = 1 denotes the one-instanton background. In this plot the horizontal axis is shifted relative to Figure 1 via the relation ln(2πT_c x) = ln(2πT x) − ln(T/T_c).
4,277
2018-03-01T00:00:00.000
[ "Physics" ]
A 3D Printer Guide for the Development and Application of Electrochemical Cells and Devices 3D printing is a type of additive manufacturing (AM), a technology that is on the rise and works by building three-dimensional parts through the deposition of raw material layer upon layer. In this review, we explore the use of 3D printers to prototype electrochemical cells and devices for various applications within chemistry. Recent publications reporting the use of the Fused Deposition Modeling (FDM®) technique are mostly covered, besides papers on the application of other types of 3D printing, highlighting advances in the technology that promise applications in the near future. Different from previous reviews in the area, which focused on 3D printing for electrochemical applications, this review also aims to disseminate the benefits of using 3D printers for research at different levels, as well as to guide researchers who want to start using this technology in their laboratories. Moreover, we show the different designs already explored by different research groups, illustrating the myriad of possibilities enabled by 3D printing. There are countless CAD software packages that cover all the steps of the 3D printing process, which starts with modeling. Some programs stand out at this stage, such as SolidWorks, one of the most widely used modeling packages. It is based on a parametric computation system, building 3D forms from geometric shapes. The software is excellent for 3D engineers and designers, and it is easy to use for beginners and enthusiasts too. CATIA, on the other hand, supports multiple stages of product development, making collaboration between different disciplines easy. It can model electrical, fluid, and electronic systems, for example, which makes it useful to several industries (aviation, consumer goods, electronics, etc.). Inventor produces functional 3D projects and, in addition to modeling, makes it possible to evaluate the mechanical behavior of the built pieces, simulating movements of the structure and the influence of external forces such as gravity. Besides these, there are other remarkable programs, such as AutoCAD, Fusion 360, NX, and Solid Edge. For the slicing step, PrusaSlicer presents itself as a new version of a well-known slicer, made by Prusa Research, a world-reference 3D printer manufacturer. It has some notable features, such as MSLA (resin) and multi-material support, smooth variable layer height, custom supports using modifier meshes, and the ability to "wipe" into infill, among others. MatterControl is another example of slicing software; it can slice using a variety of advanced settings, with support generation, software bed leveling, and integrated controls for dual extrusion. Slic3r, Simplify3D, and SliceCrafter are further examples. Designed for processing and editing unstructured 3D meshes, MeshLab is an example of model-repair software. It is used to edit, inspect, fix, and repair STL files. It is also used specifically for filling holes in meshes, though applying all its tools appropriately requires a good deal of technical knowledge, so it is recommended for experienced users. MeshFix, however, is a simple alternative for STL repair. It can fix various defects in meshes, such as holes, non-manifold elements, and self-intersections. Autodesk Netfabb is one of the most famous packages on the market.
It presents STL repair tools with automatic, semi-automatic, and manual repair options, allowing the user to find the best solution for each project. Besides that, this software also covers the entire setup before 3D printing, making it a robust and versatile tool. Autodesk Meshmixer, 3D Builder, and Blender are good examples of model-repair software too. There are several types of printing software, including the slicers cited before (PrusaSlicer, Slic3r, and MatterControl, for example), which make the project real. Repetier-Host, for example, is an open-source and highly capable 3D printer control program. It offers multi-extruder support (up to 16 extruders), multi-slicer support via plugins, and support for virtually any FFF printer on the market. This software also offers remote access via its server, making it possible to access the 3D printer remotely. OctoPrint, in addition, presents a different way to control print jobs. Combined with a Wi-Fi-enabled device, this software allows remote control over the printer via OctoPrint's web interface. It accepts G-code from practically any slicer and allows visualization of the code before and during printing. Both are free, and both are great options for printing software. S.2 Main printing parameters: important settings for slicing parts To print a part in 3D, some steps need to be taken. The first step is modeling the object; models are saved in formats such as .STL, .OBJ, .X3D, .AMF, and others. The second step is the configuration of the printing/slicing parameters; for this, files modeled in a format other than .STL must be converted to .STL. Thirdly, the object must be sliced; this process transforms the .STL file into a .G-code file, the only format that the printer understands. The G-code file provides the commands for the printer to perform all the movements needed to build the object, such as heating the nozzle, performing automatic leveling, retracting or releasing filament, moving up or sideways, and turning off heating. The slicer works as a translator for the 3D printer. Finally, the G-code file must be shared with the 3D printer via USB cable, memory card, or command software, such as OctoPrint, for it to start printing the object. We saw in the previous section that there are several modeling, slicing, printing, and model-repair programs. In this section, we present the main parameters that need to be configured to obtain functional parts on FFF 3D printers: temperature, speed, type of support, infill, layer height, and flow. Before using the printer for the first time, it is important that the build platform is leveled and calibrated; this prevents objects from being printed warped or from failing during printing. After choosing the filament, the appropriate temperature for that material must be set: temperatures higher than ideal for the chosen material can leave the pieces brittle and full of loose strings and, in severe cases, can carbonize the material inside the extruder or nozzle, causing clogging. Lower temperatures can cause low fluidity, leading to missing material in parts of the piece and decreasing its mechanical resistance. To calibrate the temperature for each filament, one common approach is to print a temperature calibration tower, adjusting the temperature every 5 °C. There are several models of calibration towers in free repositories, for example, Thingiverse.
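As an illustration of both the G-code workflow and the calibration-tower idea, the sketch below emits per-section temperature overrides. It assumes a Marlin-style firmware, where M104 S<temp> sets the nozzle temperature; the section height and temperatures are placeholders, not recommendations for any particular filament.

# Minimal sketch: generate M104 temperature overrides for a calibration
# tower, stepping the nozzle temperature down 5 °C per tower section.
# Assumes Marlin-style G-code (M104 S<temp>); section heights are placeholders.

layer_height = 0.2          # mm
section_height = 10.0       # mm per tower section (hypothetical)
start_temp = 220            # °C for the first section
step = -5                   # change per section, as suggested in the text

def temp_for_layer(layer_index: int) -> int:
    z = layer_index * layer_height
    section = int(z // section_height)
    return start_temp + step * section

# Example: print the override lines that would be spliced into the G-code
for layer in range(0, 151, 50):
    print(f"; layer {layer}")
    print(f"M104 S{temp_for_layer(layer)}")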
Print speed is also an important parameter, because the higher the print speed, the lower the resolution of the parts. The printing speed must also be adjusted to the nozzle size: the larger the nozzle outlet diameter, the lower the printing speed must be, so that enough material can be melted inside the extruder to keep the flow consistent and deposit material where it is needed (see the volumetric-flow sketch at the end of this section). When printing very large, highly detailed, or cylindrical pieces, it is often necessary to print them over a support. The distance between the support and the printed layer needs to be chosen carefully so that the support does not fuse to the piece, which would make removal impossible. The size of the support needs to be adjusted so that it serves its purpose while saving material in these regions, as supports are discarded. The layer height must also be adjusted according to the nozzle diameter: the lower the layer height, the higher the resolution of the parts, but this parameter also increases the printing time when high-resolution parts are chosen. The infill, wall, and top/bottom settings are the most important parameters for the strength of printed objects. A piece with 100% infill will be denser, but that does not necessarily make it stronger; the higher the infill, the greater the filament consumption, and the printing time increases considerably. Tully and Meloni (2020) explain these last parameters in detail (Tully and Meloni, 2020). S.3 Features of raw material: filaments Although PLA filament is indicated for visual prototyping, it has also been applied in surgical implants for bone fractures and in autologous and heterologous grafts. There are studies on the use of PLA as a material for medical sutures, extracellular matrix scaffolds, and many other applications (An et al., 2000; Rezwan et al., 2006; Ferreira et al., 2008; Lou et al., 2008). ABS is used in various applications, such as automotive parts, musical instruments, ATMs, helmets, luggage, and toys. Note that, unlike ABS, PLA does not emit toxic fumes during printing. Other filaments being used more extensively are the flexible filaments; although there are several types, PP and TPU (a variation of TPE) stand out. The applications of these two filaments are varied, for instance in the sealing of doors and windows, medical supplies, sealing of household appliances, toys, shoe soles, and hinges, representing good applicability in R&D.
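The interplay among nozzle diameter, layer height, and print speed discussed above reduces to a volumetric flow budget: extruded volume per second is roughly line width × layer height × speed, and it must stay below what the hotend can melt. A minimal calculator sketch follows; the melt-rate limit and the layer-height rule of thumb are illustrative assumptions, not properties of any specific printer.

# Minimal sketch: volumetric flow implied by slicing parameters.
# flow (mm^3/s) ~ line_width * layer_height * speed; if it exceeds what the
# hotend can melt, the print under-extrudes. The 10 mm^3/s limit below is a
# placeholder, not a property of any particular printer.

def volumetric_flow(line_width_mm: float, layer_height_mm: float,
                    speed_mm_s: float) -> float:
    return line_width_mm * layer_height_mm * speed_mm_s

max_melt_rate = 10.0  # mm^3/s, hypothetical hotend limit

for nozzle, speed in [(0.4, 60.0), (0.8, 60.0), (0.8, 25.0)]:
    layer = nozzle / 2          # common rule of thumb: layer <= nozzle/2
    flow = volumetric_flow(nozzle, layer, speed)
    ok = "OK" if flow <= max_melt_rate else "too fast for this hotend"
    print(f"nozzle {nozzle} mm, speed {speed} mm/s -> {flow:.1f} mm^3/s ({ok})")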
2,123.8
2021-07-02T00:00:00.000
[ "Chemistry", "Engineering" ]
3D QSAR and docking studies on benzoylsulfonohydrazides as histone acetyltransferase KAT6A inhibitors Sixty-one analogs of benzoylsulfonohydrazides were subjected to 3D QSAR studies using CoMFA and CoMSIA techniques, followed by docking studies, to develop a correlation of the structures with their respective activities. The generated model showed good predictability, and the contour analysis followed by the docking study provided insight for developing new inhibitors. The cross-validation values corresponding to CoMFA and CoMSIA were within the acceptable criterion (q² > 0.5). The binding energy of the best active compound in the docking analysis was −11.81 kcal/mol. The results obtained from the CoMFA and CoMSIA analyses can be useful for developing more potent histone acetyltransferase inhibitors. INTRODUCTION Histone acetyltransferases have a crucial role in hematopoiesis and are among the chromatin-modifying enzymes responsible for the post-translational modifications of histones in the nucleosome of a cell, which include methylation, acetylation, ADP-ribosylation, phosphorylation, and ubiquitination (Allfrey et al., 1964; Des Jarlais and Tummino, 2016; Lawrence et al., 2016; Luger and Richmond, 1998; Sterner and Berger, 2000; Sadakierska and Filip, 2015). Among these, acetylation is known to be the earliest modification related to gene activation, as it is linked functionally to transcription activation: the addition of an acetyl group (-COCH3) to the ε-amino group of a lysine residue results in loosening of the nucleosome structure (Bannister and Miska, 2000; Roth et al., 2001; Parthun, 2012). This acetylation occurs at the N-terminal, basic amino acid (lysine)-dense region of the histone core; as a result, the acetyl group of acetyl-CoA is transferred to the -NH3+ of lysine, neutralizing its positive charge (Loidl, 1994). There are three different families of histone acetyltransferases (HATs): p300/CBP, GNAT, and MYST (MOZ, Ybf2, Sas2, and Tip60 as founding members) (Voss and Thomas, 2018). Monocytic leukemia zinc finger protein (MOZ HAT) is an oncogene of the MYST family that is directly involved in the process of hematopoiesis, as it forms a HAT complex that acetylates H2 (A and B), H3, and H4, leading to gene up-regulation and thereby activating the oncogene, resulting in acute myeloid leukemia (AML) (Borrow et al., 1996; Champagne et al., 2001; Dohner et al., 2015). Granulocytic leukemia (a synonym for AML) is a cancer characterized by the overproduction of immature white blood cells (myeloblasts), and it affects both blood and bone marrow. These cells prevent the production of the normal blood cells that act as the body's defense system, resulting in a weakened immune system, and also cause anemia, bruising, and easy bleeding (Camos et al., 2006; Ullah et al., 2008). KAT6A is one of the five subfamilies of the MYST HATs and is responsible for an aggressive form of acute myeloid leukemia arising from rearrangements in the KAT6A gene (Lowenberg et al., 1999). Hence, inhibiting HAT KAT6A would help prevent the continuous growth of tumors and their metastasis in AML. A literature survey reveals only two compounds discovered so far, WM-8014 and WM-1119, of which the latter was found to be the more active, with an IC50 value of 0.25 pM and lower protein binding than the former (IC50 of 2.3 pM and high protein binding) (Baell et al., 2018).
Based on the study of WM-1119, the same group further discovered benzoylsulfonohydrazides as potent inhibitors of HAT KAT6A (Leaver et al., 2019). Therefore, in the present in silico study, we endeavored to develop a 3D QSAR model using CoMFA and CoMSIA techniques on 61 benzoylsulfonohydrazide analogs, from which the contour maps of the most active compound could give insight for developing inhibitors with enhanced activity against HAT KAT6A. Data set preparation Sixty-one benzoylsulfonohydrazides reported as potent inhibitors of histone acetyltransferase KAT6A were taken from the literature (Leaver et al., 2019). The reported IC50 values were converted to pIC50, the negative logarithm of IC50 (i.e., pIC50 = −log IC50). The molecules were constructed in SYBYL X and minimized with Gasteiger-Hückel charges, using a distance-dependent dielectric and the Powell conjugate-gradient algorithm with a 0.05 kcal/mol convergence criterion. All default parameters were adopted during the minimization of the molecules. Alignment The alignment of the molecules determines the accuracy of the model. The molecules in the present study were aligned on the most active compound by selecting the basic benzoylsulfonohydrazide skeleton. Figure 1 represents the common structure used to sketch the molecules, whereas the alignment of all 61 optimized molecules on the basic skeleton is presented in Figure 2. 3D QSAR model construction The 3D QSAR models of the benzoylsulfonohydrazides in the present study were constructed in SYBYL X, in which the CoMFA and CoMSIA methods were adopted to determine the relationship between the bioactivity and the corresponding 3D structures of the molecules. The CoMFA model describes the steric and electrostatic fields of the molecules under study, whereas the CoMSIA model describes hydrophobic, hydrogen-bond donor (HBD), and hydrogen-bond acceptor (HBA) fields along with the steric and electrostatic fields. Statistical validation The partial least squares (PLS) method is a standard statistical regression tool used to build the 3D QSAR model. It was adopted for the present study as it can analyze the data in a realistic way and interpret the contribution of the molecular structure to the biological activity. All parameters, including cross-validation, correlation coefficient, standard error of estimate, F-value, etc., were obtained using 5 and 6 components, with column filtering of 2 and 1, for CoMFA and CoMSIA, respectively. The study was conducted by dividing the molecules in a 1:3 ratio of test to training molecules: 15 molecules were selected randomly and grouped as the test set, and the remaining 46 were grouped as the training set. With this, leave-one-out (LOO) cross-validation was used to establish the reliability of the generated models for both CoMFA and CoMSIA. All other parameters were recorded for no-validation, cross-validation, and bootstrapping. The activity was predicted for the test as well as the training set and correlated with the experimental pIC50 values. Molecular docking Docking is an important technique for determining the interaction of a ligand with a specific protein of interest. The study was done using the AutoDockTools software (Morris et al., 2009). The protein structure (6CT2), at 2.128 Å resolution, was downloaded in PDB format from the Protein Data Bank (https://www.rcsb.org); the ligand structures were constructed and energy-minimized in SYBYL X and further prepared within the docking software.
The active ligand-interacting site of the protein was noted from PDBsum (http://www.ebi.ac.uk) as SER 690 (A). The x, y, z coordinates were taken from the Swiss-PdbViewer (SPDBV) tool and entered in the grid to generate the active-site grid box. The crystal water was removed prior to the docking simulation, followed by docking of the protein with the highest- and lowest-activity ligands. Compound P5-97 was the most active, while the least active compound was P5-9. Statistical results CoMFA and CoMSIA models were produced from the 61 KAT6A inhibitors using the benzoylsulfonohydrazide skeleton for alignment. The alignment of the inhibitors can be visualized in Figure 2. The model was developed by randomly dividing the molecules in a 1:3 ratio of test to training set and performing PLS analysis to determine the predictive power of the model. The pIC50 values were predicted for both sets. Table 1 shows the predicted and residual values of these analogs, and their correlations are shown in Figures 3 and 4, respectively. Statistical analysis The statistical results obtained after running PLS regression for both models, i.e., CoMFA and CoMSIA, including LOO, no-validation, cross-validation, and bootstrapping, were recorded to evaluate the reliability of the developed models. For CoMFA, the q² obtained was 0.678 and the r² was 0.948, whereas the F-value and SEE were 144.505 and 0.226, respectively. The steric field contribution was 47.8%, with the electrostatic field accounting for the remainder (52.2%). Contour analysis Contour maps depict the characteristics of the fields around the molecules. They are used to find the basic structural requirements for bioactivity, which facilitates the development of inhibitors with high potency. The best active compound (P5-97) of the training set was used to analyze the contour maps, setting the contribution values of the favored and disfavored regions to 80% and 20%, respectively. The steric contributions of the best active compound can be visualized in Figures 5 (CoMFA) and 6 (CoMSIA), whereas the electrostatic contributions can be observed in Figures 7 and 8, respectively. The other CoMSIA fields, namely hydrophobicity, HBD, and HBA, are shown in Figures 9-11, respectively. Molecular docking analysis The docking study reveals the interaction of the selected ligand with the protein of interest, which helps us interpret the 3D QSAR model. The docking study was performed using the AutoDockTools software, and the results are outlined in Table 3. From the docked images of the most potent compound (P5-97), as viewed in Figures 12 and 13, it was observed that all three oxygen atoms were responsible for binding with the amino acids GLY657, ARG660, and ARG655 of the active region. The calculated binding energy was found to be −11.81 kcal/mol. Moreover, the docking (Figs. 14 and 15) of the least active compound, P5-9, shows that the oxygen atoms, one from the sulfono group and the other from the benzoyl group, together with the R2 methoxy group, were responsible for binding the compound to the GLY657, GLY659, and LYS763 amino acids in the active region of the protein. The binding energy corresponding to P5-9 was −9.68 kcal/mol. DISCUSSION The steric contour maps corresponding to CoMFA and CoMSIA, shown in Figures 5 and 6, respectively, suggest that attaching substituents in the green regions would enhance the activity, while the yellow contours indicate decreased activity. Therefore, attachment of various groups at the R3 position of the benzoyl group would increase the potency of compound P5-97.
In the electrostatic contour maps of CoMFA and CoMSIA, red represents regions where negative charge is favorable, whereas blue represents regions where positive charge is favorable. Increasing the positive character at the benzoyl ring and increasing the negative charge at the R2 position could enhance the activity of the compound. In the hydrophobic contour maps of Figure 9, the white and yellow regions indicate that attachment of hydrophilic and hydrophobic groups, respectively, would result in compounds with increased activity. Substitution with a hydrophobic group at the R3 position, and with hydrophilic groups at the R5 position of the benzoyl ring and on the phenyl ring at the sulfo terminal, would significantly enhance the inhibitory action. From the H-bond donor contour map in Figure 10, substitution with electron-withdrawing groups on both nitrogens of the hydrazide may favor increased activity of the molecule. Furthermore, it is clear from the docking study that the key components responsible for binding the ligand (P5-97) within the active region are the oxygen atoms of the benzoylsulfonohydrazide. Moreover, the hydrophobic contour map (Fig. 9) obtained from the 3D QSAR study indicates that the presence of hydrophilic groups on both phenyl rings of the benzoylsulfonohydrazides would give more binding interactions, such that the compound fits better in the active region of the protein. These results provide a significant basis for developing new compounds that retain the benzoylsulfonohydrazide scaffold as the key component. CONCLUSION An in silico 3D QSAR study of 61 benzoylsulfonohydrazide analogs as histone acetyltransferase KAT6A inhibitors was carried out. Partial least squares analysis was performed to evaluate the models developed for CoMFA and CoMSIA. The cross-validation (q²) and no-validation (r²) values were 0.678 and 0.948 for CoMFA, and 0.719 and 0.953 for CoMSIA, respectively. The obtained results were therefore convincing, as they were within the acceptable statistical criterion (q² > 0.5). From the cross-validation results, the CoMFA and CoMSIA models are nearly equivalent; however, CoMSIA shows slightly better predictive ability. The contour maps obtained from the CoMFA and CoMSIA studies of compound P5-97 provide significant insight for designing molecules with better inhibitory activity.
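Two numerical steps in this workflow, the IC50-to-pIC50 conversion and the leave-one-out q² check, are easy to sketch. The snippet below uses synthetic data and scikit-learn's PLS regression as a stand-in for the SYBYL implementation; it is a minimal illustration, not the authors' analysis.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

# pIC50 conversion: IC50 in molar units -> pIC50 = -log10(IC50).
ic50_molar = np.array([2.5e-7, 1.0e-6, 5.0e-8])   # hypothetical values
pic50 = -np.log10(ic50_molar)

# Synthetic stand-in for field descriptors of 46 training molecules.
rng = np.random.default_rng(42)
X = rng.normal(size=(46, 200))
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=46)

# Leave-one-out cross-validated q^2 = 1 - PRESS / total sum of squares.
press = 0.0
for train, test in LeaveOneOut().split(X):
    model = PLSRegression(n_components=5).fit(X[train], y[train])
    press += float((model.predict(X[test]).ravel()[0] - y[test][0]) ** 2)
q2 = 1.0 - press / np.sum((y - y.mean()) ** 2)
print(f"pIC50: {np.round(pic50, 2)}, q^2 = {q2:.3f}  (acceptable if > 0.5)")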
2,766.4
2020-01-01T00:00:00.000
[ "Chemistry", "Biology" ]
Chromium-catalyzed cyclopropanation of alkenes with bromoform in the presence of 2,3,5,6-tetramethyl-1,4-bis(trimethylsilyl)-1,4-dihydropyrazine Chromium-catalyzed cyclopropanation of alkenes with bromoform was achieved to produce the corresponding bromocyclopropanes. In this catalytic cyclopropanation, an organosilicon reductant, 2,3,5,6-tetramethyl-1,4-bis(trimethylsilyl)-1,4-dihydropyrazine (1a), was indispensable for reducing CrCl3(thf)3 to CrCl2(thf)3, as well as for the in situ generation of (bromomethylidene)chromium(III) species from (dibromomethyl)chromium(III) species. The (bromomethylidene)chromium(III) species are proposed to react spontaneously with alkenes to give the corresponding bromocyclopropanes. This catalytic cyclopropanation was applied to various olefinic substrates, such as allyl ethers, allyl esters, terminal alkenes, and cyclic alkenes. Introduction Cyclopropane is a strained three-membered carbocycle and a common structural motif in pharmaceutical and biologically active compounds. 1 The synthesis of cyclopropanes from easily available starting materials is in high demand, and several stoichiometric synthetic protocols for the C3 ring have been developed: (1) classical reductive cyclization of 1,3-dihalopropanes or β-haloalkenes using metal-based reductants such as lithium and magnesium, 2 (2) cyclopropanation of alkenes using haloform (CHX3) and a strong base under phase-transfer conditions to afford geminal dihalocyclopropanes, 3 and (3) cyclopropanation of alkenes using nitrogen, phosphonium, and sulfur ylides, 4 in situ-generated zinc carbenoids from Zn reagents and CH2I2 (Simmons-Smith reaction), 5 or in situ-generated chromium carbene species from excess amounts of CrCl2, diamine ligands, and RCHI2. 6 In contrast to these stoichiometric reactions, metal-catalyzed cyclopropanation of alkenes using diazomethane and its derivatives is an alternative, effective protocol, despite the use of explosive diazomethane derivatives. 7 To avoid the use of explosive compounds, metal-catalyzed cyclopropanation reactions using non-explosive reagents have recently been explored. 8 Uyeda et al. reported that some nickel and cobalt complexes serve as catalysts for Simmons-Smith-type reactions of alkenes with the less reactive CH2Cl2 and CH2Br2 in the presence of excess zinc powder (Fig. 1a). 8f-8i Takai et al. reported that chromium-catalyzed cyclopropanation of alkenes with Me3SiCHI2 proceeds in the presence of catalytic amounts of a chromium complex and excess Mn powder as the reducing reagent, from which gem-dichromiomethane complexes (Cr2-SiMe3) were isolated (Fig. 1b), 9a and, similarly, Anwander et al. isolated an iodomethyl-bridged dichromium complex by treating CrCl2 with CHI3.

Fig. 1 Metal-assisted cyclopropanation of alkenes with di- and trihalomethanes: (a) cyclopropanation with excess zinc powder, (b) cyclopropanation with excess or catalytic amounts of chromium, and (c) bromocyclopropanation with catalytic amounts of chromium and the organosilicon reductant 1a (This Work).

Results and discussion We then screened conditions, tuning reductants, additives, and supporting ligands, to optimize the chromium-catalyzed cyclopropanation of allyl benzyl ether (2a) with bromoform as a model reaction; the results are summarized in Table 1. When we used a 1:1 mixture of CrCl3(thf)3 (5 mol%) and TMEDA (5 mol%) in the presence of 1a (2 equiv.)
in 1,2-dimethoxyethane (DME) at 50 °C for 24 h, bromocyclopropane 3a was obtained in 98% yield with high trans selectivity (89%) (entry 1). Cyclopropanation at 25 °C resulted in a slightly lower yield (81%) of 3a with almost the same trans selectivity (entries 1 vs. 2). No cyclopropanation product was obtained when organosilicon compounds 1b-d were used as the reducing reagents (entries 3-5), although 1b-d did reduce CrCl3(thf)3 to CrCl2; this is probably due to coordination of the reduction byproducts, 2,5-dimethylpyrazine (from 1b), pyrazine (from 1c), and 4,4′-bipyridyl (from 1d), to the chromium center, as confirmed by the inhibition of the catalytic reaction when pyrazine was added under the standard conditions. Screening of several multidentate nitrogen-based ligands revealed that TMEDA was the best ligand for this catalytic reaction (entry 1 vs. entries 12-17; amines, phosphines, and other ligands in the ESI†). Notably, no reaction was observed when typical organic and inorganic reductants, such as tetrakis(dimethylamino)ethylene (TDAE), Zn, and Mn powder, were used (entries 6-8). Coordination of TMEDA to the chromium center was essential for catalytic activity: the addition of ZnCl2 (2 equiv.) or MnCl2 (2 equiv.) under the standard reaction conditions resulted in no reaction (entry 9) or a lowered yield of 3a (entry 10), respectively, due to the removal of TMEDA from the chromium center, 9a while under ligand-free conditions the yield of 3a decreased significantly (entry 11). When isolated CrCl3(tmeda) (5 mol%) was used as the catalyst, the yield of 3a was comparable to that of the in situ CrCl3(thf)3/TMEDA system (entry 18). With the optimized reaction conditions in hand, we examined the substrate scope of the alkenes (Table 2). Allyl phenyl ether (2b) was converted to the corresponding bromocyclopropane 3b in 92% yield with high trans selectivity. Other allyl aryl ethers 2c-g, bearing electron-withdrawing and electron-donating substituents on the phenyl ring, were transformed to the corresponding cyclopropanes 3c-g in moderate to high yields, with a cyano group or halogen atoms at the para position of the aryl ring remaining intact during the cyclopropanation. Reaction of CHBr3 with allyl butyl ether (2h) afforded 3h in 81% yield with a trans:cis ratio of 87:13. Carbonyl groups also tolerated the reductive conditions: the benzoyl-substituted alkene 2i was converted to 3i in 75% yield, while allyl carbonate 2j, which is typically used for allylic substitution with nucleophiles, afforded 3j in 60% yield without any decomposition of 2j. Allylamine 2k was also applicable, and the corresponding cyclopropylmethylamine 3k was obtained in 64% yield. Simple α-olefins, such as allylbenzene (2l), 5-hexenyl acetate (2m), 1-octene (2n), and vinylcyclohexane (2o), gave the corresponding cyclopropanes 3l-o in good yield. For substrates possessing two olefinic moieties, the terminal, monosubstituted olefinic part was selectively cyclopropanated to give 3p and 3q in moderate yield.
Internal alkenes with cis-configuration were also applicable to our catalytic system: cis-1,4-diacetoxy-2-butene (2r) showed moderate reactivity, giving the corresponding cyclopropane 3r in 47% yield, while cyclic alkenes such as cycloheptene (2s), cyclooctene (2t), and acenaphthylene (2u) afforded the polycyclic compounds 3s, 3t, and 3u in moderate to high yields, although debromination of the initially formed bromocyclopropane may be involved in the formation of 3u. Other olefins, such as styrene, 1,1-disubstituted alkenes, acyclic internal alkenes, and dienes, were not applicable in this cyclopropanation reaction (see the ESI† for the limitations of this cyclopropanation). To elucidate the reaction mechanism, we carried out a kinetic study of the formation of 3a and analyzed the resulting data by variable time normalization analysis (see ESI†). 15 The overall reaction rate did not change with the concentration of the chromium catalyst (0.004-0.01 M) or of alkene 2a (0.08-0.12 M), giving a rate dependence of [Cr]⁰[2a]⁰. This is in sharp contrast to the report of Takai et al., who found that chromium-catalyzed cyclopropanation with Me3SiCHI2 obeys first-order dependence on the concentrations of both the chromium carbene complex and 2a, i.e., a rate dependence of [Cr]¹[2a]¹. 9a Such a difference was also observed in the reaction profile: no induction period was observed under the various reaction conditions. 16 Next, to understand how 1a functions to generate a catalytically active species, we performed control experiments. Direct activation of CHBr3 by 1a was excluded because no significant rate acceleration was observed when a mixture of CHBr3 and 1a was pre-treated by stirring at 50 °C for 1 hour before adding the chromium catalyst (see ESI†). Although we repeatedly tried to isolate the dichromium species having a bridging bromomethyl group, the target complex could not be isolated and characterized, probably due to the instability of the bromomethyl-bridged dichromium species (see ESI†). In previous reports, however, gem-dichromiomethane complexes (Cr2-X) were isolated as key intermediates prior to the generation of reactive mononuclear carbene species via disproportionation (Fig. 2). Takai et al. reported the first example of an isolated gem-dichromiomethane complex (Cr2-SiMe3), obtained by introducing a bulky trimethylsilyl substituent on the carbon atom of diiodomethane, from which silylcyclopropanes were obtained upon treatment with alkenes. The related germanium derivative, Cr2-GeMe3, was also isolated and used for cyclopropanation. Anwander et al. independently observed the formation of a gem-dichromiomethane complex (Cr2-I) from the reaction of CrCl2 and CHI3 at low temperature. We next conducted a stoichiometric cyclopropanation of alkene 2a with bromoform in the presence of excess CrCl2 (Scheme 2). The desired cyclopropane 3a was not obtained even at 80 °C (Scheme 2a), although formation of the corresponding cyclopropanes was observed when iodoform and diiodomethane derivatives were used. Moreover, under the catalytic conditions using 1a, the yield of 3a gradually decreased as the catalyst loading was increased from 5 to 100 mol% (Scheme 2b). The lower product yield at higher chromium loading suggests that gem-dichromiomethane species are less likely to be involved in our metal-salt-free system with 1a than in the other chromium-catalyzed cyclopropanations developed by Takai et al.
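Variable time normalization analysis, as used above, replaces the time axis by the integral ∫[cat]ⁿ dt and looks for the order n at which profiles from runs at different catalyst loadings overlay (following Burés' formulation). A minimal sketch with synthetic data is given below; the rate constant, loadings, and first-order synthetic profiles are illustrative assumptions, not the paper's data.

import numpy as np

# Minimal VTNA sketch: for candidate catalyst orders n, replace the time
# axis by the cumulative integral of [cat]^n and check whether product
# profiles from runs at different loadings collapse onto one curve.

def normalized_axis(t, cat, n):
    # cumulative trapezoidal integral of [cat]^n over time
    f = cat ** n
    dt = np.diff(t)
    return np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dt)))

t = np.linspace(0.0, 10.0, 200)
runs = []
for cat0 in (0.004, 0.010):                 # two hypothetical loadings, M
    k = 5.0
    product = 1.0 - np.exp(-k * cat0 * t)   # synthetic first-order-in-cat data
    runs.append((t, np.full_like(t, cat0), product))

for n in (0.0, 1.0):
    # crude overlap measure: compare product values at matched normalized times
    (t1, c1, p1), (t2, c2, p2) = runs
    x1, x2 = normalized_axis(t1, c1, n), normalized_axis(t2, c2, n)
    common = np.linspace(0, min(x1[-1], x2[-1]), 100)
    mismatch = np.max(np.abs(np.interp(common, x1, p1) - np.interp(common, x2, p2)))
    print(f"n = {n}: max profile mismatch = {mismatch:.3f}")

# The order giving near-zero mismatch (n = 1 for this synthetic data) is the
# kinetic order in catalyst; a zero-order dependence, as found in the text,
# would make the profiles overlay at n = 0 instead.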
On the basis of these findings, we propose the reaction mechanism shown in Scheme 3. The initial step is the activation of bromoform by the chromium(II) species A to form the (dibromomethyl)chromium(III) species B, accompanied by the formation of an equimolar amount of chromium(III) trihalide C, which can be reduced by 1a or by the in situ-generated chromium(I) halide F (vide infra). Species B is dehalogenated by 1a to afford the (bromomethylidene)chromium(III) species D along with the elimination of Me4-pyrazine and 2 equiv. of Me3SiX (X = Cl, Br); this reactivity is assumed by analogy with the reductive dehalogenation of vicinal dihaloalkanes by the organosilicon-based reductant 1d, which leads to the formation of alkenes. 12a In addition, the generation of metal carbene species by dehalogenation of metal carbenoids with zinc powder was proposed for the nickel- or cobalt-catalyzed cyclopropanation of alkenes with dibromomethane or dichloromethane (Scheme 3). 8b,8h Finally, the reaction of D with alkenes gives the 4-membered metallacycle E, whose reductive elimination affords the desired bromocyclopropane together with a low-valent, nascent chromium(I) species F. The resulting F reacts with the chromium(III) trihalide C to regenerate the chromium(II) species A through comproportionation. Accordingly, 1a has a dual function: it reduces not only the catalyst precursor, CrCl3(tmeda), at the initial step, but also, mainly, the chromium(III) species B, generating the mononuclear chromium carbene species D as the key intermediate. Conclusions In summary, we developed a chromium-catalyzed bromocyclopropanation of alkenes with bromoform using the organosilicon-based reductant 1a. The desired bromocyclopropanes were obtained in moderate to high yields with good trans selectivity, and the reaction was applicable to allyl ether derivatives, allyl carbonate, allylamine, and simple α-olefins. Control experiments suggested that 1a plays an important role in reducing the (dibromomethyl)chromium(III) species to generate the mononuclear (bromomethylidene)chromium(III) species as a key intermediate. Further exploration of unique metal-salt-free reductive transformations of organic compounds is ongoing in our laboratory. Conflicts of interest The author declares no conflict of interest.
2,546.6
2020-03-11T00:00:00.000
[ "Chemistry" ]
Coherent energy exchange between vector soliton components in fiber lasers We report experimental evidence of four-wave mixing (FWM) between the two polarization components of a vector soliton formed in a passively mode-locked fiber laser. Extra spectral sidebands with out-of-phase intensity variation between the polarization-resolved soliton spectra were observed for the first time and identified as being caused by energy exchange between the two soliton polarization components. Other features of the FWM spectral sidebands and of the soliton-internal FWM were also experimentally investigated and numerically confirmed. Passive mode-locking of erbium-doped fiber lasers with a semiconductor saturable absorber mirror (SESAM) has been extensively investigated [1,2]. In contrast to nonlinear polarization rotation (NPR) mode-locking, mode-locking incorporating a SESAM does not require any polarization element inside the laser cavity; therefore, under suitable cavity birefringence conditions, vector solitons can be formed in such lasers [3]. Recently, it was reported that even polarization-locked vector solitons (PLVSs) can be formed in these mode-locked fiber lasers [4,5]. Formation of a PLVS requires not only that the group velocities of the two orthogonal polarization components of a vector soliton be locked but also that their phase velocities be locked. It is well known that, through self-phase modulation (SPM) and cross-phase modulation (XPM), the nonlinear interaction between the two polarization modes of a fiber can result in group-velocity-locked vector solitons [6]. Although it was also pointed out that four-wave-mixing coupling (also called coherent energy exchange) between the polarization components of a vector soliton could contribute to the formation of phase-locked vector solitons [4,5], so far no experimental evidence of soliton-internal FWM has been presented. In this Letter we report the experimental observation of FWM between the two orthogonal polarization components of a vector soliton formed in a fiber laser passively mode-locked with a SESAM. Energy exchange between the two orthogonal polarization components of the vector solitons was observed at specific frequencies on the soliton spectrum. However, our experimental results showed that the existence of FWM does not guarantee the formation of a PLVS. The fiber laser is illustrated in Fig. 1. It has a ring cavity consisting of a piece of 4.6 m erbium-doped fiber (EDF) with a group velocity dispersion parameter of 10 ps/km/nm and a total of 5.4 m of standard single-mode fiber (SMF) with a group velocity dispersion parameter of 18 ps/km/nm. The cavity length is thus 4.6 m (EDF) + 5.4 m (SMF) = 10 m. Note that within one cavity round trip the signal propagates twice in the SMF between the circulator and the SESAM. A circulator is used to force unidirectional operation of the ring and simultaneously to incorporate the SESAM in the cavity. An intra-cavity polarization controller is used to change the cavity's linear birefringence. To verify our experimental observations and determine the mechanism of the extra sideband formation, we also numerically simulated the FWM in the laser. We used the following coupled Ginzburg-Landau equations to describe pulse propagation in the weakly birefringent fibers of the cavity:

∂u/∂z = iβu − δ ∂u/∂t − (ik″/2) ∂²u/∂t² + (k‴/6) ∂³u/∂t³ + iγ(|u|² + (2/3)|v|²)u + (iγ/3)v²u* + (g/2)u + (g/(2Ω_g²)) ∂²u/∂t²,

∂v/∂z = −iβv + δ ∂v/∂t − (ik″/2) ∂²v/∂t² + (k‴/6) ∂³v/∂t³ + iγ(|v|² + (2/3)|u|²)v + (iγ/3)u²v* + (g/2)v + (g/(2Ω_g²)) ∂²v/∂t²,

where u and v are the normalized envelopes of the optical pulses along the two orthogonal polarization modes of the optical fiber. 2β = 2πΔn/λ is the wave-number difference between the two modes.
2δ = 2βλ/(2πc) is the inverse group velocity difference, k″ is the second-order dispersion coefficient, k‴ is the third-order dispersion coefficient, and γ represents the nonlinearity of the fiber. g is the saturable gain coefficient of the fiber and Ω_g is the bandwidth of the laser gain. For undoped fibers g = 0; for the erbium-doped fiber, we considered its gain saturation as

g = G exp(−∫(|u|² + |v|²) dt / P_sat),

where G is the small-signal gain coefficient and P_sat is the normalized saturation energy. The saturable absorption of the SESAM is described by the rate equation [8]:

∂l/∂t = −(l − l₀)/T_rec − l(|u|² + |v|²)/E_sat,

where T_rec is the absorption recovery time, l₀ is the initial absorption of the absorber, and E_sat is the absorber saturation energy. To make the simulation as close as possible to the experimental situation, we used the following parameters: γ = 3 W⁻¹km⁻¹, Ω_g = 24 nm, P_sat = 100 pJ, k″_SMF = −23 ps²/km, k″_EDF = −13 ps²/km, k‴ = −0.13 ps³/km, E_sat = 1 pJ, l₀ = 0.15, T_rec = 6 ps, and cavity length L = 10 m. The numerical simulations reproduced the extra spectral sidebands well and confirmed that their appearance is indeed caused by the FWM between the orthogonal soliton components. The result can also be easily understood: because the linear cavity birefringence is small, coherent coupling between the two polarization components of a vector soliton can no longer be neglected. This coupling causes coherent energy exchange between the two orthogonal soliton polarization components. Nevertheless, as long as the linear cavity birefringence is not zero, energy exchange does not occur over the whole soliton spectrum, but only at certain wavelengths where the phase-matching condition is fulfilled, which then leads to the formation of the discrete extra spectral sidebands. In conclusion, we have experimentally observed extra spectral sideband generation on the soliton spectra of phase-locked vector solitons in a passively mode-locked fiber ring laser. A polarization-resolved study of the soliton spectra reveals that the sidebands are caused by coherent energy exchange between the two orthogonal polarization components of the vector solitons. Numerical simulations have confirmed our experimental observations. In particular, the simulations show that FWM always exists under weak cavity birefringence; as long as the net cavity birefringence is not zero, the phase-matching condition can only be fulfilled at certain wavelengths. Our studies suggest that the appearance of the sidebands is not a characteristic of the vector soliton polarization evolution, but of the FWM between the components of a vector soliton.
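As a numerical illustration of the propagation model above (not the authors' code), a bare-bones split-step Fourier scheme for the coupled equations can look as follows. Gain, the SESAM, and third-order dispersion are omitted so the sketch stays short, and the parameter values and sign conventions are illustrative, only loosely based on those quoted above.

import numpy as np

# Bare-bones split-step Fourier propagation of the coupled equations above,
# keeping only birefringence, GVD, SPM, XPM, and the coherent (FWM) term
# i*gamma/3 * v^2 * conj(u). Gain, SESAM, and TOD are omitted for brevity.

N, T = 1024, 50.0                       # grid points, time window (ps)
t = np.linspace(-T / 2, T / 2, N, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(N, d=T / N)

beta, delta = 0.1, 0.05                 # placeholder birefringence parameters
k2, gamma = -0.023, 3e-3                # GVD (ps^2/m), nonlinearity (1/W/m)
dz, steps = 0.1, 1000                   # step (m), number of steps

u = 0.8 / np.cosh(t)                    # initial orthogonal components
v = 0.6 / np.cosh(t)

for _ in range(steps):
    # linear step in Fourier space (birefringence + group delay + GVD)
    lu = np.exp((1j * beta - 1j * delta * w + 0.5j * k2 * w**2) * dz)
    lv = np.exp((-1j * beta + 1j * delta * w + 0.5j * k2 * w**2) * dz)
    u = np.fft.ifft(lu * np.fft.fft(u))
    v = np.fft.ifft(lv * np.fft.fft(v))
    # nonlinear step: SPM + XPM + coherent energy-exchange (FWM) term
    nu = 1j * gamma * ((np.abs(u)**2 + (2/3) * np.abs(v)**2) * u
                       + (1/3) * v**2 * np.conj(u))
    nv = 1j * gamma * ((np.abs(v)**2 + (2/3) * np.abs(u)**2) * v
                       + (1/3) * u**2 * np.conj(v))
    u, v = u + dz * nu, v + dz * nv

# polarization-resolved spectra; sidebands from the FWM term appear here
spec_u = np.abs(np.fft.fftshift(np.fft.fft(u)))**2
spec_v = np.abs(np.fft.fftshift(np.fft.fft(v)))**2
print(spec_u.max(), spec_v.max())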
1,318.8
2009-03-11T00:00:00.000
[ "Physics" ]
Evidence That a Laminin-Like Insect Protein Mediates Early Events in the Interaction of a Phytoparasite with Its Vector's Salivary Gland Phytomonas species are plant parasites of the family Trypanosomatidae that are transmitted by phytophagous insects. Some Phytomonas species cause major agricultural damage. The hemipteran Oncopeltus fasciatus is a natural and experimental host for several species of trypanosomatids, including Phytomonas spp. The invasion of the insect vector's salivary glands is one of the most important events in the life cycle of Phytomonas species. In the present study, we show the binding of Phytomonas serpens to the external face of O. fasciatus salivary glands by means of scanning electron microscopy, and the in vitro interaction of living parasites with total proteins from the salivary glands in ligand blotting assays. This binding occurs primarily through an interaction with a 130 kDa salivary gland protein. Mass spectrometry of the tryptic digest of this protein matched 16 peptides covering 23% of the human laminin-5 β3 chain precursor sequence. A protein sequence search through the transcriptome of the O. fasciatus embryo showed a partial sequence with 51% similarity to the human laminin β3 subunit. Anti-human laminin-5 β3 chain polyclonal antibodies recognized the 130 kDa protein by immunoblotting. The association of parasites with the salivary glands was strongly inhibited by human laminin-5, by the purified 130 kDa insect protein, and by polyclonal antibodies raised against the human laminin-5 β3 chain. This is the first report demonstrating that a laminin-like molecule from the salivary gland of O. fasciatus acts as a receptor for Phytomonas binding. The results presented in this investigation are important findings that will support further studies aiming at new approaches to prevent the transmission of Phytomonas species from insects to plants and vice versa. Introduction Trypanosomatids of the genus Phytomonas are parasites of insects and plants. Species of the genus Phytomonas are found in a wide range of geographical areas, including Northern and Central Africa, China, India, several European countries, and the American continent [1-4]. The parasitism may occur without any apparent pathogenicity in the plants, but it may also cause devastating diseases in plantations of economic significance. These parasites live in the phloem or lactiferous ducts of the infected plants and have also been detected in fruits, such as pomegranates, peaches, guavas, and tomatoes [4,5]. Phytomonas serpens is a parasite of the tomato that uses Phthia picta (Hemiptera: Coreidae) and Nezara viridula (Hemiptera: Pentatomidae) as natural hosts [6]. The phytophagous insect Oncopeltus fasciatus is a natural host of Phytomonas elmasiani [7] but is also able to host other species of trypanosomatids, as determined by experimental infection [8]. In the biological cycle of Phytomonas species, the parasites are ingested when a phytophagous insect feeds on an infected plant; the flagellates then pass through the intestinal epithelium and reach the hemolymph. After traveling throughout the hemocoel, the protozoa reach the external face of the salivary glands. Once the parasites successfully bind to the external face of the gland, they pass through the gland epithelium and infect the salivary gland lumen. When the infected insect feeds on another plant, the flagellates are then transmitted via the saliva.
Therefore, the interaction between plant trypanosomatids and their vectors' salivary glands is vital for parasite transmission [5,6,9]. The pair of trilobed salivary glands of O. fasciatus is composed of a layer of simple cuboidal epithelium mounted on a basal lamina [10]. The chemical composition of the O. fasciatus salivary gland basal lamina remains unknown. In other insects, the composition of the basal lamina of distinct tissues is heterogeneous, but the protein laminin is regularly present [11-16]. Laminins belong to a family of glycoproteins that are assembled as heterotrimers of α, β, and γ chains [17,18]. The presence of laminin as a receptor for parasites has been reported in mammalian systems, including for the trypanosomatids Trypanosoma cruzi [19,20] and Leishmania donovani [21]. Laminins have also been reported as receptors for parasites in invertebrate systems, playing an essential role in the interaction of malaria parasites with their insect vectors [16,22-26]. Considering that plant infections caused by Phytomonas species can be devastating for agriculture, blocking the entry of the parasites into the insect vectors' salivary glands could be viewed as a strategy for preventing the diseases they transmit. In the present study, we investigated the ex vivo interaction of P. serpens with O. fasciatus salivary glands by scanning electron microscopy, and the in vitro interaction of living parasites with total proteins from the salivary glands using ligand blotting assays. We show here that the parasites bound to a 130 kDa salivary gland protein (p130), which was identified as a laminin-5 β3 chain-like protein by mass spectrometry. These results suggest that the binding of the plant trypanosomatid P. serpens to the salivary glands of its insect vector, a crucial step in the parasite's life cycle, first occurs through an interaction with a laminin β chain-like protein. Results Ex vivo interaction of Phytomonas serpens with Oncopeltus fasciatus salivary glands P. serpens parasites harvested in the stationary phase of growth were incubated in the presence of explanted salivary glands from O. fasciatus. Scanning electron microscopy (SEM) of the external face of the salivary glands showed a close association between the parasites and the basal lamina, where adhesion of P. serpens occurred either through the flagellum or through the cell body (Fig. 1A). In contrast, invasion of the basal lamina occurred only through the protozoan body (Fig. 1B), since after penetration of the parasites some flagella were still observed at the outer surface of the salivary glands (Fig. 1C). Parasites beneath the basal lamina are shown in Figure 1D. Disruption of the basal lamina during the parasite-salivary gland interaction was also observed (Figures 1B and 1C). In vivo experimental infection of Oncopeltus fasciatus salivary glands with Phytomonas serpens Parasites were injected into the thorax of the insects. The salivary glands of the infected insects were obtained by gently pulling off their heads [27]. Parasite adhesion to the outer surface of the salivary glands was observed by SEM. Parasite binding to the salivary glands was detected as early as six hours post-injection (data not shown). The number of adhering parasites increased 48 h post-infection (Fig. 2A), reaching a high density of parasites attached to the glands 72 h post-infection (Figures 2B and 2C).
Ligand blotting In order to identify salivary gland proteins possibly involved in parasite binding, a total extract of gland proteins was transferred to polyvinylidene difluoride (PVDF) membranes (Fig. 3A). Incubation of the PVDF membranes with live parasites whose surface proteins had been tagged with biotin showed that a 130 kDa salivary gland protein (p130) was recognized by the parasites (Fig. 3B). 2D gel analysis of the total salivary gland proteins showed two spots in the 130 kDa region (Fig. 4A, arrow). The polypeptides separated by 2D PAGE were then transferred to PVDF membranes. After incubation of the membranes with biotinylated live parasites, it was observed that the parasites bound only to p130 (Fig. 4B). 2D PAGE analysis and mass spectrometry One of the protein spots shown in Fig. 4A that reacted with the biotinylated live parasites (Fig. 4B) was analyzed by mass spectrometry. Analysis of the tryptic digest of this protein spot (Fig. 5A) matched 16 peptides corresponding to 23% of the human laminin-5 β3 chain precursor sequence (Fig. 5B, underlined sequences). Search for laminin subunits in the O. fasciatus embryo transcriptome A BLAST analysis against the transcriptome of the O. fasciatus embryo [28] found a β1-like protein showing 51% similarity to the human laminin β3 subunit and 65% similarity to the Meleagris gallopavo laminin β1 subunit (Fig. 6). In addition, two conserved regions are evident: one with 70% similarity to domains III and V of the Acyrthosiphon pisum laminin γ1 subunit (Fig. 7A) and another with 77% similarity to a region of domain VI (N-terminal) of the A. pisum laminin γ1 subunit (Fig. 7B). Immunoblotting assay Total protein extract and purified p130 protein were separated by SDS-PAGE and stained with silver nitrate (Fig. 8, lanes a and b, respectively). The arrow indicates the p130 protein among the other proteins of the total extract (Fig. 8a). Polyclonal antibodies raised against the human laminin-5 β3 chain recognized the purified p130 from O. fasciatus salivary glands by immunoblotting (Fig. 8, lane c). Inhibition of the P. serpens interaction with O. fasciatus salivary glands Control parasites and parasites pre-treated with 2 or 20 µg/ml human laminin-5 or purified p130 obtained from salivary glands were allowed to interact with O. fasciatus salivary glands. At the lowest concentration tested, human laminin-5 and p130 inhibited parasite binding by 27 and 26%, respectively. At the highest concentration tested, parasite binding was inhibited by 48% and 55%, respectively (Fig. 9A and B).

Figure 6. Alignment with the O. fasciatus embryo transcriptome [28], showing a highly conserved region at domain VI among these molecules. Black-shaded residues are identical or similar amino acids present in all three sequences. Grey-shaded residues are identical or similar amino acids present in two of the three sequences. The consensus sequence is represented under the alignment lines. The red rectangles highlight the regions with higher similarity among all three sequences. The alignments were performed using the GENEDOC software [90]. doi:10.1371/journal.pone.0048170.g006

Similarly, when the salivary glands were pre-treated with anti-human laminin-5 β3 chain antibodies at 1:500 and 1:100 dilutions before incubation with the parasites, parasite binding was inhibited by 66 and 86%, respectively (Fig. 9C). Discussion Protozoan parasites transmitted by insects cause many diseases in animals (including humans) and plants.
Altogether, about 500 million people are infected with Plasmodium species (malaria), Trypanosoma brucei complex species (African sleeping sickness), Trypanosoma cruzi (Chagas disease), and Leishmania species (leishmaniasis) [4,29]. Phytomonas species are important plant parasites that cause major economic losses, especially in Latin America [1]. Arthropod saliva plays an important role in predatory, hematophagous, and phytophagous insects. The saliva is injected into the animal or plant and contains compounds that can paralyze the prey or digest its tissues; these compounds may also prevent inflammation and hemostasis in the vertebrate host or interfere with plant defense mechanisms [30]. The molecular composition of the saliva of a variety of arthropods has been analyzed in detail [31]. In contrast, the surface molecules of the salivary glands that are required for the entry of some important pathogenic parasites, and the cell-biological events occurring during invasion of the glands, are mostly unknown [32]. For example, it is established that the development of the malaria parasite in the mosquito is completed when sporozoites cross the salivary gland epithelium [33].

Figure 9. (C) In a parallel system, the salivary glands were pre-incubated in the presence of anti-human laminin-5 β3 chain antibodies (anti-β3 antibodies). In the control systems, the parasites and salivary glands were pre-incubated in the absence of the proteins and antibodies, respectively. The proteins and antibodies were used at the indicated concentrations or dilutions. Each bar represents the mean ± standard error of at least three independent experiments. The P values are indicated on the panels. doi:10.1371/journal.pone.0048170.g009

On the other hand, the mechanism by which the parasite crosses the gland epithelium remains largely undefined; however, it is probably receptor-mediated [34]. Recently, it has been demonstrated that a crucial event for the attachment of T. rangeli to the salivary glands of R. prolixus is the dephosphorylation of structural phosphotyrosine (P-Tyr) residues at the surface of the glands, mediated by a T. rangeli P-Tyr ecto-phosphatase [35]. Likewise, Phytomonas species need to bind to the external surface of the insect vector's salivary glands in order to invade the organ; subsequently, the parasites are transmitted via the saliva when the infected insect feeds on another plant [5,36]. In this study, we examined the interaction between P. serpens and the external face of O. fasciatus salivary glands, both in vitro and in vivo. Using scanning electron microscopy, we observed that binding of P. serpens to the salivary gland basal lamina occurred through both the flagellum and the cell body. In trypanosomatid-insect interactions, adhesion to host tissue seems to occur mainly through the flagella [37-40], and binding through the cell body is rarely observed [41,42]. In contrast to a previous study that showed Trypanosoma rangeli invading Rhodnius prolixus salivary glands flagellum-foremost [27], the passage of P. serpens through the basal lamina of the O. fasciatus salivary gland occurred through the cell body. In addition, the invasion of several parasites in an area with altered morphology, together with suggestive lesions, was observed. Punctual damage to the basal lamina was also shown during the penetration of T. rangeli into R. prolixus and R. domesticus salivary glands [43,44].
Given that proteases that are secreted or present on the surface of many protozoan parasites are involved in tissue invasion [45][46][47][48][49][50], we suggest that the altered morphology in the basal lamina of the O. fasciatus salivary glands was promoted by surface and/or secreted protease activities of P. serpens, which would aid in gland invasion by the parasites. In fact, our group has consistent evidence of the participation of P. serpens proteases in the interaction between this parasite and the salivary glands of O. fasciatus. The pre-treatment of P. serpens with antibodies raised against the metalloprotease gp63 significantly inhibited the interaction of these parasites with O. fasciatus salivary glands [51,52]. The involvement of a cruzipain-like protease of P. serpens in the interaction with O. fasciatus salivary glands was also investigated. When the parasites were pre-treated with either protease inhibitors or anti-cruzipain antibodies, a drastic inhibition of binding was observed [53]. Furthermore, the cysteine protease produced by P. serpens cleaved at least one polypeptide located at the surface of O. fasciatus salivary glands [54]. Intriguingly, when P. serpens interacted with O. fasciatus in vivo, the parasites were preferentially attached to the regions between the salivary gland lobes, not to the exposed surface of the glands. Considering that P. serpens flagellates travel through the hemolymph to reach the insect salivary gland, it is possible that hemolymph molecules may trigger changes on the cell surface of P. serpens, enabling the parasites to bind specifically to the regions between lobes. Indeed, in the interaction between T. rangeli and R. prolixus, a hemolymph factor and/or the distribution of carbohydrate moieties on the salivary glands of R. prolixus are considered crucial for the insect vector-parasite interactions [55]. O. fasciatus is an emerging model organism, which lacks a sequenced genome [28]. In order to determine putative targets for P. serpens binding to the external face of O. fasciatus salivary glands, we used a ligand blotting assay developed by our group [56]. These experiments showed that only O. fasciatus salivary gland p130 was recognized by the biotinylated live parasites. Proteomic analysis showed a sequence similarity between p130 and the human laminin-5 β3 chain, which was corroborated by the recognition of the purified p130 by antibodies generated against human laminin-5 β3 chain. These similarities and the role of p130 in parasite binding were confirmed by binding inhibition experiments. The inhibition of parasite-gland interaction was dose-dependent when the parasites were pre-treated with human laminin-5 or p130. In addition, the same profile of inhibition was observed when the glands were pre-treated with antibodies raised against human laminin-5 β3 chain. The latter set of results together with the scanning electron microscopy observations allowed us to assume that p130 is located at the basal lamina of O. fasciatus salivary glands. The basal lamina is basically composed of proteins, including laminins [14,57,58]. The β1 and β2 chains of the Drosophila laminin have been sequenced and these polypeptides are very similar to their vertebrate counterparts. The Drosophila β2 chain is 40 and 41% identical to the human and mouse β2 chains, respectively, and 29, 30, and 29% identical to the Drosophila, human, and mouse β1 chains, respectively [12,59]. Intriguingly, the O.
fasciatus salivary glands transcriptome did not reveal any laminin or laminin-related sequences [60]. At least two explanations can be proposed for these observations. One possibility is that laminin can be synthesized in a tissue or organ and transported to other sites in the body [61]. Another possibility is that, because the transcriptome represents the set of mRNAs transcribed at the time of its extraction from a given tissue, the laminin-5 β3 chain may not have been transcribed at that time [60]. In contrast, the ovarian and early embryonic transcriptome of O. fasciatus has been published and putative laminin genes were found in the sequence data submitted to GenBank [28]. The laminin β1 subunit has previously been described in Drosophila melanogaster [59], and both α and γ laminin subunits have been identified in the hemipteran A. pisum [62]. We applied a protein BLAST analysis against the recently sequenced R. prolixus genome, using the H. sapiens β3 subunit and A. pisum γ1 subunit sequences as queries, and found predicted protein coding sequences with high similarity to both queries (data not shown). The predicted sequences contain characteristic laminin domains, namely domain VI and epidermal growth factor-like (EGF-like) domains of the laminin β3 subunit, as well as the laminin B domain and EGF-like domains of laminin γ1, which suggests that such proteins may be present in the basal lamina of the hemipteran R. prolixus. The pairwise alignment of the A. pisum and H. sapiens laminin γ1 subunits with protein sequences of the O. fasciatus transcriptome also suggests that O. fasciatus presents a laminin γ-like subunit. The protein sequences of the O. fasciatus putative laminin subunits presented conserved domains, such as the EGF-like domains in the γ-like subunit and domain VI of the β1 subunit. Domain VI is directly involved in laminin-collagen interactions [63], and the EGF domain has been characterized by its cysteine residue pattern, which is essential for the double-stranded beta-sheet conformation [63]. This domain seems to have an essential role in cell attachment and receptor binding [64,65], corroborating our hypothesis that p130 is the ligand for a parasite receptor. Completion of the Phytomonas spp. life cycle in the insect involves passing through the midgut wall to reach the insect's hemolymph, which predominantly acts as a transport fluid to the salivary glands [5][6][7][36]. The hemolymph bathes all other insect organs besides the salivary glands, so the presence of specific surface receptors at the external face of the salivary glands could be considered a target for Phytomonas species, which ultimately attach to and invade that organ [6,7]. Correspondingly, malaria parasites locate mosquito salivary glands by chemotaxis, suggesting the possibility that chemical component(s) can be identified and synthesized to block or suppress mosquito salivary gland invasion as a malaria transmission blocking strategy [66]. It is noteworthy that laminin-binding proteins have been found on the surface of a variety of pathogens, including the protozoan parasites L. donovani [21,67,68], T. cruzi [19,69], Plasmodium [16,24,70], and Trichomonas [71,72]; fungi, such as Candida albicans [73][74][75], Histoplasma capsulatum [76], and Paracoccidioides [77]; as well as bacteria, like Staphylococcus [78], Streptococcus [78,79], and Mycobacterium leprae [80,81].
The wide array of pathogens that use laminin as their receptor suggests that this strategy has a yet unidentified role that seems to be evolutionarily conserved [82]. The present study is the first demonstration that a laminin-like molecule from the salivary gland of O. fasciatus acts as a receptor for Phytomonas binding. The results presented in this investigation are important findings that will support further studies aimed at developing new approaches to prevent the transmission of Phytomonas species from insects to plants and vice versa. Insect colony A milkweed bug (Oncopeltus fasciatus) culture kit was purchased from Carolina Biological Supply Company, Burlington, North Carolina, USA. These insects were used to establish the colony that we maintain in our laboratory in plastic pitchers under a 12 h light/dark cycle at 28°C with 70-80% relative humidity. The insects were fed with commercially available peeled sunflower seeds and fresh water ad libitum. Only adults were used in all the experiments [83]. No field studies were performed in the present work. No specific permits were required for the described studies. Parasites Phytomonas serpens parasites (isolate 9T, CT-IOC-189), isolated from tomato (Lycopersicon esculentum), were provided by Dr. Maria A. de Sousa, Trypanosomatid Collection, Instituto Oswaldo Cruz, Rio de Janeiro, Brazil. The parasites were grown in Warren medium (37 g/l brain-heart infusion, 1 mg/l folic acid, 10 mg/l hemin) supplemented with 10% fetal calf serum at 26°C. Parasites were harvested at early stationary growth phase by centrifugation (10 min at 2,000 × g) and washed three times in TBS. Cellular viability was assessed by motility before and after all procedures. The viability of the parasites was never affected by the conditions used in this study. Ex vivo interaction of Phytomonas serpens with Oncopeltus fasciatus salivary glands Salivary glands were carefully dissected and explanted from adult insects seven days after moulting. The glands were placed in a Petri dish containing TBS (150 mM NaCl, 10 mM Tris, pH 7.2) at 4°C. Ten pairs of explanted salivary glands were incubated in a suspension containing 10^7 parasites in 100 μl TBS, supplemented with 1% bovine serum albumin. After an incubation period of 60 min at 26°C, unbound parasites were removed by three consecutive washes with TBS. In vivo experimental infection of Oncopeltus fasciatus with Phytomonas serpens Parasites were grown as described above, harvested at early stationary growth phase by centrifugation (15 min at 200 × g) and washed three times in sterile PBS (150 mM NaCl, 20 mM sodium phosphate, pH 7.2) at 4°C. The flagellates were then resuspended in sterile PBS (pH 7.2), and 4 μl of this suspension (5 × 10^4 parasites) were injected into each insect (adult insects seven days after moulting). The parasites were injected laterally into the thorax, between the second and third thoracic segments of the insects, using a 10-μl Hamilton syringe [84]. Control insects were injected with 4 μl sterile PBS. The salivary glands of the insects were dissected and explanted at 2, 6, 24, 48 and 72 hours post-infection [27]. Scanning electron microscopy of salivary glands The salivary glands obtained from both in vitro and in vivo experimental infections were washed three times with TBS and fixed in a solution containing 2.5% (v/v) glutaraldehyde, 4% (w/v) freshly made formaldehyde, 3.7% (w/v) sucrose, and 5 mM CaCl2 in 0.1 M cacodylate buffer, pH 7.2, for 1 h at 26°C.
After fixation, the glands were post-fixed in 1% (v/v) osmium tetroxide, 0.8 M potassium ferricyanide, and 5 mM CaCl2 in 0.1 M cacodylate buffer, pH 7.2, for 1 h at 26°C. The glands were dehydrated in ethanol, dried using the CO2 critical point method [85] in a Balzers apparatus model CDP-20, mounted on aluminum stubs with double-coated carbon conductive tape, and sputtered with gold in a Balzers apparatus model FC-9646. Scanning electron microscopy observations were made under a Jeol JSM-5310 electron microscope. Sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and gel-membrane protein transfer Ten pairs of intact glands were frozen in 100 μl TBS containing 0.1% SDS supplemented with a protease inhibitor cocktail (100 μM E-64, 10 mM 1,10-phenanthroline, 10 μM pepstatin A, and 1 mM PMSF). After thawing, the glands were homogenized using a Teflon-coated microtissue grinder. Homogenates were centrifuged at 8,000 × g for 10 min at 4°C, and supernatant aliquots corresponding to 40 μg protein mixed with sample buffer (125 mM Tris, pH 6.8, 4% SDS, 20% glycerol, 0.002% bromophenol blue) were separated by 12% SDS-PAGE at 4°C, 150 V, and 60 mA, using a protein electrophoresis apparatus (Bio-Rad Laboratories, CA, USA). The gels were then stained for 1 h with 0.2% Coomassie brilliant blue R-250 in methanol-acetic acid-water (50:10:40) and washed in the same solvent. The molecular mass of the polypeptides was calculated according to the mobility of the "Full Range Rainbow" molecular mass standards (GE Healthcare, NJ, USA). Proteins separated on SDS-PAGE were transferred at 4°C, 100 V, and 300 mA for 2 h in 25 mM Tris base, 200 mM glycine, and 20% methanol (pH 8.0) from the gels to polyvinylidene difluoride (PVDF) membranes using protein transfer cells (Bio-Rad Laboratories, CA, USA). Biotinylation of parasite surface proteins Suspensions containing live parasites (10^8 parasites/ml) in PBS, pH 8.0, were treated with 0.1 mg of membrane-impermeable biotin (Sulfo-NHS-LC-Biotin; Pierce Biotechnology, IL, USA) per ml of reaction volume for 20 min at 4°C. The parasites were washed three times in TBS to remove the unbound biotin [56]. Ligand blotting The ligand blotting assays were carried out as previously described [56]. Briefly, the PVDF membranes were blocked in solution A (150 mM NaCl, 0.05% Tween 20, 1% bovine serum albumin (BSA), 10 mM Tris, pH 7.2) for 15 h at 4°C, in order to prevent non-specific binding [86], before incubation with live biotinylated parasites (10^8 cells/ml) for 1 h at 26°C. After the incubation, the membranes were washed three times in solution B (150 mM NaCl, 0.05% Tween 20, 10 mM Tris, pH 7.2), incubated in solution B containing peroxidase-labeled streptavidin (0.1 μg/ml) for 1 h at 26°C, and then washed three times in solution B. The bands bound by the live biotinylated parasites were detected with an ECL kit (GE Healthcare, NJ, USA) according to the manufacturer's protocol. Two-dimensional (2D) gel electrophoresis Two-dimensional polyacrylamide gel electrophoresis was performed with a Multiphor II unit (GE Healthcare, NJ, USA) on an immobilized pH gradient 4 to 7 (Immobiline DryStrip, 7 cm, pH 4-7, GE Healthcare, NJ, USA) for the first dimension and SDS-PAGE on a 10% linear mini-gel (Bio-Rad system) for the second dimension.
Samples were prepared by suspending ten pairs of salivary glands in lysis buffer (8.99 M urea, 0.02% Triton X-100, 0.13 M DTT, 0.02% (v/v) Pharmalyte 3-10, and 8 mM PMSF), followed by incubation at room temperature for 30 min and centrifugation at 8,000 × g for 10 min at 4°C. A volume of the supernatant containing 200 μg protein was mixed with a urea-containing solution in order to reach the rehydration solution concentrations and then loaded onto the strip. Proteins were focused according to the manufacturer's instructions (GE Healthcare, NJ, USA). The gel strip was then loaded onto the polyacrylamide-SDS vertical gel and the proteins were separated as previously described [87]. Gels were then stained with Coomassie Blue G-250 [88], scanned with an Image Scanner (GE Healthcare, NJ, USA), and analyzed with the Image Master 2D Platinum software (GE Healthcare, NJ, USA). The isoelectric point values of the proteins of interest were determined using a linear 4-7 distribution, and the relative molecular mass (Mr) was determined based on protein low-Mr markers (GE Healthcare, NJ, USA). Peptide mass fingerprinting Protein spots cut from 2D gels were destained with 25 mM NH4HCO3 in 50% acetonitrile (ACN) and treated with porcine trypsin (Promega, WI, USA) as previously described [89]. Peptides were extracted with 50% ACN and 5% trifluoroacetic acid (TFA), and the resulting solution was dried in a Speed Vac (GE Biosciences, NJ, USA) to reduce the volume to 10 μl. One μl of the peptide solution was mixed with 1 μl of a saturated solution of α-cyano-4-hydroxycinnamic acid matrix in 50% ACN and 1% TFA. The mixture was spotted onto a MALDI-TOF sample plate (Voyager-DE, Applied Biosystems, CA, USA). Trypsin autolysis peptide masses 842.5 and 2211.1 and calibration mixture 2 (Sequazyme Peptide Mass Standard kit, PerSeptive Biosystems, CA, USA) were used as internal and external standards, respectively. Spectra were obtained in reflectron-delayed extraction mode with high resolution for the 800-4000 Da range. Peptide mass fingerprints were analyzed using the Protein Prospector MS-Fit interface (http://prospector.ucsf.edu), which matched the mass spectrometry data to protein sequences in the NCBI database. The criteria for identification were a MOWSE score above 10^4, at least a 100-fold difference in MOWSE score from the second-best hit, at least 20% protein sequence coverage, and at least 8 matched peptides. Search for laminin subunits in the O. fasciatus embryo transcriptome The basic local alignment search tool (BLAST) was used for comparing the amino-acid sequences of the transcriptome of the O. fasciatus embryo [28] with the human laminin β3 subunit, as well as with the turkey Meleagris gallopavo laminin β1 subunit and the hemipteran Acyrthosiphon pisum laminin γ1 subunit. The pairwise alignments were performed using the GENEDOC software [90]. Purification of the laminin-like protein from the salivary glands Proteins from five hundred pairs of salivary glands (25 mg total protein) were extracted and separated by 10% SDS-PAGE as described above. The 130 kDa band was visualized after incubation of the gel in a 1 M potassium chloride solution. The band was then cut from the gel and incubated in the elution solution (50 mM sodium bicarbonate, 0.1% SDS, pH 7.8) for 1 h at 37°C. After incubation, the preparation was centrifuged at 8,000 × g for 20 min and the proteins were precipitated from the supernatant with 80% acetone at −20°C. The purity of the product was evaluated by SDS-PAGE stained with silver nitrate, as previously described [91].
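For illustration, the MS-Fit acceptance thresholds stated above (MOWSE score above 10^4, at least a 100-fold margin over the second-best hit, at least 20% sequence coverage, and at least 8 matched peptides) can be encoded in a short script. This sketch is not part of the original analysis pipeline; the data structure and function names are hypothetical and the example numbers are invented.

```python
# Minimal sketch (assumed inputs, not the authors' pipeline): apply the stated
# peptide-mass-fingerprinting acceptance criteria to a list of candidate hits.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Hit:
    protein: str
    mowse_score: float       # MOWSE score reported by MS-Fit
    coverage_percent: float  # % of the protein sequence covered by matched peptides
    matched_peptides: int    # number of matched tryptic peptides

def accept_top_hit(hits: List[Hit]) -> Optional[Hit]:
    """Return the top-scoring hit only if it meets all acceptance criteria."""
    if not hits:
        return None
    ranked = sorted(hits, key=lambda h: h.mowse_score, reverse=True)
    top = ranked[0]
    runner_up = ranked[1] if len(ranked) > 1 else None
    fold_ok = runner_up is None or top.mowse_score >= 100 * runner_up.mowse_score
    if (top.mowse_score > 1e4 and fold_ok
            and top.coverage_percent >= 20 and top.matched_peptides >= 8):
        return top
    return None

# Invented example numbers in the spirit of the p130 identification:
hits = [Hit("laminin-5 beta3 chain precursor", 2.3e6, 23, 16),
        Hit("unrelated protein", 1.1e4, 12, 5)]
print(accept_top_hit(hits))
```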
Immunoblotting assay PVDF membranes containing salivary gland proteins were blocked in solution A for 15 h at 4°C and then incubated with goat polyclonal antibodies raised against the human laminin-5 β3 chain (sc-7651, Santa Cruz Biotechnology) at a 1:500 dilution. The secondary antibody used was peroxidase-conjugated rabbit anti-goat IgG (A4174, Sigma-Aldrich) at a 1:10,000 dilution. The antibody dilutions and membrane washes were performed in solution A. Bound antibodies were then detected with an ECL kit (GE Biosciences, NJ, USA) according to the manufacturer's protocol. Inhibition of the ex vivo parasite-salivary gland interaction Ten pairs of explanted salivary glands were incubated for 60 min at 26°C in 100 μl TBS containing 10^7 parasites. These flagellates had been pre-treated for 30 min at 26°C in the absence (control) or in the presence of 2 or 20 μg/ml human laminin-5 or the purified p130. Alternatively, before the interaction, ten pairs of salivary glands were maintained for 30 min at 26°C in the absence (control) or in the presence of goat anti-human laminin-5 β3 chain IgG antibodies at a 1:500 dilution or anti-rabbit IgG at a 1:100 dilution (negative control). After incubation, unbound parasites were removed by three consecutive washes in TBS and the number of bound parasites was determined as previously described [92]. Statistical analysis The experiments of inhibition of the in vitro interaction between P. serpens and salivary glands explanted from O. fasciatus were performed in triplicate. The results are presented as the mean and standard error of the mean (SEM). Normalized data were analyzed by one-way analysis of variance (ANOVA), and differences between groups were assessed using the Student-Newman-Keuls post-test. A P value of <0.05 was considered significant.
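As a worked illustration of how the inhibition percentages and group comparisons described above can be computed, the sketch below uses invented binding counts; it is not the authors' analysis script. SciPy's `f_oneway` covers only the one-way ANOVA step; a Student-Newman-Keuls (or similar) post-test would require an additional package.

```python
# Illustrative sketch with hypothetical counts of bound parasites per assay.
import numpy as np
from scipy import stats

# Three independent experiments per condition (invented numbers).
control    = np.array([1000, 950, 1020])
laminin_2  = np.array([730, 700, 745])   # e.g. pre-treatment with 2 ug/ml laminin-5
laminin_20 = np.array([520, 500, 540])   # e.g. pre-treatment with 20 ug/ml laminin-5

def percent_inhibition(treated: np.ndarray, control: np.ndarray) -> float:
    """Percent inhibition of binding relative to the untreated control."""
    return 100.0 * (1.0 - treated.mean() / control.mean())

print(f"2 ug/ml:  {percent_inhibition(laminin_2, control):.0f}% inhibition")
print(f"20 ug/ml: {percent_inhibition(laminin_20, control):.0f}% inhibition")

# One-way ANOVA across the three groups (post-test not shown).
f_stat, p_value = stats.f_oneway(control, laminin_2, laminin_20)
print(f"one-way ANOVA: F = {f_stat:.1f}, P = {p_value:.3g}")
```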
7,093.8
2012-10-31T00:00:00.000
[ "Biology", "Environmental Science" ]
OMICS Technologies and Applications in Sugar Beet Sugar beet is a species of the Chenopodiaceae family. It is an important sugar crop that supplies approximately 35% of the sugar in the world. The sugar beet M14 line is a unique germplasm that contains genetic materials from Beta vulgaris L. and Beta corolliflora Zoss. and exhibits tolerance to salt stress. In this review, we have summarized OMICS technologies and applications in sugar beet, including M14, for the identification of novel genes and proteins related to biotic and abiotic stresses and apomixis, and of metabolites related to energy and food. An OMICS overview for the discovery of novel genes, proteins and metabolites in sugar beet has helped us understand the complex mechanisms underlying many processes such as apomixis and tolerance to biotic and abiotic stresses. The knowledge gained is valuable for improving the tolerance of sugar beet and other crops to biotic and abiotic stresses as well as for enhancing the yield of sugar beet for energy and food production. INTRODUCTION Sugar beet (Beta vulgaris L.), a species of the Chenopodiaceae family, is an important sugar crop that supplies approximately 35% of the sugar in the world. In the United States, sugar beet has provided about 55 percent of the total sugar produced domestically since the mid-1990s (Benoit et al., 2015). Sugar beet was introduced to China from Arabia about 1500 years ago and it is a dicotyledonous plant with high economic value in many countries. Therefore, how to grow the crop efficiently has been a priority and extensively investigated (Draycott, 2006). Sugar beet is a biennial crop which grows a sugar-rich tap root in the first year (the vegetative stage) and a flowering seed stalk in the second year (the reproductive stage; Chen et al., 2016). The types of sugar beet can be distinguished according to various internal and external features, such as economic characters, trait diversity, and chromosome ploidy. Beta corolliflora Zoss. (2n = 36) is a wild species of the beet Corollinae section that has many characteristics including tolerance to drought, cold, salt and disease. Sugar beets (Beta vulgaris) are classified as salt-tolerant crops (Dunajska-Ordak et al., 2014). Scientists have studied the interspecific crossing of cultivated sugar beet (Beta vulgaris L., 2n = 18) and B. corolliflora Zoss. for decades (Dalke et al., 1971; Filutowicz and Dalke, 1976). In our lab, Guo et al. obtained the sugar beet monosomic addition line M14 (Figure 1), which contains the Beta vulgaris L. genome with the addition of chromosome No. 9 of B. corolliflora Zoss. (Guo et al., 1994). It has several interesting characteristics including apomixis and tolerance to drought, cold and salt stress (Guo et al., 1994). Apomixis is a mode of asexual reproduction characterized by the production of clonal seeds via the parthenogenetic development of an unreduced egg. The apomictic process bypasses meiosis and egg cell fertilization, producing offspring that are exact copies of the mother plant (Nogler, 1984; Ozias-Akins, 2006). Sugar beet M14 therefore can function as a unique germplasm for studying the characteristics of apomixis and tolerance to abiotic stresses. During evolution, plants have developed complex strategies that regulate biochemical and physiological acclimation in order to respond to biotic stress (viral, bacterial, fungal, and oomycete infections; Baum et al., 2007) and abiotic stress (salinity, drought, and low temperature; Barnabas et al., 2008; Yolcu et al., 2016).
Biotic and abiotic stresses severely reduce agricultural productivity worldwide (Munns and Tester, 2008;Pinhero et al., 2011;Mishra et al., 2012). Therefore, understanding how plants respond and tolerate biotic and abiotic stresses is important for boosting plant (e.g., sugar beet) productivity under these challenging conditions. In order to minimize the negative impact of these stresses, studying how the sugar beet has evolved stress coping mechanisms will provide new insights and lead to novel strategies for improving the breeding of stress-resistant sugar beet and other crops. In recent years, genomics knowledge based on Next Generation Sequencing (NGS), gene editing systems, gene silencing, and over-expression methods have provided a large amount of genetic information to help reveal the mechanisms of biotic and abiotic stress responses in plants (Saad et al., 2013;Shan et al., 2013;Yin et al., 2014;Luan et al., 2015). At the transcriptome level, technological innovations have made it possible to overview the changes that occur at the transcriptomic level under different environmental stress conditions. Microarrays and RNA sequencing techniques are employed to elucidate the differential expression of genes involved in biotic and abiotic stress responses in a variety of plant species (Kreps et al., 2002;Shinozaki and Yamaguchi, 2007;Ergen and Budak, 2009;Mitchell et al., 2014;Akpinar et al., 2015;Wang et al., 2016). Proteomics and metabolomics are two emerging "-omic" techniques in the post-genomic era (Fernandez-Garcia et al., 2011). Proteomics technologies allow the simultaneous identification and quantification of thousands of proteins that are an essential tool for understanding the biological systems and their regulations (Silva-Sanchez et al., 2015). Proteomics can be used to compare proteomes under varying stress conditions (Draycott, 2006;Liu et al., 2008;Benoit et al., 2015). Metabolomics focuses on the global profile of the low molecular weight (<1000 Da) metabolites which are the end products of metabolisms in biofluids, tissues and even whole organism (Brosché et al., 2005). Metabolomics has recently been utilized in an increasing number of applications to investigate plant metabolite responses to abiotic stresses, particularly drought, flooding, salinity, and extreme temperatures (heat and cold; Jorge et al., 2015;Jia et al., 2016). Obviously, a combination of OMICS techniques including genomics, transcriptomics, proteomics and metabolomics could could serve to validate and complement one another in order to provide an efficient way capable of improving stress tolerance in plants. Sugar beet is a good plant resource to explore and identify genes and proteins involved in stress resistance. Sugar beet is widely used in sugar industry . It is a source of the clean energy via hydrogen gas and bioethanol (Dhar et al., 2015). It contains abundant betaine and betalain metabolites. Betaine is used to improve the plant stress tolerance (Catusse et al., 2008). Betalains are natural pigments which have potential health benefits (anticarcinogenic and antioxidative) and have attracted both scientific and economic interest (Stintzing and Carle, 2007;Moreno et al., 2008). A rich and cheap source of betalains in red beet root (Beta vulgaris L.) is very attractive to the pharmaceutical and food industries (Wybraniec, 2005;Wybraniec et al., 2011Wybraniec et al., , 2013. 
In this review, we have summarized OMICS applications and covered the recent discoveries in sugar beet research including the M14 for identification of novel genes and proteins related to biotic and abiotic stresses, apomixes, and metabolites related to energy and food production. The knowledge gained is valuable for improving sugar beet and other crops tolerance to biotic and abiotic stresses as well as for enhancing the yield of sugar beet for energy and food production. OMICS OVERVIEW FOR DISCOVERING NOVEL GENES, PROTEINS AND METABOLITES IN SUGAR BEET In recent years, the use of OMICS tools has considerably increased for studying biotic and abiotic stresses in plants. The existing methods include genomics, transcriptomics, proteomics, metabolomics, and several others capable of discovering and characterizing the expression of genes or proteins during biotic and abiotic stresses with high efficiency shown in Figure 2. These highly sensitive tools can analyze plant tissues and help to improve our understanding of the tolerance mechanisms utilized by sugar beet. GENOMICS STUDIES IN SUGAR BEET The Whole Genome Sequence of Sugar Beet The whole genome sequence of sugar beet has been reported by Dohm et al. (2014). A total of 27,421 protein-coding genes were predicted based on transcription data and annotated on the basis of sequence homology (Dohm et al., 2014). Compared to other flowering plants with the genome information, the sugar beet has a small number of genes encoding transcription factors. It has been suggested that the sugar beet may contain unknown genes associated with transcriptional control, and that the genetic interaction network of sugar beet may have evolved in unique ways compared with other species. Using the sugar beet genome sequence and related resources, we expected to find the molecular mechanisms underlying gene regulation and gene environment interaction. In addition, this information can help to develop crops with improved sugar and natural substance production and have an important role in future plant genomic research. SMRT of the Sugar Beet Chloroplast Genome SMRT (Single Molecule Real-Time) is a third generation sequencing method, which offers much longer read length compared to NGS methods. It is well suited for de novo-or resequencing projects. It not only contains reads originating from the nuclear genome, but also lots of reads from the organelles of the target organism (Sanger et al., 1977;Liu et al., 2012). Stadermann et al. described a workflow for de novo assembly of the sugar beet chloroplast genome based on data originating from a SMRT sequencing dataset targeted on nuclear genome (Stadermann et al., 2015). They identified a total of 114 individual genes. Of these, 79 genes encode mRNA (i.e., proteins), 7 encode rRNA and 28 are tRNAs. Nine genes are located within the inverted repeat (IR) regions which encode 5 mRNAs, 1 rRNA, and 3 tRNAs. In comparison to the Illumina assembly, the annotation showed some differences due to changes in the underlying sequences. miRNAs Involved in Tolerance to Abiotic Stresses miRNAs are small 19-23 nucleotides short non-coding RNAs, which play regulatory roles in many processes . miRNAs can act both at the transcriptional or post-transcriptional levels. miRNA mediated gene-silencing mechanism regulates the expression of transcription factors, phytohormones, and other developmental signaling pathways (Llave et al., 2002;Dalmay, 2006;Sunkar et al., 2007). 
Earlier studies have shown that miRNAs mainly target transcription factors, but recent studies have revealed that miRNAs also target other development/stress signaling pathways, which are involved in various physiological processes, including root growth and development, response to stress, signal transduction, leaf morphogenesis, plant defenses, and biogenesis of sRNA (Curaba et al., 2014). Li J. L. et al. (2015) reported 13 mature miRNAs from 12 families using an in silico approach based on 29,857 expressed sequence tags and 279,223 genome survey sequences in B. vulgaris. The psRNA target server predicted 25 target genes for the 13 miRNAs. The target genes shown in Table 1 appeared to encode transcription factors or were involved in metabolism, signal transduction, stress response, growth, and development. However, there were no targets predicted from the current database of sugar beet for Bvu-miR4, Bvu-miR9, Bvu-miR10, Bvu-miR11, and Bvu-miR12. Several miRNAs identified have been shown to have critical roles in plants. For example, the expression of Bvu-miR1 (Protein ARABIDILLO) in A. thaliana regulates multicellular root development (Moody et al., 2012). Bvu-miR2 regulates the expression of ATPase during plant development and coordinates its induction in response to high salinity (Lehr et al., 1999). Through transcriptional regulation, it also affects the ATPase activity of magnesium chelatase subunit I in Barley (Lake et al., 2004) and abscisic acid insensitive 5 required to delay growth of germinated seedlings under environmental stress (Liu and Stone, 2013). Bvu-miR7 targets the respiratory burst oxidase gene family, which encodes the key enzymatic subunit of the plant NADPH oxidase (Torres and Dangl, 2005). Bvu-miR8 activates the transcription of a histone acetyltransferase GCN5 in A. thalianaina (Benhamed et al., 2008). Another target protein MYB6 acts as an immediate and positive activation signaling component of the active state of MLA immune receptors during transcriptional reprogramming for defense responses (Chang et al., 2013). Dehydration-responsive element-binding proteins form a major AP2/ethylene-responsive element-binding protein family and play crucial roles in the regulation of abiotic stress responses . Bvu-miR13 targets WD-repeat proteins in this diverse family of regulatory proteins. To date, genome-wide characterization of this family has only been conducted in Arabidopsis and little is known about WD-repeat protein-coding genes in other species. Recently, it has become known that the WD-repeat protein plays an important role in cucumber stress resistance (Li et al., 2014). Other targets, e.g., leucine-rich repeat proteins, plays critical roles in both animal and plant signaling pathways regulating growth, development, differentiation, cell death, and pathogenic defense responses (Gou et al., 2010). These studies have provided insights into the molecular mechanisms of the miRNAs and may have great potential for sugar beet improvement. The functions of these interesting miRNAs in sugar beet need to be investigated in the future. QTL Mapping for Disease Resistance in Sugar Beet Leaf spot is one of the most serious and widespread foliar diseases of sugar beet. It causes necrotic lesions and progressive destruction of the plant's foliar structure and function (Holtschulte, 2000). The disease has greatly impacted on the yield and sugar contents of the crop. 
Doubled haploids (DHs), F2 populations of recombinant inbred lines (RILS), and near isogenic lines (NILS) are suitable populations for quantitative trait loci (QTL) mapping (Ibrahim et al., 2012). In order to deal with the complex inheritance of resistance to Cercospora leaf spot (CLS), Taguchi et al. (2011) used RILs of sugar beet, which were generated by a cross between a resistant line ("NK-310mm-O") and a susceptible ("NK-184mm-O") line. These RILs were then tested for their resistance to the CLS pathogen in the field (Taguchi et al., 2011). Composite interval mapping (CIM) showed four QTLs involved in CLS resistance that were consistently detected. There were two resistant QTLs (qcr1 on chromosome III, qcr4 on chromosome IX) that promoted resistance in the cross between lines ("NK-310mm-O"). There were two further QTLs (qcr2 on chromosome IV, qcr3 on chromosome VI) which promoted resistance in the susceptible line. In addition, a number of important resistance gene cluster have been mapped on the chromosome III in the sugar beet genome, for example: a CLS resistance QTL (Setiawan et al., 2000), the genes Rz1 toRz5 (Grimmer et al., 2007), gene Acr1 (Taguchi et al., 2010), gene RGAs (Lein et al., 2007), and gene X, a restorer of fertility for Owen CMS (Hagihara et al., 2005). These resistance-gene clusters on the chromosome III are mainly responsible for the disease resistance in sugar beet. BAC Library from the Genomic DNA of the No. 9 Chromosome in Sugar Beet M14 A plant-transformation-competent binary BAC library was constructed from the genomic DNA of the No. 9 chromosome in sugar beet M14 (Fang et al., 2004). A total of 2365 positive clones were obtained and arrayed into a sublibrary specific for the B. corolliflora chromosome 9 (designated bcBAC-IX). The bcBAC-IX sublibrary was further screened with a subtractive cDNA pool generated from the ovules of M14 and the floral buds of B. vulgaris by the suppression subtractive hybridization (SSH) method. One hundred and three positive binary BACs were obtained, which may potentially contain the genes of the alien No. 9 chromosome that is specifically expressed during the ovule and embryo development of M14, and which may be associated with apomictic reproduction. The binary BAC clones are useful for the identification of the genes responsible for apomixes by genetic transformation. TRANSCRIPTOMICS STUDIES IN SUGAR BEET Suppression Subtractive Hybridization (SSH) Applications in Sugar Beet SSH is a technique used to identify differentially expressed genes in cells important for growth and differentiation (Lukyanov et al., 2007). This method has often been used to study molecular mechanisms of plants in biotic and abiotic stresses (Sahebi et al., 2015). The response to insect pests in the root of sugar beet is an interesting area of plant defense research. Puthoff et al have identified more than 150 sugar beet root ESTs enriched for genes that respond to feeding by the sugar beet root maggot in both the moderately resistant genotype F1016 and the susceptible F1010 using SSH [49]. The differential expression of the root ESTs was confirmed via RT-PCR. The ESTs were further characterized using microarray-generated expression profiles from the F1016 sugar beet roots following mechanical wounding and treatment with the signaling molecules methyl jasmonate, salicylic acid and ethylene. 
Of the examined ESTs, 20% were regulated by methyl jasmonate, 17% by salicylic acid and 11% by ethylene, suggesting that these signaling pathways are involved in sugar beet root defense response. Identification of these sugar beet root ESTs provides knowledge concerning plant root defense and will likely lead to the development of novel strategies for the control of the sugar beet root maggot (Puthoff and Smigocki, 2007). SSH was applied to isolating taproot expressed genes from sugar beet as well (Kloos et al., 2002). The taproot of sugar beet (Beta vulgaris L.) undergoes a specific developmental transition in order for it to function as a storage organ. SSH was utilized to isolate cDNA fragments of genes expressed in the taproot. Molecular analysis of six cDNAs that encoded complete gene products revealed that these genes comprise homologs of a drought-inducible linker histone, a major latex-like protein, a phosphoenolpyruvate carboxylase kinase, a putative vacuolar processing enzyme, a thaumatin-like protein and an alanine-and glutamic acid-rich protein. All of these genes are transcribed in taproots, while the expression in leaves is low or undetectable. SSH had also been used in the sugar beet M14 to identify differentially expressed genes. A subtractive cDNA library was prepared by SSH between the flower organ of M14 and that of B. vulgaris (Ma et al., 2011). A total of 190 unique sequences were identified in the library and their putative functions were analyzed using Gene Ontology (GO). All of the ESTs provide information about candidate genes useful for studying M14 reproductive development. One of the genes, designated as BvM14-MADS box, encodes a MADS box transcription factor. It was cloned from M14 and over-expressed in transgenic tobacco plants. Overexpression of BvM14-MADS box led to significant phenotypic changes in tobacco (Ma et al., 2011). Li et al. (2009) reported a comparative proteomic and transcriptomic study of the sexual and apomictic processes in sugar beet. The cDNA libraries were constructed using SSH with the apomictic monosomic addition line M14 as the tester and B. vulgaris as the driver. Comparative analyses of proteomic data and transcriptomic data showed that eight proteins had significant agreement between protein and mRNA expression levels. Most of the matched proteins were associated with metabolism. Interestingly, two of the matched proteins, cystatin, and thioredoxin peroxidase, were found to be associated with disease and defense response, indicating that defense-related proteins may participate in the apomictic reproductive process. Yang et al. (2012) reported transcriptomic analysis of sugar beet M14 leaves and roots that were treated with 500 mM NaCl for 7 days. The SSH technology was used to produce a high quality subtractive cDNA library. A total of 600 positive clones were randomly selected and subjected to DNA sequencing, and 499 non-redundant ESTs were obtained. After assembly, 58 unigenes including 14 singletons and 44 contigs were obtained. Some salt-responsive genes were identified as important in metabolism (e.g., sadenosylmethionine synthase 2 (SAMS2) and nitrite reductase), photosynthesis (e.g., chloroplastic chlorophyll a-b binding protein 8), energy (e.g., phosphoglycerate kinase), protein synthesis (e.g., 60S ribosomal protein L19-3), and degradation (e.g., cysteine protease and carboxyl-terminalprocessing protease), and stress and defense [e.g., glutathione S-transferase (GST)]. 
This study has revealed candidate genes for detailed functional characterization, and has set the stage for further investigation of salt tolerance mechanisms in sugar beet. Transcriptomics of Sugar Beet in Response to Low Temperature Stress Comparative transcriptomics is used to identify differences in transcript abundance between different cultivars, organs, development stages and/or treatment conditions (Mardis, 2008;Schuster, 2008;Bräutigam and Gowik, 2010). Low-temperature stress is a significant factor effecting of crop quality and causing production losses in agriculture. The survival of young sugar beet seedlings and the subsequent sugar yield of mature plants are often seriously limited by low temperature, especially when the plants are exposed to freezing temperatures at early developmental stages. Moliterni et al. (2015) determined the transcriptomic changes using high-throughput sequencing of the leaves and root RNAs (RNA-Seq) from sugar beets which had been exposed to cold stress which mimicked the conditions of spring nights sometimes experienced by young seedlings (Moliterni et al., 2015). In the root tissue, CBF3 is up-regulated within a few minutes of cold stress. The authors suggested that CBF3 transcription in the stressed plants is either maintained for a longer period or begins earlier in roots compared to leaves. The AP2/ERF family genes were also found to be either activated or up-regulated in all the organs by cold stress. This is an expected result, as it is known that these TFs are rapidly induced upon exposure to low temperature in Arabidopsis (Lee et al., 2005). AP2/ERF TFs are involved in the regulation of primary and specialized metabolism and in a number of JA responses (Licausi et al., 2013). It has been reported that the lack of ADA2b TFs leads to an increase of freezing tolerance by affecting nucleosome occupancy in Arabidopsis (Vlachonasios et al., 2003). In addition, a putative histone acetylase and a lysinespecific demethylase are strongly up-regulated in the leaves under cold stress, implicating chromatin remodeling, and modification in the response. These studies suggested that the metabolic pathway most affected by low temperature was carbohydrate metabolism. In addition, the authors found 13 differentially expressed sequences related to phospholipid secondary metabolism, none of which were common to leaves and roots, implicating this pathways as another important component in early cold signaling in sugar beet roots. The high degree of organ specificity is probably due to the repertoire of compounds synthesized by the two organs upon stress. This data has illuminated the transcriptome of young sugar beet during cold stress at night, and has detailed both organ-specificity and shared pathways in the physiological response to low temperatures. These RNA-Seq based transcriptomics techniques are an effective and powerful tool, with the analyses identifying novel genes for future studies. PROTEOMICS STUDIES IN SUGAR BEET Proteomic analysis has been carried out to address several important questions in many processes, such as: signaling, regulatory processes, and transport in plants . Important knowledge of the proteomic response to stress has been mainly derived from studies of the model plants A. thaliana and rice (Janmohammadi et al., 2015;Liu et al., 2015;Xu et al., 2015). Proteomic analysis provides an important way to test the changes in protein levels to help identify novel proteins. Zhu et al. 
(2009) compared the proteomes of the monosomic addition line M14 and B. vulgaris using 2-DE (two-dimensional gel electrophoresis). They have identified 27 protein spots using MALDI-TOF MS. Among them, only two protein spots were found in B. vulgaris and five protein spots were unique to M14. These proteins were involved in many biological pathways. The results may be useful for us to better understand how genotype differences relate to proteome and phenotype differences. Li et al. (2009) reported a comparative proteomic and transcriptomic analysis of the sexual and apomictic processes in sugar beet. A total of 71 differentially expressed protein spots from the floral organs of the M14 were identified in the course of apomictic reproductive development using 2-DE and MS analysis. The differentially expressed proteins were involved in several processes which may work cooperatively to promote apomictic reproduction, generating new potential protein markers important for apomictic development. Proteome Changes in Response to Salt Stress of Sugar Beet To date, some proteomic studies concerning the response of sugar beet to salt stress have been reported. Wakeel et al. identified six proteins from sugar beet shoots and three proteins from roots that significantly changed under 125 mM salt treatment (Wakeel et al., 2011). Our group has performed proteomic analysis of the monosomic addition line M14 under 500 mM salt stress for 7 days. A total of 71 differentially expressed protein 2D spots were identified using LC-MS/MS. The largest functional group is represented by metabolism (28%), followed by energy (21%), protein synthesis (10%), stress and defense (10%), destination proteins (8%), unknown proteins (8%), secondary metabolism (5%), signal transduction (4%), transporters (1%), and cell division (1%). Of the identified proteins, only eight had corresponding transcriptomic data. This highlights the importance of expression profiling at the protein level (Li et al., 2009). On this basis, we focused on the functions of cystatin , glyoxalase I (Wu et al., 2013), CCoAOMT, and thioredoxin peroxidase. All of these proteins showed increased protein levels under salt stress. Transgenic plants exhibited enhanced tolerance to salt stress. This research has directly improved our understanding of mechanisms underlying the M14's high salt tolerance. Another proteomics study aims to identify salt-responsive proteins in the M14 plants under 0, 200, and 400 mM NaCl mild salt stress conditions using 2D-DIGE to separate the proteins from control and salt-treated M14 leaves and roots (Yang et al., 2013). The differentially expressed proteins were identified using nanoflow liquid chromatography (LC)−MS/MS and Mascot database searching. As a complementary approach, iTRAQ LC−MS/MS was employed to identify and quantify differentially expressed proteins during salinity response in M14. We have identified 86 protein spots representing 67 unique proteins in leaves, and 22 protein spots representing 22 unique proteins in roots. In addition, 75 differentially expressed proteins were identified in leaves and 43 differentially expressed proteins were identified in roots, respectively. The proteins were mainly involved in photosynthesis, energy, metabolism, protein folding and degradation, and stress and defense. Compared to the transcriptomic data, 13 proteins in leaves and 12 proteins in roots showed significant correlation in gene expression and protein levels. 
These results suggest that there are several processes underlying the M14 tolerance to salt stress. Our group also reported the changes in membrane proteome of the M14 plants in response to salt stress (0, 200, 400 mM NaCl;. We have used an iTRAQ two-dimensional LC-MS/MS technology for quantitative proteomic analysis. In total, 274 proteins were identified and mostly of them were membrane proteins. A total of 50 differential proteins were identified, with 40 proteins showing increased expression and 10 with decreased expression. The proteins were mainly involved in transport (17%), metabolism (16%), protein synthesis (15%), photosynthesis (13%), protein folding and degradation (9%), signal transduction (6%), stress and defense (6%), energy (6%), and cell structure (2%). These results have revealed that membrane proteins contribute to the salt stress tolerance observed in M14. Hajheidari et al. (2005) studied the proteome changes of sugar beet in response to drought stress. Leaves from well-watered and drought treated plants at 157 days after sowing were collected. The changes of proteins were analyzed using 2D-DIGE followed by image analysis. More than 500 protein spots were detected, and 79 spots had significant changes under drought stress. Twenty protein spots were digested and subjected to LC-MS/MS, and 11 proteins involved in oxidative stress, signal transduction and redox regulation were identified. These proteins may be important targets for improving plant abiotic stress tolerance via breeding. METABOLOMIC STUDIES IN SUGAR BEET Metabolomics is an exciting technology, which was used to identify secondary metabolites important for physiological processes and different stress responses (Capuano et al., 2013). During plant development and interaction with the environment, the dynamic metabolome reflects the plant's physiological and biochemical processes, and can determines the phenotypes and traits (Fernie et al., 2004;Oksman-Caldentay and Saito, 2005). Now there are gas chromatography (GC)-MS, LC-MS, capillary electrophoresis (CE)-MS, and nuclear magnetic resonance (NMR) as the major analytical tools in metabolomics. Kazimierczak et al. (2014) determined the levels of metabolites in both raw beet root and naturally fermented beet root juices from organic (ORG) vs. conventional (CONV) products. The aim of the paper was to find out the value of the fermented beetroot juices in terms of anticancer properties. The results showed that ORG fresh beetroots contained more useful compounds than CONV beetroots, such as dry matter and vitamin C, more than CONV beetroots. Compared to the CONV juice, it was found that the ORG fermented juices have stronger anticancer activity. Metabolomics is still in its infancy with these analyses of sugar beet being rare, but future research can be expected to implement this powerful technology. IMPORTANT APPLICATIONS OF SUGAR BEET TO BE ENHANCED BY OMICS Many plants accumulate glycine betaine (betaine) to regulate biochemical and physiological processes under abiotic stresses (Takabe et al., 2006). For example, glycine betaine serves as a methyl donor in several biochemical pathways (Pummer et al., 2000). Sugar beet is a betaine-accumulating dicotyledonous plant with high economic value (Catusse et al., 2008). It has been reported that betaine is synthesized by the two-step oxidation of choline in which choline monooxygenase (CMO) catalyzes the first step, and betaine aldehyde dehydrogenase (BADH) performs the second step (Yamada et al., 2009). 
CMO is therefore a key enzyme for protecting plants against abiotic stresses. It has been found in Chenopodiaceae and Amaranthaceae, but not in some betaine-accumulating plants such as mangrove (Bhuiyan et al., 2007). Unlike sugar beet, many plants do not have the betaine biosynthesis pathway. Therefore, genetic engineering of the betaine biosynthesis pathways represents a potential way to improve plant stress tolerance (Hibino et al., 2001; Fitzgerald et al., 2009). In addition to betaine, betalains are rich in red beets and exist only in 10 families of plants of the Caryophyllales. Red beetroot (B. vulgaris) is widely used as a food ingredient because of its beetroot red color. Therefore, most studies on red beetroot constituents have focused on the betalains. Currently, the yield of betalains extracted and purified from beetroot red is only about 10%. Phenolics (such as betalains) have been shown to have nutritional value, and there has been an increasing interest in utilizing these plant constituents to improve food ingredients and to serve as antioxidants. Betalains are important plant phenolics with many attractive properties, such as stability, antioxidant activity, antitumor properties, and reduction of blood lipid and sugar levels. In addition, betalains are effective free-radical scavengers, which help to maintain health and protect from diseases such as cancer and coronary heart disease (Kujala et al., 2000; Han et al., 2015; Mikołajczyk-Bator et al., 2016). Additionally, sugar beet provides approximately 30% of the world's annual sugar production and is a source of both bioethanol and animal feed. Dhar et al. (2015) have developed two highly efficient methods to produce hydrogen gas from sugar beet juice as a clean energy source (Dhar et al., 2015). Sugar beet byproducts (SBB) generated during industrial sugar extraction are mainly composed of pulp and molasses, and the use of SBB as a renewable energy resource could add additional economic and environmental benefits (Aboudi et al., 2015). OMICS research can greatly enhance potential applications of sugar beet in at least three ways. One is to improve our knowledge of molecular networks involving key metabolite synthesis, e.g., glycine betaine and betalains. The knowledge will enable modeling and rational engineering of the important metabolites. Additionally, we can utilize OMICS to investigate the global molecular changes that occur in response to stress and the tolerance of sugar beet to stress conditions. This information can help to improve stress tolerance and thereby yields of sugar beet, even under non-ideal conditions. Finally, research on unique sugar beet germplasms (e.g., M14 under salt stress) may be useful for enhancing yield, and food and bioenergy production in other crops. CONCLUSION In this review, we have summarized OMICS technologies and applications in sugar beet, including M14, for the identification of novel genes and proteins related to biotic and abiotic stresses and apomixis, and of metabolites related to energy, food and human health. Genomics is a powerful technology to provide the whole genome blueprint of sugar beet. Mechanisms underlying apomixis and stress tolerance have mainly been studied using transcriptomics and proteomics technologies, while metabolomics studies in sugar beet are still rare. To date, many genes and proteins related to apomixis and salt stress have been identified to reveal apomixis and salt tolerance mechanisms in the special germplasm sugar beet M14.
The results have enhanced our understanding of the molecular mechanisms underlying sugar beet tolerance to biotic and abiotic stresses and apomixis, which may be applied to improving the stress tolerance of sugar beet and other crops in order to improve food production, energy output (e.g., hydrogen gas and bioethanol), and the accumulation of health-promoting chemicals (such as betalains). Although the use of sugar beet to produce clean energy (hydrogen gas and bioethanol) and to isolate betalains for natural food colorants, dietary supplements and medicines is not yet widespread in the market, sugar beet, as a crop of high economic value, has promising prospects for application in the food, bioenergy and pharmaceutical industries. AUTHOR CONTRIBUTIONS YZ collected and analyzed references for this paper and wrote the first draft; JN drew the figure and assisted in the reference organization. BY supervised the work and led the writing and organization. All three authors have edited the manuscript.
7,520.8
2016-06-22T00:00:00.000
[ "Biology", "Environmental Science", "Agricultural and Food Sciences" ]
BCJ duality and double copy in the closed string sector This paper is focused on the loop-level understanding of the Bern-Carrasco-Johansson double copy procedure that relates the integrands of gauge theory and gravity scattering amplitudes. At four points, the first non-trivial example of that construction is one-loop amplitudes in N=2 super-Yang-Mills theory and the symmetric realization of N=4 matter-coupled supergravity. Our approach is to use both field and string theory in parallel to analyze these amplitudes. The closed string provides a natural framework to analyze the BCJ construction, in which the left- and right-moving sectors separately create the color and kinematics at the integrand level. At tree level, in a five-point example, we show that the Mafra-Schlotterer-Stieberger procedure gives a new direct proof of the color-kinematics double copy. We outline the extension of that argument to n points. At loop level, the field-theoretic BCJ construction of N=2 SYM amplitudes introduces new terms, unexpected from the string theory perspective. We discuss to what extent we can relate them to the terms coming from the interactions between left- and right-movers in the string-theoretic gravity construction. Introduction The Bern-Carrasco-Johansson color-kinematics duality [1,2] implements in a powerful and elegant way the relationship between gauge theory and gravity scattering amplitudes from tree level to high loop orders [3][4][5][6][7][8][9][10][11][12][13][14][15][16]. At tree level, this duality is usually perceived in terms of the celebrated Kawai-Lewellen-Tye relations [17], but a first-principle understanding at loop level is still missing. In this paper, we search for possible string-theoretic ingredients to understand the color-kinematics double copy in one-loop four-point amplitudes. The traditional "KLT" approach, based on the factorization of closed string amplitudes into open string ones, "open × open = closed" at the integral level, does not carry over to loop level. Instead, one has to look for relations at the integrand level. In this paper, adopting the approach of [18,19], we shall use the fact that the tensor product between the left- and right-moving sectors of the closed string, i.e. "left-moving × right-moving = closed", relates color and kinematics at the worldsheet integrand level. This is illustrated in table 1, where "Color CFT" and "Spacetime CFT" refer to the respective target-space chiral polarizations and momenta of the scattered states.

Table 1: Different string theories generating field theories in the low-energy limit
Left-moving CFT | Right-moving CFT | Low-energy limit | Closed string theory
Spacetime CFT   | Color CFT        | Gauge theory     | Heterotic
Spacetime CFT   | Spacetime CFT    | Gravity theory   | Type II, (Heterotic)

A gauge theory is realized by the closed string when one of the chiral sectors of the external states is polarized in an internal color space. This is the basic mechanism of heterosis, which gave rise to the beautiful heterotic string construction [20]. A gravity theory is realized when both the left- and right-moving polarizations of the gravitons have their target space in Minkowski spacetime, as can be done in both the heterotic and type II strings. In the paper, we shall not describe the gravity sector of the heterotic string, as it is always non-symmetric. Instead, we will focus on symmetric orbifolds of the type II string to obtain, in particular, symmetric realizations of half-maximal (N = 4 in four dimensions) supergravity.
In section 3, we review how the closed-string approach works at tree level with the five-particle example discussed in [18,19]. We adapt to the closed string the Mafra-Schlotterer-Stieberger procedure [21], originally used to derive "BCJ" numerators in the open string. The mechanism by which the MSS chiral block representation, in the field theory limit, produces the BCJ numerators in the heterotic string works exactly in the same way in gravity. However, instead of mixing color and kinematics, it mixes kinematics with kinematics and results in a form of the amplitude where the double copy squaring prescription is manifest. We outline an n-point proof of this observation. Then we thoroughly study the double copy construction in four-point one-loop amplitudes. First, we note that the BCJ construction is trivial both in field theory and string theory when one of the four-point gauge-theory copies corresponds to N = 4 SYM. Then we come to our main subject of study, N = 2 gauge theory and symmetric realizations of N = 4 gravity amplitudes in four dimensions. We study these theories both in field theory and string theory and compare them in great detail. The real advantage of the closed string in this perspective is that we already have at hand a technology for building field theory amplitudes from general string theory models, with various levels of supersymmetry and various gauge groups. In section 4, we provide a BCJ construction of half-maximal supergravity coupled to matter fields as a double copy of N = 2 SYM. Then in section 5, we give the string-based integrands and verify that they integrate to the same gauge theory and gravity amplitudes. Finally, we compare the two calculations in section 6 by transforming the field-theoretic loop-momentum expressions to the same worldline form as the string-based integrands, and try to relate the BCJ construction to the string-theoretic one. Both of them contain box diagrams, but the field-theoretic BCJ construction of gauge theory amplitudes has additional triangles, which integrate to zero and are invisible in the string-theoretic derivation. Interestingly, at the integrand level, the comparison between the BCJ and the string-based boxes is possible only up to a new total derivative term, which we interpret as the messenger of the BCJ representation information in the string-based integrand. However, we argue that, against expectations, this change of representation cannot be obtained by integration by parts, and we suggest that this might be linked to our choice of the BCJ representation. Therefore, it provides non-trivial physical information on the various choices of BCJ ansatzes. The square of the BCJ triangles later contributes to the gravity amplitude. String theory also produces a new term on the gravity side, which is due to left-right contractions. We manage to relate it to triangles squared and parity-odd terms squared, which is possible up to the presence of "square-correcting terms", whose appearance we argue to be inevitable and of the same dimensional nature as the string-theoretic left-right contractions. We believe that our work constitutes a step towards a string-theoretic understanding of the double copy construction at loop level in theories with reduced supersymmetry, although some facts remain unclarified. For instance, it seems that simple integration-by-parts identities are not enough to obtain some BCJ representations (e.g. ours) from string theory.
Review of the BCJ construction

In this section, we briefly review the BCJ duality and the double copy construction in field theory, as well as the current string-theoretic understanding of these issues (see also the recent review [22, section 13]). To begin with, consider an n-point L-loop color-dressed amplitude in gauge theory as a sum of Feynman diagrams. The color factors of graphs with quartic gluon vertices, written in terms of the structure constants $f^{abc}$, can be immediately understood as sums of cubic color diagrams. Their kinematic decorations can also be adjusted, in a non-unique way, so that their pole structure corresponds to that of trivalent diagrams. This can be achieved by multiplying and dividing terms by the denominators of missing propagators. Each four-point vertex can thus be interpreted as an s-, t- or u-channel tree, or a linear combination of those. By performing this ambiguous diagram-reabsorption procedure, one can represent the amplitude as a sum of cubic graphs only:

$$\mathcal{A}^{L\text{-loop}}_n = i^L g^{n-2+2L} \sum_{i \in \text{cubic}} \int \prod_{l=1}^{L} \frac{d^d p_l}{(2\pi)^d} \, \frac{1}{S_i} \frac{n_i \, c_i}{D_i} \,, \qquad (2.1)$$

where the denominators $D_i$, symmetry factors $S_i$ and color factors $c_i$ are understood in terms of the Feynman rules of the adjoint scalar $\phi^3$-theory (without factors of i), and the numerators $n_i$ generically lose their Feynman-rules interpretation. Note that the antisymmetry $f^{abc} = -f^{bac}$ and the Jacobi identity shown pictorially in figure 1 induce numerous algebraic relations among the color factors, such as the one depicted in figure 2. We are now ready to introduce the main constraint of the BCJ color-kinematics duality [1,2]: let the kinematic numerators $n_i$, defined so far very vaguely, satisfy the same algebraic identities as their corresponding color factors $c_i$:

$$c_i + c_j + c_k = 0 \;\Rightarrow\; n_i + n_j + n_k = 0 \,, \qquad c_i \to -c_i \;\Rightarrow\; n_i \to -n_i \,. \qquad (2.3)$$

This reduces the freedom in the definition of $\{n_i\}$ substantially, but not entirely; the remaining freedom is the so-called generalized gauge freedom. The numerators that obey the duality (2.3) are called BCJ numerators. Note that even the basic Jacobi identity (2.2), obviously true for the four-point tree-level color factors, is much less trivial when written for the corresponding kinematic numerators. Once imposed for gauge theory amplitudes, that duality results in the BCJ double copy construction for gravity amplitudes in the following form:

$$\mathcal{M}^{L\text{-loop}}_n = i^{L+1} \left(\frac{\kappa}{2}\right)^{n-2+2L} \sum_{i \in \text{cubic}} \int \prod_{l=1}^{L} \frac{d^d p_l}{(2\pi)^d} \, \frac{1}{S_i} \frac{n_i \, \tilde{n}_i}{D_i} \,. \qquad (2.4)$$

A comment is due at loop level: the loop-momentum dependence of the numerators $n_i(\ell)$ should be traced with care. For instance, in the kinematic Jacobi identity given in figure 2, one permutes the legs 3 and 4 but keeps the loop momentum $\ell$ fixed, because it is external to the permutation. Indeed, if one writes that identity for the respective color factors, the internal line will correspond to the color index outside of the basic Jacobi identity of figure 1. In general, the correct loop-level numerator identities correspond to those for the unsummed color factors in which the internal-line indices are left uncontracted. Formulas (2.1) and (2.4) are a natural generalization of the original discovery at tree level [1]. The double copy for gravity (2.4) has been proven in [23] to hold to any loop order, if there exists a BCJ representation (2.1) for at least one of the gauge theory copies. Such representations were found in numerous calculations [2, 5-14, 24, 25] up to four loops in N = 4 SYM [4]. A systematic way to find BCJ numerators is known for Yang-Mills theory at tree level [26], and in N = 4 SYM at one loop [27].
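As a concrete illustration of the color identities at play, the following minimal sketch (our own example, assuming the su(2) algebra, whose structure constants are $f^{abc} = \epsilon^{abc}$) verifies numerically that the three channel color factors of the four-point tree amplitude satisfy the Jacobi identity $c_s + c_t + c_u = 0$ for every assignment of external adjoint indices.

```python
# Numeric check of the color Jacobi identity c_s + c_t + c_u = 0 for the
# four-point channel color factors c_s = f^{a1 a2 b} f^{b a3 a4}, etc.
# Assumption: su(2) structure constants f^{abc} = epsilon^{abc}; any simple
# Lie algebra would work the same way.
import itertools
import numpy as np

f = np.zeros((3, 3, 3))
for perm in itertools.permutations(range(3)):
    # Sign of the permutation = determinant of the permuted identity rows
    f[perm] = np.linalg.det(np.eye(3)[list(perm)])

c_s = np.einsum('abx,xcd->abcd', f, f)   # f^{a1 a2 b} f^{b a3 a4}
c_t = np.einsum('bcx,xad->abcd', f, f)   # f^{a2 a3 b} f^{b a1 a4}
c_u = np.einsum('cax,xbd->abcd', f, f)   # f^{a3 a1 b} f^{b a2 a4}

# The Jacobi identity holds for every choice of external adjoint indices
assert np.max(np.abs(c_s + c_t + c_u)) < 1e-12
print("color Jacobi identity verified")
```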
Moreover, for a restricted class of amplitudes in the self-dual sectors of gauge theory and gravity, one can trace the Lagrangian origin of the infinite-dimensional kinematic Lie algebra [16,28]. The string-theoretic understanding of the double copy at tree level dates back to the celebrated KLT relations [17] between tree-level amplitudes in open and closed string theory, later improved with the discovery of monodromy relations and the momentum kernel in [18, 19, 29-31]. In the field theory limit, these relations implement the fact that in amplitudes the degrees of freedom of a graviton can be split off into those of two gauge bosons. Recently, a new chiral block representation of the open-string integrands was introduced [21] to construct BCJ numerators at n points. All of this is applicable at tree level, whereas at loop level the relationship between open and closed string amplitudes becomes obscure. At the integrand level, five-point amplitudes were recently discussed in [32] in the open and closed string. The authors of that work studied how the closed-string integrand is related to the square of the open-string integrand, and observed a detailed squaring behavior. They also discussed the appearance of left-right mixing terms in this context. These terms are central in our one-loop analysis, even though at the qualitative level, four-point amplitudes in (N = 2) × (N = 2) are more closely related to six-point ones in (N = 4) × (N = 4).

Review of tree level in string theory

In this section, we review RNS string amplitude calculations at tree level in order to perform explicitly a five-point heterotic and type II computation, as a warm-up exercise before going to loop level. Type I and II string amplitudes are known at n points from the pure spinor formalism [33-36], and their field theory limits were extensively studied in [36,37], as well as their α′ expansion in [37-40]. As observed in [18,19], the important point here is not to focus on the actual string theory amplitude, but rather to realize different field theory limits by plugging different CFT's into the left- and right-moving sectors of the string. In that context, an observation that we shall make is that the Mafra-Schlotterer-Stieberger open-string chiral block representation introduced in [21] to compute BCJ numerators can be used to construct gravity amplitudes directly and make the double copy property manifest. We perform this explicitly in the five-point case and briefly outline an n-point extension. Let us start from the integral for the five-particle scattering amplitude,

$$\mathcal{A}_5 = \int_{\mathbb{C}^2} d^2 z_2 \, d^2 z_3 \; |z_{14} z_{45} z_{51}|^2 \, \langle V_1(z_1) \, V_2(z_2) \, V_3(z_3) \, V_4(z_4) \, V_5(z_5) \rangle \,, \qquad (3.1)$$

where $|z_{14} z_{45} z_{51}|^2$ is the classical $c\bar{c}$ ghost correlator, and we use the conformal gauge freedom to set $z_1 = 0$, $z_4 = 1$, $z_5 \to \infty$. The unintegrated vertex operators have a holomorphic and an anti-holomorphic part, schematically $V(z, \bar z) = V^{(L)}(z)\, V^{(R)}(\bar z)$, where $V^{(L)}$ and $V^{(R)}$ are the chiral vertex operators for the left- and right-moving sectors (the notation for the superscripts (L) and (R) coincides with the one used in [19]). Now, depending on which CFT we plug into these two sectors, different theories in the low-energy limit can be realized, as summarized in table 1. The anti-holomorphic vertex operators for the color CFT are gauge currents $\bar{J}^a(\bar z)$, where the $T^a$ matrices are in the adjoint representation of the gauge group under consideration (for instance, $E_8 \times E_8$ or $SO(32)$ in the heterotic string, or more standard $SU(N)$ groups after proper gauge group breaking by compactification).
The chiral vertex operators in the spacetime supersymmetric CFT have a superghost picture number, (−1) or (0), required to cancel the (+2) background charge. Schematically, they take the standard RNS form

$$V^{(-1)}(z) = \varepsilon_\mu(k)\, e^{-\phi} \psi^\mu\, e^{ik\cdot X}(z) \,, \qquad V^{(0)}(z) = \varepsilon_\mu(k) \left( i\partial X^\mu + \frac{\alpha'}{2}\,(k\cdot\psi)\,\psi^\mu \right) e^{ik\cdot X}(z) \,, \qquad (3.4)$$

where $\varepsilon_\mu(k)$ is the gluon polarization vector. Therefore, at tree level, exactly two vertex operators must be chosen in the (−1) picture. The anti-holomorphic vertex operators are then obtained from the holomorphic ones by complex conjugation. The total vertex operators of gluons and gravitons are constructed as products of the chiral ones in accordance with table 1, and the polarization tensor of the graviton is defined by the symmetric traceless part of the product $\varepsilon_{\mu\nu}(k) = \varepsilon_\mu(k)\varepsilon_\nu(k)$. The correlation function (3.1) can also be computed as a product of a holomorphic and an anti-holomorphic correlator thanks to the "canceled propagator argument". As explained in the classical reference [41, sec. 6.6], the argument is essentially an analytic continuation which makes sure that Wick contractions between holomorphic and anti-holomorphic operators provide only vanishing contributions at tree level. Therefore, the chiral correlators can be dealt with separately. Our goal is to write them in the MSS chiral block representation (3.13) of [21], which actually corresponds to the color decomposition into (n − 2)! terms uncovered in [42].

Kinematic CFT

Now let us compute the RNS five-point left-moving correlator (3.14) in the supersymmetric sector, where the chiral vertex operators for the kinematic CFT were defined in (3.4). In (3.14), we picked two vertex operators to carry ghost picture number (−1) in such a way that all double poles can be simply eliminated by a suitable gauge choice. The correlator (3.14) is computed using Wick's theorem along with the two-point function (3.8) and its fermionic counterpart. For a completely covariant calculation, we refer the reader to [21], whereas here for simplicity we restrict ourselves to the MHV amplitude $A(1^+, 2^-, 3^-, 4^+, 5^+)$ with a choice of reference momenta that, in combination with the ghost picture number choice, eliminates a lot of terms and, in particular, all double poles. We end up with only ten terms. To reduce them to the six terms of the MSS chiral block representation, one could apply in this closed-string context the open-string technology based on repeated worldsheet IBP's described in [35-37,39]. However, the situation is greatly simplified here, since we have already eliminated all double poles. Thanks to that, we can proceed in a pedestrian way and only make use of partial-fraction identities, such as

$$\frac{1}{z_{ij}\, z_{ik}} = \frac{1}{z_{jk}} \left( \frac{1}{z_{ij}} - \frac{1}{z_{ik}} \right) ,$$

where we take into account that $z_{41} = 1$. Our final result, similarly to the one in appendix D of [21], contains two vanishing and four non-vanishing coefficients; in the spinor-helicity formalism, these are the chiral block coefficients $a^{(L)}_i$.
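Such partial-fraction identities are elementary but sign-prone; a two-line symbolic check of the identity above (with the hypothetical choice i, j, k = 1, 2, 3) reads:

```python
# Symbolic check of the partial-fraction identity
#   1/(z_ij z_ik) = (1/z_jk) (1/z_ij - 1/z_ik),
# here for (i, j, k) = (1, 2, 3); it follows from z_ik - z_ij = z_jk.
import sympy as sp

z1, z2, z3 = sp.symbols('z1 z2 z3')
z12, z13, z23 = z1 - z2, z1 - z3, z2 - z3

assert sp.simplify(1/(z12*z13) - (1/z23)*(1/z12 - 1/z13)) == 0
print("partial-fraction identity verified")
```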
Low-energy limit

Before specializing to a particular theory (gauge theory or gravity), let us review the general low-energy limit mechanism at tree level. In the open string, very efficient procedures have been developed for extracting the low-energy limit of n-point amplitudes in a systematic way [36,37]. The essential point, common to both the open and closed string procedures, consists in the observation that a pole in the channel $s_{ij} s_{kl}$ comes from integrating over the region of the moduli space where $z_i$ and $z_k$ collide to $z_j$ and $z_l$, respectively, provided that the integrand contains a pole in the variables $z_{ij} z_{kl}$. In these regions, the closed string worldsheet looks like spheres connected by very long tubes (see figure 3), and we simply have to integrate out the angular coordinates along the tubes to obtain graph edges. This is the basic mechanism of the tropical limiting procedure reviewed in [43] (see section VI.A therein for four-tachyon and four-graviton examples). A slight subtlety to take into account is that the $s_{45}$-channel pole, for instance, is not due to the pole $1/z_{45}$, as both $z_4$ and $z_5$ are fixed and $z_5 = \infty$ is already absent from the expressions. Rather, it is created when both $z_2, z_3 \to z_1$, i.e. it appears as an $s_{123}$ pole. Moreover, the details of the double limit matter as well: suppose we want the kinematic pole $1/(s_{12} s_{123})$; it is generated by the successive limit $z_2 \to z_1$ and then $z_3$ going to the cluster formed by $z_1$ and $z_2$. In other words, it arises from the region of the moduli space where $|z_{12}| \ll |z_{23}| \sim |z_{13}| \ll 1$. We can describe the geometry of the worldsheet in this limit by going to the tropical variables of eq. (3.19), where X and Y are the lengths (or the Schwinger proper-time variables) of the edges of the graph as pictured in figure 3. The phases θ and φ are the cylindrical coordinates along the tubes that need to be integrated out in order to recover a purely one-dimensional variety corresponding to a Feynman graph. Accordingly, the integration measure produces a Jacobian, and one can check that the exponential factor transforms as follows:

$$e^{\alpha' (k_1\cdot k_2) \ln|z_{12}| \,+\, \alpha' (k_1\cdot k_3) \ln|z_{13}| \,+\, \alpha' (k_2\cdot k_3) \ln|z_{23}|} = e^{-X s_{123}/2 \,-\, Y s_{12}/2} + O(\alpha') \,, \qquad (3.21)$$

where the phase dependence on θ and φ is trivial, granted that Y and X are greater than some UV cutoff of order α′. To make the resulting integrands produce the expected double pole $1/(s_{12} s_{123})$, we need the Jacobian to be compensated by the $z_{ij}$ poles in such a way that we can integrate out the phases θ and φ. This is carried out by writing the amplitude (3.1) in the MSS chiral block representation (3.23). It is not difficult to convince oneself that the only terms that do not vanish in this particular limit, where $|z_{12}| \ll |z_{23}| \ll 1$, are exactly the products of $1/|z_{12}|^2$ with any of the following: $1/|z_{23}|^2$, $1/(z_{23}\bar{z}_{13})$, $1/(z_{13}\bar{z}_{23})$, or $1/|z_{13}|^2$, since $1/z_{13} = 1/z_{23} + O(e^{-Y/\alpha'})$. Any of these terms obviously cancels the Jacobian. Moreover, they do not vanish when the phase integration is performed. If instead one had a term like $1/(z_{12}\bar{z}_{12}^{\,3}) = e^{2X/\alpha'} e^{i\theta}$, it would cancel the $e^{-2X/\alpha'}$ in the Jacobian but would vanish after integration over θ. It is a characteristic feature of the MSS representation that only the terms with the correct weight are non-zero upon phase integration. That is why it is particularly suitable for the analysis of the low-energy limit of the closed string. In other words, the phase dependence is trivial by construction, which means that the level matching is automatically satisfied. To sum up, to obtain a pole in $1/(s_{12} s_{123})$, we have to pick up exactly two chiral blocks, $1/(z_{12} z_{23})$ and $1/(\bar{z}_{12} \bar{z}_{23})$, in (3.23), which come with their corresponding coefficients. One can then repeat this operation in the other kinematic channels. For instance, the region $|z_{23}| \ll |z_{34}| \sim |z_{24}| \ll 1$ receives non-zero contributions both from $1/(z_{23} z_{34})$ and $1/(z_{23} z_{24})$ (and their complex conjugates).
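The selection rule invoked here (only level-matched products of left and right chiral blocks survive the phase integration) can be made concrete with a small numeric sketch; we set α′ = 1 and model a single tube coordinate as $z = e^{-X+i\theta}$, both assumptions of this illustration only:

```python
# Toy illustration of the level-matching selection rule: after integrating
# the tube phase, only products of left and right chiral blocks with equal
# weights survive.  Assumptions: alpha' = 1, one tube coordinate
# z = exp(-X + i*theta).
import numpy as np

X = 0.7
theta = np.linspace(0.0, 2*np.pi, 4096, endpoint=False)
z = np.exp(-X + 1j*theta)

phase_average = lambda integrand: integrand.mean()  # uniform grid over one period

balanced   = phase_average(1/(z*np.conj(z)))  # |z|^{-2}: survives, equals e^{2X}
unbalanced = phase_average(1/z**2)            # carries e^{-2i*theta}: averages out

print(np.isclose(balanced, np.exp(2*X)))   # True
print(abs(unbalanced) < 1e-12)             # True
```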
Picking up these chiral blocks results in the corresponding contribution to the low-energy limit of the amplitude. By repeating this operation in all the other kinematic channels, one can generate the 15 combinatorially distinct trivalent graphs of the low-energy limit and thus obtain the full field theory amplitude (3.26), valid in any dimension. The channels generated by $z_2$ and/or $z_3 \to z_5 = \infty$ are dealt with by introducing a + sign in the exponential in (3.19). The poles are then generated by a similar procedure in terms of the numerators (3.27). It is now trivial to check that, by construction, the $n_i$'s satisfy the Jacobi identities, which we recall in appendix B. The linear relations (3.27) between the $n_i$'s and the $a_i$'s coincide with those derived for gauge theory amplitudes in [21], where covariant expressions for the kinematical numerators at any multiplicity were obtained. The crucial point here is that we have not referred to the actual expressions of the $n_i$'s derived in the previous sections but simply started from the MSS representation (3.23) of the string amplitude. Therefore, the final result (3.26) can be either a gauge theory or a gravity amplitude, depending on the type of string theory in which the low-energy limit is taken, as indicated in table 1. In heterotic string theory, one could even choose the $n_i$'s to be color factors $c_i$, in which case (3.26) would correspond to the scattering amplitude of five color cubic scalars. From the perspective of the low-energy limit of string theory, this corresponds to compactifying both sectors of the bosonic string on the same torus as the one of the heterosis mechanism and then choosing external states bipolarized in the internal color space. This string theory, of course, suffers from all the known inconsistencies typical of the bosonic string. However, at tree level, if one decouples by hand in both sectors the terms which create non-planar corrections in the heterotic string, the pathological terms disappear. Therefore, the formula (3.26) can be extended to produce the tree-level five-point amplitudes of three theories: gravity, Yang-Mills and cubic scalar color. This is done by simply choosing different target-space polarizations for (L) and (R), as in table 1, to which, in view of the previous discussion, we could now add a new line for the cubic scalar color model. The point of this demonstration was to illustrate the fact that the product of the left- and right-moving sectors produces in the low-energy limit the form of the amplitude in which the double copy construction is transparent, and that this is not a peculiarity of gravity but rather a feature of any of the three theories. This suggests that both the BCJ duality in gauge theory and the double copy construction of gravity follow from the inner structure of the closed string and its low-energy limit. Furthermore, the MSS chiral block representation exists for n-point open string amplitudes [21,35,36], so to extend these considerations to any multiplicity, one would only need to rigorously prove that any open string pole channel corresponds to a closed string one, and verify that level matching correctly ties the two sectors together. Then the MSS construction would produce the BCJ construction at any multiplicity, and this would constitute a string-theoretic proof that the BCJ representation of Yang-Mills amplitudes implies the double copy construction of gravity amplitudes at tree level.
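For comparison with the KLT approach discussed next, recall that at four points the KLT relation collapses to the compact product $M_4 = -i s_{12}\, A(1,2,3,4)\, \tilde{A}(1,2,4,3)$ (overall factors are convention-dependent). The following numeric sketch, in an ad-hoc complex-spinor setup of our own with Parke-Taylor MHV amplitudes assumed, checks a corollary: evaluating the same product with relabeled legs gives the same gravity amplitude, as the permutation invariance of $M_4$ requires.

```python
# Tree-level KLT sanity check at four points.  Assumed conventions: Parke-Taylor
# MHV amplitudes A = i<12>^4/(<12><23><34><41>), KLT kernel -i*s12; overall
# normalizations cancel in the comparison.  We verify numerically that
#   -i s12 A(1,2,3,4) A(1,2,4,3)  ==  -i s13 A(1,3,2,4) A(1,3,4,2).
import numpy as np

rng = np.random.default_rng(1)

# Complex-momentum kinematics: random lambda's, then solve momentum
# conservation sum_i lam_i (x) lamt_i = 0 for lamt_3, lamt_4.
lam = rng.normal(size=(4, 2)) + 1j*rng.normal(size=(4, 2))
lamt = np.zeros((4, 2), dtype=complex)
lamt[:2] = rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2))
M = -(np.outer(lam[0], lamt[0]) + np.outer(lam[1], lamt[1]))
sol = np.linalg.solve(np.stack([lam[2], lam[3]], axis=1), M)
lamt[2], lamt[3] = sol[0], sol[1]

ab = lambda i, j: lam[i, 0]*lam[j, 1] - lam[i, 1]*lam[j, 0]      # <ij>
sb = lambda i, j: lamt[i, 0]*lamt[j, 1] - lamt[i, 1]*lamt[j, 0]  # [ij]
s  = lambda i, j: ab(i, j)*sb(j, i)                              # s_ij

def A(o):  # color-ordered MHV amplitude, negative helicities on legs 1 and 2
    return 1j*ab(0, 1)**4 / (ab(o[0], o[1])*ab(o[1], o[2])*ab(o[2], o[3])*ab(o[3], o[0]))

M_a = -1j*s(0, 1)*A((0, 1, 2, 3))*A((0, 1, 3, 2))
M_b = -1j*s(0, 2)*A((0, 2, 1, 3))*A((0, 2, 3, 1))
print(np.isclose(M_a, M_b))   # True: the KLT product is label-independent
```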
Finally, note that this procedure is different from the KLT approach [17] in that it relates the numerators of cubic diagrams in the various theories, rather than the amplitudes themselves. All of this motivates our study of the double copy construction at higher loops in the purely closed string sector. We conclude this section with the observation that, in the recent works related to the "scattering equations" [44-50], there appeared new formulas for tree-level scattering amplitudes of cubic color scalars, gauge bosons and gravitons, in which color and kinematics play symmetric roles. It was also suggested that this approach might be generalizable to higher-spin amplitudes. Naturally, it would be interesting to make a direct connection between the scattering equations and the approach based on the low-energy limit of the closed string.

One loop in field theory

In this section, we turn to the study of the BCJ duality at one loop. Here and in the rest of this paper, we will deal only with amplitudes with the minimal number of physical external particles in supersymmetric theories: four. At one loop, a color-dressed four-gluon amplitude can be represented as in eq. (4.1). Recall that the color factors can also be written in terms of color traces; in this way, one can easily relate the color-kinematics representation (2.1) to the primitive amplitudes that are defined as the coefficients of the leading color traces [53].

Double copies of one N = 4 SYM

The maximally supersymmetric Yang-Mills theory has the simplest BCJ numerators. At four points, they are known up to four loops [2,6,54], and only at three loops do they start to depend on the loop momenta, in accordance with the string theory understanding [55-57]. For example, the one-loop amplitude is just a sum of three scalar boxes [58], which is consistent with the color-kinematics duality in the following way: the three master boxes written in (4.1) have the same trivial numerator $\langle 12\rangle^2 [34]^2 = i s t A^{\text{tree}}(1^-, 2^-, 3^+, 4^+)$ (which we will always factor out henceforward), and all triangle numerators are equal to zero by the Jacobi identities. Thanks to that particularly simple BCJ structure of N = 4 SYM, the double copy construction for N ≥ 4 supergravity amplitudes simplifies greatly [7]. Indeed, as the second gauge theory copy does not have to obey the BCJ duality, one can define its box numerators simply by taking its entire planar integrands and putting them in a sum over a common box denominator. Since the four-point N = 4 numerators are independent of the loop momentum, the integration acts solely on the integrands of the second Yang-Mills copy and thus produces its full primitive amplitudes, as in eq. (4.3). The N = 8 gravity amplitude is then simply given by the famous result of [58], in terms of the scalar box integrals $I_4$ recalled in appendix A. For a less trivial example, let us consider the case of N = 6 gravity, for which the second copy is the contribution of a four-gluon scattering in N = 2 SYM. It is helpful to use the one-loop representation (4.5) of the latter, where the last term is the gluon amplitude contribution from the N = 2 hyper-multiplet (or, equivalently, the N = 1 chiral multiplet in the adjoint representation) in the loop. This multiplet is composed of two scalars and one Majorana spinor, so its helicity content can be summarized as $(+\tfrac{1}{2},\, 0,\, 0,\, -\tfrac{1}{2})$.
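The trivial numerator quoted above is a standard spinor-helicity identity: with the Parke-Taylor form $A^{\text{tree}} = i\langle 12\rangle^4/(\langle 12\rangle\langle 23\rangle\langle 34\rangle\langle 41\rangle)$, momentum conservation implies $\langle 12\rangle^2 [34]^2 = i s t A^{\text{tree}}$. A quick numeric verification in the same ad-hoc complex-spinor setup as in the previous sketch (our own conventions, not the paper's):

```python
# Numeric check of <12>^2 [34]^2 = i*s*t*A_tree(1-,2-,3+,4+), with the
# Parke-Taylor amplitude A_tree = i <12>^4/(<12><23><34><41>).  The spinor
# setup (complex momenta, bracket sign conventions) is our own assumption.
import numpy as np

rng = np.random.default_rng(0)

lam = rng.normal(size=(4, 2)) + 1j*rng.normal(size=(4, 2))
lamt = np.zeros((4, 2), dtype=complex)
lamt[:2] = rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2))
# Solve momentum conservation sum_i lam_i (x) lamt_i = 0 for lamt_3, lamt_4
M = -(np.outer(lam[0], lamt[0]) + np.outer(lam[1], lamt[1]))
sol = np.linalg.solve(np.stack([lam[2], lam[3]], axis=1), M)
lamt[2], lamt[3] = sol[0], sol[1]

ab = lambda i, j: lam[i, 0]*lam[j, 1] - lam[i, 1]*lam[j, 0]      # <ij>
sb = lambda i, j: lamt[i, 0]*lamt[j, 1] - lamt[i, 1]*lamt[j, 0]  # [ij]

s, t = ab(0, 1)*sb(1, 0), ab(1, 2)*sb(2, 1)
A_tree = 1j*ab(0, 1)**4 / (ab(0, 1)*ab(1, 2)*ab(2, 3)*ab(3, 0))
print(np.isclose(ab(0, 1)**2 * sb(2, 3)**2, 1j*s*t*A_tree))      # True
```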
If we use eq. (4.3) to "multiply" eq. (4.5) by N = 4 SYM, we obtain a similar expansion (4.6) for the gravity amplitudes, where "N = 6 matter" corresponds to the formal multiplet which contains a spin-3/2 Majorana particle. Its contribution to the amplitude can be constructed through eq. (4.3) as (N = 4) × (N = 2 hyper), where the second copy is also well known [59,60] and is most easily expressed in terms of the scalar integrals $I_n$, as in eq. (4.7). This lets us immediately write down the result (4.8) from [7]. A comment is due: here and below, we use the scalar integrals $I_n$ recalled in appendix A just as a way of writing down integrated expressions, so the scalar triangles in eq. (4.8) do not contradict the no-triangle property of N = 4 SYM. As explained earlier, the BCJ double copy construction behind eq. (4.3), and its special case (4.8), contains only the box topology, with all the scalar integrals in eq. (4.7) collected into non-scalar boxes. In the former case, one only needs the full amplitudes from [60] to obtain the result (4.10) of [7,61], which is valid to all orders in $\epsilon$. All of the one-loop constructions with N > 4, as we discuss in section 6, fit automatically into the string-theoretic picture of the BCJ double copy. This is due to the fact that, just as the field-theoretic numerators are independent of the loop momentum, the N = 4 string-based integrands do not depend on the Schwinger proper times.

Double copy of one N = 2 SYM

The second option to compute $M^{\text{1-loop}}_{N=4,\text{matt}}(1^-, 2^-, 3^+, 4^+)$ requires the BCJ representation for the N = 2 hyper-multiplet amplitude. The latter can also be used to construct gravity amplitudes with N < 4 supersymmetries [12], such as (N = 1 gravity) = (N = 1 SYM) × (pure Yang-Mills). However, we will consider it mostly in the context of obtaining the BCJ numerators for N = 2 SYM, whose double-copy square gives N = 4 supergravity coupled to two N = 4 matter multiplets, eq. (4.12). As a side comment, the problem of decoupling the matter fields in this context is analogous to the more difficult issue of constructing pure gravity as a double copy of pure Yang-Mills [62]. Most importantly for the purposes of this paper, $A^{\text{1-loop}}_{N=2,\text{hyper}}$ is the simplest four-point amplitude with a non-trivial loop-momentum dependence of the numerators, i.e. $O(\ell^2)$, which is already reflected in its non-BCJ form (4.7) by the fact that no rational part is present in the integrated amplitudes. The rest of this paper is mostly dedicated to studying, both via the field-theoretic BCJ construction and via string theory, the double copy

(N = 4 matter) = (N = 2 hyper)² .  (4.13)

Here, the left-hand side stands for the contribution of the vector matter multiplets running in the loop of a four-graviton amplitude in N = 4 supergravity, while the right-hand side indicates the multiplets running in the four-gluon loop in SYM. In the rest of this section, we obtain the field-theoretic numerators for the latter amplitude contribution. In the literature [12,59,63,64], it is also referred to as the contribution of the N = 1 chiral multiplet in the adjoint representation and is not to be confused with the N = 1 chiral multiplet in the fundamental representation, the calculation for which can be found in [62]. By calling the former multiplet the N = 2 hyper, we avoid that ambiguity and keep the effective number of supersymmetries explicit.
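The supersymmetric decompositions (4.5), (4.6) and (4.9) are, at the level of state counting, simple bookkeeping identities for the multiplet content running in the loop; the sketch below (the helicity tables are standard, the code organization is our own) checks two of them:

```python
# Bookkeeping sketch behind the supersymmetric decompositions: multiplets as
# helicity -> multiplicity tables.  This is state counting only; the
# amplitude-level statement is of course stronger.
from collections import Counter

def tensor(m1, m2):
    """Helicity content of the tensor product of two multiplets."""
    out = Counter()
    for h1, n1 in m1.items():
        for h2, n2 in m2.items():
            out[h1 + h2] += n1 * n2
    return out

n4_vect  = Counter({1: 1, 0.5: 4, 0: 6, -0.5: 4, -1: 1})   # 16 states
n2_hyper = Counter({0.5: 1, 0: 2, -0.5: 1})                #  4 states
n2_vect  = Counter({1: 1, 0.5: 2, 0: 2, -0.5: 2, -1: 1})   #  8 states

# (4.5) as state content: N=2 vector = N=4 vector - 2 x (N=2 hyper)
diff = n4_vect.copy()
diff.subtract(n2_hyper)
diff.subtract(n2_hyper)
assert diff == n2_vect

# The double copy (N=4 vector) x (N=2 hyper): top helicity 3/2, i.e. the
# spin-3/2 "N = 6 matter" multiplet of the text
print(sorted(tensor(n4_vect, n2_hyper).items()))
```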
Ansatz approach

The standard approach to finding kinematic numerators which satisfy the Jacobi identities is through an ansatz [5,12], as, to our knowledge, there is no general constructive way of doing this, apart from the case of N = 4 SYM at one loop [27]. Recently, however, there has been considerable progress [12,13] in applying orbifold constructions to finding BCJ numerators. In [12,13,64], several types of ansatz were used for one-loop four-point computations, starting from three independent master box numerators from which all other cubic diagrams were constructed through Jacobi identities. In comparison, our ansatz starts with two master boxes, $n_{\text{box}}(1^-, 2^-, 3^+, 4^+)$ and $n_{\text{box}}(1^-, 2^+, 3^-, 4^+)$. From Feynman-rules power-counting, string theory, and supersymmetry cancellations [59], we expect the numerators to have at most two powers of the loop momentum. Moreover, the denominator of (4.7) contains $s$ and $s^2$, but only $t$ and $u$. Thus, it is natural to consider the minimal ansatz of eq. (4.14), in which $P_2(s,t)$ is a homogeneous polynomial of degree 2 and $P_{4;2;1}(s,t;\tau_1,\tau_2,\tau_3;\lambda)$ is a homogeneous polynomial of degree 4, of degree not greater than 2 in the arguments $\tau_1$, $\tau_2$ and $\tau_3$, and at most linear in the last argument λ. The 84 coefficients of these polynomials are the free parameters of the ansatz, which we shall determine from the kinematic Jacobi identities and the cut constraints. Following [12], we introduced in (4.14) parity-odd terms which integrate to zero in gauge theory but may contribute to gravity when squared in the double copy construction.

Figure 4: Box graph symmetries

The first constraints on the coefficients of the ansatz come from imposing the obvious graph symmetries shown in figure 4 and given by (4.16), after which 45 coefficients remain unfixed. Another set of constraints comes from the cuts. In particular, the quadruple cuts provide 10 more constraints on the master boxes alone. As we define the triangle and bubble numerators through numerator Jacobi identities, such as the one shown in figure 2, the 35 remaining parameters propagate to the other numerators and then define the full one-loop integrand. Note that whenever there are multiple Jacobi identities defining one non-master numerator, the master graph symmetries (4.16) guarantee that they are equivalent. Double cuts are sensitive not only to boxes, but also to triangles and bubbles. Imposing them gives a further 18 constraints. As a consistency check, we can impose the double cuts without imposing the quadruple cuts beforehand; in that case the double cuts provide 28 conditions, with the 10 quadruple-cut constraints being a subset of those. In any case, we are left with 17 free parameters after imposing all physical cuts. For simplicity, we choose to impose another set of conditions: the vanishing of all bubble numerators, including bubbles on external lines (otherwise rather ill-defined in the massless case). This is consistent with the absence of bubbles in our string-theoretic setup of section 5. Due to the sufficiently high level of supersymmetry, this does not contradict the cuts, and it helps us eliminate 14 out of the 17 free coefficients. Let us call the three remaining free coefficients α, β and γ. For any values of those, we can check by direct computation that our solution integrates to (4.7), which is a consequence of the cut-constructibility of supersymmetric gauge theory amplitudes. However, there is still one missing condition, which we will find from the d-dimensional cuts in section 4.4.
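The logic of the procedure, namely writing the most general polynomial numerator and then imposing symmetry and cut constraints as linear equations on its coefficients, is easy to mechanize. The following toy-scale sketch (a hypothetical 6-coefficient ansatz with mock constraints, not the actual 84-parameter one) illustrates the workflow:

```python
# Illustrative workflow: polynomial ansatz -> linear constraints -> solve.
# Toy scale only; the constraints below are mock stand-ins for the graph
# symmetries and cut conditions described in the text.
import sympy as sp

s, t, ell = sp.symbols('s t ell')
c = sp.symbols('c0:6')

# Toy master numerator: homogeneous degree 2 in (s, t), up to quadratic in ell
n_box = c[0]*s**2 + c[1]*s*t + c[2]*t**2 + (c[3]*s + c[4]*t)*ell + c[5]*ell**2

eqs = []
# Mock graph symmetry: invariance under s <-> t together with ell -> -ell
diff = sp.expand(n_box - n_box.subs({s: t, t: s, ell: -ell}, simultaneous=True))
eqs += sp.Poly(diff, s, t, ell).coeffs()
# Mock cut condition: on the slice ell = 0 the numerator must equal s*t
diff = sp.expand(n_box.subs(ell, 0) - s*t)
eqs += sp.Poly(diff, s, t).coeffs()

sol = sp.linsolve(eqs, list(c))
print(sol)   # two-parameter family (c3, c5 free), like alpha, beta, gamma above
```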
Double copy and d-dimensional cuts

The double copy of the gluon amplitude with the N = 2 hyper multiplet in the loop naturally produces the graviton amplitude with the N = 4 matter multiplet in the loop, as in (4.13). First, we check that the gravity integrand satisfies all cuts. So far we have been considering only four-dimensional cuts and cut-constructible gauge theory amplitudes, for which it does not matter whether the $\ell^2$ term in the numerator is considered as 4- or $(4-2\epsilon)$-dimensional during integration. After all, the difference would be just $\mu^2 = \ell^2_{(4)} - \ell^2_{(d)}$, which integrates to $O(\epsilon)$. Note that we consider the external momenta to be strictly four-dimensional, so the scalar products with the external momenta $k_i$, such as $\ell \cdot k_i$, are effectively four-dimensional. The issue is that the N = 4 gravity amplitudes are no longer cut-constructible, so the fact that the double copy satisfies all four-dimensional cuts is not enough to guarantee the right answer. This is reflected by the fact that the difference between $\ell^4_{(4)}$ and $\ell^4_{(d)}$ now integrates to $O(1)$ and produces rational terms. It seems natural to treat $\ell$ in (4.14) as strictly four-dimensional. Then our gravity solution integrates to (4.17), where $r_\Gamma$ is the standard prefactor defined in (A.4). That coincides with the known answer from [61] and the truncated version of (4.10) [7], if $\gamma = -3/2$. For the double copy to have predictive power beyond the cut-constructible cases, one should start with gauge theory numerators that satisfy all d-dimensional cuts. For N = 2 SYM, the difference should just be related to the $\mu^2$ ambiguity mentioned above. As we already know that we miss only one extra condition, it suffices to consider the simplest cut sensitive to $\mu^2$ terms, i.e. the s-channel cut of $A^{\text{1-loop}}_{N=2,\text{hyper}}(1^-, 2^-, 3^+, 4^+)$, which vanishes in four dimensions (figure 5). We can either construct this cut from the massive scalar and fermion amplitudes provided in [60], or simply use their final d-dimensional expression (4.18) for this color-ordered amplitude. Unifying all our gauge theory numerators into one box and making use of the massive s-cut kinematics, we retrieve the cut expression (4.19), which coincides with the s-cut of (4.18) if $\gamma = -3/2$. Thus, we have reproduced the missing condition invisible to the four-dimensional cuts. We preserve the remaining two-parameter freedom and write down the full set of numerators for the N = 2 hyper (or, equivalently, N = 1 chiral) multiplet amplitude in eqs. (4.20) and (4.21), where for brevity we omitted the trivial kinematic prefactor $\langle 12\rangle^2 [34]^2$. The numerators that we obtain are non-local, as they contain inverse powers of the Mandelstam invariants on top of those already included in their denominators. This is a feature of using the spinor-helicity formalism for BCJ numerators [5,12,13,65] and is understood to be due to the choice of helicity states for the external gluons. Indeed, the numerators given in [66] in terms of polarization vectors are local, though gauge-dependent. We first note that the box numerators (4.20) do not possess constant terms. Later, we will relate this to a similar absence of constant terms in the string-based integrand. Moreover, the triangles integrate to vanishing contributions to the gauge theory amplitude (4.7). Nonetheless, they are necessary for the double copy construction of the gravity amplitude (4.17), where they turn out to integrate to purely six-dimensional scalar triangles $I_3^{d=6-2\epsilon}$. The easiest way to check these statements is to explicitly convert the triangle numerators (4.21) to the Feynman parameter space, as explained in appendix C.
We will use both of these facts later in section 6. Finally, there are conjugation relations that hold for the final amplitudes but are not automatic for the integrand numerators. Although they are not necessary for the integrated results to be correct, one might choose to enforce them at the integrand level, which would fix both remaining parameters to the values (4.23) and thus produce the unambiguous bubble-free BCJ solution. However, leaving the two parameters unfixed has its advantages: it lets one analytically discern pure coincidences from systematic patterns at the integrand level.

One loop in string theory

This section is mostly a review of the detailed calculation given in [67]; its purpose is to explain the string-theoretic origin of the worldline integrands of N = 2 SYM and of the symmetric N = 4 supergravity, in the heterotic and type II strings respectively, in $d = 4-2\epsilon$ dimensions. The reader not familiar with the worldline formalism may simply observe that the general formula (5.13) contains a contribution to the gravity amplitude which mixes the left- and right-moving sectors and thus makes it look structurally different from the double copy construction. The N = 2 gauge theory and the N = 4 gravity integrands are then given in eqs. (5.19) and (5.26), respectively, in terms of the Schwinger proper-time variables. They are integrated according to (5.16). These are the only building blocks needed to go directly to section 6, where the link between the worldline formalism and the usual Feynman diagrams is described starting from the loop-momentum space.

Field theory amplitudes from string theory

A detailed set of rules known as the Bern-Kosower rules was developed in [53, 68-70] to compute gauge theory amplitudes from the field theory limit of fermionic models in heterotic string theory. It was later extended to asymmetric constructions of supergravity amplitudes in [61,70] (see also the review [71] and the approach of [72-74] using the Schottky parametrization). One-loop amplitudes in the open string are known at any multiplicity in the pure spinor formalism [75]. Here we recall the general mechanism for extracting the field theory limit of string amplitudes at one loop in the slightly different context of orbifold models of the heterotic and type II strings. A general four-point closed-string amplitude takes the form of eq. (5.1). The normalization constant $\mathcal{N}$ is different for the heterotic and type II strings; we will omit it throughout this section except in the final formula (5.16), where the normalization is restored. The $z_i$ are the positions of the vertex operators on the complex torus $\mathcal{T}$, and $z_4$ has been set to $z_4 = i\tau_2$ to fix the genus-one conformal invariance. On the torus, the fermionic fields $\psi^\mu$ and $\bar\psi^\nu$ can have different boundary conditions when transported along the A- and B-cycles (which correspond to the shifts $z \to z+1$ and $z \to z+\tau$, respectively). These boundary conditions define spin structures, denoted by two integers $a, b \in \{0, 1\}$ such that

$$\psi(z+1) = e^{i\pi a}\, \psi(z) \,, \qquad \psi(z+\tau) = e^{i\pi b}\, \psi(z) \,.$$

In an orbifold compactification, these boundary conditions can be mixed with target-space shifts, and the fields X and ψ can acquire non-trivial boundary conditions, mixing the standard spin structures (or Gliozzi-Scherk-Olive sectors) with more general orbifold sectors [76,77]. The vertex operator correlation function (5.1) is computed in each orbifold and GSO sector, using Wick's theorem with the corresponding two-point functions.
The total correlation function can then be written in a schematic form in which $s$ and $\bar s$ run over the various GSO and orbifold sectors of the theory with their corresponding conformal blocks, and $\mathcal{Z}_{s\bar s}$ is defined so that it contains the lattice factor $\Gamma_{10-d,10-d}$, or twistings thereof, according to the orbifold sectors and background Wilson lines. The exponent of the plane-wave factor $e^{\mathcal{Q}}$ is written out explicitly in eq. (5.6); the first term on its right-hand side gives a vanishing contribution, due to the canceled propagator argument, in the same way as at tree level. The second term in eq. (5.6) was absent at tree level (see eq. (3.5)), but now generates left-right mixed contractions in case the two sectors have coinciding target spaces, i.e. in gravity amplitudes. However, in gauge theory amplitudes in the heterotic string, the target spaces are different, and contractions like (5.6) do not occur. The main computation that we use in this section was performed in great detail in [79], and the explicit expressions for the partition functions, lattice factors and conformal blocks may be found in the introductory sections thereof. The mechanism by which the string integrand descends to the worldline (or tropical) integrand is qualitatively the same as at tree level, although it has to be adapted to the geometry of the genus-one worldsheet: phases of complex numbers on the sphere become real parts of coordinates on the complex torus $\mathbb{C}/(\mathbb{Z}+\tau\mathbb{Z})$. As explained in [43], this is due to the fact that the complex torus is the image of the actual Riemann surface under the Abel-Jacobi map. In the limit $\alpha' \to 0$, this map simplifies to a logarithmic map, and the coordinates $z, \bar z$ on the surface are related to the coordinates $\zeta, \bar\zeta$ on the complex torus by $\zeta = -2i\pi\alpha' \ln z$; the same is true for the modular parameter τ, whose real and imaginary parts are linked to the phase and modulus of $q$, respectively. In particular, one considers families of tori becoming thinner and thinner as $\alpha' \to 0$. On these very long tori, only massless states can propagate (the massive states are projected out), so the level-matching condition of string theory associated to the cylindrical coordinate on the torus can be integrated out, and the tori become worldloops. Quantitatively, one performs the well-known change of variables relating τ and the $z_i$'s to the proper times, where T is the Schwinger proper time of the loop and the $t_i$ are the proper times of the external legs along it (see figure 6). Strictly speaking, as originally observed in [80] and recently reviewed in [43], one should cut off the fundamental domain $\mathcal{F}$ by a parameter L, so that the region of interest for us is actually the upper part $\operatorname{Im}\tau > L$ of $\mathcal{F}$, which in the field theory limit gives a hard Schwinger proper-time cutoff $T > \alpha' L$; here we trade this cutoff for dimensional regularization with $d = 4-2\epsilon$. We should also mention that, to obtain a truly d-dimensional amplitude, one should not forget to decouple the Kaluza-Klein modes of the compactified string by sending the compactification radii R to zero, $R \sim \sqrt{\alpha'}$ (for instance, in this way one sets the untwisted lattice factor $\Gamma_{10-d,10-d}$ to 1). The field theory worldline amplitude is obtained after that (possibly lengthy) process of integrating out the real parts of τ and of the $z$'s, and one is left with an integral of the form (5.8) [81], where $u_i = t_i/T$ are rescaled proper times.
As reviewed later in section 6, the exponential factor $e^{-T\mathcal{Q}}$ can also be regarded as the result of exponentiating the loop-momentum denominator of the corresponding Feynman diagram, with $\mathcal{Z}_{s\bar s}\, \mathcal{W}_{s\bar s}$ coming from its numerator. Formula (5.8) can be written in terms of derivatives of the worldline Green's function [82,83], which descends from the worldsheet one; for example, the two-point function of eq. (5.5) then reduces to its worldline analogue. The partition function factor $\mathcal{Z}_{s\bar s}$, in the field theory limit, just induces a sum over multiplet decompositions, as in eqs. (4.5), (4.9) and (4.6), but does not change the qualitative nature of the objects. Moreover, it is worth mentioning that the field theory limit of the mixed contractions (5.6) produces only factors of $1/T$, as in eq. (5.11), without further dependence on the positions $t_i$ of the legs on the worldloop. Note that, in general, factors of $1/T^k$ modify the overall factor $1/T^{d/2-(n-1)}$ and thus act as dimension shifts $d \to d+2k$. Let us now discuss the differences between color and kinematics in the integrand of eq. (5.8). In heterotic string theory, the two sectors have different target spaces and do not communicate with each other. In particular, the right-moving sector is a color CFT: it is responsible for the color ordering in the field theory limit, as demonstrated in the Bern-Kosower papers [53, 68-70], and its factor is

$$\mathcal{W}^{(R),\,\text{color}} = \sum_{S \in \mathfrak{S}_{n-1}} \operatorname{tr}\!\left(T^{a_{S(1)}} \cdots T^{a_{S(n-1)}}\, T^{a_n}\right) \Theta\!\left(u_{S(1)} < \cdots < u_{S(n-1)} < u_n\right) , \qquad (5.12)$$

where the sum runs over the set $\mathfrak{S}_{n-1}$ of permutations of $(n-1)$ elements. It is multiplied by a factor $\mathcal{W}^{(L),\,\text{kin}}$ which contains the kinematical loop-momentum information. In gravity, both sectors are identical: they both carry kinematical information, and they can mix with each other. To sum up, we can write the worldline formulas (5.13) for the gauge theory and gravity amplitudes. Besides the fact that these formulas are not written in the loop-momentum space, the structure of the integrand of the gravity amplitude (5.13b) is different from the double-copy one in eq. (2.4): it has non-squared terms that come from left-right contractions. This paper is devoted to the analysis of their role from the double copy point of view, in the case of the four-point one-loop amplitude in (N = 2) × (N = 2) gravity. The kinematic correlators $\mathcal{W}^{\text{kin}}$ are always expressed as polynomials in the derivatives of the worldline Green's function G, with the notation fixed in eq. (5.14), where the factors of T take into account the fact that the derivative is actually taken with respect to the unscaled variables $t_i$: $\partial_{t_i} = T^{-1}\partial_{u_i}$. To illustrate the link with the loop-momentum structure, let us recall the qualitative dictionary between the worldline power-counting and the loop-momentum one [79,84,85]. For definiteness, in order to have well-defined conventions for the worldline integration, we define a theory-dependent worldline numerator $W^X$ to carry only the loop-momentum-like information. Qualitatively, double derivatives count as squares of single derivatives. At one loop, an easy way to see this is to integrate by parts: when the second derivative $\ddot{G}(u_{ij})$ hits the exponential $e^{-T\mathcal{Q}}$, a linear combination of $\dot{G}$'s comes down (see the definition of $\mathcal{Q}$ in eq. (5.10)) and produces $\dot{G}^2$. In the non-trivial cases, where one does not just have a single $\ddot{G}$ as a monomial, it was proven in [53, 68-70] that it is always possible to integrate out all double derivatives after a finite number of integrations by parts. Another possibility is to observe that the factor $1/T$ present in $\ddot{G}$ produces a dimension shift $d \to d+2$ in the worldline integrands, which in terms of the loop momentum schematically corresponds to adding $\ell^2$ to the numerator of the d-dimensional integrand.
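The dimension-shift remark can be made concrete with a one-line integral: inserting $1/T^k$ under the proper-time measure reproduces exactly the same integral evaluated in $d+2k$ dimensions. A minimal numeric sketch, under schematic conventions of our own (measure factor $T^{\,n-1-d/2}\, dT/T$ with $n = 4$ legs and a stand-in constant Q):

```python
# Numeric illustration of the dimension shift: an extra 1/T^k under the
# proper-time integral equals the same integral in d + 2k dimensions.
# Conventions are schematic (our assumption): measure T^(n-1-d/2) dT/T, n = 4.
from scipy.integrate import quad
import numpy as np

n, Q = 4, 2.3

def proper_time_integral(d, k=0):
    power = n - 1 - d/2 - k
    val, _ = quad(lambda T: T**(power - 1) * np.exp(-T*Q), 0, np.inf)
    return val

d, k = 2.0, 1
print(np.isclose(proper_time_integral(d, k=k),
                 proper_time_integral(d + 2*k)))   # True
```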
In (5.16a), the sum runs over the six orderings $S \in \mathfrak{S}_3$, three of which, (123), (231) and (312), are inequivalent and correspond to the three kinematic channels (s, t), (t, u) and (u, s). Moreover, the tensorial dependence on the polarization vectors is factored out of the integrals. The field strength $F^{\mu\nu}$ is the linearized field strength defined by $F^{\mu\nu} = \varepsilon^\mu k^\nu - k^\mu \varepsilon^\nu$, and $R^{\mu\nu\rho\sigma} = F^{\mu\nu} F^{\rho\sigma}$. The tensor $t_8$ is defined in [86, appendix 9.A] in such a way that $t_8 F^4$ is the usual combination of single traces $\operatorname{tr}(F^{(1)}F^{(2)}F^{(3)}F^{(4)})$ and double traces $\operatorname{tr}(F^{(1)}F^{(2)})\operatorname{tr}(F^{(3)}F^{(4)})$, summed over the permutations of (2,3,4), where the traces are taken over the Lorentz indices. In the spinor-helicity formalism, it reduces, up to normalization, to the prefactor $\langle 12\rangle^2 [34]^2$ encountered in section 4. The compactness of the expressions (5.16) is characteristic of the worldline formalism. In particular, the single function $W^X$ determines the whole gauge theory amplitude in all of its kinematic channels. Note that, contrary to the tree-level case, where integrations by parts have to be performed to ensure the vanishing of tachyon poles, at one loop the field theory limit can be computed without integrating out the double derivatives (at least when there are no triangles).

N = 2 SYM amplitudes from string theory

In this section, we provide the string-theoretic integrands for the scattering amplitudes of four gauge bosons in N = 2 SYM in the heterotic string. Starting from the class of N = 2 four-dimensional heterotic orbifold compactifications constructed in [87,88] and following the recipe of the previous section, detailed computations were given in [67]; we shall not repeat them here but simply state the result. First of all, we recall the expansion (4.5) of the N = 2 gluon amplitude as a combination of the N = 4 amplitude and that of the N = 2 hyper-multiplet. The corresponding worldline numerators for the color-ordered amplitudes of eq. (5.16) are given in eq. (5.18) and, according to eq. (4.5), combine into $W^{N=2,\text{vect}}$ as in eq. (5.19). The polynomial $W_3$, derived originally in the symmetric N = 4 supergravity construction of [67], is defined in eq. (5.20), where we introduce the shorthand notation $G_{ij} = G(u_i, u_j)$ and, accordingly, the $\dot{G}_{ij}$ defined in eq. (5.14). In the spinor-helicity formalism, for the gauge choice (5.21), its explicit expression is of the form $\dot{G}^2$, so according to the dictionary (5.15) it corresponds to four-dimensional box numerators with two powers of the loop momentum. This statement is consistent with the results of the field-theoretic calculation of section 4, namely with the box numerators (4.20). Moreover, it obviously has no constant term (which would originate from $(\operatorname{sign}(u_{ij}))^2$), consistent with the absence of constant terms in the loop-momentum expressions. We also checked that this worldline numerator integrates to the correct field theory amplitudes (4.7).

Absence of triangles

A direct application of the Bern-Kosower formalism immediately rules out the possibility of having worldline triangles in the field theory limit; however, it is worth recalling the basic procedure showing this. On the torus, trees attached to loops are produced by vertex operators colliding with one another, exactly as at tree level.

Figure 7: s12-channel "would-be" worldline triangle

For instance, consider an $s_{12}$-channel pole, as drawn in figure 7. It originates from a region of the worldsheet moduli space where $|z_{12}| \ll 1$.
Locally, the worldsheet looks like a sphere, and in particular the short-distance behavior of the torus propagator is the same as on the sphere. Repeating the same reasoning as at tree level, a pole will be generated if and only if a term like $1/|z_{12}|^2$ is present in the numerator factor $\mathcal{W}^{(L)} \mathcal{Z}\, \mathcal{W}^{(R)}$. In the gauge current sector, this requires a term like $S_{a,b}(z_{12})$, which comes along with a single or double trace, like $\operatorname{tr}(\ldots T^{a_1} T^{a_2} \ldots)$ or $\operatorname{tr}(T^{a_1} T^{a_2})\operatorname{tr}(T^{a_3} T^{a_4})$, and causes no trouble. However, in the supersymmetric sector, this term has to be a $\partial G(z_{12})$, which amounts to extracting from $W_3$ a term that obviously does not provide the expected $1/z_{12}$ behavior. Note that $(\partial G(z_{12}))^2$ does not work either, as it is killed by the phase integration. It is not difficult to check that no triangles are generated in the other channels, and this is independent of the gauge choice. As we shall explain later in the comparison section, our BCJ triangles (4.21) are invisible in the worldline formulation, which is consistent with the previous observation. We could also try to observe the Jacobi identities on $W_3$ directly on the worldline. A natural way to do so is to consider the difference $W_3\big|_{u_1<u_2} - W_3\big|_{u_2<u_1}$ and try to associate it to a BCJ triangle. This quantity, when it is non-zero, can be found to be proportional to $u_i - u_j$, which definitely vanishes when considering a triangle-like configuration with coinciding points $u_i \to u_j$.

(2,2) N = 4 supergravity amplitudes from string theory

The four-graviton amplitudes in (2,2) string theory models have been studied in [67] using the type II symmetric orbifold constructions of [89]. Here we shall not recall the computation but only describe the structure of the numerator $\mathcal{W}^{(L)} \mathcal{Z}\, \mathcal{W}^{(R)}$. In the symmetric (2,2) constructions, both the left-moving and the right-moving sectors of the type II string have half-maximal supersymmetry. Therefore, this leaves room for internal left-right contractions, in addition to the usual chiral correlators, when applying Wick's theorem to compute the conformal blocks. Schematically, the integrand can be written in a form in which the partition function has explicitly produced a sum over the orbifold sectors, giving the terms 1 and $-2W_3$. After taking the field theory limit, one obtains the worldline numerators (5.26) for N = 4 supergravity coupled to two N = 4 vector multiplets, where $W_3$ is the same as in eq. (5.20) and the polynomial $W_2$ is defined in eq. (5.27), with its explicit expression given in the gauge choice (5.21). According to the dictionary (5.15), in the field-theoretic interpretation, $W_3^2$ corresponds to a four-dimensional box numerator of degree four in the loop momentum, whereas $W_2$ can be interpreted as a degree-two box numerator in six dimensions, due to its dimension-shifting factor $1/T$ characteristic of the left-right-mixed contractions, see eq. (5.11). Following the supersymmetry decomposition (4.9), we can rewrite eq. (5.26) in terms of integrands which respectively integrate to the expressions (5.30), matching the field theory amplitudes from section 4 (µ being an infrared mass scale).

Comparison of the approaches

In this section, we compare the field-theoretic and the string-based constructions for the gauge theory and gravity amplitudes. We start with the simplest cases of section 4.1, in which the BCJ construction contains at least one N = 4 gauge theory copy.
Looking at the string-based representations for the N > 4 supergravity amplitudes in eqs. (5.30a) and (5.30b), one sees that they do verify the double copy prescription, because the N = 4 Yang-Mills numerator $W^{N=4,\text{vect}}$ is simply 1. Therefore, regardless of the details of how we interpret the worldline integrand in terms of the loop momentum, the double copy prescription (2.4) is immediately deduced from the relations which express the gravity worldline integrands as products of gauge theory ones. These N > 4 cases match directly their field-theoretic construction described in section 4.1. Unfortunately, they do not allow us to say anything about the string-theoretic origin of the kinematic Jacobi identities, as there are no triangles in either approach, so that they require only the trivial identity 1 − 1 = 0. We can also derive, without referring to the full string-theoretic construction, the form of the N = 6 supergravity amplitude, simply by using its supersymmetry decomposition (4.6), which, according to eq. (5.19), can be rewritten as in eq. (6.3). The first really interesting case at four points is the symmetric construction of N = 4 gravity with two vector multiplets, whose string-based numerator was given in eq. (5.26). This numerator is almost the square of (5.19), up to the term $W_2$ which came from the contractions between the left-movers and the right-movers. Due to the supersymmetry expansion (4.9), the same holds for the string-based numerator of $M^{\text{1-loop}}_{N=4,\text{matt}}$. In the following sections, we will compare the integrands of that amplitude coming from string and field theory, and see that the situation is quite subtle. The aim of the following discussion (and, to a large extent, of the paper) is to provide a convincing illustration that the presence of the total derivatives, imposed by the BCJ representation of the gauge theory integrands in order to obtain the correct gravity integrals, has a simple physical meaning from the point of view of closed string theory. As we have already explained, in the heterotic string construction of Yang-Mills amplitudes, the left- and right-moving sectors do not communicate with each other, as they have different target spaces. However, in gravity amplitudes, the two sectors mix due to left-right contractions. Our physical observation is that these two aspects are related. To show this, we will go through a rather technical procedure in order to compare the loop-momentum and Schwinger proper-time expressions, to finally write the equality (6.37) of the schematic form

left-right contractions = (BCJ total derivatives)² + ( ... ) .  (6.4)

We shall start with the gauge theory analysis and see that, despite the absence of left-right contractions, the string theory integrand is not completely blind to the BCJ representation and has to be corrected so as to match it at the integrand level, see eq. (6.21). On the gravity side, the essential technical difficulty that we will face is the following: in the two approaches, the squaring is performed in terms of different variables, and a square of an expression in the loop-momentum space does not exactly correspond to a square in the Schwinger proper-time space. This induces the presence of "square-correcting terms", the terms contained in ( ... ) on the right-hand side of eq. (6.4).

Going from loop momenta to Schwinger proper times

In principle, there are two ways to compare loop-momentum expressions to worldline ones: one can either transform the loop-momentum expressions into Schwinger proper-time ones, or the converse.
We faced technical obstacles in doing the latter, mostly because of the quadratic nature of the gauge theory loop-momentum polynomials, so in the present analysis we shall express the loop-momentum numerators in terms of the Schwinger proper-time variables. We use the standard exponentiation procedure [90,91] (see also [84,85] for an n-point review of the procedure in connection with the worldline formalism), which we review here. First of all, let us consider the scalar box (6.5). We exponentiate the four propagators using the Schwinger trick

$$\frac{1}{D_i} = \int_0^\infty dt_i \, e^{-t_i D_i} \,,$$

and obtain a Gaussian integral in the shifted loop momentum $\tilde{\ell}_{(d)} = \ell_{(d)} + K$. In this expression, the scalar $\mathcal{Q}$ is the second Symanzik polynomial, and $K$ is a shift vector built from the proper times and the external momenta; of course, the expressions for $\mathcal{Q}$ and $K$ change with the ordering in this parametrization. If we go to the worldline proper times $t_i$, or rather their rescaled versions $u_i$, defined as sums of the Feynman parameters as pictured in figure 8, one obtains a parametrization valid for any ordering of the legs, in which the vector $K$ takes the form given in [84,85], and the scalar $\mathcal{Q}$ also acquires an invariant form in these worldline parameters, already given in (5.10). Finally, the Gaussian integral over $\tilde{\ell}_{(d)}$ is straightforward to perform, and we are left with the expression (6.14), in which the integration domain $\{0 < u_1 < u_2 < u_3 < 1\}$ gives the box (6.5) ordered as $(k_1, k_2, k_3, k_4)$, whereas the two other orderings are given by the integration domains $\{0 < u_2 < u_3 < u_1 < 1\}$ and $\{0 < u_3 < u_1 < u_2 < 1\}$.

Comparison of gauge theory integrands

Now we can repeat the same procedure for a box integral $I[n(\ell)]$ with a non-trivial numerator. Our BCJ box numerators (4.20) are quadratic in the four-dimensional loop momentum $\ell$ and can be schematically written as in eq. (6.15), where the label S refers to one of the inequivalent orderings {(123), (231), (312)}. One can verify that the quadratic form $A_{\mu\nu}$ does not depend on the ordering. Note that we did not write a constant term in eq. (6.15), because there is none in our master BCJ boxes (4.20). The exponentiation produces an expression which depends both on the Schwinger proper times and on the shifted loop momentum $\tilde{\ell}$. The linear term in $\tilde{\ell}$ integrates to zero in the gauge theory amplitude, but produces a non-vanishing contribution when squared in the gravity amplitude. Lorentz invariance projects $A_{\mu\nu}\tilde{\ell}^\mu\tilde{\ell}^\nu$ onto its trace, which turns out to vanish in our ansatz, eq. (6.17). We then define $\langle n^{(S)}_{\text{box}} \rangle$ to be the result of the Gaussian integration over $\tilde{\ell}$, eq. (6.18). Note that here and below, for definiteness and normalization, we use the bracket notation $\langle \cdots \rangle$ for integrand numerators in terms of the rescaled Schwinger proper times $u_i$, so that $I[n]$ can be written in any integration-parameter space, with the integration domain in the $u_i$ corresponding to the momentum ordering in the denominator. From the previous reasoning, it is easy to establish the following dictionary: a polynomial $n(\ell)$ of degree k in the loop momentum is converted to a polynomial $\langle n \rangle$ of the same degree in the Schwinger proper times, where the inverse powers $1/T^p$ correspond to terms of the form $\tilde{\ell}^{2p}$, and both consistently act as dimension shifts, as can be seen from the standard replacement rules given later in eq. (6.34). This is consistent with (5.15). We can recast the previous procedure in table 2, which summarizes the common points between the worldline formalism and the usual Feynman diagrams.

Field theory | Worldline
Parameters: loop momentum $\ell$ | Schwinger proper times $T$, $u_i$
Denominators: propagators $1/D_i$ | exponential factor $e^{-T\mathcal{Q}}$
Numerators: polynomials $n(\ell)$ | polynomials $\langle n\rangle(u_i, 1/T)$
Table 2: Basic ingredients of the loop integrand expressions in field theory and the field theory limit of string theory.
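As a cross-check of this exponentiation-plus-rescaling step, the toy two-propagator case can be done symbolically end to end; the sketch below (generic positive symbols D1, D2 standing for inverse propagators, our own notation) verifies that the Schwinger parametrization and the rescaled proper-time form agree:

```python
# Toy check of the exponentiation procedure: 1/(D1 D2) from Schwinger
# parameters t_i, then the rescaling t1 = T x, t2 = T (1 - x) (Jacobian T),
# the two-propagator analogue of the box parametrization above.
import sympy as sp

T, x, y = sp.symbols('T x y', positive=True)
t1, t2 = sp.symbols('t1 t2', positive=True)
D1, D2 = sp.symbols('D1 D2', positive=True)

# Direct Schwinger parametrization: each propagator exponentiated separately
direct = sp.integrate(sp.exp(-t1*D1), (t1, 0, sp.oo)) \
       * sp.integrate(sp.exp(-t2*D2), (t2, 0, sp.oo))   # = 1/(D1*D2)

# Rescaled form: integrate the total proper time T first, then x
inner = sp.integrate(T*sp.exp(-T*(x*D1 + y*D2)), (T, 0, sp.oo))  # = 1/(x*D1+y*D2)**2
param = sp.integrate(inner.subs(y, 1 - x), (x, 0, 1))

print(sp.simplify(direct - param))   # 0
```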
We apply this method to the BCJ box numerators, in order to compare them to the string-based numerator W_3. These two quantities have equal integrated contributions, as was noted before. However, at the integrand level, they turn out not to be equal. We denote their difference by δW_3, defined through

  ⟨n^(S)_box⟩ = W_3 + δW_3 .   (6.21)

By definition, δW_3 integrates to zero separately in each sector. Making contact with the tree-level analysis, where the integrands had to be put in the particular MSS representation in string theory to ensure the manifest BCJ duality, one can wonder if this term δW_3 has a similar meaning at one loop. We note that the information that it carries, of order ℓ², is not trivial and is sensitive to the BCJ solution, since the quadratic terms in the box numerators (4.20) are fixed to comply with the kinematic Jacobi identities. Therefore, δW_3 seems to be a messenger of the BCJ representation information and to indicate a specific worldline representation of the string integrand. In order to be more precise about this statement, let us first rewrite δW_3 in terms of worldline quantities, i.e. as a polynomial in the worldline Green's functions. As it is of order u_i², it has to come from a polynomial with at most binomials of the form Ġ_ij Ġ_kl. By a brute-force ansatz, we have expressed δW_3 as a function of all possible quantities of that sort. Imposing the defining relation (6.21) in the three sectors results in a three-parameter space of possibilities for δW_3 (see the full expression (D.1) in the appendix). All consistency checks were performed on this numerator. At this level, the parameters α and β of the BCJ numerators (4.20), (4.21) are still free. It turns out that they create a non-locality in the worldline integrand, of the form tu/s². To cancel it, one has to enforce the condition

  1 − α + β = 0 ,   (6.22)

consistent with the choice (4.23). A representative element of the family of δW_3's obtained from our ansatz is given in eq. (6.23). In order to safely interpret δW_3 as a natural string-based object, it is important to verify that its string ancestor would not create any triangles in the field theory limit. We will refer to this condition as the "string-ancestor-gives-no-triangles" criterion. This is not a trivial property, and it can be used to rule out some terms as possible worldline objects (see, for example, the discussion in appendix E). In the present case, it was explicitly checked that the full form of δW_3 given in appendix D satisfies this property, following the procedure recalled in section 5.3.

Now that we have expressed δW_3 in this way, let us look back at what is the essence of the tree-level MSS approach. It is based on the fact that the correct tree-level form of the integrand is reached after a series of integrations by parts.^17 One might hope that the worldline numerator W_3 + δW_3 is actually the result of applying a chain of integrations by parts. Unfortunately, we have not found any sensible way in which the worldline numerator W_3 + δW_3 could be obtained from W_3 by such a process. The reason for this is the presence of squares in δW_3, of the form Ġ_ij², which cannot be eliminated by adjusting the free parameters of eq. (D.1). These terms are problematic for basically the same reason as at tree level: to integrate them out by parts, one always needs a double derivative and a double pole to combine together.
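This pairing can be made concrete numerically. The following sketch assumes T = 1, a single rescaled Green's function G(u) = |u| − u² on the circle, and a hypothetical constant c standing in for the kinematic coefficient multiplying G_12 in exp(−T Q); conventions may differ from the paper's by factors of T:

```python
import numpy as np
from scipy.integrate import quad

# Rescaled one-loop worldline Green's function on the circle (T = 1):
G = lambda x: abs(x) - x**2          # G(u1 - u2)
Gdot = lambda x: np.sign(x) - 2*x    # dG/du1
# Gddot(x) = 2*delta(x) - 2: the delta function is the "double pole".

c = 0.7  # hypothetical stand-in for the kinematic coefficient of G12

# Total-derivative statement on the circle:
#   int_0^1 du [Gddot(u) - c*Gdot(u)**2] * exp(-c*G(u)) = 0 .
# The delta in Gddot contributes 2*exp(-c*G(0)) = 2, so the smooth part
# of the integrand must integrate to exactly -2:
smooth, _ = quad(lambda x: (-2.0 - c*Gdot(x)**2) * np.exp(-c*G(x)), 0.0, 1.0)
print(smooth + 2.0)  # ~ 0: Gdot^2 only cancels against the delta in Gddot
```

The printed number vanishes only because the delta function hidden in G̈ (the worldline avatar of a double pole) is included alongside the smooth part; without double derivatives in the ansatz, the Ġ² terms have nothing to cancel against.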
This is seen at one loop by inspecting the total-derivative identity obtained by differentiating the product Ġ_12 exp(−T Q) with respect to the position of leg 1, in which the square Ġ_12² always comes paired with the double derivative G̈_12. A similar equation with ∂_1 replaced by ∂_2 does not help, as the relative sign between the double derivative and the square is unchanged. This kind of identity shows that, in the absence of double derivatives in δW_3, W_3 and (W_3 + δW_3) are not related by a chain of integrations by parts. The reason why we cannot include these double derivatives in our ansatz for δW_3 is that they would show up as 1/T terms in eq. (6.18), which is impossible in view of the tracelessness of A_µν, eq. (6.17). Therefore, the introduction of δW_3 in the string integrand to make it change representation, although not changing the integrated result and satisfying the "string-ancestor-gives-no-triangles" property, appears to be a non-IBP process, in contrast with the MSS procedure. It would be interesting to understand whether this property is just an artifact of our setup, or whether it is more generally a sign that string theory does not obey the full generalized gauge invariance of the BCJ representation.

Finally, we note that δW_3 is not directly related to the BCJ triangles. Recall that the latter are defined through the BCJ color-kinematics duality and are crucial for the double copy construction of gravity. But in section 5.3, we saw that there are no triangles in our string-theoretic construction. So even though we find total derivatives both on the field theory side,^18 in n_box + n_tri (6.25), and on the worldline side in the BCJ-inspired form W_3 + δW_3 (6.26), where the BCJ triangles and δW_3 integrate to zero, they cannot be made equal by going to proper-time variables, since^19

  ⟨n_tri⟩ = 0 .   (6.27)

In addition to that, δW_3 is truly a box integrand. In any case, the important point for the next section is that both δW_3 and the BCJ triangles contribute to the gravity amplitude when squared. We will try to relate them to the new term W_2 that appears in gravity from the left-right mixing terms.

Comparison of gravity integrands

The goal of this final section is to dissect the BCJ gravity numerators obtained by squaring the gauge theory ones, in order to perform a thorough comparison with the string-based result. In particular, we wish to illustrate that the role of the left-right contractions is to provide the terms corresponding to the squares of the total derivatives in loop-momentum space (the BCJ triangles and the parity-odd terms).

String-based BCJ representation of the gravity integrand

At the level of integrals, we can schematically equate the gravity amplitude obtained from the two approaches, eq. (6.28), where we omitted the integration measures and the factors of exp(−T Q). In order to relate the left-right contractions in W_2 to the triangles n_tri², we first need to consider the relationship between the squares W_3² and n_box² via ⟨n_box⟩², using the result of the previous section. From eq. (6.21), we know that at the gauge theory level, the integrands match up to a total derivative δW_3. Therefore, let us introduce this term by hand in the string-based gravity integrand, eq. (6.29).

^18 In eq. (6.25), we omitted the denominators for notational ease. ^19 See appendix C for more details on eq. (6.27).

The cost of this non-IBP change of parametrization is the introduction of a correction to W_2, which we call δW_2, in the string-based integrand. Note that this term is not a total derivative.
The meaning of this correcting term is that, when we change W_3 to ⟨n_box⟩, we also have to modify W_2. In this sense, it is induced by the Jacobi relations in the gauge theory numerators n_box. Moreover, had we managed to do only integrations by parts on W_3², W_2 would have received corrections due to the left-right contractions appearing in the process. These would show up as factors of 1/T, as already explained below eq. (5.11). Again, to be complete in the interpretation of δW_2 as a proper worldline object, we should make sure that it obeys the "string-ancestor-gives-no-triangles" criterion, as we did for δW_3. Since we have a symmetric construction for the gravity amplitude, it is natural to assume that both sectors contribute equally to this string-theoretic correction (6.31). Following the analysis of section 5.3, it is easy to convince oneself that, since neither W_3 nor δW_3 gave any triangles in gauge theory, no combination thereof will either. Therefore, it seems legitimate to interpret δW_2 as a string-based correction, and this lets us rewrite the worldline numerator of the gravity amplitude as

  ⟨n_box²⟩ + ⟨n_tri²⟩ = ⟨n_box⟩² + (W_2 + δW_2)/2 .   (6.32)

Loop-momentum squares vs. worldline squares

The next step is to relate ⟨n_box²⟩ to ⟨n_box⟩². Let us first look at the gravity box numerator. As before, it can be written as a function of the shifted loop momentum ℓ̃, eq. (6.33), where we omitted the terms odd in ℓ̃, since they integrate to zero. Notice, however, that the terms of n_box linear in ℓ̃, which used to be total derivatives in gauge theory, now contribute to the integral through the squares of the parity-odd contractions ε(k_1, k_2, k_3, ·) contained in them. To obtain the proper-time integrand ⟨n_box²⟩, we go again through the exponentiation procedure of section 6.1, followed by a dimension shift [60], together with the standard tensor reduction,^20 in which ℓ̃^µ ℓ̃^ν → (ℓ̃²/d) η^µν and ℓ̃^µ ℓ̃^ν ℓ̃^ρ ℓ̃^σ → (ℓ̃²)² η^{µ(ν} η^{ρσ)}/(d(d+2)), where η^{µ(ν} η^{ρσ)} stands for η^µν η^ρσ + η^µρ η^νσ + η^µσ η^νρ. We obtain eq. (6.35), or, equivalently, using (6.18), eq. (6.36). This formula describes precisely how squaring in loop-momentum space differs from squaring in Schwinger parameter space, so we will call the terms on the right-hand side of (6.36) square-correcting terms. Note that the fact that there are only 1/T^k terms with k > 0 on the right-hand side of eq. (6.36) is not accidental, and would have held even without the tracelessness of A, eq. (6.17). It can indeed be seen in full generality that squaring and the bracket operation commute at the level of the O(T^0) terms, while they do not commute at the level of the 1/T^k terms. Below we connect this with the structural fact that left-right contractions naturally yield 1/T terms. In appendix E, we also provide another description of these terms, based on a trick which lets us rewrite the 1/T² terms as worldline quantities.

Final comparison

Using eq. (6.32), we rewrite the contribution of W_2 + δW_2 at the integrated level as in eq. (6.37). In total, we have argued that the total contribution on the left-hand side is a modification of W_2 generated by the BCJ representation of the gauge theory numerators in a non-IBP-induced way. This was supported by the aforementioned "string-ancestor-gives-no-triangles" criterion satisfied by δW_2. We are now able to state the concluding remarks on our interpretation of eq. (6.37). Its right-hand side is made of two parts, of different physical origin:

- the squares of gauge theory BCJ triangles,
- the square-correcting terms.
Note that some of the latter come from the contributions of the gauge theory integrand which were linear in the loop momentum, including the parity-odd terms ε_µνρσ k_1^ν k_2^ρ k_3^σ present in B^(S). Formula (6.37) shows clearly the main observation of this paper: the squares of the total derivatives introduced into the gravity amplitude by the BCJ double copy construction physically come from the contractions between the left- and right-moving sectors in string theory. At a more technical level, the contribution of these contractions to the string-based integrand also had to be modified to take into account the BCJ representation of the gauge theory amplitudes.

This being said, the presence of the square-correcting terms on the right-hand side deserves a comment. They contain the dimension-shifting factors of 1/T, characteristic of the left-right contractions, as already mentioned. It is therefore not surprising that the square-correcting terms show up on the right-hand side of eq. (6.37), since the left-hand side is the (BCJ-modified) contribution of the left-right contractions. More interestingly, this seems to suggest that it should be possible to absorb them into the left-right mixing terms systematically by doing IBPs at the string theory level. However, if one considers the worldline polynomials corresponding to (2AK + B)/T, they imply a string-theoretic ancestor of the form ∂∂G × ∂G∂G, which eventually does not satisfy the "string-ancestor-gives-no-triangles" criterion.^21 Therefore, not all of the square-correcting terms possess a nice worldline interpretation, and this makes the situation not as simple as doing IBPs. This fact is to be connected with the impossibility of obtaining the BCJ worldline gauge theory numerator W_3 + δW_3 by integrating W_3 by parts in our setup. Perhaps the main obstacle here is that the vanishing of the BCJ triangles after integration does not exactly correspond to the absence of string-based triangles before integration. All of this suggests that there might exist BCJ representations which cannot be obtained just by doing integrations by parts. Their characterization, in terms of the subset of the generalized gauge invariance respected by string theory, would be very interesting. For instance, it might be that our choice to put all the BCJ bubbles to zero, allowed by the generalized gauge invariance, is not sensible in string theory for this particular amplitude with the gauge choice (5.21). Notwithstanding, we believe that our main observations hold very generally: the BCJ representation can be seen in string theory, and the squares of total derivatives that it requires have their physical origin in the left-right mixing terms of the closed string.

Discussion and outlook

In this paper, we have studied various aspects of the BCJ double copy construction. At tree level, we used the MSS chiral block representation both in heterotic and type II closed strings to rewrite the five-point field theory amplitudes in a way in which color factors can be freely interchanged with kinematic numerators to describe scattering amplitudes of cubic color scalars, gluons or gravitons. In this context, the Jacobi identities of [21] appear as consequences of the MSS representation and are on the same footing as the equivalence between color and kinematics. In particular, we did not have to use them to write down the final answer.
Working out the n-point combinatorics along the lines of our five-point example would constitute a new direct proof of the color-kinematics duality at tree level. At one loop, we performed a detailed analysis of four-point amplitudes in N = 4 supergravity from the double copy of two N = 2 SYM theories, both in field theory and in the worldline limit of string theory. This symmetric construction automatically requires adding two matter vector multiplets to the gravity spectrum. Our choice of the BCJ ansatz, for which the BCJ bubbles were all set to zero, is an effective restriction of the full generalized gauge invariance of the BCJ numerators. We focused on the non-trivial loop-momentum structure of the BCJ gauge theory integrands, which we expressed as worldline quantities to allow comparison with the string-based ones. The major drawback of this procedure is that, in the process, one loses some of the information contained in the loop-momentum gauge theory numerators. For example, our BCJ gauge theory triangles turned out to vanish after integration in this procedure, so one could think that they are invisible to string theory. However, the box numerators match the string-based numerator up to a new term that we called δW_3. This term integrates to zero in each kinematic channel, thus guaranteeing the matching between the two approaches. This total derivative δW_3 shifts the string-based integrand to the new representation W_3 + δW_3. We argued that this process is not IBP-induced, in the sense that W_3 + δW_3 cannot be obtained simply by integrating W_3 by parts. We gave a possible clue for interpreting this puzzle, based on the fact that the restriction of the generalized gauge invariance might be incompatible with string theory in the gauge choice (5.21). It would be interesting to investigate this point further.

At the gravity level, we wanted to argue that the characteristic ingredients of the BCJ double copy procedure, namely the squares of the terms required by the kinematic Jacobi identities, are generated in string theory by the left-right contractions. The first observation is that going to the non-IBP-induced representation W_3 → W_3 + δW_3 in the string-based integrand induces a modification of the left-right mixing terms, W_2 → W_2 + δW_2, which can be safely interpreted as a worldline quantity, because it obeys the "string-ancestor-gives-no-triangles" criterion. Furthermore, the difference between squaring in loop-momentum space and in Schwinger proper-time space induces the square-correcting terms. We related them to W_2 + δW_2 and observed that they are of the same nature as the left-right mixing terms in string theory. Such terms are generically obtained from IBPs, which suggests that the right process (if it exists) in string theory to recover the full BCJ construction makes use of worldsheet integration by parts, just like the MSS construction at tree level. However, these square-correcting terms do not obey the "string-ancestor-gives-no-triangles" property, which makes them ill-defined from the string-theoretic point of view. We suppose that the issues with the non-IBP nature of δW_2 and δW_3 might come from the incompatibility between our restriction of the generalized gauge invariance and our string-based computation in the gauge (5.21). In any case, this shows that string theory has something to say about generalized gauge invariance.
We believe that this opens very interesting questions related to the process of finding BCJ numerators by the ansatz approach and to a possible origin of this generalized gauge invariance in string theory. Finally, we present the bottom line of our paper in formula (6.37): we identified a representation of the left-right mixing terms in which they are related to the squares of the BCJ triangles and the squares of the parity-odd terms (i ε_µνρσ k_1^µ k_2^ν k_3^ρ ℓ^σ)². Besides the previous discussion on the nature of the square-correcting terms on the right-hand side of eq. (6.37), we believe this sheds some light on the a priori surprising fact that total derivatives in the BCJ representation of gauge theory amplitudes play such an important role. The physical reason is deeply related to the structure of the closed string: in the heterotic string, the left-moving sector does not communicate with the right-moving one in gluon amplitudes, while this happens naturally in gravity amplitudes in the type II string and generates new terms, invisible from the gauge theory perspective.

Concerning further developments, in addition to the open issues that we already mentioned, it would be natural to explore the possibility that the MSS chiral blocks might generalize to loop-level amplitudes, and to understand the role of δW_3 and generalized gauge invariance in this context. For that, one would have to account for the left-right mixing terms, generated naturally by worldsheet integration by parts, which must play a central role starting from one loop. Such an approach, if it exists, would amount to disentangling the two sectors in a certain sense, and a string-theoretic understanding of such a procedure would definitely be very interesting.

The (2n − 5)!! color factors are obtained from the six ones of (3.12) plugged into (3.27) and give rise to the expected result.

C. Integrating the triangles

The BCJ triangle numerators (4.21) are linear in the loop momentum, so if we apply to them the exponentiation procedure of section 6.1, we get terms of the form

  n_tri(ℓ̃ + K) ∝ B_µ (ℓ̃^µ + K^µ) + C ,   (C.1)

where K = −Σ_i u_i k_i. The linear term integrates to zero by parity, and the constant term B·K + C vanishes for each triangle numerator. For example, for the numerator (4.21c),

  B^µ = −s k_3^µ + t k_1^µ − u k_2^µ + (4iu/s) ε^{µ_1 µ_2 µ_3 µ} k_{1µ_1} k_{2µ_2} k_{3µ_3} ,   C = su .   (C.2)

So this cancellation can be easily checked: taking into account that u_4 = 1 and that the particular triangle (4.21c) is obtained from the worldline box parametrization by setting u_1 = 0, we indeed obtain B·K = −su = −C. Moreover, in the gravity amplitude, the squared triangle numerators reduce to B_µ B_ν ℓ̃^µ ℓ̃^ν, since the cross terms are odd in ℓ̃ and the constant part vanishes. The standard tensor reduction transforms ℓ̃^µ ℓ̃^ν into ℓ̃² η^µν/4, which is known to induce a dimension shift [60] from d = 4 − 2ε to d = 6 − 2ε. As a result, in the double copy construction the BCJ triangles produce six-dimensional scalar triangle integrals (A.6) with the coefficients (E.7).

D. Explicit expression of δW_3

In section 6.2, we expressed δW_3 in terms of Ġ's, cf. eq. (D.1), where α and β are the free parameters of the BCJ ansatz, and A_1, A_2 and A_3 are those coming from the matching to a string-inspired ansatz.

E. Trick to rewrite the square-correcting terms

In this appendix, we use a trick to partly rewrite the square-correcting terms (6.36) as string-based quantities. This section is mostly provided here for the interesting identity (E.9), which relates the BCJ triangles to the quadratic part of the box numerators.
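The engine of the trick below is the distributional second derivative of the worldline propagator. As a hedged sketch, assuming the common one-loop convention G_ij = T(|u_ij| − u_ij²) with derivatives taken with respect to the unrescaled proper times (normalizations may differ from the paper's):

  G̈_ij = (2/T)(δ(u_ij) − 1)   ⇒   1/T² = (1/4) G̈_ij G̈_kl + (1/T²)[ δ(u_ij) + δ(u_kl) − δ(u_ij) δ(u_kl) ] .

Each delta function pinches a propagator (lowering the number of legs n by one) while the accompanying 1/T shifts d → d + 2, which is how a four-dimensional box with numerator G̈_ij G̈_kl, two six-dimensional scalar triangles and a four-dimensional scalar bubble can be read off from such an identity.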
First, we introduce a new element in the reduction technique. Recall that factors of 1/T^k modify the overall factor 1/T^{d/2−(n−1)} and thus act as dimension shifts d → d + 2k. Therefore, (2A_µν K^ν + B_µ)²/(2T) is the numerator of a six-dimensional worldline box. However, we choose to treat the 1/T² term differently. Since A_µν does not depend on the ordering, we can rewrite the 1/T² square-correcting term as a full worldline integral, eq. (E.1), where the proper-time domain in u_i contains all three inequivalent box orderings. Now let us consider the second derivative of the worldline propagator to obtain a useful identity, eq. (E.3), valid for any i, j, k, l. The factors of 1/T² combine with the delta functions and thus properly change the number of external legs and the dimension, such that from the right-hand side of (E.3) we can read off the following integrals: a four-dimensional worldline box with numerator G̈_ij G̈_kl, two six-dimensional scalar triangles, and a four-dimensional scalar bubble. Since we are free to choose the indices i, j, k, l, we can as well use a linear combination of several such choices, as long as we correctly average the sum. For instance, we can create s-, t- and u-channel six-dimensional scalar triangles (along with four-dimensional scalar bubbles) if we choose (i, j, k, l) ∈ {(1, 2, 3, 4), (1, 4, 2, 3), (1, 3, 2, 4)} and sum over them with coefficients λ_s, λ_t and λ_u. This lets us relate the scalar-triangle contributions (E.5) coming from the square-correcting terms (E.1) to (−2) times the squared BCJ triangles.^22

This seeming coincidence deserves a few comments. We defined A_µν as the coefficient of ℓ^µ ℓ^ν in the BCJ box numerators (4.20), but in principle, we know that the boxes could have been made scalar in the scalar integral basis, as in (4.7). To comply with the kinematic Jacobi identities, the BCJ color-kinematics duality reintroduces ℓ² into the boxes by shuffling them with the scalar triangles and bubbles. In our final BCJ construction, we set the bubble numerators to zero, so the information that was inside the original scalar triangles and bubbles is equally encoded in the loop-momentum dependence of the BCJ box and triangle numerators. This is why the coincidence between the A_µν and the λ_c is not miraculous. Finally, we can rewrite eq. (6.37) using our trick, picking up in particular the combination

  λ_s G̈_12 G̈_34 + λ_t G̈_14 G̈_23 + λ_u G̈_13 G̈_24 − (1/T²)(λ_s δ_12 δ_34 + λ_t δ_14 δ_23 + λ_u δ_13 δ_24) .   (E.10)

We could not apply the same trick to the 1/T square-correcting terms, because they do not seem to have a nice string-theoretic interpretation with respect to the "string-ancestor-gives-no-triangles" criterion. More precisely, we expressed them as a worldline polynomial by the same ansatz method that we used to determine the expression of δW_3, and observed explicitly that this polynomial does not satisfy the criterion, i.e. it creates triangles in the field theory limit. Moreover, we checked the non-trivial fact that the coefficients of these triangles cannot be made equal to those of the BCJ triangles.
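As a closing cross-check, the kinematic cancellation B·K = −su = −C quoted in appendix C is mechanical enough to script. A minimal sympy verification, assuming all-incoming massless momenta with 2k_1·k_2 = 2k_3·k_4 = s, 2k_1·k_4 = 2k_2·k_3 = t, 2k_1·k_3 = 2k_2·k_4 = u and s + t + u = 0 (the parity-odd epsilon term of (C.2) drops out of B·K by momentum conservation):

```python
import sympy as sp

s, u, u2, u3 = sp.symbols('s u u2 u3')
t = -s - u  # massless four-point kinematics: s + t + u = 0

# Dot products k_i.k_j (all momenta incoming, k_i^2 = 0):
dots = {(1, 2): s/2, (3, 4): s/2, (1, 4): t/2, (2, 3): t/2, (1, 3): u/2, (2, 4): u/2}
dot = lambda i, j: 0 if i == j else dots[(min(i, j), max(i, j))]

# B from eq. (C.2), without the epsilon term: B = -s*k3 + t*k1 - u*k2,
# so its contraction with each external momentum reads:
Bk = {i: -s*dot(3, i) + t*dot(1, i) - u*dot(2, i) for i in (1, 2, 3, 4)}

# K = -(u1*k1 + u2*k2 + u3*k3 + u4*k4), with u1 = 0 and u4 = 1 for the
# triangle (4.21c) obtained from the worldline box parametrization:
BK = -(u2*Bk[2] + u3*Bk[3] + Bk[4])

print(sp.simplify(BK))  # -> -s*u, i.e. B.K = -su = -C, for any u2, u3
```

The u_2 and u_3 dependence drops out because B is orthogonal to k_2 and k_3 on this kinematics, which is why the constant term vanishes over the whole integration domain.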
Age-Dependent Increase in Schmidt-Lanterman Incisures and a Cadm4-Associated Membrane Skeletal Complex in Fatty Acid 2-hydroxylase Deficient Mice: a Mouse Model of Spastic Paraplegia SPG35

PNS and CNS myelin contain large amounts of galactocerebroside and sulfatide with 2-hydroxylated fatty acids. The underlying hydroxylation reaction is catalyzed by fatty acid 2-hydroxylase (FA2H). Deficiency in this enzyme causes a complicated hereditary spastic paraplegia, SPG35, which is associated with leukodystrophy. Mass spectrometry-based proteomics of purified myelin isolated from sciatic nerves of Fa2h-deficient (Fa2h−/−) mice revealed an increase in the concentration of the three proteins Cadm4, Mpp6 (Pals2), and protein band 4.1G (Epb41l2) in 17-month-old, but not in young (4- to 6-month-old), Fa2h−/− mice. These proteins are known to form a complex, together with the protein Lin7, in Schmidt-Lanterman incisures (SLIs). Accordingly, the number of SLIs was significantly increased in 17-month-old but not 4-month-old Fa2h−/− mice compared to age-matched wild-type mice. On the other hand, the relative increase in the SLI frequency was less pronounced than expected from the Cadm4, Lin7, Mpp6 (Pals2), and band 4.1G (Epb41l2) protein levels. This suggests that the latter do not only reflect the higher SLI frequency, but that the concentration of the Cadm4-containing complex itself is increased in the SLIs or compact myelin of Fa2h−/− mice and may potentially play a role in the pathogenesis of the disease. The proteome data are available via ProteomeXchange with identifier PXD030244.

Supplementary Information: The online version contains supplementary material available at 10.1007/s12035-022-02832-4. Silvia Jordans and Robert Hardt contributed equally to this work.

Introduction

Schmidt-Lanterman incisures (SLIs), also known as myelin incisures or Schmidt-Lanterman clefts, are cytoplasmic channels of the Schwann cells in the myelin internodes. It is generally assumed that SLIs facilitate transport of metabolites, ions, and signaling molecules between peri-nuclear and adaxonal cytoplasmic regions by reducing diffusion distances, owing to radial diffusion through gap junctions [1]. Galactosylceramide and its sulfated derivative sulfatide are abundant sphingolipids in the nervous system [2]. A large percentage of galactosylceramide and sulfatide in CNS and PNS myelin of mammals contains 2-hydroxylated fatty acyl residues [3,4]. In myelinating cells, the 2-hydroxylation reaction is exclusively catalyzed by the enzyme fatty acid 2-hydroxylase (FA2H), a cytochrome b5 domain-containing enzyme of the endoplasmic reticulum [5,6]. Although free fatty acids are substrates for the enzyme in an in vitro activity assay [7], X-ray structural analyses suggest that ceramides may be additional in vivo substrates [8]. The functional role of the 2-hydroxylation modification of sphingolipids is not fully understood. Hydroxylated sphingolipids appear to have unique roles in signal transduction [9] and may affect the turnover of membrane proteins through their influence on the mobility of lipids in membrane subdomains (or lipid rafts) [10-12]. Mutations in the FA2H gene that reduce or abolish the activity of the enzyme cause a complicated form of hereditary spastic paraplegia type 35 (SPG35) associated with leukodystrophy, which is also known as fatty acid hydroxylase-associated neurodegeneration (FAHN) and as a subtype of neurodegeneration with brain iron accumulation (NBIA) [13].
More than 40 disease-associated human FA2H mutations have been reported [14]. Fa2h-deficient (Fa2h−/−) mice serve as an animal model of SPG35/FAHN and develop a phenotype that is reminiscent of the symptoms of the human disease [15,16]. In a previous study, we found evidence for alterations in the CNS myelin proteome of Fa2h−/− mice [17]. Although SPG35, like hereditary spastic paraplegias in general, is characterized by degeneration of upper motor neurons, peripheral neuropathy has been described in about 30% of the patients [14]. Karle et al. [18] estimated a prevalence of peripheral neuropathy of about 60% across all cases of hereditary spastic paraplegia. In the present report, we performed a myelin proteome study of sciatic nerves, in order to examine possible molecular changes in the PNS myelin of Fa2h−/− mice.

Antibodies

Antibodies used in this study are listed in Table 1 and were kind gifts from Peter Brophy and Arthur M. Butt or were purchased from the following companies: Abcam (Cambridge, UK), Antibodies Incorporated (Davis, California, USA), Biorbyt (Cambridge, UK), GeneTex (Irvine, California, USA), Jackson ImmunoResearch (Philadelphia, Pennsylvania, USA), Merck (Darmstadt, Germany), and Thermo Fisher (Waltham, Massachusetts, USA).

Purification of Myelin from Sciatic Nerves

Myelin from sciatic nerves was isolated according to Caroni and Schwab [19] with the following modifications. Sciatic nerves were removed from mice that had been killed by cervical dislocation and stored at −80 °C before they were homogenized in isotonic 9.2% sucrose solution using a Dounce homogenizer (pooled nerves from one mouse). Myelin was then purified by sucrose density step gradient (9.2% and 28.4% sucrose) centrifugation. Myelin isolated from the interphase was washed with water, resuspended in 1 mM EDTA, and stored at −80 °C.

Lipid Extraction and Thin Layer Chromatography

Total lipid extracts from sciatic nerves were prepared as described [20]. Briefly, nerves were homogenized in methanol using a Dounce homogenizer, and then chloroform and 1% HClO4 were added to obtain a final ratio of 1:1:0.9 (chloroform/methanol/HClO4; v/v/v). Samples were mixed and centrifuged to facilitate phase separation. The organic phase was dried in a vacuum centrifuge, dissolved in chloroform/methanol (1:1; v/v), and sonicated (5 min) in a sonication water bath. Aliquots of the lipids were applied to silica gel 60 HPTLC plates (Merck) and separated in a solvent-saturated chromatography tank using chloroform/methanol/water (70:30:4; v/v/v) as the solvent system. Lipids were stained by spraying the TLC plates with a solution of 625 mM cupric sulfate, 8% phosphoric acid, followed by heating to 150 °C for 5 min [21].

Mass Spectrometry and Data Analysis

Tandem mass tag 6-plex (TMTsixplex) labeling and liquid chromatography-tandem mass spectrometry (LC-MS/MS) measurements were performed as described previously [17]. Briefly, purified myelin samples were delipidated by acetone precipitation and then subjected to RapiGest (Waters, Milford, Massachusetts, USA)-assisted tryptic digestion (enzyme-to-protein ratio = 1:100), including cysteine reduction and alkylation. Afterwards, peptides were labeled using TMTsixplex Isobaric Label Reagent (Thermo Fisher) and then combined into three labeling pools, with each pool containing one independent biological replicate of 6-, 13-, and 17-month-old wild-type and Fa2h−/− mice.
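For concreteness, one possible TMTsixplex channel layout per pool consistent with this description (a purely hypothetical assignment, since the actual channel order is not stated here):

```python
# One TMTsixplex pool = six channels, one biological replicate of each
# age/genotype combination. The channel-to-sample mapping is hypothetical.
pool = {
    "126": ("wild-type", "6 months"),
    "127": ("Fa2h-/-",   "6 months"),
    "128": ("wild-type", "13 months"),
    "129": ("Fa2h-/-",   "13 months"),
    "130": ("wild-type", "17 months"),
    "131": ("Fa2h-/-",   "17 months"),
}
```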
Thereafter, RapiGest was precipitated, samples were desalted by solid-phase extraction, and each sample pool was subjected to 12-well OFFGEL fractionation. Finally, all peptide fractions were separated by reversed-phase chromatography (self-packed column: 100 µm × 200 mm, Magic C18 AQ, 5 µm, Bruker, Bremen, Germany) using an Easy-nLC 1000 UHPLC (Thermo Fisher) and analyzed by a data-dependent TOP10 method using an LTQ Orbitrap Velos mass spectrometer (Thermo Fisher). Raw files were processed with Proteome Discoverer 2.5 (Thermo Fisher) in combination with a Mascot 2.6.1 (Matrix Science, London, UK) search engine. Initially, MS1 precursor masses were recalibrated with the Spectrum Files RC node (tolerances MS1/MS2: 20 ppm/0.02 Da) using a non-linear regression model. Spectra were searched against a Swissprot Mus musculus proteome database (downloaded 03/2021, 17085 entries) and two common contaminant databases, cRAP (https://www.thegpm.org/crap/) and MaxQuant contaminants (https://maxquant.org), in a reverse decoy approach. The enzyme specificity was Trypsin/P with up to two missed cleavages allowed. TMTsixplex was set as the quantification method. Modifications were propionamide (C) and TMT (K, peptide N-term) as fixed, and oxidation (M) and acetylation (protein N-term) as variable. Search tolerances were 20 ppm for both MS1 and MS2. Identified spectra (PSMs) were validated by the Percolator node based on q-values to target false discovery rates (FDRs) of 1%/5% (strict/relaxed). All spectra not passing the stricter FDR were submitted to a second-pass Mascot search employing relaxed parameters:

Enzyme name: SemiTrypsin
Max. missed cleavages: 1
Dynamic mods: oxidation (M), acetylation (Protein N-term), propionamide (C), TMT (K, Peptide N-term)

After PSM validation by Percolator, the combined PSMs were aggregated to peptides and proteins according to the principle of strict parsimony and finally filtered at 1% peptide and protein FDR. In addition, the protein list was filtered to contain only master proteins. For quantification, TMT reporter ion signals were extracted at the MS2 level with 20 ppm tolerance using the most confident centroid. From this, the relative peptide/protein quantification was achieved using the following parameters:

Peptides to use: unique + razor
Reporter abundance based on: intensity
Co-isolation threshold: 30%
Average reporter S/N: 10

The resulting protein list was filtered for master proteins only and exported to R Studio (R version 4.1.0) for data processing, differential expression analysis, and visualization using the following additional packages: BioVenn 1.1.3, dplyr 1.0.8, EnhancedVolcano 1.12.0, ggplot2 3.3.5, ggrepel 0.9.1, limma 3.50.0, and pheatmap 1.0.12. First, contaminating proteins were removed, including proteins labeled "Ig". After log2 transformation, the data were filtered for proteins with three intensity values for each age and genotype. The filtered data were then normalized by cyclic loess normalization, and differentially abundant proteins were determined by a moderated t-test based on linear models (limma trend) [25]. To account for batch effects of individual TMT batches, batch was included as a co-variate in the linear model. Finally, contrasts for all relevant comparisons were extracted from the linear model and exported to individual result tables. Note that proteins with an absolute log2-fold change > 0.585 and a false discovery rate (FDR) < 0.1 were deemed significantly regulated.
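As an illustration of the significance filter just described, here is a minimal Python sketch; the published analysis used the R/limma workflow named above, so this is only a stand-in with hypothetical function and variable names:

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up FDR control)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest rank downwards
    q = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(q, 0.0, 1.0)
    return out

def significantly_regulated(log2_fc, pvals, lfc_cut=0.585, fdr_cut=0.1):
    """Boolean mask: |log2 fold change| > 0.585 and BH-adjusted FDR < 0.1."""
    q = benjamini_hochberg(pvals)
    return (np.abs(np.asarray(log2_fc)) > lfc_cut) & (q < fdr_cut)
```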
The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE [26] partner repository with the dataset identifier PXD030244 and 10.6019/PXD030244.

Teased Fiber Preparation and Immunostaining

Sciatic nerves were immersion-fixed in 4% paraformaldehyde in PBS, washed with PBS, and stored at 4 °C in PBS containing 0.02% sodium azide. Teased fibers were prepared as described [27] and dried overnight at room temperature. For quantification of SLIs, teased fibers were stained with Atto488-labeled phalloidin (Sigma-Aldrich, St. Louis, Missouri, USA). For immunofluorescence staining, teased fibers were post-fixed and permeabilized in −20 °C cold methanol for 15 min. After blocking with 2% bovine serum albumin and 0.3% Triton X-100 in PBS at 25 °C for 2 h, specimens were incubated with primary antibodies overnight at 4 °C in a moisturized chamber. After washing six times with PBS, specimens were stained with the appropriate secondary antibodies for 2 h at 25 °C (see Table 1 for antibodies and dilutions). Microscopic pictures were captured using an Axiovert 200 M microscope fitted with a Colibri LED system (Carl Zeiss, Jena, Germany). Lengths of nodes of Ranvier and paranodes in microscopic pictures were measured using the ZEN 3.2 software or Axiovision SE64 Rel. 4.9.1 (both from Carl Zeiss).

Statistical Analysis

Data are presented as mean ± standard deviation (SD). Data were tested for normal distribution by the Shapiro-Wilk test. Normally distributed data were analyzed by Student's t-test. A p value < 0.05 was considered statistically significant. In the case of multiple comparisons, the FDR was controlled according to Benjamini and Hochberg [28]. For the proteome analysis, an FDR of 0.1 was chosen because of the exploratory nature of the study; otherwise the FDR was controlled at level 0.05.

Proteome Analysis of Sciatic Nerve Myelin from Fa2h−/− Mice Showed Increased Levels of Four Proteins that Are Known to Form a Complex in SLIs

Myelin was isolated from sciatic nerves of Fa2h−/− and Fa2h+/+ mice (aged 6, 13 and 17 months) by sucrose density gradient centrifugation. The purity and comparability of the myelin samples were monitored by lipid analysis (Fig. 1a) and SDS-PAGE followed by silver staining (Fig. 1b). In line with previous analyses of sciatic nerves [15], 2-hydroxylated galactosylceramide and sulfatide were absent from Fa2h−/− mice, whereas their non-hydroxylated isoforms were increased, resulting in only small changes in the levels of total galactosylceramide and sulfatide (Fig. 1a). In addition, levels of major myelin proteins were examined by Western blotting (Fig. 1c), which showed comparable concentrations of periaxin, 2',3'-cyclic nucleotide 3'-phosphodiesterase (CNP), the large isoform of myelin-associated glycoprotein (L-MAG), and myelin basic protein (MBP). Mass spectrometry of three biological replicates per age group was performed as previously described [17]. In total, 1418 protein groups could be identified in at least one biological replicate, and 937 protein groups could be identified in at least two replicates per age group (data have been deposited at the PRIDE partner repository with the dataset identifier PXD030244 and 10.6019/PXD030244). Major myelin proteins showed high abundance, as expected (Fig. 2a). After filtering the data to remove all protein groups not identified in all data sets, 681 proteins remained for quantitative evaluation (Supplementary Table S1).
Compared with a list of 90 well-known myelin proteins identified in previous PNS myelin proteome studies [29,30], our approach identified 59% of them in all samples. When compared with the myelin proteome data set published by Siems et al. [28], 74% of those proteins could be identified in our study, and 47% were identified and quantified in all samples (Fig. 2b, c). Samples from 13-month-old mice were excluded from further analysis, because cluster analysis and principal component analysis revealed inconsistencies in the results for this age group for unclear reasons (Supplementary Fig. S1). We then tested for significantly altered proteins using a moderated t-test based on linear models (limma). After correction for multiple comparisons, significant changes (using the criteria |log2(fold change)| ≥ 0.585 and an FDR of 0.1) were not observed in 6-month-old mice (Fig. 2D). In contrast, in 17-month-old Fa2h−/− mice, 21 proteins were significantly changed, among them only few established myelin proteins (Fig. 2E, Table 2). Notably, three myelin proteins showed a very similar, approximately 50% increase in 17-month-old Fa2h−/− mice: Cadm4 (SynCAM4, Necl4), protein band 4.1G (Epb41l2), and Mpp6 (Pals2). The protein Lin7 (Lin7c) showed a similar increase (Fig. 2E), though this was statistically not significant. These four proteins are known to form a tetrameric complex in the membrane cytoskeleton of SLIs [31,32]. The similar relative increase of all four proteins strongly suggests that they are mainly present in a complex in myelin, which is increased in the PNS of Fa2h−/− mice. Western blot analysis was used to confirm the mass spectrometry results using independent myelin samples isolated from sciatic nerves of young and old mice (Fig. 3). These experiments confirmed (1) unaltered levels of Cadm4, Lin7, and Mpp6 in young (4- to 6-month-old) Fa2h−/− mice (Fig. 3a) and (2) a significant increase of Cadm4, Lin7, and Mpp6 in 17- to 18-month-old Fa2h−/− mice (Fig. 3b) (we were unable to detect protein band 4.1G by Western blotting using commercially available antibodies).

Increased Numbers of SLIs in Old but Not Young Fa2h−/− Mice

Potentially, the results from the proteome analysis could indicate an increase in the number of SLIs. On the other hand, other proteins known to be present in SLIs and identified in our mass spectrometry screen were not increased in Fa2h−/− mice (see Fig. 2E), and Western blot analysis showed no significantly altered levels of L-MAG (see Fig. 1c, d), which is also abundant in SLIs [33]. SLIs were quantified in teased fibers of sciatic nerves using fluorescently labeled phalloidin (Fig. 4a). The number of SLIs was significantly increased by 22% in 17-month-old (p = 0.0154, t-test) but not in young (4-month-old; p = 0.7873) Fa2h−/− mice (Fig. 4B). Only axons with a comparable internode width were evaluated (Fig. 4C). The small but significant increase of the SLI frequency was lower than expected from the relative increase of Cadm4, Lin7, Mpp6, and band 4.1G protein observed by mass spectrometry or Western blot analysis (≥ 50% increase).

Fig. 2 A Log(10) intensity for all 681 proteins that could be identified in all samples, averaged over all samples. Known myelin proteins are highlighted in blue and a selection is labeled with their respective gene name. B Venn diagram comparing the PNS myelin proteins identified (in one replicate) in this study with the data published in Siems et al.
[28] (dataset UDMS E, Fif1-data1-v1, 1). C Venn diagram comparing the PNS myelin proteins reliably identified (filtered IDs, abundance values in all replicates) in this study with the data published by Siems et al. [28] (dataset UDMS E, Fif1-data1-v1, 1). D, E Volcano plots of myelin proteome data from 6-month-old (D) and 17-month-old mice (E). Data points show the −log10-transformed p-value versus the median log2(fold change) (+ = increased in Fa2h−/−; − = decreased in Fa2h−/−) for proteins identified in all replicates. Proteins known to be present in SLIs are highlighted in orange and other known myelin proteins (according to Patzig et al. [29]) in blue. Selected myelin proteins are labeled with the corresponding gene names. Vertical broken lines indicate the 0.666- or 1.5-fold change thresholds. The horizontal line in (E) indicates the significance threshold when the FDR was controlled at level 0.1.

In line with their presence in the same molecular complex, Cadm4 and Lin7 colocalized in teased fibers and were mostly present in SLIs (Fig. 4D). We found no evidence for an altered distribution in Fa2h−/− compared to Fa2h+/+ mice.

Fig. 3 Cadm4, Mpp6 and Lin7 protein levels are increased in old Fa2h−/− mice. Western blot analysis of purified myelin from 4- to 6-month-old (A) and 17-month-old (B) Fa2h+/+ and Fa2h−/− sciatic nerves. Blots were stained with the indicated antibodies, and Western blot data from three or four independent samples (n = 3-4 mice per genotype) were evaluated by densitometry. Densitometric data were normalized to periaxin, except periaxin itself, which was not normalized. Data shown are the mean ± SD (n = 3-4) with the mean of wild type set to 1. p values in bold are significant (after control of the FDR at level 0.05).

Normal Length of Nodes of Ranvier and Paranodes in Fa2h−/− Mice

Because the paranodal structure is disturbed in mice lacking sulfatide [34], we wondered whether absence of 2-hydroxylated sulfatide may also affect the paranodes in older mice. Normal paranodal structure in young Fa2h−/− mice was already demonstrated in a previous report [15]. We determined the lengths of nodes of Ranvier and paranodes in 17-month-old mice, using Caspr as a paranodal marker [35] (Fig. 5a). No significant differences between genotypes were observed (Fig. 5b, c).

Discussion

To our knowledge, the currently most comprehensive data set of the mouse PNS myelin proteome used a gel- and label-free approach, identified 1083 proteins, and could differentially analyze up to 700 proteins in the myelin proteome [29]. Using TMT labeling for relative quantification, we were able to reproducibly identify and quantify 681 proteins (937 if proteins not detectable in all age groups are included). Thus, the number of identified and quantified proteins is comparable to previous proteome studies. The unique presence of several proteins in the proteome in our and the previous studies [29,30] may be due to the different methods used, but the different ages analyzed (6 months and older in our study; 3-4 weeks in Patzig et al. [30] and Siems et al. [29]) may also contribute to these differences. Inconsistencies in the data of the 13-month samples prompted us to exclude them from further analyses; they could be the result of improper TMT labeling, though other errors in sample preparation cannot be excluded.
The differences between the PNS myelin proteomes of Fa2h+/+ and Fa2h−/− mice were small in all age groups examined, and upon focusing on well-established myelin proteins, we observed a rather specific increase of the four proteins Cadm4, Mpp6 (Pals2), Lin7, and band 4.1G (Epb41l2). These four molecules are co-regulated at the protein and mRNA level [29], and there is clear evidence that they form a complex in SLIs [32,36-38]. Localization of this complex in SLIs depends on the band 4.1G protein [39]. The almost identical relative upregulation of all four proteins by 50%, as observed in the mass spectrometric analysis, strongly suggests that the four proteins are mainly present in this complex in myelin. Because several other myelin proteins known to be present in SLIs [32] were not found to be significantly increased in Fa2h−/− sciatic nerve myelin, we assume that the increased level of the Cadm4 complex does not only reflect the higher SLI frequency. Whether the increase in the membrane skeletal complex and the increase in SLI frequency are connected or independent events is unclear at present. A higher SLI frequency in Mpp6-deficient mice [31] indicates at least that changes in the level or localization of components of this membrane skeletal complex can affect SLI frequency, though the mechanism is currently not understood. An increase in the number of SLIs together with structural abnormalities of the paranodes has also been observed in Ugt8- and Gal3st1-deficient mice, both lacking sulfatide [40-43].

Fig. 4 A Phalloidin staining of teased fibers from 17-month-old Fa2h+/+ and Fa2h−/− mice. B Quantification of SLI frequencies in teased fibers of sciatic nerves from 4-month-old (n = 5 mice per genotype; N = 50 fibers analyzed) and 17-month-old mice (n = 7 mice per genotype; N = 70 fibers). The number of SLIs was significantly increased in old but not young Fa2h−/− mice. The p value in bold indicates a significant difference (t-test). C The average internodal diameter of axons examined to determine the SLI frequencies was not significantly different between genotypes. All data are shown as mean ± SD (n = 5-7) of the average SLI frequency or internodal diameter per mouse. D Immunofluorescence staining of Cadm4 and Lin7 in teased fibers from 17-month-old Fa2h+/+ and Fa2h−/− mice. Both proteins co-localized and showed a similar distribution (mainly present in SLIs) in both genotypes.

In contrast to these mice, however, Fa2h−/− mice, which lack only the 2-hydroxylated species of these lipids and have only slightly reduced total sulfatide levels in PNS myelin [15], have apparently normal paranodes at young [15] and old ages (this report). This indicates that sulfatide can fulfill its role at the paranodes irrespective of its hydroxylation status. In addition, the increased SLI frequency in sulfatide-deficient mice is age-independent [43]. We therefore assume that different mechanisms are responsible for the increase of the SLI frequency in Fa2h−/− mice and in mice lacking sulfatide. The age-dependent increase in the number of SLIs correlates with the late onset of disease in Fa2h−/− mice [15,16]. Although PNS pathology is not a hallmark of hereditary spastic paraplegia, peripheral neuropathy has been observed in around 30% of SPG35 patients [14]. Because the SLI frequency is increased in remyelinated axons [44,45], it is possible that the increased number of SLIs in Fa2h−/− mice merely reflects remyelination.
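The quantitative mismatch behind the argument above (protein levels up by roughly 50%, SLI counts up by only 22%) can be made explicit with a back-of-envelope estimate, under the simplifying assumption (stronger than anything claimed in the text) that the complex resides entirely in SLIs at a fixed per-SLI density:

```python
protein_increase = 1.50  # Cadm4/Lin7/Mpp6/band 4.1G, ~50% up at 17 months
sli_increase     = 1.22  # SLI frequency, ~22% up at 17 months

# If protein levels merely tracked SLI number, the two factors would match;
# the residual factor hints at a higher complex concentration per SLI
# (or an additional pool, e.g. in compact myelin):
print(round(protein_increase / sli_increase, 2))  # ~1.23
```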
Peripheral nerves of 12-month-old Fa2h−/− mice, however, showed thin myelin in less than 1% of axons, suggesting only a low level of de- and remyelination [16]. Therefore, the increased SLI frequency at 17 months may indicate significant peripheral neuropathy and demyelination in older Fa2h−/− mice. In a previous study of Fa2h−/− CNS myelin, we identified the oligodendrocytic myelin paranodal and inner loop protein (Opalin, Tmem10) to be significantly increased in myelin from old Fa2h−/− mice [17]. Opalin is exclusively found in CNS myelin but not in PNS myelin [46,47]. Furthermore, we found evidence for alterations in the transport and turnover of the protein, whereas expression of the Opalin gene was unaffected. These findings suggested that 2-hydroxylated sphingolipids may be required for correct sorting of Opalin and maybe other myelin proteins. Interestingly, Cadm4 was also increased in the CNS myelin proteome from 17-month-old mice according to our mass spectrometry analysis, although the null hypothesis could not be rejected in subsequent Western blot analyses [17]. Cadm4 also interacts with the choline transporter CTL1, and Cadm4 deficiency in Schwann cells leads to elevated levels of long-chain and polyunsaturated phosphatidylcholine and phosphatidylinositol [48]. Therefore, elevated Cadm4 levels could potentially affect membrane lipid composition and membrane fluidity. Because of its membrane topology, only Cadm4 could potentially be directly influenced by changes in the properties of galactosylceramide and sulfatide caused by absent 2-hydroxylation, as both lipids are found only in the extracellular leaflet of the plasma membrane. Further studies should examine whether 2-hydroxylated sphingolipids directly affect the turnover and/or sorting of Cadm4 in myelinating Schwann cells. Although the increase of the Cadm4 complex in older mice correlates with the late onset of pathology in Fa2h−/− mice [15,16], it remains to be examined whether the increased levels of the Cadm4-containing complex in old Fa2h−/− mice contribute to the pathogenesis of the disease and whether similar changes occur in human patients and may thus play a role in human SPG35.

Funding Open Access funding enabled and organized by Projekt DEAL. This work was supported by a grant of the Deutsche
Blind spots for neutralino dark matter in the NMSSM

The spin-independent cross-section for neutralino dark matter scattering off nuclei is investigated in the NMSSM. Several classes of blind spots for direct detection of singlino-Higgsino dark matter are analytically identified, including some that have no analog in the MSSM. It is shown that mixing of the Higgs doublets with the scalar singlet has a big impact on the position of blind spots in the parameter space. In particular, this mixing allows for more freedom in the sign assignment for the parameters entering the neutralino mass matrix, required for a blind spot to occur, as compared to the MSSM or the NMSSM with a decoupled singlet. Moreover, blind spots may occur for any composition of a singlino-Higgsino LSP. Particular attention is paid to cases with the singlet-dominated scalar lighter than the 125 GeV Higgs, for which a vanishing tree-level spin-independent scattering cross-section may result from destructive interference between the Higgs and the singlet-dominated scalar exchange. Correlations of the spin-independent scattering cross-section with the Higgs observables are also discussed.

Introduction

After the recent discovery of the Higgs boson [1,2], probably the most wanted new particle is the one responsible for the observed dark matter (DM) in the Universe. Among the extensions of the Standard Model (SM) that provide a candidate for a dark matter particle, supersymmetric models are the most attractive. One of the main reasons that has kept the particle physics community interested in supersymmetric models for more than three decades is their ability to solve the hierarchy problem of the SM. Moreover, in the simplest supersymmetric extensions of the SM the lightest supersymmetric particle (LSP) is stable and generically neutral, making it a good dark matter candidate. In most supersymmetry breaking schemes the LSP is a neutralino. One of the most promising ways to search for neutralino dark matter is through its direct interactions with nuclei. In the last couple of years the sensitivity of direct dark matter detection experiments has improved by several orders of magnitude. The best constraints on the spin-independent (SI) neutralino-nucleon scattering cross-section (for DM masses above 6 GeV) are currently provided by the LUX experiment [3]. In consequence, significant portions of the neutralino sector parameter space have been excluded by LUX. The constraints will soon become even stronger with forthcoming experiments such as XENON1T [4] and LZ [5]. Nevertheless, there are points in the parameter space, so-called blind spots, for which the neutralino LSP spin-independent scattering cross-section (almost) vanishes at tree level. In the vicinity of such blind spots the neutralino LSP is not only consistent with the LUX constraints but, due to the irreducible neutrino background [6], might never be detected in direct detection experiments sensitive only to the SI scattering cross-section. When comparing with the results of DM detection experiments we assume that the considered particle is the main component of DM, with the relic density obtained by the Planck satellite [7] (otherwise the experimental bounds on the cross-sections should be re-scaled by the ratio Ω_observed/Ω_LSP). Conditions for the existence of blind spots have already been identified in the Minimal Supersymmetric Standard Model (MSSM). In Ref.
[8] the conditions on MSSM parameters leading to a vanishing Higgs-neutralino-neutralino coupling were found in the limit of a decoupled heavy Higgs doublet. Additional blind spots in the MSSM, originating from destructive interference between the contributions to the scattering amplitude mediated by the 125 GeV Higgs and by the heavy Higgs doublet, were found in Ref. [9]. However, the measured Higgs scalar mass strongly motivates extensions of the MSSM, because the 125 GeV Higgs implies in the MSSM relatively heavy stops, threatening the naturalness of supersymmetry. Substantially lighter stops than in the MSSM can be consistent with the 125 GeV Higgs in the Next-to-Minimal Supersymmetric Standard Model (NMSSM) [10], which is the MSSM supplemented by a gauge singlet chiral superfield. The neutralino sector of the NMSSM is richer than that of the MSSM because it contains, in addition, the fermionic component of the singlet superfield, the singlino. In some parts of the parameter space the LSP has a non-negligible singlino component and can be a good dark matter candidate [11,12,13], but with different properties than those of the LSP in the MSSM. There have been many studies of neutralino dark matter in the NMSSM, including predictions for its direct detection; see e.g. refs. [14,15,16,17,18,19] and references therein.^1 However, conditions for blind spots in the NMSSM have not been discussed in the literature so far.

The main aim of this paper is to investigate conditions for SI scattering cross-section blind spots for a singlino-Higgsino LSP in the NMSSM. We find a general formula for the blind-spot condition and study it in the most interesting and phenomenologically relevant limiting cases, focusing on both the small and the large tan β regions. First of all, we identify blind spots analogous to those for a gaugino-Higgsino LSP in the MSSM, originating from a vanishing Higgs-neutralino-neutralino coupling [8]. Such blind spots were also found in a general singlet-doublet DM model which mimics the NMSSM with Higgsino-singlino DM, a decoupled scalar singlet and a heavy MSSM-like doublet [21] (see also Ref. [22] for a recent analysis). However, in our analysis we also include the effects of mixing among the scalars. We find that inclusion of the mixing with the singlet introduces qualitatively new features in the conditions for blind spots, e.g. allowing certain signs of some parameters that would be forbidden if such mixing were neglected. Secondly, we find blind spots analogous to those in the MSSM with the effect of the heavy doublet taken into account [9] and generalize them to the case with the Higgs-singlet mixing included. Finally, we investigate in great detail the region of the NMSSM parameter space with the singlet-dominated scalar lighter than 125 GeV, which is entirely new with respect to the MSSM. This region is particularly interesting because the Higgs-singlet mixing can increase the Higgs boson mass by up to about 6 GeV [23]. While this enhancement of the Higgs mass by mixing effects can be present both for small and large tan β, it is worth emphasizing that for large (or moderate) values of tan β this is a unique way to have lighter stops than in the MSSM.
Moreover, for large tan β the singlet-dominated scalar coupling to bottom quarks can be strongly suppressed, relaxing the LEP constraints on scalars and allowing a substantial correction to the Higgs mass from mixing for a wide range of singlet masses between about 60 and 110 GeV [23] (for small tan β a sizable correction from mixing is allowed only for a singlet mass in the vicinity of the LEP excess at 98 GeV [24]). In the case of a light singlet-dominated scalar with sizable mixing with the Higgs scalar, the SI scattering cross-section is generically large, even for not too large values of λ. The main reason for this is that such a singlet-dominated scalar also mediates the SI scattering and the corresponding amplitude may even dominate over the one with the SM-like Higgs boson exchange, due to the enhancement by the small mass of the singlet-dominated scalar. This phenomenon was identified long before the Higgs scalar discovery [14]. Recently, points in the parameter space of the NMSSM with strongly suppressed SI direct detection cross-section, consistent with LUX constraints and in some cases even below the irreducible neutrino background for direct detection experiments, were found using sophisticated numerical scans of the semi-constrained NMSSM [25]. However, in Ref. [25] no explanation was given why such points exist and what conditions on the NMSSM parameters are required for this suppression to occur. In the present paper we provide an analytic understanding of the existence of blind spots in the NMSSM with a light singlet-dominated scalar and a Higgsino-singlino LSP. Such blind spots follow from a destructive interference between the singlet and Higgs exchange in the scattering amplitude. We also discuss the influence of a strongly suppressed coupling of the singlet-dominated scalar to b quarks, which is important at large tan β. In particular, we find that the presence of a light singlet-dominated scalar gives much more freedom in the LSP composition and, especially for a singlino-dominated LSP, in the sign assignments of various NMSSM parameters required for obtaining a blind spot. The rest of the paper is organized as follows. In section 2 we review some features of the Higgs and neutralino sector of the NMSSM that are important for the analysis of blind spots. In section 3 the SI scattering cross-section in the NMSSM is reviewed and general formulae for neutralino blind spots are derived. In the remaining sections the blind spot conditions are analyzed in detail in several physically interesting cases and approximations. In section 4 only the SM-like Higgs scalar exchange is taken into account. In section 5 the interference effects between the two doublet-dominated scalars are analyzed, while section 6 is focused on the case with a light singlet-dominated scalar, in which interference effects between such a light scalar and the SM-like Higgs scalar become important. Our main findings are summarized in section 7.

2 Higgs and neutralino sector of the NMSSM

Several versions of the NMSSM have been proposed so far [10]. We would like to keep our discussion as general as possible, so we assume that the NMSSM-specific part of the superpotential and the soft terms have the following general forms (eqs. (1) and (2)), where S is an additional SM-singlet superfield. The first term in (1) is the source of the effective Higgsino mass parameter, µ_eff ≡ µ_HuHd + λv_s (we drop the subscript "eff" in the rest of the paper). Using the shift symmetry of S we can put µ_HuHd = 0. 
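The displayed forms of eqs. (1) and (2) are missing from this copy of the text. A hedged reconstruction in standard general-NMSSM conventions [10], using only parameters that appear elsewhere in the paper (λ, κ, µ_HuHd, µ′, ξ_F, ξ_S, m_S², A_λ, A_κ, m_3²), presumably reads:

```latex
% Hedged reconstruction of eqs. (1)-(2), not a verbatim quote of the paper
W_{\rm NMSSM} \supset \left(\mu_{H_uH_d} + \lambda S\right) H_u H_d + f(S),
\qquad f(S) = \xi_F\,S + \tfrac{1}{2}\mu'\,S^2 + \tfrac{1}{3}\kappa\,S^3,

-\mathcal{L}_{\rm soft} \supset m_S^2\,|S|^2
  + \Big(\lambda A_\lambda\, S H_u H_d + \tfrac{1}{3}\kappa A_\kappa\, S^3
  + m_3^2\, H_u H_d + \xi_S\, S + \mathrm{h.c.}\Big).
```

With f(S) of this form the diagonal singlino mass entry quoted later, ∂²_S f = µ′ + 2κv_s, reduces to 2κv_s in the scale-invariant (Z3) limit, consistently with the statement below about the scale-invariant NMSSM.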
In the simplest version, known as the scale-invariant NMSSM, only the dimensionless couplings λ and κ are kept in the superpotential (1). There are three neutral CP-even scalar fields, H_u, H_d, S, which are the real parts of the excitations around the real vevs of the neutral components of the doublets H_u, H_d and the singlet S (we use the same notation for the doublets and the singlet as for the real parts of their neutral components). It is more convenient for us to work in the basis (ĥ, Ĥ, ŝ), where ĥ = H_d cos β + H_u sin β, Ĥ = H_d sin β − H_u cos β and ŝ = S. The ĥ field has exactly the same couplings to the gauge bosons and fermions as the SM Higgs boson. The neutralino mass eigenstates χ_j are described by the diagonalization matrix elements N_ji, where j = 1, 2, 3 and |m_χ1| ≤ |m_χ2| ≤ |m_χ3|. Later we will be interested mainly in the LSP corresponding to j = 1, so to simplify the notation we will use m_χ ≡ m_χ1. Notice that the physical (positive) LSP mass equals m_LSP ≡ |m_χ|. The sign of m_χ is the same as that of the diagonal singlino entry ∂²_S f in the neutralino mass matrix (11). For |∂²_S f| < |µ| this is obvious. For bigger values of |∂²_S f| it is also true. In this case the two lightest neutralinos are Higgsino-dominated, corresponding to the mass eigenstates close to µ and −µ. The lighter of them is the one which mixes more strongly with the singlino, and generally the mixing is stronger between states with diagonal terms of the same sign (unless the corresponding off-diagonal term is exceptionally small). Using eqs. (12) and (13) and the fact that the gauginos are decoupled, we can express the ratio of the Higgsino to the singlino components of the LSP as a function of the LSP mass and the ratio (λv)/µ (eq. (14)). In our discussion we will consider only positive values of λ. The results for negative λ are exactly the same due to the invariance under the transformation λ → −λ, κ → −κ, ξ_S → −ξ_S, ξ_F → −ξ_F, S → −S with the other fields and couplings unchanged.

Spin-independent scattering cross-section

The spin-independent cross-section for the LSP interacting with a nucleus with atomic number Z and mass number A is given by eq. (15), where µ_red is the reduced mass of the nucleus and the LSP. Usually, the experimental limits concern the cross-section σ_SI defined as the arithmetic mean of σ^(p)_SI and σ^(n)_SI; in the rest of the paper we will follow this convention. When the squarks are heavy, the effective couplings f^(N) (N = p, n) are dominated by the t-channel exchange of the CP-even scalars (eq. (16)) [27]. The couplings of the i-th scalar to the LSP and to the nucleon are given, respectively, by eqs. (17) and (18). In the last equation we introduced the combinations F… of the nucleon form factors; there is still some inconsistency in the literature regarding the values of these form factors. In our numerical calculations we take the values given in [30]. The couplings of the scalar particles in eqs. (17) and (18) are expressed in terms of the diagonalization matrices for the scalars and neutralinos (S and N, respectively), written in the usual weak bases. However, for our purposes it will be more convenient to use the scalar diagonalization matrix S̃ defined in (10) for the rotated basis (ĥ, Ĥ, ŝ). Moreover, we are interested in the situation when the LSP is Higgsino-singlino-like, with negligible contributions from the gauginos, i.e. N_11 ≈ 0 ≈ N_12. 
Then, the expressions (17) and (18) are approximated by eqs. (19) and (20). The formulae for the spin-independent cross-section in the general case are rather complicated, so in order to make some expressions more compact it is useful to define the parameters A_hi (eq. (21)). A_hi is the product of the coupling to a nucleon, the propagator and the value of the leading component for the scalar h_i, divided by the same product for h. Of course, A_h = 1, and A_H (A_s) vanishes in the limit m_H → ∞ (m_s → ∞). We also define some combinations of the above parameters (eq. (22)), which encode the information on the scalar sector (mixing, masses and couplings to the nucleons). Using the above definitions we rewrite (16) in the form (23).

Blind spot conditions

The blind spots are defined as those points in the parameter space for which the LSP-nucleon cross-section vanishes. From eq. (23) we obtain the general blind spot condition (24). This condition simplifies very much for the case of a pure Higgsino (N_15 = 0) or a pure singlino (N_13 = N_14 = 0) LSP. For such pure states the blind spot condition reads as in eq. (25). For a mixed Higgsino-singlino LSP it is convenient to introduce the parameter η (eq. (26)), which is totally described by the neutralino sector and the dimensionless couplings of the singlet superfield in the superpotential, i.e. λ and κ. (Note that in the Z3-NMSSM κ also controls the neutralino mass parameter.) This parameter vanishes for neutralinos which are pure (Higgsino or singlino) states. Its absolute value grows with the increasing admixture of the sub-dominant components and has a maximum (or even a pole) for a specific highly mixed composition. The position and height of such a maximum depend on the parameters of the model. Whether there is a pole or a maximum depends on the relative signs of some parameters. The details are given in the Appendix. The parameter η can be used to rewrite eq. (24) as (27). After using eqs. (12) and (13), the above general blind spot condition may be cast in the form (28). For a highly Higgsino-dominated LSP, for which N_15 and η have very small values, it is better to rewrite eq. (24) as in (29). After applying eqs. (12) and (13), this blind spot condition for a highly Higgsino-dominated LSP takes the form (30). In many cases considered in this paper the contribution from B_Ĥ may be neglected. Then the blind spot condition simplifies to (31). In the rest of the paper we will analyze in some detail the above blind spot conditions for several cases and approximations.

Blind spots with only the SM-like Higgs exchange

Such blind spots appear when f^(N) ≈ 0 and result from an accidentally vanishing hχχ coupling. Generically the contributions from the s and H exchange are very small when these scalars are very heavy. Then the quantities A_H and A_s defined in (21) are negligible and eq. (22) reduces to B_ĥi = S̃_hĥi/S̃_hĥ. The situation is qualitatively different depending on whether the Higgs scalar mixes with the other scalars or not, so we discuss these cases separately in the following subsections.

Without scalar mixing

Without mixing with the (heavy) Ĥ and ŝ, the lightest scalar h has the same couplings as the SM Higgs. In our notation this corresponds to B_ĥ = 1, B_Ĥ = B_ŝ = 0. The condition (25) is fulfilled, so the SI scattering cross-section vanishes when the LSP is a pure singlino or a pure Higgsino state. For a general Higgsino-singlino LSP the amplitude (23) results in the approximate formula (32) for this cross-section, where k depends on the value of tan β and typically is of order O(1). This implies that a highly mixed Higgsino-singlino LSP is strongly constrained by the LUX results unless λ is very small. 
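As a quick numerical illustration of eqs. (32)-(33), the minimal sketch below (not the paper's code) scans the LSP mass for fixed µ and tan β; the coupling shape (m_χ − µ sin 2β)/(µ² − m_χ²) is the one implied by the blind spot condition (33), while the overall normalization (the k factor of eq. (32) and the 1e-7 scale) is an arbitrary illustrative assumption.

```python
import numpy as np

# Minimal sketch of the h-exchange blind spot (eq. (33)): the schematic
# h-chi-chi coupling vanishes at m_chi = mu*sin(2*beta). The normalization
# 'k' and the 1e-7 scale are arbitrary assumptions, not the paper's values.
GEV2_TO_CM2 = 3.894e-28          # (hbar*c)^2: converts GeV^-2 to cm^2
M_NUCLEON = 0.939                # nucleon mass in GeV

def sigma_si(m_chi, mu, tan_beta, lam=0.6, v=174.0, k=1.0):
    s2b = 2.0 * tan_beta / (1.0 + tan_beta**2)
    # schematic effective LSP-nucleon coupling f^(N), in GeV^-2
    f_eff = k * lam**2 * v * (m_chi - mu * s2b) / (mu**2 - m_chi**2) * 1e-7
    mu_red = M_NUCLEON * abs(m_chi) / (M_NUCLEON + abs(m_chi))
    return 4.0 / np.pi * mu_red**2 * f_eff**2 * GEV2_TO_CM2   # cm^2

mu, tan_beta = 700.0, 2.0        # sin(2*beta) = 0.8 -> blind spot at 560 GeV
for m_chi in (300.0, 560.0, 650.0):
    print(f"m_chi = {m_chi:5.0f} GeV  sigma_SI ~ {sigma_si(m_chi, mu, tan_beta):.2e} cm^2")
```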
For λ which is not small, these constraints may be avoided if there is some (partial) cancellation between the two terms in the bracket multiplying B_ĥ in eq. (23) (which results in an unusually small value of k in (32)). Such a cancellation is equivalent to the vanishing of the parameter η (defined in (26)) and leads, according to eq. (31), to a blind spot. Therefore, highly mixed Higgsino-singlino neutralino dark matter with not very small λ may be viable only in very special parts of the parameter space, close to such blind spots. The blind spot condition (28) for the present values of the B_ĥi parameters, B_ĥ = 1, B_Ĥ = B_ŝ = 0, simplifies to eq. (33). This result is analogous to the one obtained in [8] for the Higgsino-gaugino LSP in the MSSM, but with the opposite sign between the two terms on the l.h.s. This difference stems from the fact that both off-diagonal terms mixing the singlino with the two Higgsinos have the same sign, while the two analogous terms mixing any of the gauginos with the Higgsinos have opposite signs. Notice that if tan β is not small, the blind spot condition implies a singlino-dominated LSP, for which the SI scattering cross-section is suppressed anyway. Thus, for a Higgsino-singlino LSP and large tan β this kind of a blind spot does not help much in suppressing the SI scattering cross-section. On the other hand, for small tan β and a highly mixed singlino-Higgsino LSP the blind spot condition may be satisfied provided that µ ∂²_S f is positive. This is illustrated in Fig. 1, where the SI scattering cross-section is plotted as a function of the diagonal singlino mass term ∂²_S f (equal to 2κv_s in the scale-invariant NMSSM) for λ = 0.6, for two values of tan β and for both signs of µ. It can be seen that for small values of tan β (= 2 in our example) the cross-section is substantially above the LUX limit for µ ∂²_S f < 0. As expected, the largest cross-section is for ∂²_S f ≈ −µ, corresponding to the maximal singlino-Higgsino mixing. Even in the region with ∂²_S f several times larger than |µ|, i.e. for a Higgsino-dominated LSP, a small singlino component is enough to push the cross-section above the LUX limit. The cross-section is below the LUX upper bound only for an LSP with a very tiny Higgsino admixture, i.e. for very large values of ∂²_S f. The situation is drastically different for µ ∂²_S f > 0. The cross-section is substantially smaller in this case and the LUX limit is satisfied for a wide range of values of ∂²_S f. One can see that most of this region is within the reach of the XENON1T experiment. However, in the vicinity of the blind spot defined by the condition (33) (corresponding to m_χ = 0.8µ for tan β = 2) none of the future SI direct detection experiments will be able to exclude (or discover) such a singlino-Higgsino LSP. On the other hand, this region may be probed with DM detection experiments sensitive to SD interactions.

Figure 1: Lower panels: The solid lines show the LSP spin-independent cross-section as a function of the diagonal singlino mass term ∂²_S f for positive (red) and negative (blue) values of the parameter µ. The dashed, dotted and dashed-dotted lines indicate the corresponding upper bounds from, respectively, the LUX [3], XENON1T [4] and LZ [5] experiments. The colored areas at the bottom depict the neutrino background (NB) regions [6]. Upper panels: The solid lines show the LSP spin-dependent cross-section on neutrons (lower) and protons (upper) for positive (purple) and negative (cyan) values of the parameter µ. The dashed and dotted lines denote the corresponding upper limits from, respectively, XENON100 [31] and IceCube [32] (see details in text). For all used experimental bounds we assume that the relic density of the LSP is equal to the observed value [7] (otherwise these bounds should be re-scaled by the ratio Ω_observed/Ω_LSP).

The most stringent model-independent upper bound on the SD cross-section is provided by XENON100 for neutrons [31]. The limits on the SD DM-proton cross-section, provided by the indirect detection experiment IceCube [32], depend strongly on the assumed dominant annihilation channels of the dark matter particles. Generically, in the NMSSM with small tan β and decoupled scalars the singlino-dominated LSP annihilates mainly into tt (if the LSP mass is above the top quark mass), while the Higgsino-dominated LSP annihilates mainly into WW and ZZ (if kinematically allowed). The IceCube limits for DM annihilating dominantly to WW, ZZ or tt are stronger than the XENON100 limits (on the SD DM-neutron cross-section) for dark matter masses above about 100 GeV [32]. In the upper panels of Fig. 1 the SD cross-sections are shown with superimposed XENON100 and IceCube limits. The IceCube limits are computed assuming the LSP annihilation channels as obtained from MicrOMEGAs [30], with the spectrum computed by NMSSMTools 4.8.2 [28,29] for the model parameters as in Fig. 1 and κ = A_κ = m²_S = ξ_F = 0, as well as A_λ, ξ_S and m²_3 chosen in such a way that S̃_hŝ ≈ 0 and m_a1, m_s, m_H ≈ 3 TeV. The SD cross-sections were calculated using eqs. (74)-(76) (which, as we checked, give results in very good agreement with those obtained with the help of MicrOMEGAs). Note that for tan β = 2, λ = 0.6 and |µ| = 700 GeV, in the vicinity of the SI cross-section blind spot the SD cross-section is not much below the current IceCube limit. Since the SD cross-section is larger for larger Higgsino-singlino mixing, which is proportional to (λv/µ), the SI blind spot is harder to probe by testing the SD cross-section if λ is smaller and/or |µ| is bigger (see eq. (76)). Moreover, for larger tan β the SI blind spot occurs for smaller values of |m_χ/µ|, for which the SD cross-section is smaller (because the LSP is more singlino-dominated). Thus, for larger tan β smaller values of |µ| are consistent with the IceCube limits, as can be seen from the upper right panel of Fig. 1. We should also note that if LSPs annihilate mainly to bb, which may happen e.g. when there is a light sbottom in the spectrum, the IceCube limits are always weaker than the XENON100 ones. In such a case the SI blind spots are much harder to probe via SD detection experiments, though not impossible. We should also comment on the fact that for tan β = 1 and m_χµ > 0 the blind spot condition (33) is always satisfied as long as |µ| < ∂²_S f, because in such a case the LSP has a vanishing singlino component, so m_χ = µ. The value tan β = 1 is relevant in the context of λSUSY [33] and will be particularly hard to probe, because in such a situation also the SD scattering cross-section vanishes, see eqs. (74)-(75). The properties of the LSP change with the increasing value of tan β. The difference between the values of σ_SI for the two signs of µ decreases. As a result, already for tan β = 5, a substantial part of the parameter space with positive µ and ∂²_S f > |µ| is excluded by the LUX data. 
At the same time, the SI cross-section for negative µ decreases and goes below the LUX upper bound for an LSP with a Higgsino admixture bigger (i.e. for smaller values of ∂²_S f) than in the case of smaller tan β. What does not change is that there is a blind spot only for positive µ. The position of the blind spot moves towards smaller ∂²_S f, corresponding to a more singlino-dominated LSP. As mentioned before, in our analysis we use the tree-level approximation for the SI cross-sections. Inclusion of loop corrections does not affect our main conclusion that for m_χµ > 0 a blind spot for the SI cross-section exists. The loop effects may only change slightly the position of a given blind spot. The computation of even the dominant loop corrections to the SI cross-section is quite involved. The results are known only for neutralinos which are pure interaction eigenstates [34]. For a pure Higgsino LSP the radiatively corrected SI cross-section is of order O(10⁻⁴⁹) cm², so below the irreducible neutrino background. One should, however, note that such a small SI cross-section is a consequence of quite strong cancellations between contributions from several different (gluon and quark, including twist-2) operators, some of which contribute as much as O(10⁻⁴⁷) cm². Computation of the loop-corrected SI cross-section for a (highly) mixed Higgsino-singlino LSP is beyond the scope of this work. We conservatively estimate that in such a case the loop correction to the tree-level cross-section does not exceed a few times 10⁻⁴⁸ cm², i.e. the biggest twist-2 operator contribution for a pure Higgsino with appropriately reduced couplings to the EW gauge bosons. Loop corrections of this size would result in a small shift of the position of a blind spot: by less than one per cent in terms of ∂²_S f. We checked (using MicrOMEGAs/NMSSMTools) that a similar size of a shift of a blind spot position occurs when the gauginos are not completely decoupled but have masses of order 2 TeV. One should stress that the approximations used in our analysis result only in some small uncertainties in the exact positions of the blind spots but do not influence their existence.

With scalar mixing, m_s ≫ m_h

Next we consider the situation when the contributions to σ_SI from the exchange of H and s may still be neglected (A_H = A_s = 0) but the mixing of h with the other scalars may play some role, because now B_ĥ = 1, B_Ĥ = S̃_hĤ/S̃_hĥ, B_ŝ = S̃_hŝ/S̃_hĥ. The effective LSP-nucleon coupling is obtained by putting these expressions for the B_ĥi parameters into eq. (23). The fact that B_Ĥ and B_ŝ do not vanish implies that in the present case a blind spot may exist for η ≠ 0. However, as we shall see, the blind spot condition still requires η to be very small. In the rest of this subsection we discuss the blind spot conditions in some interesting limits.

Purity limits

Before analyzing the general mixed LSP let us discuss the limiting cases of a pure Higgsino and a pure singlino, for which the effective coupling to a nucleon (23) simplifies considerably, with a factor C equal to λN_13N_14 (−κN²_15) for the pure Higgsino (singlino). Note that, in contrast to the MSSM, where the effective tree-level coupling of the pure Higgsino to a nucleon vanishes [8], the effective coupling in the NMSSM does not vanish as long as the singlet scalar mixes with the Higgs doublet, i.e. when S̃_hŝ ≠ 0. Similarly, such non-zero singlet-Higgs mixing implies a non-vanishing SI scattering cross-section also for a pure singlino. 
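The scaling just described can be condensed into a one-line sketch (a rough illustration inferred from the text, not the paper's formula verbatim): the pure-state coupling is taken proportional to the factor C and to the Higgs-singlet mixing element, with 'norm' standing in for all nucleon matrix elements and propagator factors.

```python
# Purity-limit coupling: a pure LSP couples to nucleons only through the
# Higgs-singlet mixing (S_hs is schematic for the hat-s admixture of h).
def coupling_pure(state, lam, kappa, N13, N14, N15, S_hs, norm=1.0):
    C = lam * N13 * N14 if state == "higgsino" else -kappa * N15**2
    return norm * C * S_hs        # -> 0 when the singlet decouples (S_hs -> 0)

print(coupling_pure("higgsino", 0.6, 0.2, 0.707, 0.707, 0.0, S_hs=0.1))
print(coupling_pure("singlino", 0.6, 0.2, 0.0, 0.0, 1.0, S_hs=0.0))   # 0.0
```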
Notice that the magnitude of the effective coupling of the LSP to nucleons, hence also the SI scattering cross-section, is controlled by κ for the singlino and by λ for the Higgsino. In order to get a feeling about typical (i.e. without significant cancellations in the amplitude) magnitudes of the SI scattering cross-section, it is enlightening to show simplified formulae assuming that the Ĥ component of the SM-like Higgs mass eigenstate is negligible: eqs. (35) and (36) for a Higgsino and a singlino LSP, respectively. It is clear from these formulae that, unless the couplings and/or the singlet-Higgs mixing are very small, pure Higgsino and singlino neutralino dark matter is generically either excluded by LUX or within the reach of the forthcoming direct detection experiments such as XENON1T (so it can soon be found or excluded). In particular, for the widely considered small tan β and λ ∼ 0.6, the SI scattering cross-section for the Higgsino LSP is typically of order 10⁻⁴⁴ cm², which is above the LUX limit for a wide range of its masses.

General Higgsino-singlino LSP

For an LSP which is a general Higgsino-singlino mixture there are several non-zero contributions to f^(N), including the one proportional to B_ĥ (see eq. (23)), which on its own leads to an SI scattering cross-section of order 10⁻⁴⁵ cm² for λ ≈ 0.1, as discussed in subsection 4.1. Thus, if those contributions add constructively in the amplitude, the resulting cross-section is even bigger. On the other hand, if those contributions add destructively, a new kind of blind spot may appear. In the present case the blind spot condition (24) can be rewritten in the form (28) as eq. (37), with η given by eq. (68). Notice that the term in the bracket cancels with the same term present in the numerator of (68). The r.h.s. of the above expression quantifies the correction to eq. (33) coming from the mixing among scalars. It is tempting to check whether adding this correction can change the conclusions of subsection 4.1. The first term, proportional to S̃_hĤ, is typically very small, since S̃_hĤ is strongly constrained by the LHC measurements of the hbb coupling. This corresponds to B_Ĥ ≈ 0. Thus, it cannot change qualitatively the conclusions of the case without scalar mixing. The situation differs greatly in the case of the second term on the r.h.s. of eq. (37), which may give important corrections to the simple blind spot condition (33). For the discussion of the corrections to the blind spot condition it is useful to express S̃_hŝ in terms of the NMSSM parameters (for m_s ≫ m_h, assumed in this section), eq. (38). In the last approximate equality we introduced ∆_mix, which parameterizes the correction to the Higgs scalar mass due to its mixing with the remaining scalars, mainly with the singlet ŝ. For m_s > m_h this correction is always negative, so its magnitude is desired to be small. Notice that smallness of |∆_mix| usually requires some cancellation between the two terms in the bracket in the middle part of formula (38) (especially for large λ), which implies µΛ > 0. Notice also that the requirement of small |∆_mix|, say smaller than O(1) GeV, implies (S̃_hŝ/S̃_hĥ) ≲ 0.1 (m_h/m_s). Therefore, in order to have a strong modification of the blind spot condition, at least one of the other factors in the second term of the r.h.s. of eq. (37) must be much larger than one. This sets a condition on the NMSSM parameter space which depends on the composition of the LSP. 
Because in the rest of this subsection we will neglect the term proportional to S̃_hĤ in (37), our blind spot conditions will be of the form (31). One can see that for a small ĥ-ŝ mixing we also demand small |η|. The dependence of η on the LSP composition and mass is explicit in eq. (68). The parameter η may be small either because the numerator in (68) is small or because the denominator is large. The first possibility corresponds to the standard blind spot (33). The second possibility requires (at least) one of the terms in the denominator to be large. In the case of a highly mixed LSP, (1 − N²_15)/N_15 = O(1) and the denominator may be large only when |κ| ≫ |λ|. This, however, is limited by the perturbativity conditions. Moreover, both sides of eq. (40) must have the same sign which, using (38) and (68), gives the condition sgn(κ(m_χ − µ sin 2β)) = −sgn(η) = sgn(S̃_hŝ) = sgn(Λ sin 2β − 2µ). It follows that for m_χµ < 0 a blind spot is possible only when the combination of parameters κ(Λ sin 2β/µ − 2) is also negative. In addition, |η| is smaller (i.e. better for a blind spot with small |∆_mix|) when both terms in the denominator of eq. (68) are of the same sign, which is the case when eq. (42) holds. In the present case, with a small value of S̃_hŝ, it is easier to have a blind spot when the LSP is strongly dominated by the singlino (or Higgsino) component, because then either N²_15/(1 − N²_15) or (1 − N²_15)/N²_15 in the numerator of (68) is large. Let us now discuss these two situations.

Singlino-dominated LSP

It has already been noted that for a pure singlino η is exactly zero. However, a pure singlino can be obtained only for an infinite value of |µ|. Very large |µ| is undesirable for multiple reasons, including naturalness arguments. For natural values of |µ|, even if the LSP is singlino-dominated, some Higgsino component is always present, which may have a non-negligible contribution to η, hence also to the blind spot condition. Notice also that for a given value of µ the minimal value of the Higgsino component of the LSP grows with λ, since the latter controls the magnitude of the singlino-Higgsino mixing. In what follows we study the impact of a non-zero Higgsino component on the existence of a blind spot. The blind spot condition (31) with η given by eq. (69) takes the form (43). For a strongly singlino-dominated LSP its mass |m_χ| is much smaller than |µ|, so the first term in the l.h.s. of this equation is rather small and the blind spot condition without the scalar mixing effects (i.e. with the r.h.s. neglected) can be fulfilled only for appropriately large tan β and for positive m_χµ. Now we will check whether the scalar mixing effects may lead to blind spots with smaller values of tan β and/or negative m_χµ. Such changes are possible only when the r.h.s. of (43) is negative, because decreasing tan β and changing the sign of m_χµ both give negative corrections to the l.h.s. of the above blind spot condition. This gives the condition (Λ sin 2β/µ − 2)κ < 0. In addition, the absolute value of the r.h.s. of (43) should not be very small in order to give a substantial modification of the blind spot condition. The biggest such value is necessary when one wants simultaneously to decrease tan β and have negative m_χµ. Let us now discuss such an extreme modification of blind spots. In the region of large λ ∼ 0.6 and small tan β ∼ 2, the l.h.s. of (43) is O(1) while the r.h.s. is generically very small. The reason is that, in addition to the suppression by small |∆_mix|, the r.h.s. 
is suppressed also by the factor κ/λ, because for λ ∼ 0.6 perturbativity up to the GUT scale requires κ ≲ 0.4 [10]. The only way to enhance the r.h.s. would be by the factor 1/√(1 − N²_15). However, the r.h.s. could be of order O(1) only for an extremely pure singlino, corresponding to |µ| ≫ λv. For large λ this translates to extremely large, hence very unnatural, values of |µ|. For example, for |κ| = 0.1, |∆_mix| = 1 GeV and m_s = 500 GeV, |µ| would have to be O(20) TeV. Thus, we conclude that for large λ and small tan β it is not possible to have a blind spot for a singlino-dominated LSP with m_χµ < 0, unless the Higgsino is extremely heavy. For m_χµ > 0 such a blind spot can occur only if the standard blind spot condition (33) is approximately satisfied. This can be seen in Fig. 2 (in all plots presented in this paper the LEP and LHC Higgs constraints (at the 2σ level) are satisfied unless otherwise stated). The situation changes if λ is small. In such a case the r.h.s. of (43) can be enhanced both by κ/λ and by 1/√(1 − N²_15) for not so huge values of |µ|. Then a blind spot may appear for m_χµ < 0 and/or small tan β, provided that at least one of these factors is large enough (of course only when (Λ sin 2β/µ − 2)κ < 0). It can be seen from the left panel of Fig. 2 that for |µ| = 500 GeV a blind spot with m_χµ < 0 may appear for λ ≲ 0.2 without violating the perturbativity constraints. For larger values of |µ|, larger values of λ may allow for a blind spot due to the decreasing of the Higgsino component with increasing |µ|. We note that it is easier to relax the IceCube constraints on the SD cross-section when |κ| is not small. This is because for big values of |κ| the LSP annihilates dominantly (via the s-channel exchange of a singlet-like pseudoscalar) into a singlet-like scalar and pseudoscalar (if the latter is light enough and the LSP has a non-negligible singlino component). We have verified with MicrOMEGAs that for |κ| ∼ O(0.1) this is indeed the dominant annihilation channel for a singlino-dominated LSP. The IceCube collaboration [32] does not provide limits on the SD cross-section for such an annihilation pattern. It is beyond the scope of the present paper to use the IceCube data to accurately calculate limits for such a case. However, we expect that such limits would be weaker than for DM annihilating into pairs of SM Higgs bosons, because a light singlet-like pseudoscalar decays much more often into bottom quarks and does not decay into gauge bosons. Hence, we expect such a limit to be comparable to or only slightly better than the one obtained by XENON100.

Figure 2: Left: Points with the SI cross-section that can be below the neutrino background for m_χµ > 0 (red) and m_χµ < 0 (blue), while keeping 10⁻³ ≤ |∆_mix| ≤ 1 GeV and 5·10⁻³ ≤ |κ| ≤ 0.3. Right: The same as in the left panel but as a function of |µ| and fixed λ = 0.6. The green line corresponds to the standard blind spot condition (33). Brown points on the green line for |µ| ≈ 120−250 GeV are excluded by the XENON100 constraints on the SD scattering cross-section [31] (see also fig. 6). All points are consistent with the LHC Higgs data at 2σ.

Higgsino-dominated LSP

As we discussed in subsection 4.2.1, for a pure Higgsino the SI cross-section is proportional to the ĥ-ŝ mixing, which for m_s > m_h is preferred to be small to avoid a large negative ∆_mix. This implies that for small values of |∆_mix| the LUX constraints on a strongly Higgsino-dominated LSP are generically satisfied. 
However, this is not the case for future direct detection experiments, so the discussion of blind spots is interesting also in this case. There are no blind spots for a strongly Higgsino-dominated LSP if the contributions from the mixing with the H and s scalars are negligible. The reason is that for m_χ ≈ µ the condition (33) could be fulfilled only for tan β very close to 1. Let us check whether this conclusion changes after taking into account the effects of mixing in the scalar sector. For a Higgsino-dominated neutralino the second term in the denominator in (68) may be neglected (unless κ ≫ λ). Then, substituting (38) and η given by eq. (70) into (31), we get the blind spot condition (45). Both sides of this equation are proportional to the combination m_χ/µ − sin 2β. So, there are two ways to fulfill the last equation: either both sides vanish, or the factor multiplying m_χ/µ − sin 2β on the r.h.s. is close to 1. Thus, in the case of a Higgsino-dominated LSP there are two kinds of blind spots. The first, like in the case without scalar mixing, is given by condition (33) and requires values of tan β very close to 1 and m_χ of the same sign as µ. The second kind of blind spots is given by the condition that this factor is close to 1, which may be fulfilled only when (Λ/µ) > (2/sin 2β). Notice that for a Higgsino-dominated LSP, i.e. small |N_15|, it follows from the last equation that |∆_mix| is preferred to be small for a blind spot to occur. Thus, the tuning of parameters required to keep |∆_mix| small automatically gives some suppression of the SI scattering cross-section, provided that (Λ/µ) > (2/sin 2β). However, the strength of this suppression depends on some other parameters. For example, for a fixed value of the singlino component in the LSP, N_15, it depends on the sign of µ. This follows from the last factor in the r.h.s. of eq. (45) and is illustrated in Fig. 3 for λ = 0.6 and two values of tan β. The value of |∆_mix| is bigger when µ (and in this case also Λ) is negative. As usual, the dependence on the sign of µ is more pronounced for smaller values of tan β. For tan β = 2 the value of |∆_mix| for negative µ is about an order of magnitude bigger than for positive µ. So, for a given LSP composition, a blind spot with positive m_χµ is preferred because it allows for a bigger Higgs mass. Indeed, it can be seen in Fig. 2 that for |∆_mix| < 1 GeV and m_χµ > 0 a larger singlino component of the LSP would be allowed, even if the constraints on the SI cross-section reached the level of the neutrino background, than for m_χµ < 0. This fact can be understood from eq. (45). Moreover, for a given admixture of the singlino in the LSP, larger values of λ would be possible for m_χµ > 0. Let us also point out that for large λ ∼ 0.7 perturbativity up to the GUT scale requires κ ≲ 0.3, which in the scale-invariant NMSSM implies that the diagonal singlino mass term is smaller than |µ|, hence the LSP would be dominated by the singlino. Therefore, the above situation can be realized only in the general NMSSM, in which the LSP can be Higgsino-dominated provided that the µ′ parameter (defined below eq. (2)) is large enough.

Blind spots with interference effects between h and H exchange

Let us now consider the case in which f_h^(N) is not necessarily small but interferes destructively with the contribution f_H^(N) mediated by the heavy Higgs doublet. This kind of blind spots in the context of the MSSM was identified in [9] and can be realized if H is not too heavy and tan β is large. 
In such a case the coupling of H to down quarks, hence also to nucleons, may be enhanced by large tan β, which could compensate the suppression of f_H, resulting in a non-negligible A_H defined in eq. (21). In this section we neglect the contribution from the s exchange and set A_s to zero.

Without mixing with singlet

In the case of negligible mixing of the scalar doublets with the scalar singlet, the B_ĥi parameters are given by eq. (46). The mixing between the doublets is small and may be approximated as in eq. (47). The last equality was obtained under two assumptions: we assumed that there is no mixing of the singlet scalar with the doublets and that tan β ≫ 1. The former assumption is specific to the present subsection. The latter one is necessary because only then can f_H^(N) be non-negligible. When B_ŝ = 0, the blind spot condition (28) can be written as eq. (48). In the case of large tan β and negligible ĥ-Ĥ mixing, the expression (21) for h_i = H simplifies to (49). Then the blind spot condition (48) takes the form (50). This is a result similar to the one obtained in the MSSM [9], but for the singlino-Higgsino LSP rather than the gaugino-Higgsino one. Note that, as follows from (50), sgn(m_χµ) = 1 is required, in contrast to the MSSM. We should also comment on the fact that the NMSSM provides a framework for relaxing the experimental constraints on m_H, hence also on A_H. Namely, the mass of the MSSM-like pseudoscalar can be very different from m_H if one admits mixing of the MSSM-like pseudoscalar with the singlet-dominated pseudoscalar (such mixing can be present even if the mixing in the CP-even Higgs sector is strongly suppressed). In such a case, the lower mass limit becomes weaker if the mixing effects push the MSSM-like pseudoscalar mass substantially above m_H. While recasting the LHC constraints on such a scenario is beyond the scope of this work, it seems viable that this effect may allow for an H light enough to have A_H ∼ O(1). If this is the case, a blind spot at large tan β would exist also for a highly mixed Higgsino-singlino LSP. This would be in contrast to the case with only the h exchange, for which at large tan β a blind spot cannot exist with |m_χ| ≈ |µ|, see eq. (33) and the green line in the left panel of Fig. 4.

Mixing with singlet, m_s ≫ m_H

We consider in the rest of this section the case of m_s ≫ m_H, but with the term proportional to S̃_hŝ in the r.h.s. of eq. (53) neglected. Then, the blind spot condition can be simplified using an approximation which is valid as long as λvΛ is small in comparison with m²_s. From this approximation it should be clear that for large enough Λ and A_H ∼ O(1) one can obtain A_H S̃_Hŝ/S̃_HĤ ≫ S̃_hŝ/S̃_hĥ. In such a case the blind spot condition is well approximated by eq. (56). As already noted in the previous subsection, for m_χµ > 0 it is easier to have a blind spot for a highly mixed Higgsino-singlino LSP. Indeed, it can be seen in Fig. 4 that at large tan β with a light enough H a blind spot is possible for any composition of the LSP. For m_χµ < 0 the situation is different. If the mixing in the scalar sector is small, only the first term in the square bracket in (56) is relevant, which makes it harder to obtain a blind spot. So in order to have a blind spot with m_χµ < 0 the second term in this bracket must be larger in magnitude. However, this term may be sizable only for small |η|, i.e. for an LSP which is dominated either by the singlino or by the Higgsino. Therefore, there are no blind spots for a highly mixed Higgsino-singlino LSP with m_χµ < 0. 
Nevertheless, for large enough Ĥ-ŝ mixing a somewhat bigger Higgsino or singlino component may be possible for large tan β if H is light enough, as can be seen from Fig. 4. Notice, however, that for large tan β and relatively light H the value of λ exhibits a stronger upper bound. This follows from our requirement that the negative ∆_mix should have a rather small absolute value. Indeed, |∆_mix| is small if Λ ≈ µ tan β (in order to suppress M²_hŝ), which results in very large, multi-TeV values of Λ. This in turn implies a big M²_Hŝ unless λ is small. Nevertheless, the upper bound on λ should not be considered problematic, since there is no strong motivation for big λ when tan β is large, which is necessary for this kind of a blind spot.

Blind spots with interference effects between h and s exchange

Now we turn our attention to the case in which the contributions to the scattering amplitude from the Higgs scalar and the singlet-dominated scalar are comparable. This does not have an analog in the MSSM, so it is particularly interesting. In the presence of non-negligible mixing between the singlet and the Higgs doublet, f_s^(N) is generically large if m_s < m_h. A light singlet-dominated scalar with sizable mixing with the Higgs scalar is particularly well motivated, since it can enhance the Higgs scalar mass even by 6 GeV as compared to the MSSM, allowing for relatively light stops in the NMSSM, even for large tan β [23]. It was already noticed some time ago [14] that the contribution from the singlet-dominated scalar to the scattering amplitude can be significantly larger than the Higgs contribution. Nowadays such a possibility is excluded by the current constraints from the direct detection experiments, and it is more interesting to study the case in which f_h^(N) and f_s^(N) are similar in magnitude and interfere destructively. We neglect the mixing with the heavy scalar H, with one exception - we will keep the terms proportional to (tan β − cot β)S̃_hiĤ in (20) for h_i = s, h. This approximation leads to the relations (57). In the last equation we introduced the parameter γ, which may be related to ∆_mix by eq. (58). For fixed m_s and small γ one gets the proportionality ∆_mix ∝ γ². From (57) we get the values of the B_ĥi parameters. Our A_s parameter can be expressed in a form in which we introduce other convenient parameters, c_s and c_h. Without mixing with Ĥ these quantities would be equal to 1. In the limit of large tan β the c_s (c_h) parameter measures the ratio of the couplings, normalized to the SM values, of the s (h) scalar to the b quarks and to the Z bosons. It is easier to make a light scalar s compatible with the LEP bounds when c_s is small [23], especially for m_s ≲ 85 GeV. We should note, however, that c_s < 1 implies c_h > 1, which in turn leads to suppressed branching ratios of h decaying to gauge bosons, so c_h is constrained by the LHC Higgs data. Note that, in contrast to the A_H parameter (see (52)), A_s can have both signs, depending mainly on the sign of γ. The LEP and LHC constraints on γ, ranging from approximately 0.3 to 0.5 (corresponding to m_s from m_h/2 to about 100 GeV), imply that |A_s| ≲ 1 (the bound is saturated for m_s around the LEP excess). Because we assumed B_Ĥ ≈ 0, the blind spot condition under consideration is of the form (31) and reads as eq. (64). It is qualitatively different from the corresponding conditions in (40). The main reason is that the l.h.s. of this equation is not generically suppressed (in contrast to the cases considered in section 4.2). 
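A small sketch of the interference mechanism (schematic: c_h and c_s stand in for the full propagator-weighted combinations of eqs. (21)-(22), and the masses are illustrative assumptions):

```python
# Destructive h-s interference: with two CP-even mediators the effective
# nucleon coupling is a sum of propagator-weighted terms.
def f_two_mediators(c_h, c_s, m_h=125.0, m_s=95.0):
    return c_h / m_h**2 + c_s / m_s**2

# The sum cancels when c_s = -c_h*(m_s/m_h)**2: a light singlet needs only
# |c_s/c_h| ~ (95/125)**2 ~ 0.58 of the Higgs-exchange strength, with the
# opposite sign, to produce a tree-level blind spot.
c_h = 1.0
c_s = -c_h * (95.0 / 125.0) ** 2
print(f_two_mediators(c_h, c_s))   # ~0
```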
LEP and LHC constraints set upper bounds on |B_ŝ/B_ĥ|; nevertheless, it can be as large as about 0.4 (0.3) for c_s ≈ 1 (c_s ≈ 0) and therefore could be at least one order of magnitude larger than in the case with only the h exchange taken into account (see (40)). The above blind spot condition may be rewritten in a form analogous to eq. (37). There is one crucial modification as compared to (37): since |B_ŝ/B_ĥ| does not have to be suppressed, it is possible to have a blind spot for sizable values of |η|, independently of the sign of m_χµ. This implies that a blind spot may occur for larger Higgsino-singlino mixing, even for λ larger than |κ|. In particular, it is now possible to have a blind spot for a singlino-dominated LSP for large λ and small tan β with sub-TeV |µ| for both signs of m_χµ, without violating perturbativity up to the GUT scale. This is demonstrated in Fig. 5. As can be seen, for λ = 0.6 and tan β = 2 the blind spots occur for |κ| ≲ 0.4 (which is necessary to avoid Landau poles below the GUT scale for this value of λ). This is in contrast to the case when σ_SI is dominated by only the h exchange, where for a singlino-dominated LSP blind spots with large λ and small tan β were present only for m_χµ > 0.

Figure 5: The LSP spin-independent cross-section (solid lines) for tan β = 2 as a function of κ, whose sign is chosen to provide the same signs for both sides of (64). The horizontal lines show the experimental limits as in Fig. 1. The colored regions depict the corresponding neutrino background levels. Plots for µ < 0 are very similar. The SD cross-section in the vicinity of blind spots is below the sensitivity of IceCube (independently of the assumed dominant annihilation channel).

In Fig. 6 plots analogous to those presented for the heavy singlet case in Fig. 2 are shown. It can be seen that, if the singlet-dominated scalar is light, blind spots can exist for large λ and tan β = 2 without violating the perturbativity bounds for m_χµ > 0 for (almost) any composition of the LSP. The case of m_χµ < 0 is also less constrained. Nevertheless, if the LSP is not Higgsino-dominated, blind spots can exist for large λ and m_χµ < 0 only for a small range of N_15 (if κ is kept in the perturbative regime). For low tan β the most interesting region is that of large λ, so in the right panel of Fig. 6 we plot the regions where a blind spot can occur for fixed λ = 0.6 as a function of |µ|. It can be seen that for m_χµ < 0 a blind spot can occur for a singlino-dominated LSP if |µ| ≳ 800 GeV, and the range of possible values of N_15 grows with increasing |µ|. For m_χµ > 0 almost any LSP composition allows for the existence of a blind spot, except for some region of a strongly-mixed Higgsino-singlino LSP with |µ| ≲ 800 GeV (in that region a blind spot cannot occur because |η| is too large to satisfy the blind spot eq. (31) when the precision Higgs data, constraining the Higgs-singlet mixing, are taken into account). The fact that the blind spots can now occur for large λ and small tan β for a much wider range of the LSP composition is not only due to the fact that the singlet-dominated scalar is light, but also because of large Higgs-singlet mixing, hence also large ∆_mix. This is demonstrated by the dashed contours in Fig. 6, which correspond to the minimal value of ∆_mix for which the SI scattering cross-section may be below the neutrino background. (For a given point in the N_15-λ plane there might be several solutions with the SI scattering cross-section below the neutrino background, with different values of ∆_mix.) It follows from the comparison of these contours with the plot in Fig. 2 (for heavy singlet) that ∆_mix above a few GeV is required to significantly extend the range of the LSP composition for which a blind spot can occur when λ is large.

Figure 6: Left: Points with the SI cross-section that can be below the neutrino background for m_χµ > 0 (red) and m_χµ < 0 (blue), while keeping |κ| ≤ 0.3, |∆_mix| small enough to avoid the LEP and LHC constraints, and |µ| = 500 GeV. The solid contours correspond to the maximal value of ∆_mix for which the SI scattering cross-section can be below the neutrino background - above these contours smaller ∆_mix is required for a blind spot to exist. The dashed contours correspond to the minimal value of ∆_mix for which the SI scattering cross-section can be below the neutrino background - to the right of these contours larger ∆_mix is required for a blind spot to exist. Right: The same as in the left panel but as a function of |µ| for λ = 0.6. The black (brown) region is excluded by the XENON100 constraints on the SD scattering cross-section [31] for m_χµ < 0 (m_χµ > 0). All points are consistent with the LHC and LEP Higgs data at 2σ.

It is also interesting to check what happens if one demands large ∆_mix, so that the Higgs scalar mass gets a substantial enhancement from the Higgs-singlet mixing effects. In Fig. 6 we also present solid contours that correspond to the maximal value of ∆_mix for which the SI scattering cross-section may be below the neutrino background. It can be seen that if one demands ∆_mix of at least 1 GeV then for large λ there are no blind spots for an LSP strongly dominated by the Higgsino component. This can be understood in the following way. For large ∆_mix and a light singlet, |B_ŝ/B_ĥ| is no longer close to zero, so in order for the blind spot to occur |η| should not be close to zero. One can see from the definition (26) that |η| ∼ |N_15| for the Higgsino-dominated case, so a lower bound on ∆_mix sets a lower bound on the singlino component of the LSP. Noting that |B_ŝ/B_ĥ| is to a good approximation proportional to √|∆_mix|, we conclude that a lower bound on N²_15 scales proportionally to ∆_mix. This is in agreement with the results in Fig. 6. Since in this case an SI cross-section blind spot can occur also for a highly-mixed Higgsino-singlino LSP, one may expect to probe this region with SD direct detection experiments. Indeed, the XENON100 limits exclude some part of the parameter space with SI cross-section blind spots for large λ and small |µ| (black and brown points in Fig. 6). In this region of the parameter space the LSP annihilates dominantly to a light singlet-like scalar and a pseudoscalar, which typically decay to pairs of bottom quarks, so the IceCube limits are not expected to be stronger than the XENON100 ones. We should emphasize that the effect of large Higgs-singlet mixing has particularly important implications for models with µ′ = 0 (i.e. with a vanishing quadratic term in f(S)), including the Z3-invariant NMSSM, because in those models the LSP composition is related to the ratio κ/λ. Namely, the LSP is singlino-dominated if λ > 2|κ|. This implies that for large λ the LSP is typically singlino-dominated and can be highly mixed Higgsino-singlino only if |κ| is close to the upper bound from the requirement of perturbativity up to the GUT scale. In consequence, in this class of NMSSM models with large λ and small tan β a blind spot may occur only for 2κ/λ ≈ m_χ/µ ≈ sin(2β) if the Higgs-singlet mixing is small. 
On the other hand, for large Higgs-singlet mixing a blind spot can occur for a much wider range of κ/λ (corresponding to different LSP compositions) for m_χµ > 0, while for m_χµ < 0 the existence of a blind spot may be possible provided that |µ| is large enough.

Large tan β region

In models with large tan β, the couplings of the s and h scalars to b quarks may significantly deviate from the couplings to the massive gauge bosons, which has important consequences for the SI scattering cross-section. From our perspective the most interesting situation takes place when ∆_mix, being now positive, is large. As stated above, for m_s ≲ 85 GeV small |c_s|, and hence large tan β and small λ, are preferred [23]. For definiteness, let us consider tan β = 10, λ = 0.1 and two representative values of m_s, 70 and 95 GeV, for which the LEP bounds are, respectively, quite severe and rather mild. In Fig. 7 we present the points (for a few values of c_s) for which σ_SI is smaller than the neutrino background for the two signs of m_χµ. The most apparent difference between c_s > 1 and c_s < 1 is that in the first case there are no points with a Higgsino-dominated LSP, whereas in the second one there is a negative correlation between the Higgsino admixture and ∆_mix (for N²_15 ≳ 0.1). In order to explain this behavior we rewrite the blind spot condition (64) in the form adequate for the Higgsino-dominated limit, i.e. for |m_χ/µ| → 1; the result is eq. (66). For the specific values of c_s and m_s chosen in our example, the l.h.s. of this equation is proportional to γ (with a negative coefficient) and thus to √∆_mix (see (58)) - this explains why there is a correlation between ∆_mix and |N_15|. To understand why for c_s > 1 (c_s < 1) there are (no) points which fulfill (66), we should notice (see eqs. (8), (9)) that for tan β ≫ 1 we have sgn(1 − c_s) = sgn(Λγ) = sgn(µγ) - the second equality holds because a partial cancellation between the two terms in M²_hŝ is needed. This is exactly what we wanted to show: for c_s < 1 the l.h.s. of (66) has the sign equal to −sgn(µ), thus the equality cannot hold (and inversely for c_s > 1). It can be shown (using relations (12) and (13)) that the above conclusions hold also for some part of the highly mixed LSP parameter space, when |κ/λ| is smaller than |N_13 …|, with unsuppressed |η| in eq. (64). For a singlino-dominated LSP we can always choose the sign and value of κ to fulfill relation (64).

Figure 7: Points with σ_SI smaller than the neutrino background [6] for m_χµ > 0 (red) and m_χµ < 0 (blue), while keeping |κ| ≤ 0.6. Upper (lower) plots correspond to m_s = 70 (95) GeV, whereas the left (right) ones to c_s smaller (larger) than 1.

Let us finally comment on the fact that for large tan β the H exchange might be relevant if H is light enough. The presence of a relatively light H usually results in stronger constraints on the parameter space, especially for large values of λ. This is because in this region of the parameter space |M²_Hŝ| is well approximated by λvµ tan β, so it is typically larger than the diagonal entries of the Higgs mass matrix, unless λ is small. As a result, large values of λ lead to tachyons, or at least to mixing effects that are too large to accommodate the LEP and/or LHC Higgs data.

Summary

We have investigated blind spots for the spin-independent scattering cross-section of a Higgsino-singlino LSP in the NMSSM. If the mixing between the (SM-like) Higgs scalar and the other scalars is negligible, a blind spot can occur only if the ratio m_χ/µ is positive and has a value close to sin 2β. 
Then blind spots exist only for singlino-dominated LSPs (unless tan β is very close to 1), with the amount of the Higgsino component determined by tan β. This changes a lot when mixing with the singlet scalar is taken into account. If the singlet-dominated scalar is heavier than the Higgs scalar, the Higgs-singlet mixing has to be quite small to avoid a large negative correction to the Higgs scalar mass. But even for such small mixing new classes of blind spots appear. Blind spots for Higgsino-dominated LSPs become possible and the ratio m_χ/µ may also be negative. The LSP composition is no longer so strongly related to tan β, especially for smaller values of λ. However, in most cases the LSP must be highly dominated either by the Higgsino or by the singlino. A blind spot for a highly mixed Higgsino-singlino LSP is possible only for small values of λ and tan β and positive m_χ/µ. In addition, in the most often explored part of the NMSSM parameter space, with large (but perturbative) λ and small tan β, a blind spot for a singlino-dominated LSP can occur only if m_χµ > 0 and eq. (33) is approximately satisfied. If the singlet-dominated scalar is lighter than the Higgs scalar, large Higgs-singlet mixing is welcome because the contribution from such mixing to the Higgs scalar mass is positive. For small tan β, the LEP and LHC constraints allow for sizable mixing, leading to a correction to the Higgs scalar mass ∆_mix ∼ 5 GeV for a singlet mass in the range of about 85÷105 GeV. For such a big ∆_mix, a blind spot for large λ and tan β ∼ 2 may occur also for a highly mixed Higgsino-singlino LSP if m_χµ > 0, which would not be possible otherwise. It should be noted, however, that large ∆_mix is not always beneficial for the occurrence of a blind spot. For example, for an LSP strongly dominated by the Higgsino a blind spot may occur only if ∆_mix is small. For a light singlet scalar and big ∆_mix the region of moderate and large tan β is also interesting. In such a case the singlet coupling to bottom quarks may be significantly different from the one to gauge bosons. If the sbb coupling is suppressed, relatively large ∆_mix is allowed by LEP also for m_s < 85 GeV. We found that for a suppressed sbb coupling a blind spot may occur only for a singlino-dominated LSP. On the other hand, if the sbb coupling is enhanced, a blind spot can exist for any composition of the LSP and for both signs of m_χµ. For large tan β one more class of blind spots may exist if the heavier scalar doublet H is light enough to mediate the LSP-nucleon interaction in a substantial way and the singlet-dominated scalar is rather heavy. In such a case positive m_χµ is again preferred, allowing for blind spots with the LSP composition much less restricted than in the case with a very heavy H. If the Higgs-singlet mixing is present, m_χµ < 0 is also possible, but in this case the influence of a relatively light H on possible blind spots is quite marginal. In addition, smaller values of m_H result in stronger upper bounds on the coupling λ. There are several avenues for future studies where the results obtained in this paper can be used. For instance, it will be crucial to investigate how one can probe a neutralino LSP with the SI scattering cross-section below the neutrino background. Some possible ways to constrain blind spots may be to use the direct and indirect detection experiments sensitive to the SD cross-sections, or dedicated collider searches which, in the context of the MSSM, turn out to be complementary to direct dark matter searches, see e.g. 
[26,38,39] for some recent work on this topic. Some studies of the LHC sensitivity to the Higgsino-singlino sector have already been done [40], but more effort in this direction is welcome. It will also be interesting to investigate whether the blind spots identified in this paper can exist in more constrained versions of the NMSSM, and in which scenarios it is possible to explain the observed abundance of dark matter assuming a thermal history of the Universe. We plan to investigate these issues in the future.

Useful formulae

The parameter η defined in (26) may be expressed in terms of the other parameters of the NMSSM. With the help of eqs. (12) and (13) it may be written in the explicit form (68). It will be helpful to consider a few limits of this parameter. Let us start with the situation when one of the terms in the denominator dominates over the other one. The first (second) term in the denominator may be neglected if |κ/λ| is much bigger (smaller) than [(m_χ/µ + µ/m_χ) sin 2β − 2] / [(m_χ/µ + µ/m_χ) − 2 sin 2β]. The second factor in the last expression is always smaller than 1 and approaches 1 in the limit |m_χ/µ| → 1, i.e. for a strongly Higgsino-dominated LSP. It may be very small if m_χµ > 0 and sin 2β ≈ 2/(m_χ/µ + µ/m_χ). If the first term in the denominator dominates, i.e. we are considering a singlino-dominated LSP and/or |κ| much bigger than λ (for a not strongly Higgsino-dominated LSP), the parameter η is approximately given by eq. (69).

Comments on the spin-dependent scattering cross-section

The only tree-level contribution to the spin-dependent scattering cross-section comes, in our case, from the t-channel Z exchange, so it depends only on the Higgsino contribution to the LSP and reads as in eq. (74), where C^(p) ≈ 4, C^(n) ≈ 3.1 [41]. Combining eqs. (12), (13) and (14) we can write it as in eq. (75). We can see immediately that the cross-section vanishes in the limit of tan β = 1 or for a pure singlino/Higgsino LSP. Using eq. (14) we may rewrite the last formula in the form (76), showing the explicit dependence of the LSP-Z coupling on λ (there is also an implicit dependence via the LSP mass m_χ).
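For illustration, the Z-exchange structure described above can be sketched as follows (the (N13² − N14²)² dependence is the standard neutralino-Z coupling; the overall scale 'norm' is an arbitrary assumption and C^(N) only fixes the proton-to-neutron ratio):

```python
# Schematic SD cross-section from t-channel Z exchange (cf. eqs. (74)-(76)):
# only the Higgsino components couple, via g_Z ~ (N13**2 - N14**2).
def sigma_sd(N13, N14, C_N, norm=1.0):
    return norm * C_N * (N13**2 - N14**2) ** 2

print(sigma_sd(0.3, 0.3, C_N=4.0))    # 0: the tan(beta) = 1 limit (N13 = N14 here)
print(sigma_sd(0.4, 0.2, 4.0) / sigma_sd(0.4, 0.2, 3.1))   # sigma_p/sigma_n = 4/3.1
```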
16,429
2015-12-08T00:00:00.000
[ "Physics" ]
Multi-server queue with batch arrivals A multi-server queueing system, which is loaded continuously in certain periods of time and which functions for a certain amount of time allocated for the functioning of the system, is considered. Based on renewal theory, an expression is obtained for the distribution density of the number of arrivals served; the service time of each server can be different. In a numerical example, the distributions of the number of services for systems consisting of one, two, and five servers are obtained. An approach to optimization of the queue using a stochastic model of supply and demand is outlined. According to the model, the distributions of the number of services, of the queue length as the number of unused arrivals, and of the number of idle servers as the number of unused services are calculated. Each of these values corresponds to a cost. Knowledge of the distribution functions of the model indicators makes it possible to calculate the cost parameters with unit costs depending on the number of servers. The optimal number of servers can then be selected from the condition of the maximum of the total average cost. Introduction Queueing theory is often used to describe the functioning of complicated systems. We consider a multi-server queueing system that is continuously loaded in certain periods of time; it can be regarded as a particular case of a system with batch arrivals. Thus, in [1] the situation is considered when, under conditions of low loading, service begins when a certain number of arrivals has accumulated in the system and ends when the system is completely freed. Published articles examine queues with different service disciplines, specific input processes and/or service times; the possibility of failure of the serving device is also considered [2,3]. In these works, at least one of these quantities has an exponential distribution [4,5]. In all of these articles, the characteristics of the system are studied for the steady-state queue. In [6,7] steady-state probability distributions were obtained. Some important performance measures, such as the average number of arrivals in the system and the mean sojourn time, have also been obtained in [7]. Arbitrarily distributed interarrival and service times are considered in [8], but only in the case of one server. In works on queue optimization, only Markovian systems are considered. In [9], in the case of several criteria, decision-making theory was used. In [10,11], optimization was carried out by simulating the queue. This paper studies the characteristics of a multi-server queue with batch arrivals at its input. The service time is characterized by an arbitrary distribution and can be different on different servers. The service of arrivals is considered over a horizon time, which can be either deterministic, t_H, or random, T_H. The latter case has not been treated in the references. An important characteristic of the general service process under consideration is the distribution of the number of services during the time T_H (t_H). This distribution is the basis for optimizing the queue by establishing the optimal number of servers and/or determining a rational service time through the modernization of servers.
Distribution density of the number of served arrivals When serving on one server, the number of services completed within the horizon time is random, with a distribution density given by renewal theory. In the case when the time T_H during which the processes are studied is random, the integral of "convolution type" K_n(T_H) is computed by averaging over E_H(t) = P{T_H ≤ t}, the distribution function of the time T_H. Simultaneous operation of m identical servers during the time T_H is characterized, correspondingly, by the total flow of services. In general, we assume that each server is characterized by its own service time. We obtain a formula for the distribution density of the number of arrivals served. The discrete analogue of (1) has the form of a convolution, where a_m(j) is the distribution density of the number of services of the m-th server during T_H (t_H), and A_m^+(j) is the distribution function of the total number of completions of m services. The "+" sign will further indicate that the indicator refers to the total completion of the services. The required distribution density is obtained term by term: considering the multiplier for a_m(0), which follows from (4), and proceeding in the same way for the remaining terms, we arrive at the resulting formula (5). With a random number of servers M with a given distribution, the number of completed services has the corresponding mixture distribution. Numerical example Consider a queue that is continuously loaded for 60 units of time. The service time of one server is distributed arbitrarily and does not exceed 10 units of time. The number of arrivals is not limited. When the average service time is t̄ = 5.58 units of time with coefficient of variation v = 0.23, on average 10.57 arrivals will be served (calculated by the formula t_H/t̄ and confirmed by simulation). The distribution varies from 9 to 13. If the service time is distributed with average t̄ = 2.54 units of time and v = 0.52, then in the period of 60 units of time 24.79 arrivals will be served. The distribution varies from 19 to 31. Comparison of the calculation results shows how much the distribution of the service time matters: it affects both the average number and the dispersion of the possible number of services. In this case, as indicated above, the maximum service time of the server does not exceed 10 units of time. Two server queue By (5) we obtain the distributions of the number of services for three cases. If the number of arrivals at the entrance to the queue is not limited, then in the first case (E(t) = E1(t)) from 19 to 24 arrivals can be served during the time t_H; the average number of arrivals served is 21.15. In the second case (E(t) = E2(t)) two servers can serve from 42 to 58 arrivals; the average number of arrivals served is 49.57. Let us consider a more interesting case, when the service time of one server is distributed according to the distribution function E1(t) and that of the second according to E2(t). The average number of services calculated using the distribution a_2^+(n) is 35.36 and can no longer be determined from the simple relation t_H/t̄. The envelopes of the distribution densities for the three cases are shown in Fig. 1. Five server queue By (5), we similarly obtain the distributions of the number of services. In the first case (E(t) = E1(t)) five servers can serve from 49 to 58 arrivals; the average number of arrivals served is 52.86. In the second case (E(t) = E2(t)) five servers can serve from 112 to 136 arrivals; the average number of arrivals served is 123.94.
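A small Monte Carlo sketch can illustrate how such distributions arise. The scaled Beta service law below is a hypothetical stand-in with mean ≈ 5.58 and support [0, 10] (the actual E1(t) of the example is not reproduced here), and the counting follows the continuous-loading renewal logic described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def services_in_horizon(sample_service, t_h, n_servers, n_runs=20_000):
    """Monte Carlo distribution of the total number of services completed
    by continuously loaded servers within the horizon time t_h."""
    counts = np.zeros(n_runs, dtype=int)
    for r in range(n_runs):
        total = 0
        for _ in range(n_servers):
            t = sample_service()
            while t <= t_h:            # renewal counting on one busy server
                total += 1
                t += sample_service()
        counts[r] = total
    return counts

# illustrative stand-in for E1(t): 10 * Beta(5, 4), mean = 10 * 5/9 ~ 5.56
sample_e1 = lambda: 10.0 * rng.beta(5.0, 4.0)
c = services_in_horizon(sample_e1, t_h=60.0, n_servers=1)
print(round(c.mean(), 2))              # close to t_H / mean service time
vals, freq = np.unique(c, return_counts=True)
print(dict(zip(vals.tolist(), np.round(freq / len(c), 4).tolist())))
```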
Approach to optimizing the queue Optimization of the queue can be carried out by applying to it a stochastic model of supply and demand, as in [12,13]. Since the number of arrivals served by the servers within the horizon time is random, the total number of arrivals Z is divided into the number Z^+ of served arrivals and the number Z^- of arrivals left unserved by m servers during the time T_H (t_H). The value Z^- determines the queue length. In turn, the possible number of services is divided into the number N^+ of used services and the number N^- of unused services. The density q_m(n) of the distribution of the number N^- of unused services and the density g_m(n) of the distribution of the number Z^- of unserved arrivals are calculated accordingly. No restrictions are placed on the duration T_H (t_H); hence both stationary and non-stationary systems can be studied. Using the supply-and-demand model, the distributions of the queue length and of the number of unused services are obtained, and a method for determining the optimal number of servers is indicated. Further research will take account of server reliability. The work was supported by Act 211 of the Government of the Russian Federation, contract № 02.A03.21.0011. Fig. 1. Envelopes of the distribution densities of the served arrivals. The random value N^- is the number of idle servers. The distribution density h_m(n) describes the number N^+ of used services.
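To make the cost-based selection of the number of servers concrete, the sketch below estimates a total average cost (revenue for served arrivals minus penalties for queueing and idle capacity) by simulation. The unit costs, the fixed batch size, and the exponential service law are illustrative assumptions, not values from the paper:

```python
import numpy as np

def average_total_cost(m, mean_service, t_h, batch, unit_gain, queue_cost,
                       idle_cost, rng, n_runs=5_000):
    """Estimate the total average cost for m servers over the horizon t_h."""
    total = 0.0
    for _ in range(n_runs):
        n_services = 0                       # possible number of services
        for _ in range(m):
            t = rng.exponential(mean_service)
            while t <= t_h:                  # renewal counting per busy server
                n_services += 1
                t += rng.exponential(mean_service)
        z_plus = min(batch, n_services)      # Z+: served arrivals
        z_minus = batch - z_plus             # Z-: queue length (unserved)
        n_minus = n_services - z_plus        # N-: unused services
        total += unit_gain * z_plus - queue_cost * z_minus - idle_cost * n_minus
    return total / n_runs

rng = np.random.default_rng(1)
for m in range(1, 7):                        # pick m maximizing the average cost
    print(m, round(average_total_cost(m, 5.58, 60.0, batch=40, unit_gain=1.0,
                                      queue_cost=0.3, idle_cost=0.5, rng=rng), 2))
```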
1,871.8
2018-01-01T00:00:00.000
[ "Mathematics" ]
Poly-ε-Caprolactone/Gelatin Hybrid Electrospun Composite Nanofibrous Mats Containing Ultrasound Assisted Herbal Extract: Antimicrobial and Cell Proliferation Study Electrospun fibers have emerged as promising materials in the field of biomedicine, due to their superior physical and cell-supportive properties. In particular, electrospun mats are being developed for advanced wound dressing applications. Such applications require the fibers to possess excellent antimicrobial properties in order to inhibit potential microbial colonization by resident and non-resident bacteria. In this study, we have developed Poly-ε-Caprolactone/gelatin hybrid composite mats loaded with a natural herbal extract (Gymnema sylvestre) to prevent bacterial colonization. As-spun scaffolds exhibited good wettability and desirable mechanical properties, retaining their fibrous structure after immersion in phosphate buffered saline (pH 7.2) for up to 30 days. The initial burst release of Gymnema sylvestre prevented the colonization of bacteria, as confirmed by the radial disc diffusion assay. Furthermore, the electrospun mats promoted cellular attachment, spreading and proliferation of human primary dermal fibroblasts and cultured keratinocytes, which are crucial parenchymal cell types involved in the skin recovery process. Overall, these results demonstrate the utility of Gymnema sylvestre impregnated electrospun PCL/Gelatin nanofibrous mats as an effective antimicrobial wound dressing. Among the wide range of natural and synthetic biodegradable polymers, Poly-ε-caprolactone (PCL) and gelatin are the most explored wound dressing materials. PCL exhibits many advantages such as biocompatibility, biodegradability, excellent processability, and desirable mechanical properties [39]. Gelatin has good porosity, biocompatibility, fluid retention properties, cell-specific binding sites and non-antigenicity [13]. However, both polymers individually are inadequate to fulfil their role as dressing materials due to certain disadvantages. PCL is hydrophobic, lacks cell-specific recognition sites [40] and degrades at a slower rate [41], whereas gelatin exhibits poor mechanical strength and degradability [31]. In addition, the exposure of electrospun fibers from synthetic polymers to human dermal fibroblasts results in an immunogenic response, which can be attenuated by co-axial electrospinning with gelatin [42]. Thus, blending PCL with gelatin can produce scaffolds that are mechanically strong and have cell-specific motifs, making them suitable for accelerated wound healing. In our previous study, we developed Gymnema sylvestre extract containing PCL nanofibers and investigated their antibacterial and biocompatibility properties [43]. Here, we report the effect of gelatin integration into the PCL/Gymnema sylvestre nanofibrous mats, and investigate their physical properties, antimicrobial effectiveness and biocompatibility for human primary dermal fibroblasts (hDFs) and cultured keratinocytes (HaCaT cell line). The overall strategy employed to prepare Gymnema sylvestre and gymnemagenin infused PCL/Gel wound dressings is shown in Scheme 1.
Scheme 1. Electrospinning setup used to prepare hybrid mats. Gymnema sylvestre leaf extracts used in the current study were obtained using two different extraction techniques: ultrasound-assisted extraction (USE) and cold macerated extraction (CME). Processing of Gymnema sylvestre Leaves Fresh leaves of Gymnema sylvestre were obtained from Tamil University (Tamilnadu, India) and authenticated by scientist Dr. G.V.S Murthy, Southern Regional Centre, Coimbatore, Botanical Survey of India (BSI/SRC/5/23/2016/Tech/215). The methodology for processing the leaves via cold maceration and ultrasound assisted extraction was reported in our previous manuscript [43]. Briefly, the leaf powder was defatted using petroleum ether for 8 h in a Soxhlet apparatus prior to extraction. To obtain cold macerated extracts, 20 g of defatted Gymnema sylvestre powder was soaked in 70% methanol (500 mL) for 24 h at 25 ± 2 °C in a rotary shaker. This procedure was repeated thrice; the solvent was filtered, pooled, concentrated in a rotary vacuum evaporator at 40 °C and lyophilized into fine powders. To obtain ultrasound assisted extracts, 20 g of powder was soaked in 70% methanol for 3 h and exposed to 40 kHz ultrasound waves in a digital ultrasonic bath at 50 °C for 50 min. The extracted solvent was filtered, concentrated and made into fine powders as mentioned above. Electrospinning of PCL/Gelatin Nanofibers For electrospinning, PCL (8 wt %) and gelatin (4 wt %) were dissolved separately in TFE, stirred for 5 to 6 h to obtain homogenous solutions and then mixed together. One hundred microliters of acetic acid was added to the PCL/Gel solution to improve the miscibility. The concentration of CME/USE was 25% and that of GYM was 0.5% (with respect to w/w of PCL). CME/USE/GYM was mixed separately into the PCL/Gel solution and stirred overnight. A syringe pump (KDS 100, KD Scientific, Holliston, MA, USA) was used to pump the overnight-stirred solution from a 5 mL polypropylene syringe attached to a 23 G needle at a flow rate of 1 mL·h⁻¹.
To generate electrospun mats, a high voltage (Gamma High Voltage Research Inc., Ormond Beach, FL, USA) of 13 kV was applied to the needle tip, which stretches the droplet created at the orifice of the needle; the drawn nanofibers were deposited on an aluminum-foil-wrapped collector positioned 13 cm from the needle tip [21]. A relative humidity of 60% and a temperature of 22 ± 2 °C were maintained throughout the electrospinning experiments. Field Emission Scanning Electron Microscopy (FESEM) Analysis Prior to SEM analysis, the prepared nanofibers were sputter coated with platinum to make them conductive using a JFC-1600 auto fine coater (JEOL, Peabody, MA, USA). FESEM imaging of as-spun nanofibers was performed using a JSM-6701F FESEM (JEOL, Peabody, MA, USA) at an accelerating voltage of 10 kV. ImageJ software (National Institutes of Health, Bethesda, MD, USA) was used to calculate the average fiber diameter; around 50 random fibers were measured for each sample. Fourier Transform Infra-Red Spectroscopy FTIR spectra of the different electrospun samples were obtained using an Alpha FTIR spectrometer (Bruker GmbH, Ettlingen, Germany). The spectra were scanned at a resolution of 4 cm⁻¹ over the range of 500–4000 cm⁻¹ in attenuated total reflectance mode. Mechanical Properties of Hybrid Mats Mechanical properties such as ultimate tensile strength, tensile strain, tensile modulus and toughness of the different electrospun mats were determined using a tensile tester (Instron 5345, Instron Inc., Norwood, MA, USA). Briefly, nanofiber mats with an average thickness of 100 µm were cut into rectangular strips (40 × 10 mm). These strips were tested at a cross-head speed of 10 mm·min⁻¹. For each group, samples (n = 4) were tested and the average value was recorded. Wettability Studies The sessile drop water contact angle method was adopted to determine the wettability of the electrospun mats using a VCA Optima surface analysis system (AST Products, Billerica, MA, USA). Distilled water (1 µL) was used to generate a droplet on the nanofiber surface. The images were photographed and further processed to obtain the contact angle. For each sample, the testing was conducted in triplicate and the mean ± s.d. values are presented. Release Kinetics and Scaffold Degradation Studies The release kinetics studies of Gymnema sylvestre loaded mats were conducted by immersing the fiber mats (30 × 30 mm, average weight 40.8 ± 5.2 mg) in 3 mL of PBS in triplicate, maintaining them at 37 °C in a shaking incubator. Samples were withdrawn at different time points for analysis by UV spectrometry (OD at 292 nm). The entrapment efficiency was determined by dissolving the electrospun mats (average weight ~20 mg) in TFE, followed by centrifugation at 5000 rpm. The supernatant was collected and analyzed by UV spectrometry (OD at 292 nm). The concentration of the extracts in the release medium was determined from the calibration plot. The experiment was conducted in triplicate. For the scaffold degradation study, the electrospun scaffolds were cut into 2 × 2 cm samples and incubated in PBS at 37 °C for a period of 30 days. At different time intervals, the mats were removed from PBS, washed thrice with milliQ water and dried completely before SEM analysis.
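As a minimal sketch of the calibration step used to quantify release, the snippet below fits a linear calibration line and inverts it; all standard concentrations and absorbance readings are hypothetical placeholders, not measured values:

```python
import numpy as np

# hypothetical calibration standards: extract concentration vs. OD at 292 nm
conc_std = np.array([25.0, 50.0, 100.0, 200.0, 400.0])   # ug/mL (assumed)
od_std = np.array([0.06, 0.12, 0.25, 0.49, 0.98])        # assumed readings

slope, intercept = np.polyfit(conc_std, od_std, 1)       # linear calibration fit

def od_to_conc(od):
    """Invert the calibration line: OD reading -> concentration in ug/mL."""
    return (od - intercept) / slope

# cumulative amount released = concentration x release volume (3 mL of PBS)
od_release = np.array([0.21, 0.43, 0.55, 0.61])          # assumed time points
print(np.round(od_to_conc(od_release) * 3.0, 1))         # ug released
```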
Biocompatibility Studies of USE/CME/GYM Nanofibers Dermal fibroblasts and keratinocytes play a vital role in various phases of wound healing; hence hDFs and HaCaT cells were selected for assessing the biocompatibility and cell proliferation capability of the prepared mats [44]. Cells were grown as described earlier [45]. For cell proliferation experiments, 8000 cells in 500 µL of medium were seeded into each well of 24-well plates containing the various nanofibrous mats (average thickness of 4.3 ± 0.5 µm and weight of 1.6 ± 0.4 mg). The colorimetric CellTiter 96® AQueous One Solution Cell Proliferation Assay (Promega) was used to determine the proliferation of hDFs and HaCaT on the nanofibrous mats. On day 4 and day 7, cells grown on nanofibers were washed twice with phosphate buffered saline (PBS, pH 7.2) to remove debris and non-adherent cells, and incubated with DMEM:MTS (4:1) for 3 h at 37 °C. The medium was then transferred into 96-well plates and the absorbance (OD at 490 nm) was measured using a microplate reader (BioTek, Singapore). The OD readings were converted into cell numbers using a calibration curve [46], and average values from two independent triplicate experiments are presented as mean ± s.d. The hDF and HaCaT cell phenotypes on the various nanofibrous mats at day 4 and day 7 were visualized using a laser scanning confocal microscope (Zeiss LSM800, Carl Zeiss Microimaging Inc., Thornwood, NY, USA). Briefly, cells grown on the different nanofibrous mats were rinsed with PBS, followed by incubation with paraformaldehyde (4%) for 30 min at room temperature to fix the cells. The fixed cells were then stained with phalloidin (Thermo Fisher Scientific, Singapore) and Hoechst for 1 h to visualize the cell morphologies and nuclei, respectively. The stained samples were washed thrice with PBS to remove the excess dyes. Fluoromount™ (Sigma, Singapore) was then used to mount the samples on glass slides, which were visualized under a confocal microscope using a 40× oil immersion objective lens. Five different spots were imaged for each sample. The morphology of hDFs and HaCaT on the various electrospun mats was analyzed by SEM. Briefly, the cells grown on the various mats were washed twice with PBS and 500 µL of 3% glutaraldehyde was added to each well to fix the cells for 30 min. The fixed cells were washed with PBS to remove glutaraldehyde residues and the samples were dehydrated with a series of ethanol solutions (30% to 100%) followed by 200 µL of HMDS for 5 min. Finally, the dehydrated samples were sputter coated with platinum for visualization via SEM. Collagen secreted by the hDFs 10 days post seeding on the different electrospun mats was determined by picro-sirius red staining. Briefly, cells were fixed with 4% formaldehyde, followed by staining with 0.1% Sirius red dye for 1 h. The stained cells were washed with 1× PBS and visualized under an inverted microscope. Radial Disc Diffusion Assay The antibacterial activity of the PCL/Gel+USE/CME mats was assessed using the Kirby-Bauer radial disc diffusion method. Clinical and Laboratory Standards Institute (CLSI) guidelines were followed to carry out the experiments. Initially, the concentration of the bacterial cultures was adjusted to the 0.5 McFarland standard; then, using a cotton swab, the adjusted bacterial cultures were spread uniformly onto sterile Mueller Hinton Agar (MHA, BD, Franklin Lakes, NJ, USA) plates.
Then PCL/Gel, PCL/Gel+USE, and PCL/Gel+CME mats (25 × 25 mm) weighing about 40.2 ± 4.8 mg were placed onto the center of the MHA plates and incubated at 35 ± 2 °C for 24 h. The zone of inhibition around the mats was photographed; the experiment was conducted in duplicate [47]. Bacterial Cell Viability Assay Briefly, the different electrospun mats (weighing about 40.8 ± 2.5 mg) were placed in 24-well plates and incubated in bacterial suspension (~1 × 10⁸ CFU/mL) for 24 h at 35 °C. One hundred microliters of the culture inoculum was retrieved from each well, serially diluted (10⁻¹ to 10⁻⁸) and plated on Mueller Hinton agar at 35 °C. The number of viable bacteria persisting on the plates was enumerated using a colony counter after 24 h of incubation. The experiment was conducted in duplicate. Statistical Analysis All experiments were conducted at least in duplicate and the quantitative data are expressed as mean ± standard deviation (SD). For comparisons between groups, one-way ANOVA followed by Tukey's post hoc test was performed. p values ≥ 0.05 were considered statistically insignificant. FE-SEM Analysis to Visualise the Surface Morphology and Determine the Fiber Diameter Distribution SEM micrographs of the electrospun samples showed bead-free, smooth, continuous nanofibers with a narrow diameter distribution (Figure 1). The absence of any additional particulate structures and aggregates on the mats clearly indicates that no phase separation of USE/CME/GYM occurred during the electrospinning process. The average fiber diameters of PCL, PCL/Gel, PCL/Gel+USE, PCL/Gel+CME and PCL/Gel+GYM were 450 ± 98 nm, 234 ± 52 nm, 154 ± 21 nm, 176 ± 48 nm and 202 ± 49 nm, respectively. The viscosity of the dope solutions containing the different extracts was also determined before electrospinning. The solution viscosities of PCL/Gel, PCL/Gel+USE, PCL/Gel+CME and PCL/Gel+GYM were 557 ± 6 cP, 237 ± 8 cP, 260 ± 3 cP and 532 ± 3 cP, respectively. The results indicate a substantial decrease in the viscosity of the dope solution upon addition of Gymnema sylvestre extracts. The obtained values indicate that the addition of Gymnema sylvestre (CME/USE) significantly reduced the viscosity of the spinning solution and hence led to a substantial reduction in the diameter of the resultant nanofibers. However, the addition of GYM to the spinning solution did not affect the viscosity, and hence there were no significant changes in the diameter of the nanofibers. A similar decrease in the diameter of electrospun synthetic and natural polymers was reported upon addition of natural products such as honey, G. sylvestre and henna extracts, and was attributed to the decrease in viscosity of the dope solution [18,43,48].
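The statistical treatment described above (one-way ANOVA followed by Tukey's post hoc test at α = 0.05) can be sketched as follows; the fiber-diameter samples are synthetic draws matching the reported means and standard deviations, not the actual ImageJ measurements:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# synthetic fiber-diameter samples (nm), ~50 measurements per mat as reported
rng = np.random.default_rng(2)
groups = {
    "PCL/Gel":     rng.normal(234, 52, 50),
    "PCL/Gel+USE": rng.normal(154, 21, 50),
    "PCL/Gel+CME": rng.normal(176, 48, 50),
}

# one-way ANOVA across the groups
f_stat, p_val = f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.1f}, p = {p_val:.2e}")

# Tukey's post hoc test for pairwise comparisons
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```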
FTIR Analysis of Composite Mats To confirm the integration of CME, USE and GYM into the PCL/Gel mats, the ATR-FTIR spectra of the different electrospun samples were recorded, as shown in Figure 2A. The ATR-FTIR spectra of the individual compounds are shown in Figure S1. Characteristic peaks of the PCL mat at 2865 and 2946 cm⁻¹ correspond to -CH₂ symmetric and asymmetric vibrations; 1724 cm⁻¹ represents C=O stretching of the ester group; 1046 and 1240 cm⁻¹ relate to -C-O-C symmetric and asymmetric stretching vibrations. For gelatin, a peak at 3297 cm⁻¹ corresponds to the N-H stretching vibration; 1637 cm⁻¹ represents the C=O stretching of amide I; 1535 and 1450 cm⁻¹ relate to the bending vibration of amide II (N-H and -CH₂); 1240 and 1080 cm⁻¹ correspond to the N-H bending and C=O stretching vibration of amide III. The USE/CME extracts revealed -OH stretching at 3316 cm⁻¹ related to alcoholic/phenolic groups, C=O stretching of ketones at 1711 cm⁻¹ and -C-O stretching of primary and tertiary alcohols at 1034 and 1160 cm⁻¹. In gymnemagenin, peaks at 3297, 1603 and 1442 cm⁻¹ correspond to the -NH (amide) group, the C=O stretching vibration of the ester group and the -C-C- stretching of aromatic compounds, respectively. The characteristic peaks of PCL, gelatin, Gymnema sylvestre and gymnemagenin were all observed in the electrospun mats, confirming the successful incorporation of the various components. Mechanical Properties of Electrospun Hybrid Mats The electrospun mats must impart and retain sufficient mechanical support without causing new tissue deformation during wound healing [49]. The stress-strain curves of electrospun PCL and USE/CME/GYM loaded PCL/Gel mats are shown in Figure 2B, and Table 1 summarizes the mechanical parameters of the different electrospun nanofibers obtained from four independent experiments. To infer the effect of G. sylvestre extract and gelatin inclusion on the nanofibrous mats, the mechanical properties obtained for the different samples were compared against the PCL mats. The PCL mats displayed greater plasticity and more flexible behavior than the other mats. The inclusion of gelatin in the system did not alter the mechanical properties by much.
Although the introduction of USE and CME into the PCL/Gel mats resulted in a decrease in the tensile strain and toughness of the mats, a significant increase in tensile strength and tensile modulus was observed. PCL/Gel+USE laden mats displayed the maximum tensile strength and tensile modulus among all the samples. However, for PCL/Gel+GYM mats, we did not notice any statistically significant (p > 0.05) changes in the mechanical properties. The mechanical properties of the mats depend mainly on the individual fiber microstructure and, macroscopically, on the porosity and the density of inter-fiber bonding sites. In general, for smaller fibers the tensile modulus and tensile strength will be higher, and for larger fibers the failure strain will be higher [50,51]. In the case of PCL/Gel+USE laden mats, the fibers (154 ± 21 nm) were smaller in diameter and more densely packed than the PCL/Gel fibers (234 ± 52 nm), as shown in Figure 1. Thinner nanofibers create more junction points between them, holding adjacent fibers together, and hence exhibit better mechanical properties than thicker fibers. Moreover, the polar groups of USE/CME are capable of forming intra/inter-fiber hydrogen bonds, thereby increasing the mechanical properties. Hence the PCL/Gel+USE laden nanofibers possessed superior mechanical strength and sufficient elasticity to function as a wound dressing material. Wettability of Electrospun Mats Water contact angle measurements have been widely used to determine the hydrophobicity of surfaces toward water/aqueous solutions and are a useful method to study the effect of additives on surface properties. The biocompatibility and biodegradation of a biomaterial are governed by its surface wettability. To study the effect of gelatin and Gymnema sylvestre extracts on the wettability of pristine PCL nanofibers, we determined the contact angles of the mats. Figure 3 shows the images captured on the various PCL/Gel mats at 10 s using a water droplet to determine the contact angle. The contact angles for PCL, PCL/Gel, PCL/Gel+USE, PCL/Gel+CME and PCL/Gel+GYM were 137.3 ± 2.2°, 49.3 ± 10.2°, 17.3 ± 3°, 15.9 ± 4.2° and 38.3 ± 7.5°, respectively. The increase in the wettability of the PCL/Gel mats on adding USE, CME and GYM is due to the presence of polar phytochemicals in the Gymnema sylvestre extracts (USE/CME) and the availability of multiple hydrophilic -OH groups in the GYM structure. The wettability of the nanofibrous mats influences the absorption of excess wound exudates and the transfer of nutrients. In general, mammalian cells have a better affinity for hydrophilic surfaces than for their hydrophobic counterparts [52]. Thus the addition of Gymnema sylvestre and gelatin to PCL mats enhanced the wettability, making the mats more suitable for cells to adhere, spread and proliferate. Release Kinetics and Scaffold Degradation Studies The release profile of the USE/CME mats was investigated to determine whether the active ingredient is released when the mats are immersed in buffer. PCL/Gel+USE and PCL/Gel+CME mats contained 187.5 ± 18.6 and 172.0 ± 11.2 µg/mg of Gymnema sylvestre leaf extracts, corresponding to encapsulation efficiencies of 82.5% and 75.8%, respectively. The results indicated a burst release (>50%) of the extracts within the first 8 h after soaking in PBS (Figure 4A). The results are in stark contrast to pristine PCL mats, in which no discernible release of the extract was observed, as reported in our previous manuscript [43].
It is likely that extract present on the surface of the nanofibers was released through diffusion, resulting in an initial burst release followed by sustained release of the Gymnema sylvestre extracts present at the core of the fiber matrix. To discern the morphological changes of the mats after immersion in PBS, we imaged PCL, PCL/Gel, PCL/Gel+USE and PCL/Gel+CME mats after soaking for different time durations. Consistent with previous studies, PCL mats immersed in PBS did not show appreciable changes in morphology and remained intact throughout the study (Figure 4B) [53]. However, PCL/Gel mats and mats containing Gymnema sylvestre extracts showed numerous swollen nanofibers with an increasing number of islands of fused fibrous bundles upon immersion in PBS, suggesting possible degradation of the hybrid mats [54]. There was a decrease in the overall porosity of the mats after PBS immersion. It should be noted that, due to the presence of PCL, all the mats retained their fibrous structures with fused junctions, further reinforcing the importance of designing hybrid structures that help enhance the degradation stability of the wound dressings. Together with the release kinetics studies, the above results suggest that the increased wettability and swelling characteristics of the USE and CME mats account for the burst release followed by the controlled release of the extracts. Cell Proliferation Assessment on the Electrospun Nanofibers Fibroblasts and keratinocytes are the two major cell types of the skin and play a crucial role in the wound healing process by coordinating with each other to orchestrate a cascade of actions to restore normal tissue functions after injury [55]. To achieve successful tissue repair, defects at the wound site must be replaced by new granulation tissue, followed by wound closure to restore the physical barrier functions [56].
At the onset of injury, neutrophils are recruited to the wound site to provide the first line of defense against pathogens, followed by monocytes and macrophages. The growth factors and cytokines released by the neutrophils attract dermal fibroblasts, which maintain the extracellular matrix [55]. The fibroblasts secrete paracrine factors, such as basic fibroblast growth factor (bFGF/FGF-2) and keratinocyte growth factor (KGF/FGF-7), that promote keratinocyte growth and differentiation. The keratinocytes, in turn, stimulate the fibroblasts to synthesize and crosslink the collagen that fills the damaged ECM. The paracrine signaling also recruits endothelial cells (ECs), which aid in forming new vasculature. Finally, the keratinocytes stratify to form the epithelial layer filling the defect area, thereby facilitating complete wound closure and providing mechanical integrity. This growth factor mediated cross-talk between fibroblasts and keratinocytes during the healing process restores normal tissue function [57]. To evaluate the biocompatibility and cell proliferation properties of the nanofiber mats, the metabolic activity and morphology of the cells (HaCaT and hDFs) were examined at various time points. MTS Assay To confirm the biocompatibility, cell adhesion and cell proliferation of hDFs and HaCaT cells on the PCL/Gel+USE, PCL/Gel+CME and PCL/Gel+GYM nanofibrous mats, they were examined by an MTS-based cell viability assay and SEM at day 4 and day 7 post seeding (p.s.). The metabolic activity of the seeded hDFs and HaCaT, determined by the MTS assay, is shown in Figure 5A,B. For hDFs, the cell number increased marginally when seeded on coverslips (used as a control), whereas it increased significantly when seeded on the nanofiber mats (Figure 5A). This is possibly due to the availability of more growth space in the ECM-mimicking 3D nanofibrous matrix compared to the 2D flat coverslip. At day 4 p.s., a 2-3 fold increase in hDF cell density was observed when seeded on PCL/Gel+CME/USE mats. A similar increase in cell density was observed for the nanofiber mats at day 7 p.s. as well. Of the four mats, PCL/Gel+USE mats displayed the highest hDF proliferation when compared to the other nanofiber mats. A similar trend was observed for HaCaT cells seeded on the nanofiber mats. PCL/Gelatin mats containing USE displayed the highest HaCaT proliferation when compared to the other groups. At day 7 p.s., about a 10-fold increase in cell density was observed on PCL/Gel+USE mats, whereas a 6-fold increase was observed for the cells cultivated on coverslips. Thus, the proliferation results confirm that the mats loaded with Gymnema sylvestre extracts were non-cytotoxic and supported hDF and HaCaT proliferation, suggesting their excellent biocompatibility for skin tissue engineering. Taken together, the MTS data demonstrated the cell proliferative properties of the Gymnema sylvestre loaded mats. We further analyzed cell attachment and spreading on the different electrospun nanofibers at day 4 and day 7 p.s. by SEM; the images are shown in Figure 5C,D. Even at day 4, the SEM micrographs showed that a high density of hDFs and HaCaT cells attached and spread on the electrospun mats. On day 7, a significant increase in cell number and a confluent layer were observed on the mats containing Gymnema sylvestre extracts. The cells looked more spread and densely populated, covering the whole area of the mats.
The fibroblasts grown on the mats maintained their characteristic elongated spindle-shaped structures, whereas HaCaT cells formed a thick layer of microcolonies covering the mats. F-Actin Staining The hDF and HaCaT cell phenotypes were observed by staining of the cell cytoskeleton and nucleus, imaged by confocal microscopy (Figure 6). A spindle-shaped morphology was clearly visualized for hDFs, with an intact cytoplasmic filamentous distribution (green) of F-actin and a prominent nucleus (blue) with no structural abnormalities. Notably, for fibroblasts seeded onto PCL/Gel+USE and PCL/Gel+CME, increased alignment of the cells was observed, as indicated by the higher aspect ratio of the nuclei at day 7 post seeding. Bashur et al. reported that an increased diameter and degree of orientation of electrospun fibers contributed to the enhanced adhesion and the aspect ratio of fibroblasts [58]. Together with the increased wettability and swelling of PCL/Gel+USE and PCL/Gel+CME mats after immersion in PBS, it is likely that the increased diameter of the nanofibers contributes to the observed alignment of hDFs. For HaCaT cells, characteristic microcolonies were observed, with F-actin stained red and nuclei in blue. The results of the F-actin staining corroborate the SEM micrographs, as the day 4 images clearly revealed scattered, well-aligned cells attached to the mats. The day 7 micrographs showed a confluent layer of cells spread throughout the mats. Z-section analysis revealed the thickest cell layers on PCL/Gel+USE for both cell types at day 7. Overall, these results illustrate that PCL/Gel+USE mats can increase the attachment and proliferation of HaCaT and hDFs, further confirming the excellent biocompatibility of the mats. Collagen Expression Finally, the expression of ECM by the cells seeded on the various scaffolds was determined by picro-sirius red staining for collagen.
Figure S2 shows images of the scaffolds stained with picro-sirius red for collagen expressed by hDFs at day 10 p.s. Among all the mats, the PCL/Gel+USE containing mats exhibited a uniform distribution of red coloration, whereas sporadic staining was observed for the PCL/Gelatin and PCL/Gel+CME mats, suggesting that hDFs cultivated on PCL/Gel+USE mats produced a uniform distribution of ECM. Collagen type I is one of the most abundant proteins in the human body and constitutes the majority of the ECM. The structure of collagen consists of triple-helical fibrils aligned in different orientations. Attachment of cells to collagen enables various essential mechano-transduction signaling cascades within the cell, as well as creating an avenue for cell migration due to the chemotactic role of collagen. During the remodeling phase of wound healing, fibroblasts secrete collagen, which plays a critical role as its interwoven fibrils replace the provisional fibrin-based matrix, imparting improved mechanical strength and aiding wound contraction [21]. Taken together, the bioactive compounds present in Gymnema sylvestre could be a possible reason for the better adherence and proliferation of the hDFs and HaCaT. The major bioactive compounds present in Gymnema sylvestre extracts are gymnemic acids and gymnemagenin [59-61]. Other phytoconstituents reported from Gymnema sylvestre are kaempferol, quercetin [62], triterpenoid saponin [63], lupeol and stigmasterol [64]. All the above-mentioned cell culture experiments confirmed that the Gymnema sylvestre and GYM loaded PCL/Gel mats are not toxic and promote the proliferation of hDFs and HaCaT, with PCL/Gel+USE laden mats demonstrating the best biocompatibility among all the mats. Antibacterial Activity of Electrospun Gymnema Sylvestre Mats An important prerequisite for modern wound dressings is to prevent bacterial infection at the wound site from resident and external sources. The Kirby-Bauer disc diffusion assay was used to assess the antimicrobial activity of the USE/CME loaded PCL/Gel mats. As expected, no zone of inhibition was observed around the electrospun PCL/Gel mats, whereas mats containing the Gymnema sylvestre extracts displayed a clear zone of inhibition (Figure S3). The zones of inhibition in mm for the different strains are shown in Table 2. The values are lower for Gram-negative than for Gram-positive bacteria, possibly due to the presence of an additional outer membrane in Gram-negative bacteria. Consistent with these results, MIC determination by broth dilution suggested 2-3 fold higher MIC values against the tested E. coli/P. aeruginosa strains. The antibacterial activity of the electrospun mats was further evaluated using a bacterial cell viability assay under proliferating conditions. Figure 7 shows the bacterial survivors, expressed in terms of log CFU/mL values, obtained for microbes exposed to the various electrospun mats. For Gram-positive bacteria, PCL/Gel+USE and PCL/Gel+CME mats displayed a >2 log reduction (>99% decrease in bacterial viability), and for Gram-negative bacteria they displayed a >1 log reduction (>90% decrease in bacterial viability) when compared to the initial inoculum, whereas PCL/Gel mats did not display any antimicrobial activity.
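The log-reduction arithmetic used above is simple to make explicit; the CFU counts in this sketch are hypothetical values chosen to be consistent with the reported >2 log and >1 log reductions:

```python
import numpy as np

def log_reduction(cfu_initial, cfu_final):
    """Log10 reduction in viable counts relative to the initial inoculum."""
    return np.log10(cfu_initial / cfu_final)

def percent_decrease(lr):
    """Convert a log reduction to a percentage decrease in viability."""
    return 100.0 * (1.0 - 10.0 ** (-lr))

inoculum = 1e8                                  # ~1 x 10^8 CFU/mL starting point
for survivors in (5e5, 8e6):                    # hypothetical survivor counts
    lr = log_reduction(inoculum, survivors)
    print(f"{lr:.1f} log reduction = {percent_decrease(lr):.1f}% decrease")
```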
It should be noted that there was a significant decrease in the bacterial inoculum after exposure to PCL/Gel+USE and PCL/Gel+CME when compared to the initial bacterial inoculum. Together with the disc diffusion assay, these results confirm the potent bactericidal properties of the PCL/Gel+USE/CME mats. Conclusions The present study demonstrated the utility of electrospun PCL/Gelatin mats containing antimicrobial Gymnema sylvestre extracts for skin tissue engineering. The inclusion of gelatin in the PCL/G. sylvestre system resulted in increased wettability and allowed the extract to leach out from the fibers. PCL/Gel+USE loaded mats possess all the prerequisite physical and biological properties to encourage the attachment, spreading and proliferation of fibroblasts and keratinocytes, together with potent antimicrobial properties against commensal pathogens such as S. aureus and S. epidermidis. The initial burst release of extracts from the electrospun mats could effectively avert bacterial colonization of the injured tissue, thereby eliminating the threat of infection, which delays the healing process. The degradation study showed that these hybrid PCL/Gel mats are structurally stable, thereby reducing the frequency of dressings and nursing costs. In conclusion, we have demonstrated a simple approach for the preparation of Gymnema sylvestre loaded PCL/Gel hybrid mats and their feasibility as a nanofibrous anti-infective wound dressing in the near future. Supplementary Materials: The following are available online at www.mdpi.com/xxx/s1, Figure S1: FTIR spectrum of individual compounds, Figure S2: Collagen staining on various nanofibrous scaffolds, Figure S3: Disc diffusion images of Gymnema sylvestre loaded PCL/Gel mats.
8,668.8
2019-03-01T00:00:00.000
[ "Biology" ]
Report on scipost_201909_00002v1 We study the properties of the entanglement spectrum in gapped non-interacting non-Hermitian systems, and its relation to the topological properties of the system Hamiltonian. Two different families of entanglement Hamiltonians can be defined in non-Hermitian systems, depending on whether we consider only right (or equivalently only left) eigenstates or a combination of both left and right eigenstates. We show that their entanglement spectra can still be computed efficiently, as in the Hermitian limit. We discuss how symmetries of the Hamiltonian map into symmetries of the entanglement spectrum depending on the choice of the many-body state. Through several examples in one and two dimensions, we show that the biorthogonal entanglement Hamiltonian directly inherits the topological properties of the Hamiltonian for line-gapped phases, with characteristic singular zero modes and zero-energy modes. The right (left) density matrix carries distinct information on the topological properties of the many-body right (left) eigenstates themselves. In purely point-gapped phases, when the energy bands are not separable, the relation between the entanglement Hamiltonian and the system Hamiltonian breaks down. I. INTRODUCTION Topology has become one of the main aspects of condensed matter physics over the last few decades 1-6. The classification of topological phases led to numerous advances in the understanding of electronic condensed matter and to a plethora of new resilient phenomena 7-12. One of the core principles of topology in condensed matter physics is the bulk-boundary correspondence 6,13,14: topological properties in the bulk of the system lead to the appearance of particular edge states at its boundaries. As these states originate from bulk properties, they are resilient to local perturbations that do not change the topological classification of the system, for instance by breaking the relevant symmetries. This bulk-boundary correspondence also affects the entanglement properties of the different eigenstates, and in particular the ground state, of the Hamiltonian. Entanglement has proved to be an efficient probe of many-body physics. Entanglement entropy scaling laws are, for example, able to discriminate between different universality classes of gapless phases, in particular in one dimension 15-18, but can also include terms that have a topological origin and characterize the fundamental topological excitations of the system 19,20. Of relevance to this work is the notion of the entanglement Hamiltonian, the logarithm of the reduced density matrix of a subpart of the total system, and its eigenspectrum, the entanglement spectrum 21-24. Due to the bulk-boundary correspondence, if the selected subsystem does not break any symmetry, the entanglement Hamiltonian in a topological system has similar properties and edge states as the original Hamiltonian with open boundary conditions, even when starting from a periodic system 21,25-27. As such, it has been a remarkably useful tool to characterize topological systems. Non-Hermitian Hamiltonians are an extension of standard quantum mechanics that describe dissipative systems in a minimalistic fashion. Instead of considering density matrix evolutions such as Lindbladian equations, dissipation is represented by non-Hermitian terms that either give a finite lifetime to or amplify the different eigenstates of the Hamiltonian 28.
Numerous experiments have been realized, showcasing the many differences between these systems and their Hermitian counterparts 29-38. Similarly, the extension of the topological concepts developed for Hermitian quantum mechanics to these new systems has been a fruitful field of research 39. Symmetry-based classifications have been proposed 40-43, but several notions are still actively discussed, the bulk-boundary correspondence being one of them 39,44-55. Indeed, the phase diagram of the same model can vary significantly depending on the choice of boundary conditions (open or periodic), a phenomenon dubbed the non-Hermitian skin effect. The correspondence can be redefined in two different ways. One can redefine an effective Brillouin zone for the periodic Hamiltonian where the momentum can take complex values 48,56; the topological invariants computed on this new Brillouin zone are then in agreement with the phase diagram of the open system. Conversely, the correspondence can be based on the singular value decomposition (SVD) of the Hamiltonian instead of the eigenvalue decomposition 40,42,43,57. The SVD-based phase diagrams of the open and periodic systems coincide, and topological phases are characterized by the presence of edge-localized singular zero modes. In this article, we study the entanglement spectrum in non-Hermitian systems and its relation to the topology of the original Hamiltonian, as a first step towards a better understanding of non-Hermitian topology in many-body physics. After a quick reminder of the properties of the density matrix and the entanglement Hamiltonian in Hermitian systems, we propose two complementary definitions of the density matrix, depending on whether we want to focus on the biorthogonal interpretation of non-Hermitian quantum mechanics 58, or whether we are more interested in the structure of the right or left eigenstates of the Hamiltonian. We also show that Wick's theorem and Peschel's formula 59 are still valid in non-Hermitian systems, which allows us to efficiently compute the entanglement spectrum of free fermionic theories. We then discuss the different symmetries that can protect the topology of non-Hermitian Hamiltonians, and how they translate into symmetries of the reduced density matrix and the entanglement Hamiltonian depending on the choice of many-body state. In particular, for right density matrices, the symmetries of the Hermitian entanglement Hamiltonian might differ from the symmetries of the non-Hermitian system Hamiltonian, leading to a different topological classification of the former. After briefly introducing the non-Hermitian Su-Schrieffer-Heeger (SSH) model 48,60-65, we use it to exemplify how and when the entanglement spectrum inherits topological properties from the original Hamiltonian. We find that when the bands can be separated, the biorthogonal entanglement Hamiltonian perfectly reproduces the physics of the corresponding periodic system Hamiltonian, with the presence of singular and energy edge modes accurately predicted by the bulk topological invariants. The right entanglement Hamiltonian describes the topology of the right eigenstates themselves, and its classification differs from that of the system Hamiltonian due to the emergence of different symmetries. Finally, we verify that our results are also valid for a variety of two-dimensional models.
II. DENSITY MATRICES AND ENTANGLEMENT SPECTRUM IN NON-HERMITIAN SYSTEMS In this Section, we discuss the possible definitions of a density matrix in a non-Hermitian setting. Let us introduce the following notation: we denote by H the many-body Hamiltonian and assume it can be diagonalized, i.e., it has only 1 × 1 Jordan blocks. |ψ_n^R⟩ (|ψ_n^L⟩) are the right (left) eigenvectors of the many-body Hamiltonian. Any many-body state |φ^R⟩ of such a system can be decomposed into the eigenstates |ψ_n^R⟩, i.e., |φ^R⟩ = Σ_n φ_n |ψ_n^R⟩. We define the corresponding left vector |φ^L⟩ ∝ Σ_n φ_n |ψ_n^L⟩. For convenience, in the rest of this paper, we always take the following normalization convention: ⟨φ^R|φ^R⟩ = 1 and ⟨φ^L|φ^R⟩ = 1. (2) In this paper, we focus on non-interacting fermionic models: c† = (c†_1, ..., c†_N) is a vector of N fermionic creation operators satisfying the usual anticommutation algebra. H is the single-particle Hamiltonian, which can be diagonalized as H = Σ_n E_n |R_n⟩⟨L_n|, with ⟨L_n|R_m⟩ = δ_{m,n} and ⟨R_n|R_n⟩ = 1. We define d†_{n,R} (d†_{n,L}) as the creation operator related to the one-body eigenstate |R_n⟩ (|L_n⟩). They satisfy the modified fermionic anticommutation rule {d_{n,L}, d†_{m,R}} = δ_{n,m}; the other anticommutators do not have a simple expression. A. Density matrices In Hermitian systems, the density matrix describing a system is the positive-definite Hermitian operator ρ such that the expectation value of any observable O is given by ⟨O⟩ = Tr[ρO], where ⟨·⟩ denotes the expectation value. If the system is in a pure state |φ⟩, the density matrix ρ is simply the projector |φ⟩⟨φ|, while a thermal state is given by ρ = Z⁻¹ exp(−βH), with Z = Tr[exp(−βH)]. The time evolution of ρ is given by the Heisenberg equation (we set ℏ = 1) ∂_t ρ = −i[H, ρ]. The reduced density matrix ρ_A characterizing the state of a subsystem A can be obtained from ρ by taking the partial trace over all degrees of freedom not in A: ρ_A = Tr_Ā ρ. In non-Hermitian systems, the difference between left and right eigenstates leads to different possible definitions of the density matrix. The choice of definition depends on which properties we want to preserve or emphasize, even for a pure state. We focus in this paper on static properties, but we will mention some of the dynamical properties. Following the biorthogonal interpretation of non-Hermitian quantum mechanics 58, observables are computed using both the left and right states of a system: ⟨O⟩ = ⟨φ^L|O|φ^R⟩. This naturally leads to the biorthogonal density matrix ρ_RL = |φ^R⟩⟨φ^L|. The reduced density matrices can be obtained from Eq. (8), and the Heisenberg equation is left unchanged. The trace of ρ_RL is conserved during time evolution. On the other hand, ρ_RL is neither Hermitian nor positive-definite. If we consider instead a more conventional approach, where non-Hermitian systems are effective models for dissipative dynamics without quantum jumps 66-70, the average values of observables are given by ⟨O⟩ = ⟨φ^R|O|φ^R⟩. The natural density matrix is therefore the right density matrix ρ_R = |φ^R⟩⟨φ^R|. By convention, we take |φ^R⟩ to be of norm 1 such that Tr ρ_R = 1. Equation (8) is still valid, and ρ_R and all associated reduced density matrices are Hermitian positive-definite operators. ρ_R then satisfies a nonlinear evolution equation 71: enforcing the constraint Tr ρ_R = 1 leads to non-linearity in the time evolution of ρ_R. If |φ^R⟩ is a right eigenstate, then ρ_R is constant. We denote by ρ_L the equivalent density matrix obtained by replacing right with left vectors. B. Entanglement spectrum The entanglement Hamiltonian H_E of a subsystem A is defined through ρ_A = e^{−H_E} / Tr[e^{−H_E}]. The entanglement spectrum of ρ is the spectrum of H_E.
When the total system is in a pure state and we use ρ R as the density matrix, the entanglement spectrum of ρ R A is directly related to the Schmidt decomposition of φ R . Indeed, the Schmidt decomposition writes as: where λ n > 0 and Due to the orthogonality conditions, and consequently, the eigenvalues Ξ n of H E are nothing but −2 log λ n . For the biorthogonal density matrix ρ RL , there is no simple relation between the Schmidt decomposition of the eigenvectors and the eigenvalues of the entanglement Hamiltonian. If H E = c † H E c + zId, z ∈ C, the reduced density matrix is a generalized fermionic Gaussian state 72 (z is a irrelevant normalization factor that will not be discussed in the following). The eigenvalues ξ n of H E form the single particle entanglement spectrum, and its eigenvectors the entanglement modes. In the rest of the paper, as we only discuss such Gaussian states, we refer to ξ n and H E as the (single-particle) entanglement spectrum and Hamiltonian. III. ENTANGLEMENT SPECTRUM OF GAUSSIAN STATES AND THE WICK THEOREM In Ref. 59, Peschel derived a technique to efficiently compute the entanglement spectrum of eigenstates of quadratic Hermitian Hamiltonian (Slater determinants) or of Gaussian density matrices. It can be summarized as follows: any correlation function for such states can, according to Wick's theorem, be obtained from a combination of two-fermion correlation functions. Moreover, computing the correlation functions restricted to any subsystem A only requires two-fermion correlators restricted to that subsystem. Let C be the two-site correlation matrix defined by C i,j = c † j c i in such a state, and C A , the restriction of C to the subsystem A. C A can be diagonalized into N A is the number of fermionic modes in A. The Gaussian state defined through Eq. (16) with the (single-particle) entanglement Hamiltonian H E = n ξ n R A n R A n with ξ n = ln(s −1 n − 1) gives the same correlation matrix C A . Note that if s n = 0 or 1, ξ n is formally −∞ or +∞. In practice, this limiting case does not occur as long as A is not the entire system, though the smallest and largest values of s n get exponentially close to the extrema with increasing system size. Since the Gaussian state also satisfies Wick's theorem, all fermionic correlators have the same expectation value whether using ρ A or the above Gaussian state. Therefore, necessarily, and the entanglement spectrum can be directly obtained from the eigenvalues of the reduced correlation matrix, which can be computed polynomially in system size. To apply a similar trick to non-Hermitian systems, we need first to verify that Wick's theorem applies to both formulation of density matrices in Eqs. (12) and (14), as well as to non-Hermitian Gaussian states. Secondly, we should verify that fermionic Gaussian states generate all possible non-Hermitian correlation matrices. We start with the biorthogonal density matrix ρ RL and Wick's theorem. We consider eigenstates of the Hamiltonian that can be written as |φ R = n d †sn R |0 , with s n = 0 or 1. The corresponding left-eigenstate is |φ L = n d †sn L |0 . In the biorthogonal case, straightforward algebra mapping c † to d † R and c to d L leads to which has eigenvalues 0 or 1, i.e., the occupation numbers are the eigenvalues of C RL . |R n (resp. L n |) are the right (resp. left) eigenstates of the single-particle Hamiltonian H. 
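A minimal numerical sketch of the procedure just described (our own illustration, not code from the paper): the single-particle Hamiltonian below is a random placeholder, the occupation rule $\mathrm{Re}\,E_n < 0$ is one of the choices discussed later in the text, and the index convention of the correlation matrix does not affect the restricted spectrum. The routine builds $C^{RL} = \sum_{n\,\mathrm{occ}} |R_n\rangle\langle L_n|$, restricts it to a subsystem $A$, and returns $\xi_n = \ln(s_n^{-1} - 1)$.

import numpy as np

def biorth_entanglement_spectrum(H, sites_A, occupied=None):
    """Single-particle biorthogonal entanglement spectrum of a free-fermion
    eigenstate of a (possibly non-Hermitian) quadratic Hamiltonian H.

    H        : (N, N) single-particle Hamiltonian, assumed diagonalizable.
    sites_A  : indices of the single-particle modes belonging to subsystem A.
    occupied : boolean mask of occupied eigenmodes; defaults to Re(E_n) < 0.
    """
    E, VR = np.linalg.eig(H)
    # Rows of inv(VR) are the left eigenvectors with <L_n|R_m> = delta_nm.
    VL_rows = np.linalg.inv(VR)
    if occupied is None:
        occupied = E.real < 0
    # Biorthogonal correlation matrix: projector onto the occupied modes,
    # C^{RL} = sum_occ |R_n><L_n|  (eigenvalues 0 or 1 for the full system).
    C = VR[:, occupied] @ VL_rows[occupied, :]
    C_A = C[np.ix_(sites_A, sites_A)]
    s = np.linalg.eigvals(C_A)
    # Entanglement energies xi_n = ln(1/s_n - 1); complex in general.
    return np.log(1.0 / s - 1.0)

# Example with a small random non-Hermitian hopping matrix (placeholder).
rng = np.random.default_rng(0)
N = 20
H = rng.normal(size=(N, N)) + 0.3j * rng.normal(size=(N, N))
xi = biorth_entanglement_spectrum(H, sites_A=np.arange(N // 2))
print(np.sort_complex(xi))

For a Hermitian H the left eigenvectors coincide with the right ones and the routine reduces to Peschel's original construction.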
This mapping also offers a proof of Wick's theorem: once expressed in the correct left and right basis, the correlators of the non-Hermitian system behave exactly as if the system was Hermitian. Similarly, non-Hermitian Gaussian states of the form ρ = e − c † H E c also verify Wick's theorem; if H E is diagonalizable, this follows trivially from the Hermitian case. By continuity of the matrix exponentiation and the trace, it is also true for non-diagonalizable H E . Now we need to prove that all non-Hermitian correlation matrices also admit a Gaussian antecedent. In Appendix A, we exhibit the antecedent of any correlation matrix that forms a single Jordan block of arbitrary size. The generalization to arbitrary correlation matrix is straightforward. Similarly to the Hermitian case, eigenvalues 0 or 1 of the correlation matrix correspond to divergent energies for the Gaussian states. If the correlation matrix is diagonalizable, the corresponding entanglement Hamiltonian is also diagonalizable, and its eigenmodes are the eigenvectors of the correlation matrix. If the correlation matrix is not diagonalizable, the entanglement Hamiltonian H E is also not diagonalizable and has the same number of Jordan blocks of identical size, though their canonical Jordan form bases differ. When considering the right density matrix ρ R , it is convenient to work in an orthonormalized basis of the occupied states. Let (i 1 , ...i m ) be the indices of the occupied modes, with m the number of occupied states. Further let Q = (|Q 1 , ..., |Q m ) be an orthonormal basis of Span(|R i1 , ..., |R im ) and such that We can complete Q into an orthonormal basis of the single particle space. φ R is then the ground state of From this follows that ρ R verifies Wick's theorem and that its reduced density matrices are Hermitian Gaussian states. Finally, the correlation matrix can be efficiently obtained from the eigenvalue decomposition Both definitions of the density matrices lead to Gaussian reduced density matrices. We can efficiently compute the two-site correlation matrix from the diagonalization of the single-site Hamiltonian, and thus the entanglement spectrum. where s A,n is an eigenvalue of the correlation matrix C A restricted to the subsystem A we consider. Since the entanglement Hamiltonian might have complex eigenvalues, the entanglement spectrum is only defined modulo 2iπ. We will choose the phases such that the symmetries of the correlation matrix are respected. If C A is diagonalizable, the left and right entanglement modes are its left and right eigenvectors. IV. SYMMETRIES AND ENTANGLEMENT HAMILTONIAN Symmetries play a fundamental role in the behavior of the entanglement spectrum in Hermitian systems 25,73 . A natural prescription to study topological effects on the entanglement spectrum for symmetry-protected topological phases is to select a (ground) state that does not break any of the protecting symmetries. The correlation matrix, and by extension all reduced density matrices, will have the same symmetries, and the entanglement Hamiltonian can potentially be in the same topological phase as the initial one. In this section we demonstrate that this prescription is still natural in the non-Hermitian case. More precisely, we discuss the effects of symmetries on the correlation matrix and reduced density matrices, in relation with the band structure of the eigenvalues. Indeed, two types of gaps can be defined in non-Hermitian systems 43 , as illustrated in Fig. 1. 
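As a companion to the biorthogonal sketch above, the right-density-matrix construction described in this Section can be illustrated as follows (again our own minimal sketch with a random placeholder Hamiltonian): the occupied right eigenvectors are orthonormalized by a QR decomposition, the correlation matrix is the projector onto their span, and the resulting entanglement spectrum is real because $\rho_R$ is a Hermitian Gaussian state.

import numpy as np

def right_entanglement_spectrum(H, sites_A, occupied=None):
    """Entanglement spectrum of the *right* density matrix rho_R for a
    free-fermion right eigenstate of a non-Hermitian quadratic Hamiltonian."""
    E, VR = np.linalg.eig(H)
    if occupied is None:
        occupied = E.real < 0
    # Orthonormal basis Q of the span of the occupied right eigenvectors
    # (the Gram-Schmidt / QR step described in the text).
    Q, _ = np.linalg.qr(VR[:, occupied])
    # C^R is the projector onto span(Q); it is Hermitian with eigenvalues in [0, 1].
    C = Q @ Q.conj().T
    C_A = C[np.ix_(sites_A, sites_A)]
    s = np.linalg.eigvalsh(C_A)
    s = np.clip(s, 1e-14, 1 - 1e-14)      # guard against log divergences
    return np.log(1.0 / s - 1.0)          # real entanglement energies

rng = np.random.default_rng(1)
N = 20
H = rng.normal(size=(N, N)) + 0.3j * rng.normal(size=(N, N))
xi_R = right_entanglement_spectrum(H, sites_A=np.arange(N // 2))
print(np.sort(xi_R))

Because C^R is Hermitian, its restricted eigenvalues lie in [0, 1] and the entanglement spectrum is real, in contrast with the biorthogonal case.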
The system is said to be point gapped if it possesses no eigenvalues in the neighborhood of a single point of the complex energy plane, usually E = 0, as depicted in Fig. 1a. In sharp contrast with the (anti-)Hermitian case, bands need not be separable. Conversely, the system is said to be line gapped if there exists a one-dimensional manifold in the complex energy plane with no eigenvalues in its neighborhood, separating the energies into two sets or bands, as shown in Fig. 1b As δ m,n , the biorthogonal reduced density matrix is: The reduced density matrix therefore commutes with O A , whose eigenvalues are still good quantum numbers. Now we turn to the right density matrix. Schmidt decomposition applied to each (o A , o A ) sector ensures that The reduced density matrix is then given by where the ε's and η's can take values ±1. Ch is a chiral symmetry, T and P are two flavors of particle-hole (ε = −1) or time-reversal (ε = 1) symmetries and P H is pseudo-hermiticity. All unitary transformations (u c , u t , u p and u ph ) are required to be compatible with the subsystem A: If the correlation matrix C verifies some symmetry relations, there exists reduced unitaries defined on A such that C A also satisfies the same relation. For simplicity, we now assume that H has no degenerate eigenvalues. We use the short-hand notations |R n * for the eigenvector associated to E * n and |R −n to −E n , and similarly for all related quantities. |R * n is the complex conjugate of |R n . Depending on the state we consider, a symmetry in the Hamiltonian can translate into two different symmetries on the correlation matrix, and therefore on the entanglement Hamiltonian. Here we discuss explicitly the case of the pseudo-Hermitian P H − symmetry, the other cases following straightforwardly. The symmetry on the Hamiltonian translates into with eigenvalues coming by pairs (E n , −E * n ). For simplicity, we skip for now the case of purely imaginary energies. e iαn is a complex phase and N n is the normalization constant || |L n || −1 . Following Eq. (22), we obtain If s * n + s −n * = 1, we obtain This relation can be satisfied by simply occupying the states with negative (or positive) real part of the energy in the many-body state we consider. Such a choice coincides with the conventional choice of the ground state for Hermitian systems with particle-hole symmetry at halffilling, and is a consistent choice if the Hamiltonian admits a real line gap as in Fig. 1b. Correspondingly, if an entanglement Hamiltonian verifies the P H − symmetry, it will satisfy Eq. (32). Conversely, up to the 2iπ degrees of freedom in the definition of entanglement energy, assuming there are no degeneracies, if the correlation matrix verifies Eq. (32), the entanglement Hamiltonian is necessary P H − symmetric. Another interesting relation emerges if we take s * n = s −n * . In a Hermitian system, such a condition makes very little physical sense: it attributes the same occupancy to states with opposite energies. In the non-Hermitian case, it cannot be rejected a priori. If the spectrum has an imaginary line gap, such as shown in Fig. 1c, selecting the band with either positive or negative imaginary part results in such a relation. In other words, it corresponds to the natural occupation of the anti-Hermitian limit of the Hamiltonian. The correlation matrix then satisfies which is the P H + symmetry. 
Similarly, the corresponding entanglement Hamiltonian will have the same P H + symmetry, with eigenvalues coming in pairs (ξ n , ξ * n ). Finally, let us discuss the case of purely real or imaginary eigenmodes. If the Hamiltonian H admits some purely imaginary eigenvalues, then u ph maps the right eigenvectors to the corresponding left eigenvectors if there are no degeneracies. Then, Eq. (32) cannot be satisfied by any of the eigenstates of H as it requires s n + s * −n * = 1. The P H − symmetry is spontaneously broken. On the other hand, such a mode is still compatible with the emergent P H + symmetry. If the Hamiltonian now has purely real eigenvalues, then the relation s * n = s −n * requires to attribute the same occupancy to states with opposite energies, which is generally unphysical when studying half-filling properties. When the Hamiltonian has both purely real and imaginary eigenenergies, for example for the non-separable bands shown in Fig. 1a, then there is no natural choice of many-body state that leads to a surviving symmetry in the entanglement Hamiltonian. Note that in finite systems, picking adequate boundary conditions and system sizes can prevent the symmetry breaking, as we will exemplify in Secs. VI A 1 and VII B. Such a change of the symmetry representation occurs for most of previously considered symmetries. In Table I, we summarize the required conditions on the many-body state occupancies in order to have the exact same symmetry in the system Hamiltonian and the entanglement Hamiltonian. These conditions are generically compatible with (and natural in) the Hermitian limit. In each case, the corresponding entanglement Hamiltonian will have the same symmetry as the Hamiltonian if C and s n satisfy the indicated relation, and therefore the energy pair constraint is also valid for the entanglement Hamiltonian. In Table II, we summarize the required conditions to have the previously described change in the symmetry representation. With the exceptions of the Ch and P − symmetries, these conditions would be natural in the anti-Hermitian limit of the Hamiltonian. The choice of the more physically relevant many-body state depends on the band structure of the original Hamiltonian. C. Z2 unitary and anti-unitary symmetries for right density matrices ρ R We now turn to the right density matrices and investigate how symmetries of the system Hamiltonian can map to the entanglement Hamiltonian. Some non-Hermitian symmetries relate left and right eigenvectors of the Hamiltonian, while only the latter are involved in the computation of the density matrix and the associated correlation matrix. Additionally, the right eigenvectors do not form an orthogonal basis, which also affect some symmetry relations. Let us consider here the example of group BDI †43 (group 14 in Ref. 42), characterized by the presence of the symmetries P + , T − and P H − . In itself, this group is topologically trivial in dimension 1. The symmetries enforce the following relations on the eigenvectors of the Hamiltonian (assuming no energy de-generacies): with N n the normalization factor || |L n || −1 and the e iα 's are complex phases. Let us start with P H − and consider a state where all modes with negative real part of the energy are occupied. We assume that there are no purely imaginary modes. As eigenvalues come in pairs (E n , −E * n ), the system is at half-filling and the corresponding biorthogonal density matrix verifies all three symmetries. 
Let Q = {|Q n } n be the Schmidt orthonormalization of the family of occupied modes introduced in Section III. By construction Q spans half the singleparticle Hilbert space. The set u ph Q is orthogonal to Q as R m |L n = δ m,n (using u 2 ph = I) and is also an orthonormal family as u ph is unitary. It is therefore the orthogonal complement of Q such that (Q, u ph Q) forms a complete basis of the single-particle Hilbert space. The right correlation matrix associated to this eigenstate is and consequently C R is chiral symmetric: On the other hand, let us consider the effect of P + on the same state. u p Q is also an orthonormal family, but it is a priori neither orthogonal to Q nor generated by it, and we obtain no special relation on the density matrix. In this state, the P + (and therefore also the T − symmetry) is broken as it actually maps the right density matrix to the left. If there are no additional symmetries, the right-density matrix then falls into the Hermitian AI symmetry class, which is topologically non-trivial in one dimension. As we have seen, only considering either the right or left density matrices might lead to radically different symmetry properties of the entanglement Hamiltonian, and thus reveal different properties of the system Hamiltonian. In the presence of P H − , the natural choice of many-body eigenstate can lead to the emergence of a chiral symmetry in the right-density matrix, even though it is not present in the original Hamiltonian. The additional chiral symmetry may lead to topological signatures and features in the entanglement hamiltonian and It is interesting to note that the pseudo Hermitian symmetry P H− (resp. P H+) requires that the spectrum has no purely imaginary (resp. real) eigenvalues in the absence of spectrum degeneracies. consequently in left and right eigenstates of the original Hamiltonian even though the Hamiltonian is in principle trivial. This result is similar but not equivalent to the linegap classification obtained in Ref. 43. In particular, the T − and P + symmetries do not carry on the right density matrix even though they are relevant to the line gap classification. For example, in the case of T + , T − and Ch symmetry (group AI + S + ), the line gap classification predicts a Z topological invariant while the right density-matrix is only T + symmetric and therefore topologically trivial according the standard Hermitian classification. In Table III, we summarize how the different non-Hermitian symmetries can transform into a symmetry in the right entanglement Hamiltonian, and the conditions on the many-body states in order for such a symmetry to exist. This potential discrepancy between the topological properties of the entanglement Hamiltonian and of the system's Hamiltonian is in particular relevant when studying dissipative trajectories with post-selection [66][67][68][69][70] . The post-selection allows us to simplify the Lindblad evolution into a purely non-Hermitian Hamiltonian problems, and the density matrix of the system is exactly the right density matrix that we consider. While the topological properties of the Hamiltonian still matters as far as the existence of zero-modes are concerned 70 , the existence of topologically stable observables will be governed by the properties of the right eigenvectors only. V. THE NON-HERMITIAN SSH CHAIN The non-Hermitian Su-Schrieffer-Heeger 48,60-65 (SSH) model is an extension of the celebrated SSH model with additional non-Hermitian terms. 
Its Hamiltonian reads is an intra-(inter-) unit-cell coupling, γ is a nonreciprocal contribution to the hopping, and µ encodes alternating losses and gains. j denotes the unit-cell while A/B is the sublattice index. We consider a system of L unit cells. In the following, we denote with σ α with α = x, y, z the Pauli operators acting on the sublattice degrees of freedom. In the rest of the paper, we assume for simplicity t 1 , t 2 , µ, γ ≥ 0 and fix our energy scale to t 2 = 1. The non-Hermitian SSH model possesses topological and trivial phases that are directly connected to the corresponding phases in the Hermitian SSH model. More saliently, it hosts a topological phase specific to non-Hermitian models. When γ = 0, it exhibits the so-called non-Hermitian skin-effect 39,44-52 , i.e., a break-down of the conventional bulk-boundary correspondence of topological systems. The eigenvalues and eigenvectors of the system with open boundary conditions (OBC) strongly differ from the ones of the system with periodic boundary conditions (PBC). Consequently, the conventionnal phase diagram-where a phase transition is characterized by the closing of the gap in the energy spectrumdepends on the choice of boundary conditions. With OBC, eigenstates tend to localize towards one of the boundary of the system. On the other hand, the singular value phase diagram -when a phase transition is based on the closing of the gap in the singular value decomposition of the single-particle Hamiltonian H -does respect the bulk-boundary correspondence. We summarize here the phase diagram and the main properties of the model. The PBC phase diagram can be easily computed and is shown in Fig. 2. In the chiral limit µ = 0 48,64,65 , the Hamiltonian is time-reversal T + symmetric with u t = Id, particle-hole T − symmetric with u t = σ z and chiral Ch symmetric. It falls in the non-Hermitian AI+S + 43 class (group 36 in Ref. 42), with two Z topological invariants. Several formulations have been proposed for these invariants 40,43,49,57,64,79,80 . In this paper we use where BZ is the Brillouin zone and Q k is the singularflattened Hamiltonian 57 at momentum k. Namely, if the singular value decomposition of the Bloch Hamiltonian H k associated to the single-particle counterpart of H in Eq. (37) is H k = U k Λ k V † k , with Λ k a positive diagonal matrix and U k and V k two unitary matrices, then The phase "H-Topo" (resp. "nH-Topo") has non trivial winding number and is characterized by two (resp. a single) zero singular values when the system is open . "H-Topo" is adiabatically connected to the Hermitian topological phase, while "nH-Topo" is purely non-Hermitian, with (point-)gapped energy bands that are nonetheless non-separable. "H-Triv" is connected to the Hermitian trivial phase, while "nH-Triv" is connected to a trivial anti-Hermitian limit. In the pseudo-hermitian limit γ = 0 61-63 , the system is pseudo-time-reversal P + symmetric with u p = Id, particle-hole T − symmetric with u t = σ z and pseudohermitian P H − symmetric. The system now falls into the non-Hermitian class BDI †43 (group 14 in Ref. 42), which is trivial following point-gap classification, but has the Z topological invariant ν − for a real line gap. In this limit, the OBC and PBC phase diagrams coincide. "H-Topo" now admits purely imaginary edge modes, which are topologically stable (using the line gap criterium) and that partially survive in the gapless phase "Gapless" 64,81 . 
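Since the explicit form of Eq. (37) is not reproduced above, the following sketch uses a common parametrization of the non-Hermitian SSH Bloch Hamiltonian, $H(k) = (t_1 + t_2\cos k)\,\sigma_x + (t_2\sin k + i\gamma/2)\,\sigma_y + i\mu\,\sigma_z$, which may differ from the paper's conventions in signs and in where $\mu$ and $\gamma$ enter. It evaluates the winding of $\det[H(k) - E_0]$ around zero, a standard point-gap invariant that is not necessarily identical to the $\nu_\pm$ of Eqs. (41) and (42).

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_h(k, t1, t2=1.0, gamma=0.0, mu=0.0):
    """Assumed Bloch Hamiltonian of the non-Hermitian SSH chain
    (Yao-Wang-type convention; signs may differ from Eq. (37))."""
    return ((t1 + t2 * np.cos(k)) * sx
            + (t2 * np.sin(k) + 1j * gamma / 2) * sy
            + 1j * mu * sz)

def point_gap_winding(t1, t2=1.0, gamma=0.0, mu=0.0, E0=0.0, nk=2001):
    """Winding of det(H(k) - E0) around zero: a point-gap invariant."""
    ks = np.linspace(0, 2 * np.pi, nk)
    dets = np.array([np.linalg.det(bloch_h(k, t1, t2, gamma, mu) - E0 * np.eye(2))
                     for k in ks])
    phase = np.unwrap(np.angle(dets))
    return (phase[-1] - phase[0]) / (2 * np.pi)

# Chiral limit mu = 0, non-reciprocal hopping gamma = 0.8, a few values of t1.
for t1 in (0.4, 1.0, 1.8):
    w = point_gap_winding(t1, gamma=0.8)
    print(f"t1 = {t1}: winding of det(H(k)) around E = 0  ->  {w:+.2f}")

A non-zero winding signals a point-gapped region in which the PBC bands encircle the base energy; the sketch is meant only as an illustration and does not reproduce the precise invariants used in the text.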
Finally, when both γ and µ are non-zero, the system is only particle-hole symmetric. It then falls into class D † (groupe 34 in Ref. 42) which admits ν + as a Z topological invariant following the point gap classification, and ν − /2 mod 2 as a Z 2 topological invariant following the line gap classification. The "nH-Topo b" phase, i.e., the extension of "nH-Topo" to non-zero µ, is non-trivial according to ν + . The "H-Topo" phase has non-trivial ν − . It is also characterized by non-separable energy bands surrounding E = 0. Finally, we introduce the real space formulation of the previous topological winding numbers: [82][83][84][85] where Q is the singular flattened Hamiltonian 57 (similar to Eq. (40) but in real space) and X is the position operator. AvTr l:L−l means that we compute the average of the diagonal elements between sites l and L − l. Note that these two formulations are subject to finite-size effects, caused by the presence of boundaries, and as such are not perfectly quantized in numerical computations. We generally take l to be L/4 to limit these boundary effects. VI. LOW-ENERGY ENTANGLEMENT SPECTRUM IN THE PERIODIC CHAIN In this Section, we explore the properties of the entanglement spectra defined in Section II B in the different phases of the extended SSH chain. In particular, we want to exemplify how the choice of either the biorthogonal or right reduced density matrix gives different insights into the topological properties of the Hamiltonian and the chosen many-body state. We consider a periodic system, and work with different many-body states at half-filling, depending on the structure of the energy bands in the complex plane. We compute both the eigenvalues and the singular values of the biorthogonal entanglement Hamiltonian, and compare them to the corresponding open Hamiltonian. While the open Hamiltonian can also present edge eigenstates, the conventional bulk-boundary correspondence holds for the singular value decomposition 40,42,43,57 We only study the eigenvalues of the right entanglement matrices as they coincide with singular values in Hermitian matrices. Diagonalization of a non-Hermitian Hamiltonian presents significant numerical noise, whose bound increase exponentially with the matrix size. In this paper, we present data from relatively small subsystems of 40 unit-cells for clarity. We performed a scaling analysis including subsystems of up to a 100 unit-cells to confirm our results. A. Chiral symmetric limit µ = 0 The different phases of the system are here characterized by the two Z topological invariants ν + and ν − in Eqs. (41) and (42). We investigate whether the entanglement Hamiltonians inherit the topological properties of their system Hamiltonian. Biorthogonal density matrix We focus first on the biorthogonal entanglement spectrum. The numerical results are summarized in Fig. 3. We use the inverse participation ratio (IPR) to visualize the spatial extension of the eigenstates. It is a measure of the support of the eigenmodes: a state perfectly localized to a single site of the lattice will have an IPR of 1, while a state fully delocalized on all unit-cells and both sublattices will have an IPR of 2L. The exact definitions employed are given in App. B. In the phases "H-Topo" and "H-Triv" of Fig. 2, the PBC energy bands form two disconnected ellipsoids separated by the imaginary axis as in Fig. 1b. It is therefore natural to compute the entanglement spectrum at half-filling from the state |φ R = n d †sn R |0 , with s n = δ Re(En)<0 . 
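The precise real-space expressions quoted above are not fully legible in this copy. As an illustration of the AvTr construction, the sketch below evaluates the chiral real-space winding number AvTr$(\Gamma\, Q\,[Q, X])$ in the spirit of Mondragon-Shem et al., with $Q$ taken as the singular-flattened Hamiltonian $UV^\dagger$ of an open chain built with the same assumed SSH convention as before. The signs and normalization are our own choices and may differ from the paper's definitions; in the Hermitian limit the middle-region average is close to 1 in the topological phase and to 0 in the trivial one, up to boundary corrections of the kind mentioned in the text.

import numpy as np

def ssh_open(L, t1, t2=1.0, gamma=0.0, mu=0.0):
    """Real-space non-Hermitian SSH chain with open boundaries
    (assumed convention: asymmetric intra-cell hoppings t1 -+ gamma/2,
    on-site gain/loss +- i mu)."""
    N = 2 * L
    H = np.zeros((N, N), dtype=complex)
    for j in range(L):
        a, b = 2 * j, 2 * j + 1
        H[a, b] += t1 - gamma / 2          # A_j <-> B_j intra-cell hoppings
        H[b, a] += t1 + gamma / 2
        H[a, a] += 1j * mu
        H[b, b] += -1j * mu
        if j < L - 1:
            H[b, 2 * j + 2] += t2          # B_j <-> A_{j+1} inter-cell hopping
            H[2 * j + 2, b] += t2
    return H

def real_space_winding(H, frac=0.25):
    """AvTr-style real-space winding number with the SVD-flattened Q = U V^dag
    (a sketch; signs and normalization may differ from the paper's formulas)."""
    N = H.shape[0]
    L = N // 2
    U, _, Vh = np.linalg.svd(H)
    Q = U @ Vh                                                 # singular-flattened H
    X = np.diag(np.repeat(np.arange(L), 2)).astype(complex)    # unit-cell positions
    Gamma = np.diag(np.tile([1.0, -1.0], L))                   # sublattice (chiral) operator
    M = Gamma @ Q @ (Q @ X - X @ Q)
    lo, hi = int(frac * N), int((1 - frac) * N)
    return np.real(np.trace(M[lo:hi, lo:hi])) / (hi - lo)      # average of the diagonal

for t1 in (0.4, 1.6):
    H = ssh_open(L=60, t1=t1)              # Hermitian limit gamma = mu = 0
    print(f"t1 = {t1}:  real-space winding ~ {real_space_winding(H):+.3f}")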
These two phases are adiabatically connected to the Hermitian phases, and this definition is compatible with their respective Hermitian limit. The entanglement Hamiltonian then also respects all three symmetries (T + , T − and Ch), and the entanglement spectrum is represented in Fig. 3. The biorthogonal entanglement spectrum reveals the phase transitions occurring in the periodic system, and, despite being effectively open, shows a phase diagram matching the PBC one, when considering either eigen or singular values. "H-Topo" is characterized by the presence of two zero singular value modes, as expected from the OBC Hamiltonian. We also observe two corresponding zero energy modes in the whole phase. Each of these modes is localized at one end of the wire, up to finite-size effects, with the corresponding left-and right-eigenvectors exponentially localized on the same end. "H-Triv" is a trivial phase, and as such, does not present any low entanglement energy excitation. We numerically compute the topological winding numbers from their real-space formula, and we show in Fig. 4 that, within numerical accuracy, the entanglement Hamiltonian indeed inherits the topological properties of the system Hamiltonian in these two phases . In "nH-Triv", the PBC bands form two disconnected ellipsoids now separated by the real axis as in Fig. 1c. This phase is in particular adiabatically connected to a purely anti-Hermitian trivial limit, which makes the more natural choice of occupation number in the many-body state to be s n = δ Im(En)>0 if one wants to probe the topological property of the imaginary bands. Following the discussion in Section IV B, this choice switches the roles The biorthogonal density matrix therefore still belongs to the same symmetry class. We observe no low energy or singular states and the topological invariants are zero. The modes with smallest absolute real part of the energy have an imaginary part close to iπ but have significant finite real part. For larger real parts, we expect a similar result, but we are limited by numerical accuracy and floating point precision. Finally, in the phase "nH-Topo" the two bands are not separated but form a single ellipsoid encircling E = 0 as in Fig. 1a. There is no longer any natural "ground state" allowing the study of a single band. We can either choose to select an arbitrary half-plane in energy space to populate, or to select states which can be smoothly deformed into each other. More precisely, choosing a mode |R k0,n at momentum k 0 , we select at k = k 0 + δk the eigenstate |R k,m that maximizes | L k0,n |R k,m |. In practice, these two definitions coincide. Here we select the energy modes with negative real part, but similar results are obtained by using the negative imaginary ones. Our choice protects the chiral symmetry. The other symmetries would break in the thermodynamic limit due to the presence of purely imaginary modes. By taking L odd (another possible choice is L even and antiperiodic boundary conditions), we prevent the spontaneously breaking of the symmetries using finite-size effects, without affecting our results. We observe in this phase that the entanglement Hamiltonian breaks bulk-boundary correspondence: it has no zero singular value instead of the expected one. This is not a finite size effect, and is stable to perturbations. In fact, both real space topological invariants in Eqs. 
(41) and (42) are no longer quantized as the entanglement Hamiltonian becomes long range (approximately power-law decay of the hopping terms with strong oscillations, that saturate at a finite value independent of the subsystem size). Such a breakdown of the bulk-boundary correspondence through the entanglement Hamiltonian is in sharp contrast with the ersatz of entanglement spectrum introduced in our own previous work 57 . This ersatz is based on the singular value decomposition of the singleparticle Hamiltonian instead of a many-body eigenstate. The single-particle entanglement spectrum built from this SVD perfectly reproduces the physics of both the open and closed system. Right density matrix We now focus on the right density matrix and perform a similar analysis. Studying the left density matrix leads to the same results. Its entanglement spectrum is represented in Fig. 5. Only the time reversal symmetry T + is preserved -when it is also preserved in the biorthogonal case (in phase "nH-Triv", it is the new T − symmetry that is preserved). The entanglement Hamiltonian therefore belongs to the Hermitian AI class. The breakdown of the particle-hole symmetry can be understood from the following simple argument: The non-Hermitian term γ favors concentrating the wave function to the right of each unit cell. This means that B sites tend to have larger occupancy number, hence breaking particle-hole and chiral symmetry. The class is trivial, and we do not observe any stable zero modes, whether in the singular or energy decomposition. In the "H-Topo" phase, the low singular or energy modes acquire a finite splitting in the presence of both t 1 and γ, though the low-energy modes stay localized on the boundaries. It can also be understood as a consequence of the larger occupancy of B sites compared to A sites. Note that this result means that line gap classification does not coincide with right density matrix classification. Indeed, the line-gap approach predicts a surviving Z classification, compatible with ν − , which is not observed here. The phase transitions are not characterized by a gap closing in the entanglement Hamiltonian. It is not just an effect of an ill-defined state in the intermediate phase. We performed a scaling analysis with respect to both L and the length of the subsystem A. Arbitrarily close to the transition in any line gapped phases, the entanglement Hamiltonian has a finite gap. Instead, the entanglement Hamiltonian transitions by becoming long-range. B. Pseudo-Hermitian limit γ = 0 For γ = 0, the system falls into the class BDI † , which is trivial following point gap classification but with the Z topological invariant ν − in the presence of a real line gap. The system with open-boundary conditions is argued to have topologically protected edge states with purely imaginary energies. We focus on the presence of such localized states directly in the entanglement spectrum. Biorthogonal density matrix Starting with the biorthogonal entanglement spectrum, we obtain similar results as in the previous section, as depicted in Fig. 6. In "H-Topo", the energy spectrum of the system Hamiltonian is fully real and gapped, forming two separable bands with a real line gap. We select the state where all negative energy modes are occupied, We also indicate the degeneracy of the lowest eigenvalues. In the chiral limit, phase transitions are no longer visible and we observe no protected low-energy mode in the topological phase "H-Topo". 
In the pseudo-hermitian limit γ = 0, in phase "H-Topo", the right entanglement Hamiltonian has two zero-energy singular and energy entanglement modes protected by an emerging chiral symmetry. Interestingly, the gapless phase "Gapless" is gapped for the entanglement Hamiltonian. Separation between phases "Gapless" and "H-Triv" is not marked by a gap closing but by the coalescence of the lowest energy modes. by analogy with the Hermitian limit. This choice preserves the three symmetries P + , T − and P H − . The entanglement Hamiltonian is trivial according to the point gap classification of Refs. 42 and 43. As such, the singular and energy spectra of the entanglement Hamiltonian have no zero modes. Nonetheless, the BDI † class admits the Z topological invariant ν − following line gap classification. As shown in Fig. 7, ν − is also quantized in the entanglement spectrum. Correspondingly, the singular spectrum admits two well separated low modes which correspond to two eigenmodes with purely imaginary energies. These two modes are exponentially localized at each edge of the subsystem, and match the corresponding edge modes observed in the OBC system. When increasing t 1 , we observe the transition to the gapless phase "Gapless". The spectrum of the PBC Hamiltonian now forms a cross on the real and imaginary axes. Selecting the many-body state following the deformation argument described in Section VI A 1, we take s n = 1 if E n is real negative or imaginary positive. This indeed allows us to select one state at each momentum, and while it breaks both T − and P H − symmetries, it preserves the pseudo-time reversal symmetry. Note that P H − cannot be recovered in any many-body eigenstate: the imaginary modes cannot be avoided using finite-size effects and it is then not possible to satisfy the relation s n + s * −n * = 1 (in the absence of degeneracies in the spectrum). The entanglement Hamiltonian then falls into the trivial class AI † (group 6). It is gapless, with extended eigen and singular modes. While in the OBC Hamiltonian the localized edge states survive in the gapless phase, they are not present in the entanglement Hamiltonian, indicating their more fragile nature as the edge modes can interact through the extended gapless modes. In the trivial phase "nH-Triv", the spectrum is again gapped and fully real, and we select the state with all negative modes occupied, respecting all symmetries. The entanglement Hamiltonian is correspondingly gapped, without low energy modes. Finally, in the anti-Hermitian phase "nH-Triv", the energy spectrum is purely imaginary and we select states with negative imaginary parts. As discussed in Section IV B, it transforms the symmetries T − and P H − into T + and P H + such that the entanglement Hamiltonian now verifies: It does not change the symmetry classification of the entanglement Hamiltonian and we observe no stable low singular or energy modes. Right density matrix We turn now to the right entanglement Hamiltonian. Similar to the previous limit, some symmetries are always spontaneously broken by our choice of states. As discussed in Section IV C, the pseudo-Hermitian symmetry of the Hamiltonian leads to an emergent chiral symmetry of the right density matrix in phases "H-Topo" and "H-Triv". The entanglement Hamiltonian then falls into the AIII Hermitian class, which is topologically nontrivial, with ν − the corresponding topological invariant. 
In the "H-Topo" region, we observe two exact zero modes localized at each side of the subsystem, shown in Fig. 5 and ν − is quantized to 2, as shown in Fig. 7. The entanglement Hamiltonian is consequently topologically nontrivial. This means that the eigenvectors of the PBC Hamiltonian have a doubly degenerate Schmidt decomposition even though the Hamiltonian is trivial following the point-gap classification. The emergent symmetry also explains the quantization and stability of the right or left Berry phase observed in this limit in the periodic Hamiltonian 63,79,80 . In the "Gapless" phase, the initial density matrix and the entanglement Hamiltonian break all symmetries and are therefore trivial. The entanglement Hamiltonian is nonetheless gapped while the original Hamiltonian is gapless, with low but finite eigen modes power-law localized at each extremities of the subsystem, and higher-energy extended states. Finally, in "H-Triv", the chiral symmetry is restored, but the entanglement Hamiltonian is trivial. 2 ) and "H-Triv" (t1 > 3 2 ) are separated by the gapless phase "Gapless". The biorthogonal entanglement Hamiltonian presents a similar phase diagram. The noise in entanglement values is characteristic of finite size-effects in gapless phases. In (c) and (d), we highlight the eigenvalues with lowest absolute real part. In the "H-Topo" phase, we observe purely imaginary eigenstates exponentially localized at each extremity of the subsystem. While the corresponding edge states survive in the gapless phase for the OBC system, this is not the case for the entanglement Hamiltonian. (group 34 42 ), which admits the Z topological invariant ν + following the point gap classification and the Z 2 topological invariant ν − /2 mod 2 in a presence of a real line gap. The features of the entanglement spectrum and the state selection are then straightforwardly inherited from the two previous limits. In the "H-Topo", "H-Triv" and "nH-Triv" phases, the spectrum is line-gapped leading to a natural choice for the many-body state. Results are shown in Fig. 8. For the biorthogonal entanglement spectrum, the "H-Topo" phase is characterized by the presence of modes with purely imaginary modes of the energy which are exponentially localized at the boundaries of the entanglement Hamiltonian (here localized), as in the open system (though phase boundaries do match the PBC phase diagram). The entanglement Hamiltonian correspondingly has non-trivial ν − . On the other hand, the right density matrix does not present any stable lowenergy mode. The γ term, which preserves the chiral symmetry of the Hamiltonian breaks the chiral symme- Topological invariants ν+ and ν− of the biorthogonal and the right entanglement Hamiltonian in the pseudo-Hermitian limit γ = 0 and µ = 0.5, as a function of the hopping t1. We consider a system of size L = 401 and a subsystem of size l = 40. The vertical dashed lines mark the PBC phase transitions. ν− is a good topological invariant for both the biorthogonal density matrix ρ RL and the right density matrix ρ R in the two line gapped phases "H-Topo" and "H-Triv". The results for ρ RL and ρ R exactly match in these two regions. try of the right density-matrix. The "H-Triv" and "nH-Triv" phases are topologically trivial and as such do not present any new features. Finally,the "nH-Topo b" phase which is topologically non-trivial, has non-separable bands. As was the case in the previous examples, the entanglement spectrum then behaves differently from the system Hamiltonian. 
The entanglement Hamiltonian is long-range, with a nonquantized ν + , using the real space formula. VII. TWO-DIMENSIONAL MODELS: FROM CHERN INSULATORS TO NON-HERMITIAN TOPOLOGY In this Section, we compute the entanglement spectrum of several two-dimensional non-Hermitian topological models in order to illustrate the properties and limits of our approach. Using three different models, we study the two entanglement spectra, obtained from ρ R and ρ RL , in different topological phases and discuss when they give insight on the properties of the system Hamiltonian. In all the following examples, the Hamiltonian is defined on a two-dimensional torus with periodic boundary conditions. The subsystem we use to define the entanglement spectrum is a cylinder, periodic in the x-direction, but finite in the y-direction. In simulations, we take systems with 100 × 100 unit cells, and the cylinder has a length of 40 unit-cells. This cylinder geometry is also what we denote by open boundary conditions in this section. In (cd), we have highlighted (orange) the modes with the lowest real energies in absolute values. In (c) and (d), we highlight the eigenvalues with lowest absolute real part. In the phase "H-Topo", we observe two localized edge states with purely imaginary energies. These states are nonetheless not topologically stable. The intermediate phase "nH-Topo" has non-separable energy bands, which leads to a non-local entanglement Hamitlonian and delocalized modes. Here µ corresponds to a Zeeman field, t a hopping between lattice sites, ∆ x and ∆ y are spin orbit couplings, and γ x and γ y are constant dissipative spin-flip terms, while δµ is a local source or drain coupled to the spin polarization. In the following, for simplicity, we take t = ∆ x = ∆ y = 1. In the Hermitian limit d( k) = 0, the system is topologically non-trivial for |µ| < 2t. Two topological phases with opposite Chern number ±1 are separated by a gapless line at µ = 0. These two phases are characterized by the presence of chiral edge-modes when considering open boundary conditions. Similar structures are observed in the entanglement spectrum 73,82,87,88 . When µ > 2t, the system becomes trivial. The topological phases are not protected by any symmetry, though the Hermitian model is particle-hole symmetric. When all parameters are non-zero, the system has no special symmetries and falls into class A (D † if δµ = 0), which is topologically trivial following point-gap classification, but admits a Z topological invariant following the line-gap classification 43 . This topological invariant is nothing but the Chern number, and the corresponding phases are the extension of the Hermitian phases. In this section, we therefore limit ourselves to this extension, i.e., we introduce non-Hermitian terms without breaking the line gap (and hence the point gap). Due to this line gap, the eigenvalues are well separated into two different energy bands. When we consider a cylinder geometry, the system still admits one localized chiral edge-mode at each edge. The two modes have opposite chirality, and one is amplified while the other is dissipated. The system presents a real line gap as shown in Fig. 9a. We therefore select the many-body state at half-filling where the levels with negative real part are occupied, and compute the entanglement spectrum over a cylinder periodic in the x direction. In the topological phases, the biorthogonal entanglement spectrum presents chiral edge modes as shown in Fig. 
9c-d, and the entanglement spectrum has the same Chern number as its system Hamiltonian. The edge modes are dissipative, with finite imaginary part, similarly to the original Hamiltonian with open-boundary conditions. The chirality of the amplified and dissipated modes are the same in the entanglement Hamiltonian H RL E and the system Hamiltonian. The right entanglement Hamiltonian-whose spectrum is shown in Fig. 9b-also falls into class A, and has similar topological properties with the same Chern number as the initial Hamiltonian. In the trivial phase, the entanglement Hamiltonians do not have any special feature. Transitions occur as predicted by the PBC Hamiltonian. In this model, the entanglement spectrum is therefore able to correctly predict the properties of the line-gapped topological phases. B. Non-Hermitian Z topological phase We now turn to a simple model in class DIII † , whose Bloch Hamiltonian is parametrized by n( k) = (0, ∆ x sin k x , ∆ y sin k y , 0) (48) d( k) = (µ − t x cos k x − t y cos k y , 0, 0, δ(sin k x + sin k y )). using the notations of Eq. (45). t x , t y are dissipative hopping terms, ∆ x and ∆ y are normal spin-orbit hoppings, µ is a spin-dependent source and drain and δ is a dissipative spin-orbit contribution. The model has a T − symmetry with u t = σ x , P + symmetry with u p = σ y and a pseudo-Hermitian symmetry P H − . It admits a Z topological invariant following point gap classification 42,43 . DIII † is also non-trivial in the line gap classification. We discuss an example in the following section. We fix t x = t y = ∆ x = ∆ y = 1. This model was briefly discussed in Ref. 42 in the limit δ = 0. Then, for |µ| < 2, the Hamiltonian is topologically non-trivial. The twobands are not separable as shown in Fig. 10a. and the OBC Hamiltonian admits two degenerate singular zero modes, while nonetheless it has no edge modes in the energy spectrum, as shown in Fig. 10b-c. We compute the entanglement spectrum in the topological phase. We select the many-body state where all states with negative real energy are selected, to preserve the T − symmetry. The P H − symmetry can also be preserved by considering an antiperiodic torus, though this choice does not significantly affect the obtained entanglement spectra. In the following, we only show the entanglement spectrum computing the many-body state of the more conventional periodic torus geometry. In the limit δ = 0, the non-Hermitian terms are diagonal in momentum space and ρ RL and ρ R coincide as the many-body In (d-f), we observe low energy gapped edge modes which are caused by the presence of two set of Dirac cones with opposite chirality, but no stable zero modes as in the singular value decomposition of the Hamiltonian state is the ground state of a gapless Dirac Hermitian Hamiltonian. It has four Dirac cones at the protected momenta k = (0, 0), (0, π), (π, 0) and (π, π). The entanglement spectrum of such a many-body state does not present any stable zero modes, though it still supports some low-energy gapped modes due to the presence of the two sets of two Dirac cones with opposite chirality. For small non-zero δ, in the topological phase, this picture is still valid, as shown in Fig. 10d-f. C. Non-Hermitian pseudo-Hermitian Z2 insulator Finally, we introduce a non-Hermitian extension of a Z 2 insulator in the same DIII † class. 
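As an aside on the numerics of Sec. VII A, the Chern number of a line-gapped non-Hermitian band can be obtained with the standard Fukui-Hatsuda-Suzuki lattice algorithm; the sketch below applies it to the right eigenvectors of the band with lower real energy. The $2\times 2$ Bloch Hamiltonian used here is a generic placeholder (a QWZ-type Hermitian part plus a small anti-Hermitian $\sigma_z$ term) and is not the paper's Eq. (45) parametrization; biorthogonal link variables built from left and right eigenvectors would be an equally valid choice.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch(kx, ky, mu=-1.0, delta=0.2):
    """Placeholder non-Hermitian Chern-insulator Bloch Hamiltonian
    (QWZ-type Hermitian part, small anti-Hermitian sz perturbation)."""
    return (np.sin(kx) * sx + np.sin(ky) * sy
            + (mu + np.cos(kx) + np.cos(ky) + 1j * delta) * sz)

def lower_right_vec(kx, ky, **kw):
    """Normalized right eigenvector of the band with lower Re(E)."""
    E, V = np.linalg.eig(bloch(kx, ky, **kw))
    v = V[:, np.argmin(E.real)]
    return v / np.linalg.norm(v)

def chern_fhs(Nk=60, **kw):
    """Fukui-Hatsuda-Suzuki lattice Chern number from right eigenvectors."""
    ks = 2 * np.pi * np.arange(Nk) / Nk
    u = np.array([[lower_right_vec(kx, ky, **kw) for ky in ks] for kx in ks])
    C = 0.0
    for i in range(Nk):
        for j in range(Nk):
            ip, jp = (i + 1) % Nk, (j + 1) % Nk
            U1 = np.vdot(u[i, j], u[ip, j])
            U2 = np.vdot(u[ip, j], u[ip, jp])
            U3 = np.vdot(u[ip, jp], u[i, jp])
            U4 = np.vdot(u[i, jp], u[i, j])
            C += np.angle(U1 * U2 * U3 * U4)
    return C / (2 * np.pi)

print("Chern number (topological regime of the Hermitian limit, mu = -1):", round(chern_fhs(), 3))
print("Chern number (trivial regime, mu = -3):", round(chern_fhs(mu=-3.0), 3))

With these placeholder parameters the real line gap survives the small anti-Hermitian term, so the lattice sum converges to an integer already for modest grids.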
We now focus on line gap classification and show that the topological properties of the two entanglement Hamiltonians can differ due to the presence of an emergent chiral symmetry in the right entanglement Hamiltonian. The class admits a Z 2 topological invariant in the presence of a real line gap 43 , which can be expressed as an extension of the Kane-Mele invariant 1 . By analogy with the Hermitian DIII class, we consider a model with four bands. The toy Hamiltonian reads where σ αβ = σ α ⊗ σ β , α, β = x, y, z, 0. The system is T − symmetric with u t = σ yy , P + symmetric with u p = σ xy and P H − symmetric with u ph = σ z0 . In the Hermitian limit γ = 0, it has been introduced in Ref. 89, and is topologically non-trivial for |µ| < 2|t x | + 2|t y |. In a cylinder geometry, it presents two free chiral edge modes with opposite chirality at each edge. Introducing a small anti-Hermitian parameter γ does not break the real line gap (Fig. 11a), and preserve the topological phases. Indeed, as shown in Fig. 11b-c, both singular and eigen decompositions of the Hamiltonian still present similar zero edge modes. Since this model has a real line gap, we compute the entanglement spectrum of the many-body state where all states with negative real part of the energy are occupied, in the topological phase. Results are shown in Fig. 11d-f. The biorthogonal entanglement Hamiltonian presents the same edge states as the open model, both in its singular value decomposition and it eigendecomposition. It therefore faithfully captures the topological properties of the initial Hamiltonian. On the other hand, the right entanglement Hamiltonian has gapped low-energy modes and is actually topologically trivial. Indeed, as discussed in Sec. IV C, the pseudo-Hermitian symmetry of the non-Hermitian Hamiltonian transforms into a chiral symmetry for the right entanglement Hamiltonian. On the other hand, our choice of non-Hermitian perturbation prevents the T − and P + symmetry to carry over to H R E . H R E then falls into the trivial Hermitian class D. VIII. CONCLUSIONS AND DISCUSSIONS In this work, we have discussed the properties of the many-body density matrices and entanglement Hamiltonian in topological non-Hermitian systems. After discussing two possible definitions of density matrices, we have shown that both Wick's theorem and Peschel's formula are valid in non-interacting non-Hermitian settings, even for non-diagonalizable Hamiltonians. We have then studied how the symmetries of the Hamiltonian maps onto the density matrices and the entanglement Hamiltonian. As opposed to Hermitian models, the choice of a many-body state, like a filled band for insulator, is not always unambiguous. We propose to base this choice on symmetry. For the biorthogonal density matrix, depending on the choice of many-body state, different symmetries can be realized at fixed half-filling. For the right (or left) density matrix, most of the symmetries of the starting Hamiltonian do not naturally carry on to the entanglement Hamiltonian, contrarily to what happens in Hermitian system. Nonetheless, the pseudo-Hermitian symmetry P H − : H = −u ph H † u † ph may lead to an emergent chiral symmetry which translates into topologically non-trivial right and left wave-functions. To exemplify these different approaches, we have studied the entanglement Hamiltonian of several archetypal models in one and two dimensions. 
Starting from the periodic Hamiltonian, we have found that the biorthogonal entanglement spectrum inherits the topological properties of the initial Hamiltonian as long as the system has separable bands. The singular and edge modes present in the open Hamiltonian are present in the entanglement Hamiltonian, and the corresponding topological invariants carry on. On the other hand, the right entanglement spectrum does not reproduce all the features of the original Hamiltonian. As symmetries of the system Hamiltonian do not straightforwardly carry to the right entanglement Hamiltonian, the latter can present topological features in phases that are trivial following the point gap classification, or conversely be trivial in topological phases. For non-separable bands, both entanglement Hamiltonians fail to reproduce the characteristic topological properties of the original Hamiltonian, in contrast with the singular value approaches discussed in Ref. 57. The singular zero-modes typically present in these phases are not present in the entanglement Hamiltonian, for all the many-body states we have considered. It appears then, that the bulk-boundary correspond holds for the entanglement spectrum in line-gapped Hamiltonians, when considering the biorthogonal density matrix. The right density matrix carries information on the topological properties (degeneracies and zero modes in the entanglement spectrum, Chern number of the corresponding entanglement Hamiltonian...) of the many-body right eigenstates themselves. The subject of the classification of these matrices following from the topological properties of the system Hamiltonian can be relevant to experiments with post-selection. The approach we develop in this paper is a first step towards the generalization of the non-Hermitian topological classifications to true many-body physics. Indeed, it is highly non-trivial to generalize the approaches introduced in Refs. 40, 42, and 43, as the point gap classification relies on the singular value decomposition of the single-body Hamiltonian, which cannot be simply related to the eigen or singular decomposition of the many-body Hamiltonian. Asking the question whether the manybody states have topological properties, characterized by their entanglement spectrum, allows us to circumvent this difficulty. Performing a similar analysis starting from an open system could further improve our understanding of the structure of these states. A complete study is left for future works due to more challenging numerics. Similarly, it would be interesting to generalize this approach to interacting systems 90 , either through standard exact computation or through modified MPS algorithm, though the numerical instabilities inherent to non-Hermitian system may limit these approaches. In this paper, we considered non-interacting fermionic models because it allowed us to use Peschel's formula and study much larger systems. The rest of our approach should be directly applicable to interacting systems. In this Appendix, we show how to find the Gaussian antecedent of a correlation matrix that forms an arbitrary Jordan block of size n. Generalization to an arbitrary correlation matrix is straightforward. We start by computing the correlation matrix obtained when the entanglement Hamiltonian is a single Jordan with m 1 = e ε 1+e ε and m 2 = − e ε (1+e ε ) 2 (the higher diagonals are generally non-zero, but they are not relevant to our discussion). 
As m 2 is non-zero, this matrix cannot be diagonalized and forms a single n−dimensional Jordan block. We denote by Q the invertible matrix such that M = QJ(m 1 )Q −1 . We now prove that any correlation matrix forming a single Jordan block admits a Gaussian antecedent. Let C be a correlation matrix, and P an invertible matrix be such that C = P J(s)P −1 . Using where f † R = c † P and f L = P −1 c, the non-Hermitian Gaussian state defined by the entanglement Hamiltonian H E = P Q −1 J(log s −1 − 1 )QP −1 . (A3) has C for its correlation matrix. Appendix B: Inverse participation ratio In this Section, we introduce the definitions of the inverse participation ratio (IPR) we use in the main text to visualize the spatial support of the eigenmodes of the entanglement Hamiltonian. In a Hermitian context, it is defined as follows IP R(|R n ) = j,σ=A/B | j, σ|R n | 2 2 j,σ=A/B | j, σ|R n | 4 , where {|j, σ } is the (canonic) real space basis of the single-particle Hilbert space, where j denotes the unitcell and σ = A/B the sublattice. The inverse participation ratio estimates the support of the mode |R n in the basis {|j }: It is equal to 1 for a perfectly localized state on a single site, and 2l for a state fully delocalized on l unit-cells and both sublattices. We use this definition for the eigenstates of the right entanglement Hamiltonian. When using the biorthogonal formulation of quantum mechanics, we evaluate observables by computing We are therefore interested more in the (bi)localization of the product φ L and φ R , i.e. in the localization of n j,σ RL . It is therefore more coherent to study the ratio IP R RL (|R n ) = j,σ=A/B | L n |j, σ j, σ|R n | 2 j,σ=A/B | L n |j, σ j, σ|R n | 2 . (B3) It coincides then with localization of the expectation values n j = n j,A + n j,B of the corresponding manybody wave-function, as defined in Eq. (37). Finally, when studying the singular value decomposition of the entanglement Hamiltonian H E = U ΛV † , we choose for similar reasons IP R SV D (|U n ) = j,σ=A/B | V n |j, σ j, σ|U n | 2 j,σ=A/B | V n |j, σ j, σ|U n | 2 . (B4) where |U n (|V n ) is the n th column of U (V ) respectively.
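For completeness, the participation ratios of Appendix B can be implemented in a few lines. In this sketch (our own, minimal implementation) the biorthogonal ratio of Eq. (B3) is assumed to take the standard participation-ratio form with weights $w_{j,\sigma} = \langle L_n|j,\sigma\rangle\langle j,\sigma|R_n\rangle$, since the exponents are not legible in this copy.

import numpy as np

def ipr(psi):
    """Hermitian-style inverse participation ratio, Eq. (B1):
    (sum_j |psi_j|^2)^2 / sum_j |psi_j|^4  ->  ~1 if localized, ~N if extended."""
    p = np.abs(psi) ** 2
    return p.sum() ** 2 / (p ** 2).sum()

def ipr_biorthogonal(left, right):
    """Biorthogonal participation ratio, assuming weights w_j = <L|j><j|R>
    and the standard ratio (sum_j |w_j|)^2 / sum_j |w_j|^2
    (the exponents of Eq. (B3) are assumed, not taken from the paper)."""
    w = np.abs(np.conj(left) * right)
    return w.sum() ** 2 / (w ** 2).sum()

# Quick check on a localized and an extended toy state of N = 100 sites.
N = 100
localized = np.zeros(N); localized[3] = 1.0
extended = np.ones(N) / np.sqrt(N)
print(ipr(localized), ipr(extended))                      # ~1 and ~N
print(ipr_biorthogonal(localized, localized),
      ipr_biorthogonal(extended, extended))               # ~1 and ~N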
Acupuncture on mild cognitive impairment: A systematic review of neuroimaging studies

Mild cognitive impairment (MCI) is a multifactorial and complex central neurodegenerative disease. Acupuncture appears to be an effective method for improving cognitive function in MCI patients. The neural plasticity remaining in the MCI brain implies that acupuncture-associated benefits may not be limited to the cognitive domain; rather, neurological alterations in the brain play a vital role in the corresponding cognitive improvement. However, previous studies have mainly focused on effects on cognitive function, leaving the neurological findings relatively unclear. This systematic review summarized existing studies that used various brain imaging techniques to explore the neurological effects of acupuncture for MCI treatment. Potential neuroimaging trials were searched, collected, and identified independently by two researchers. Four Chinese databases, four English databases, and additional sources were searched to identify studies reporting the use of acupuncture for MCI from the inception of the databases until 1 June 2022. Methodological quality was appraised using the Cochrane risk-of-bias tool. In addition, general, methodological, and brain neuroimaging information was extracted and summarized to investigate the potential neural mechanisms by which acupuncture affects patients with MCI. In total, 22 studies involving 647 participants were included. The methodological quality of the included studies was moderate to high. The methods used included functional magnetic resonance imaging, diffusion tensor imaging, functional near-infrared spectroscopy, and magnetic resonance spectroscopy. Acupuncture-induced brain alterations in patients with MCI tended to be observed in the cingulate cortex, prefrontal cortex, and hippocampus. The effect of acupuncture on MCI may involve regulation of the default mode network, central executive network, and salience network. Based on these studies, researchers could extend the current research focus from the cognitive domain to the neurological level. Future research should comprise relevant, well-designed, high-quality, and multimodal neuroimaging studies to detect the effects of acupuncture on the brains of MCI patients.

Introduction

Mild cognitive impairment (MCI), a multifactorial and complex central neurodegenerative disease, is a predementia phase with a high conversion rate to Alzheimer's disease (AD; Petersen, 2016; Anderson, 2019). The clinical feature of MCI is a progressive decline in specific cognitive functions (such as memory, language, and executive function) that depends on the location or cause of the brain impairment (Gauthier et al., 2006; Petersen, 2011; Montero-Odasso et al., 2017). In addition, as the global population ages, the prevalence of MCI has tripled (Vos et al., 2015; Jia et al., 2020), which places a heavy economic burden on MCI control (Pater, 2011; Lin and Neumann, 2013). Owing to the complex pathogenesis of MCI (Sultana et al., 2009), no disease-modifying therapy is available, and MCI is therefore regarded as a serious healthcare and economic concern worldwide. Currently, there is no cure for MCI (Massoud et al., 2007; Wang et al., 2021; Masika et al., 2022). Therefore, MCI is commonly managed with various pharmacological therapies that aim to improve symptoms and slow disease progression.
Since mainstay medicines, such as acetylcholinesterase inhibitors and N-methyl-D-aspartate receptor antagonists, produce side effects in patients with MCI (Petersen et al., 2018), many researchers have investigated other methods for controlling the disease (Horr et al., 2015; Rodakowski et al., 2015; Wang Y. Q. et al., 2020). Non-pharmacological therapies are associated with few adverse events and may complement pharmacological therapy or prevent MCI progression; therefore, they are regarded as potential treatments for MCI. Numerous studies have shown that several physical therapies have the potential to benefit patients with MCI (Bachurin et al., 2018; Canu et al., 2018). Acupuncture, a physical therapy that originated in China, has been widely applied to ameliorate cognitive, memory, and other types of functional decline for at least the past 2,000-3,000 years (Zhou et al., 2020; Bao et al., 2021; Su et al., 2021; WuLi et al., 2021). It involves needle insertion into specific acupoints (skin and underlying tissues) to enhance the cognitive ability of patients. Many systematic reviews (SRs; Cao et al., 2013; Deng et al., 2016; Lai et al., 2020; He et al., 2021; Yin et al., 2022) have demonstrated that acupuncture results in cognitive enhancement in MCI, improved subjective cognitive decline, and diminished post-stroke cognitive impairment without obvious side effects. The positive effects of acupuncture on MCI have encouraged researchers to further explore its therapeutic effects. Nonetheless, the mechanism by which acupuncture promotes cognitive function remains largely unknown. The central nervous system plays a vital role in cognitive function, and numerous studies (Bajo et al., 2010; Han et al., 2012; Tomasi and Volkow, 2012; Hoffstaedter et al., 2015) have illustrated that the decline of cognitive function is associated with changes in brain areas and networks. However, the decline occurring in the MCI period may be reversible because of the human brain's plasticity and adaptivity (Cohen et al., 2009; Hötting and Röder, 2013). Physical stimulation is one approach to enhancing brain plasticity (Erickson et al., 2013). Considering the crucial role of brain plasticity in MCI, researchers have attempted to understand the neural mechanisms underlying acupuncture-related improvement in cognitive performance. With technical advances, numerous noninvasive neuroimaging methods (such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), functional near-infrared spectroscopy (fNIRS), and magnetic resonance spectroscopy (MRS)) have been applied to identify the neural features of acupuncture-induced alterations in patients with MCI (Liu et al., 2014; Tan et al., 2017; Shan et al., 2018). However, no SR has summarized the neuroimaging evidence regarding the use of acupuncture for MCI. Therefore, the present study aimed to explore three issues: (1) the main characteristics of current neuroimaging studies; (2) acupuncture-induced alterations in the brain; and (3) directions for future study. To this end, we summarized the main characteristics, core brain areas, and potential networks involved in the response to acupuncture to better understand the neurological mechanism by which acupuncture improves MCI and to provide suggestions and references for future research.

Materials and methods

This SR was registered on the PROSPERO platform (number: CRD42022331525).
The study strictly followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Page et al., 2021).

Inclusion and exclusion criteria

Trials were incorporated in the assessment if they met the following criteria: (1) original, peer-reviewed neuroimaging clinical studies published in Chinese or English; (2) all patients met the diagnostic criteria of MCI; (3) the intervention group received acupuncture, regardless of acupuncture points, acupuncture methods, acupuncturists, and treatment duration; (4) the control group was a healthy control, conventional medicine, sham acupuncture, or another comparator; and (5) the neuroimaging tools used were fMRI, DTI, fNIRS, and/or MRS. Studies were excluded if they met any of the following criteria: (1) reviews, letters, comments, protocols, and experimental studies; (2) duplicated/retracted articles; and (3) insufficient/unavailable reported data.

Search strategies

The following electronic databases were searched independently by two reviewers: PubMed, Embase, Cochrane Library, Web of Science Core Collection, Chinese Biomedical Literature Database, China National Knowledge Infrastructure, VIP Database, WF Database, Gray Literature Database, and other resources (ClinicalTrials.gov, Chinese Clinical Trial Register (ChiCTR), World Health Organization International Clinical Trials Registry Platform (WHO ICTRP)) from the date of database inception to 1 June 2022. The following phrases were used for the literature searches: (1) clinical condition: mild cognitive impairment, cognitive dysfunction, and MCI; (2) acupuncture terms: acupuncture, electronic acupuncture, acupuncture moxibustion, warm needling, scalp needle, meridian, and acupoint; and (3) study type: neuroimaging trial. Search terms were combined using "and" and "or". The electronic database search strategies used are presented in Appendix 1.

Study selection and data extraction

Two investigators (MX and ZC) independently screened the identified studies. The intra-class correlation coefficient (score = 0.95) was applied to evaluate between-investigator consistency. MX and ZC first read the titles and keywords of the selected studies to identify duplicate articles. Thereafter, the investigators assessed article titles, abstracts, and keywords, selecting trials based on the inclusion criteria. Finally, the investigators screened the full texts of the studies to confirm that they met the inclusion criteria. Any dispute between the two investigators was handled through discussion; if no resolution was reached, a third referee (LZ or FL) assisted in making a final decision. Two data extractors (MX and ZC) extracted information using a self-defined standardized extraction form that covered six general topics: (1) identification information (first author's name and year of publication); (2) basic information (study design, sample size, diagnostic criteria, age, and gender); (3) acupuncture details based on the Revised Standards for Reporting Interventions in Clinical Trials of Acupuncture (STRICTA; MacPherson et al., 2002); (4) details regarding the controls used; (5) clinical outcomes; and (6) neuroimaging information. The procedure is displayed using a PRISMA flow diagram.

Quality assessment

Cochrane's tools were used to assess the methodological quality. For randomized controlled trials (RCTs), the Risk of Bias 2.0 tool (RoB 2; Sterne et al., 2019) was applied.
The risk of bias (RoB) of each study was assessed and classified as high, low, or some concerns. For non-randomized controlled trials (non-RCTs), the Risk Of Bias In Non-randomized Studies of Interventions (ROBINS-I) tool (Sterne et al., 2016) was applied, with the overall RoB classified as critical, serious, moderate, low, or no information. A third party was consulted to resolve any disagreement between investigators.

Statistical analysis for acupuncture-induced brain alterations

Owing to the various neuroimaging methodology tools used in the included trials, a descriptive statistical analysis was considered appropriate. In addition, a narrative analysis was carried out to summarize the acupuncture-induced structural or functional brain alterations, regardless of acupuncture's instant/sustained effect.

Search and selection of studies

A PRISMA flow plot describing the methodology used to search and screen trials is shown in Figure 1. A total of 277 studies were identified after a comprehensive search was implemented. After deduplication, 190 studies remained. After the initial screening phase, only 29 trials remained. After the second screening phase (screening full-text articles), seven articles were excluded (four that did not include neuroimaging data and three with treatments considered ineligible) according to the inclusion criteria, leaving 22 trials (Liu, 2009, 2010; Jin, 2010; Cui, 2011; Jiang, 2011; Feng et al., 2012; Jiang et al., 2012; Wang et al., 2012; Chen et al., 2013, 2014; Xu, 2013; Xu et al., 2013; Liu et al., 2014; Jia et al., 2015; Tan et al., 2017; Xu and Peng, 2017; Shan et al., 2018; Ghafoor et al., 2019; Li et al., 2020; Wang F. et al., 2020; Cao et al., 2021; Khan et al., 2022) for the final analysis. The reasons for excluding the selected full-text trials are presented in Appendix 2.

Study characteristics

The key characteristics of the 22 neuroimaging trials included in this review are presented in Table 1. The publication dates of the included trials ranged from 2009 to 2022. In total, 20 studies (90.9%) were conducted in China and 2 (9.1%) were conducted in Korea. Eleven trials (50%) were published in English, whereas the others were published in Chinese.

Study design

Regarding the study design, 10 RCTs and 12 non-RCTs were assessed. Sample sizes of the studies ranged from 6 to 78 individuals. Sixteen studies were designed to investigate whether acupuncture induces cerebral responses, and six trials were designed to investigate whether acupuncture affects neural networks in the brain.

Participants

A total of 511 patients with MCI and 136 healthy controls were included in this assessment. In total, nine neuroimaging trials used Petersen's criteria, six used other criteria, and seven did not mention the criteria used. Ten neuroimaging trials compared patients with MCI and healthy subjects, whereas the others enrolled patients with MCI exclusively. All studies included participants with MCI aged 55-74 years. Twenty studies indicated the sex of patients with MCI (290 male and 380 female); two studies did not report it. The sample sizes of the 10 articles that enrolled patients with MCI exclusively ranged from 6 to 36 per group. Among the 12 trials that compared MCI patients and healthy volunteers, the most common matching sample size ratio of MCI/healthy control was 1/1.
Acupuncture details

Based on the Standards for Reporting Interventions in Clinical Trials of Acupuncture (STRICTA) guidelines, acupuncture details are summarized and presented in Table 2. The rationale (acupuncture type and the reason for the treatment provided) for selecting a particular type of acupuncture was mentioned in all neuroimaging trials. The number of needle insertions varied from 1 to 17 per session for each subject. KI 3 (Taixi), LR 3 (Taichong), and LI 4 (Hegu) were the acupoints most often used. Acupuncture insertion depth was 5-30 mm. Only eight neuroimaging trials described the deqi sensation. Manual acupuncture was applied in 18 articles, and electronic acupuncture was used in four studies. The diameter and length of the needles used in the included studies were 0.35 mm and 25 mm, respectively. The number of treatment sessions ranged from 1 to 48; most frequently, one 3-min session was performed. Only six neuroimaging trials provided information about the acupuncturists. In total, 18 trials provided an elaborate depiction of the procedure.

Imaging conditions and analyses

fMRI, DTI, fNIRS, and MRS were used to explore the neuronal activities, functional brain alterations, brain structural alterations, and metabolic ratios induced by acupuncture in patients with MCI. Only one trial assessed structural changes; that study carried out DTI to explore fractional anisotropy (FA). Three articles evaluated the metabolic ratio using MRS. One study evaluated hemodynamic responses via fNIRS, and another investigated functional connectivity (FC). Seventeen studies measured the functional changes induced by acupuncture. Eight studies used task-state functional magnetic resonance imaging (ts-fMRI) to measure cerebral neuron alterations; these eight studies used a single-block design in which acupuncture involved persistent stimulation for 3 or 16 min per block. Nine studies employed resting-state functional magnetic resonance imaging (rs-fMRI) to investigate FC (five studies), regional homogeneity (ReHo; two studies), or the amplitude of low-frequency fluctuation (ALFF; two studies). Figure 2 displays the proportions of imaging conditions and analytical methods used.

Quality assessment

The results of the methodological quality assessments are depicted in Appendices 3 and 4. Among the 10 RCTs assessed, a moderate RoB was found in all studies using the RoB 2 tool. With regard to randomization, there was some concern with all articles because they all had ill-defined random sequence generation methods. Five articles had low bias regarding deviation from the intended intervention, whereas there was some concern with the other five studies due to short descriptions. Only one study had notable missing outcome data; the others were determined to have low bias. Notably, all articles were considered at a low RoB regarding outcome measurements. There were concerns with all RCTs regarding the selection of reported results because of a lack of protocol and registration information. Of the 12 non-RCTs considered, a moderate RoB for five studies and a low RoB for seven studies were shown using the ROBINS-I tool. Regarding confounders, seven articles had a low bias, whereas five articles had a moderate RoB due to missing or mixed information. Notably, all articles demonstrated low bias with regard to participant selection, the classification of interventions, deviations from intended interventions, missing data, outcome measurement, and the selection of reported results.
Potential neural mechanisms underlying acupuncture

According to the studies considered, acupuncture-induced brain alterations in MCI patients occurred principally in the cingulate cortex (eight studies), hippocampus (six studies), and prefrontal cortex (six studies). The brain regions reported in the included studies are key regions of the default mode network (DMN; Raichle, 2015; Smallwood et al., 2021), central executive network (CEN; Chen et al., 2019; Fang et al., 2021; Daigle et al., 2022), and salience network (SN; Chand et al., 2017; Porto et al., 2018; Xue et al., 2021). As acupuncture effects are classified as either constant or instant, findings associated with each were considered separately.

Cerebral constant response to acupuncture

As displayed in Appendix 5, the most commonly reported constant acupuncture-related brain alterations in patients with MCI were located in the hippocampus (four studies), prefrontal cortex (four studies), parahippocampal gyrus (two studies), and cingulate cortex (two studies). Two studies employing fNIRS (Ghafoor et al., 2019; Khan et al., 2022) revealed that constant acupuncture can improve the hemodynamic response (Khan et al., 2022) and FC in the prefrontal cortex in patients with MCI (Ghafoor et al., 2019). In addition, three trials (Liu, 2009, 2010; Jin, 2010) employing MRS revealed changes in N-acetyl aspartate/creatine, choline/creatine, and myo-inositol/creatine ratios in the temporal gyrus and hippocampus of patients with MCI due to regular acupuncture. Only one study (Xu and Peng, 2017) reported regional structural changes due to acupuncture treatment: after 24 sessions of acupuncture over 8 weeks, white matter FA was increased in the splenium of the corpus callosum, cingulate gyrus, inferior fronto-occipital fasciculus, and superior longitudinal fasciculus. Based on the fMRI trials, two neuroimaging studies reported increased ReHo or ALFF after constant acupuncture in brain areas concerned with the processing of cognitive function, including memory regions (e.g., the parahippocampal gyrus, temporal lobe, precuneus), visual-spatial regions (e.g., the occipital lobe, lingual gyrus), and affective-emotional processing areas (e.g., the insula, cingulate cortex, thalamus). Conversely, one study (Xu and Peng, 2017) revealed that ReHo decreased after acupuncture therapy in brain areas involved in processing cognitive function, including memory regions (e.g., the inferior frontal gyrus, temporal lobe) and executive/language function regions (e.g., the posterior cerebellar lobe, inferior temporal gyrus). Through an FC matrix, one study (Tan et al., 2017) revealed increased connectivity between cognition-related brain areas such as the hippocampus, insula, dorsolateral prefrontal cortex, thalamus, inferior parietal lobule, and anterior cingulate cortex due to regular acupuncture. In addition, using region-of-interest-wise (ROI-wise) FC analysis, Li et al. (2020) found that right hippocampal FC with the right inferior temporal gyrus/middle temporal gyrus was significantly enhanced after acupuncture treatment. Furthermore, there was a significant correlation between the FC strength of the right hippocampus-inferior temporal gyrus and the change in MoCA score.

Cerebral instant response to acupuncture

As demonstrated in Appendix 6, fMRI was used in all studies to measure the instant cerebral response to acupuncture in patients with MCI.
The most commonly reported brain alterations in MCI subjects undergoing fMRI were in the cingulate cortex (six studies), medial frontal gyrus (five studies), and postcentral gyrus (five studies). Seven studies explored which brain areas were activated or deactivated after acupuncture. The main brain areas involved included the executive/language function regions (e.g., the medial frontal gyrus, middle frontal gyrus), sensory function region (e.g., the postcentral gyrus), affective-emotional processing areas of cognitive function (e.g., the insula, cingulate cortex), and auditory speech area (e.g., the superior temporal gyrus). In addition, two articles (including Jia et al., 2015) reported increased ReHo or ALFF after instant acupuncture in brain areas involved in the processing of cognitive function, including memory regions (e.g., the parahippocampal gyrus, precuneus), affective-emotional processing areas (e.g., the cingulate cortex, thalamus), and executive/language function regions (e.g., the middle frontal gyrus). Through FC analysis, Xu et al. (2013) demonstrated increased connectivity between the dorsal lateral prefrontal cortex and the frontal and bilateral frontal lobes due to instant acupuncture, and decreased connectivity between the bilateral inferior parietal lobules after instant acupuncture. In addition, using whole-brain FC analysis, Feng et al. (2012) found that connections among the hippocampus, amygdala, parahippocampal gyrus, insula, and cingulate cortex were significantly enhanced after acupuncture. Furthermore, using multivariate Granger causality analysis (mGCA) to assess connectivity, it was shown that the dorsolateral prefrontal cortex and hippocampus act as central hubs and significantly influence each other.

Discussion

Twenty-two articles that used various neuroimaging tools to investigate the neurocentral mechanism of acupuncture were included in this review. Since 2009, the use of neuroimaging methods to investigate the central nervous regulatory mechanism by which acupuncture can affect MCI has gradually attracted attention. This SR focuses on summarizing the characteristics and findings of neuroimaging trials that investigated the effects of acupuncture on MCI to deepen our understanding of the central mechanism by which this occurs. Regarding study design, among the 22 neuroimaging studies, only 10 were RCTs, and the others were non-RCTs; meanwhile, only seven had high methodological quality. Thus, to ensure that the acupuncture therapy programs applied in neuroimaging trials are effective for MCI treatment, it is recommended that RCTs be used to test this. More neuroimaging studies with randomized designs should be conducted to improve the quality of evidence, and the design of future research should be formulated based on the guidance of the Cochrane Handbook for Systematic Reviews of Interventions (Bian et al., 2011; Cumpston et al., 2019). Additionally, the sample size of each study included in our analysis was less than 80. Such small sample sizes may undermine the reliability and replicability of expected effect sizes in neuroscience (Moayedi et al., 2018). Thus, enlarging sample sizes by implementing a standardized program has the potential to improve the statistical power of the findings. Moreover, 16 of the studies were designed to investigate whether acupuncture induces cerebral responses.
However, whether these studies performed correlation analyses between cerebral responses and clinical outcomes remains unclear. In addition, six of the trials were designed to investigate whether acupuncture affects neural networks in the brain, and only two trials explored the relationship between clinical efficacy and FC strength. Thus, future studies should explore the relationship between clinical efficacy and neurological alteration to better understand the neural mechanisms underlying acupuncture-related improvement in cognitive performance. Nine of the included studies used Petersen's criteria, which are commonly used to achieve a clinical diagnosis of MCI. Nevertheless, owing to their distinct phenotyping and more precise diagnosis, the Jak/Bondi criteria (Bondi et al., 2014) are considered a better diagnostic tool than Petersen's criteria. Moreover, cognitive functions could be subdivided into memory, executive, and verbal domains; however, no study has used neuroimaging to investigate the mechanism of acupuncture on specific cognitive domains in MCI. Therefore, the central mechanism by which acupuncture affects an MCI subtype requires investigation using the Jak/Bondi criteria. In addition, in the included studies, there were more women than men with MCI. Based on current studies, sex is an important feature affecting pathological mechanisms and treatments for patients with MCI, yet no study investigated sex-disaggregated neuroimaging of the mechanism of acupuncture in MCI. Therefore, sex-disaggregated neuroimaging studies of acupuncture in patients with MCI are required (Overton et al., 2019). In terms of acupuncture details, 18 studies used manual acupuncture, yet only six trials mentioned details of the acupuncturists. Despite the common use of manual acupuncture, it is worth noting that stimulation is difficult to quantify across the various manipulations of different acupuncturists. To ensure repeatability and consistency of findings, researchers should formulate elaborate acupuncture procedures and conduct standardized training for acupuncturists at the start of each trial. Furthermore, based on traditional Chinese medicine theory, the deqi sensation plays a core role in the effect of acupuncture; however, it was reported in fewer than half of the studies, even though numerous neuroimaging trials have illustrated the cerebral response to deqi sensations. Thus, these items should be recorded in detail. Finally, differences in acupuncture rationale, details of needling, treatment regimen, practitioners, comparator interventions, and other details may have influenced the findings of the included studies. Therefore, there is an urgent need to standardize acupuncture procedures based on the STRICTA guidelines.

The main reported alterations of brain regions and networks by instant acupuncture across the studies reviewed. Brain regions: ACG, anterior cingulate and paracingulate gyri; AMYG, amygdala; dlPFC, dorsolateral prefrontal cortex; HIP, hippocampus; INS, insula; IPL, inferior parietal lobule (supramarginal and angular gyri); PCG, posterior cingulate gyrus; PCUN, precuneus; PHG, parahippocampal gyrus; THA, thalamus. Brain networks: blue network, default mode network; green network, salience network; yellow network, central executive network; red network, right frontoparietal network.

The main reported alterations of brain regions and networks by sustained acupuncture across the studies reviewed.
The top three comparison models used to compare groups were acupuncture vs. healthy volunteers, acupuncture vs. conventional medicine, and acupuncture vs. sham acupuncture. The acupuncture vs. healthy volunteers model was used to investigate the differences in cerebral activity between healthy individuals and those with MCI. The acupuncture vs. conventional medicine/sham acupuncture models were used to investigate differences in cerebral activity after acupuncture vs. medicine/placebo. However, these models are far from adequate for exploring the mechanism by which acupuncture affects MCI. For instance, according to the STRICTA criteria, the depth, response sought, acupuncture stimulation, practitioners, and other factors affecting acupuncture efficacy require further research. The most commonly applied cognitive assessments were the MoCA and MMSE. Both tools have been shown to be accurate for the detection of AD. Compared with the MMSE, the MoCA was more commonly used for the identification of MCI. The MoCA is widely used in Western countries and is recommended for evaluating MCI (Beath et al., 2018). However, a common problem with the use of the MoCA in developing countries is its applicability for evaluating illiterate and lower-educated older adults. The MoCA-basic (MoCA-B) is a revised version of the MoCA scale that is especially appropriate for older adults who are illiterate or have little education (Huang et al., 2018); therefore, the MoCA-B is recommended for evaluating MCI in developing countries. The AVLT (Crawford et al., 1989) is the most widely used approach for evaluating episodic memory. The AVLT (Xu et al., 2020) includes an assessment of immediate word recall, short-delayed recall, long-delayed recall, cued recall, and recognition. Episodic memory impairment is the most effective predictor of AD; thus, the AVLT is recommended for measuring episodic memory function in patients with amnestic MCI induced by AD. To assess cerebral responses to acupuncture in the treatment of MCI, the following neuroimaging methods were applied in the reviewed studies: fMRI, DTI, fNIRS, and MRS. fMRI was the most frequently applied method for investigating the cerebral responses to acupuncture for MCI (Glover, 2011); it indirectly evaluates brain alterations based on the presence of deoxyhemoglobin in venules, the blood oxygenation level-dependent (BOLD) effect. DTI is an advanced MRI technique used to provide qualitative and quantitative white matter microarchitecture information (Meoded and Huisman, 2019). fNIRS is based on optical absorption in the brain and is used to monitor functional brain activity changes (Pinti et al., 2020). MRS is an approach used to evaluate levels of specific neurotransmitters and investigate metabolite alterations in the brain (van Ewijk et al., 2015). According to previous studies (Gerardin et al., 2009), MCI is a multidimensional central nervous system disease that affects both brain structure and function. It is well known that these neuroimaging approaches have their own characteristics, so integrating multiple approaches allows for a more comprehensive assessment of the effects of acupuncture on MCI. Among the 22 trials considered, only one study integrated fMRI and DTI findings when investigating cerebral neuron alterations during acupuncture for MCI; the other studies focused either on structural changes in the brain or on functional architecture using a single neuroimaging approach.
Thus, the application and promotion of multimodal neuroimaging techniques (such as fMRI with DTI, fNIRS with fMRI, and fMRI with MRS) in future research is urgently needed to more comprehensively investigate mechanistic responses to acupuncture. Additionally, other neuroimaging techniques, such as positron emission tomography, electroencephalography, and magnetoencephalography, are approaches that will expand the current knowledge of MCI (Jackson and Snyder, 2008; López et al., 2014; Chandra et al., 2019), and additional neuroimaging techniques are needed to more extensively investigate the mechanistic responses to acupuncture. Notably, the common brain areas affected by acupuncture were the cingulate cortex, hippocampus, and prefrontal cortex. The cingulate cortex is important for cognitive networks, and accumulating evidence suggests that improving our understanding of the cingulate cortex may likewise enhance our understanding of the brain mechanisms underlying MCI (Cera et al., 2019; Jeong et al., 2021). Meanwhile, numerous studies (Pennanen et al., 2004; Xue et al., 2019; Valdés Hernández et al., 2020) have demonstrated that the hippocampus is a brain region impacted by MCI and is closely related to memory and orientation. In addition, the prefrontal cortex is the region responsible for the impaired recollection of working memory in MCI (Serra et al., 2022; You et al., 2022). However, no previous study has investigated the effect of acupuncture on MCI in a specific brain area. Based on this, this SR suggests that researchers should explore the association between the neurological effect of acupuncture for MCI and specific regions of the brain (such as the cingulate cortex, hippocampus, and prefrontal cortex). Moreover, neuroimaging studies assessing the neurological effects of acupuncture on MCI have illustrated that acupuncture may regulate brain networks. Important pathways associated with MCI improvement due to acupuncture are summarized as follows (Meng et al., 2022). Multiple neuroimaging trials have suggested that regulating the activities of triple-network models is a key mechanism by which acupuncture therapy affects brain functioning (Bai et al., 2009; Deng et al., 2016; Wang et al., 2019). In this SR, most triple-network areas were shown to be involved in the response to acupuncture in patients with MCI, implying that acupuncture may modulate MCI distribution networks. However, to date, none of the included neuroimaging studies has explored the effect of acupuncture on MCI through the triple-network models. Thus, future neuroimaging studies should investigate the specific effects of acupuncture on MCI in regulating these brain networks. It is acknowledged that the effects of acupuncture can be divided into constant and instant effects. In the reviewed studies, researchers focused not only on the instant effect but also on the constant effect of acupuncture. We found that the cingulate cortex was the main brain area affected by the acupuncture response, regardless of whether the effects were instant or constant; thus, the cingulate cortex may be a crucial structure in the functional response to acupuncture. Additionally, instant but not constant effects of acupuncture in MCI were observed in areas of the right frontoparietal network. Previous studies (Pupíková et al., 2021) have shown that the right frontoparietal network plays a vital role in visual working memory performance; this network may mediate the instant effects of acupuncture to improve memory.
However, these neuroimaging findings need to be validated. To the best of our knowledge, current SRs investigating the use of acupuncture for treating MCI have focused primarily on the efficacy and safety of the therapy; no SR has explored the mechanism by which acupuncture affects MCI. As the number of neuroimaging studies on acupuncture for MCI has increased, multiple imaging modalities and various analytical approaches have provided direct evidence of the central neural mechanism of acupuncture therapy in MCI (Wen et al., 2021). Thus, this SR aims to provide specific insights regarding the neurocentral mechanism by which MCI is alleviated via acupuncture by summarizing the current clinical neuroimaging findings. This study has several limitations. First, various imaging modalities and analytical approaches have been applied, making a complete quantitative meta-analysis difficult. Second, owing to the variability in acupuncture details (such as acupoints, frequency, treatment sessions, and needle type), no included article completely adhered to the STRICTA statement; this has the potential to increase heterogeneity and the risk of bias. Further, the limited number of high-quality studies published may have limited our findings. Considering the instability arising from the small sample sizes discussed above, the findings should be interpreted with caution.

Conclusion

Our systematic review summarizes neuroimaging data used to investigate the cerebral response to acupuncture in patients with MCI. The brain areas involved in acupuncture for MCI are mainly located in the DMN, CEN, and SN, especially the cingulate cortex, hippocampus, and prefrontal cortex. However, the included studies are in a preliminary exploration stage. Thus, multicenter, large-sample, strictly designed RCTs employing multimodal neuroimaging approaches are needed to confirm the current neuroimaging findings.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding authors.

Author contributions

ZY, ZW, and FL conceived this study. ZY and JZ developed and conducted the review under the supervision of LZ and wrote the first draft with XL, HY, MS, LZ, and ZW. ZY and XZ provided the analysis plan and performed the analysis. MX and ZC performed the study search, screening, and data extraction, and YL reviewed the work. FL provided input to the final draft. All authors contributed to the article and approved the submitted version.

Funding

This work was financially supported by the State Administration of Traditional Chinese Medicine and the National Natural Science Foundation of China (nos. 81590951, 82004486, 81722050, and 81973961).
Visual servoing of a laser beam through a mirror

In this paper, we present a new approach to improving vocal fold access to perform phonomicrosurgery. It is done by shooting the laser through a mirror to reach the hidden parts of the vocal fold. A geometrical study of the laser shooting path was conducted under vocal fold anatomical constraints, followed by devising a conceptual design of a laser-shooting system. Control laws were developed, tested in simulation, and validated experimentally on a test bench in monocular and stereoscopic configurations. Simulation and experimental results are provided to demonstrate the effectiveness of the developed approach.

Introduction

The demand to improve health quality has led to much research, including phonomicrosurgery, which involves delicate surgical operations on the vocal fold and requires a highly skilled surgeon [1], [2]. Vocal fold surgery requires precision and accuracy because the tissue being resected is thin, fragile, and viscous, and lesions may be smaller than 1 mm [3], [4]. The most common procedure to resect those lesions relies on laser surgery. Systems for laser phonosurgery, such as the Acupulse Duo by Lumenis [5], are based on a laser manipulator mounted onto an external microscope. The patient is placed in extreme neck extension so that a rigid straight laryngoscope can be placed in the patient's mouth and throat to allow a direct line of sight between the laser manipulator and the vocal fold in the larynx. However, certain portions of the vocal fold, such as the lateral and posterior sides, are inaccessible in such a placement: because the laser source is located outside the patient's body, the laser beam can only be moved over a small area inside the laryngoscope, which prevents the surgeon from operating on those portions. Another system, a flexible endoluminal robotic system, was developed during the European project µRALP [6]. That concept has a miniaturized laser manipulator, with micro cameras embedded at the endoscope tip, shooting the laser from within the larynx. Nevertheless, since the laser source was placed above the vocal fold and light travels only in straight lines, the surgeon could operate on the anterior vocal fold but hardly had access to the lateral sides and, to the best of our knowledge, no access to the posterior side. The µRALP project also proposed improving laser steering accuracy by automatically controlling the laser [7] to follow a surgeon-drawn path [8], rather than having the surgeon manually steer the laser beam through a poorly ergonomic joystick. This automatic control is done by visually servoing laser spots from one [7] or two [9], [10] endoscopic cameras.
Several related works are reported in the literature, notably a visually guided laser ablation catheter [11], which was designed to allow the operator to directly visualize target tissue for ablation and then deliver laser energy to perform point-to-point circumferential ablation. Velocity-independent visual path following for laser surgery was addressed in [12], where nonholonomic control of the unicycle model and high-frequency path following were explored to satisfy the constraints of laser-tissue interaction. Another example is reported in [13], where a robotic system for skin photo-rejuvenation, which uniformly delivers laser thermal stimulation to a subject's facial skin tissue, was investigated. Yet, as far as we could understand, none of those works automatically steered the laser along hidden paths. Visual servoing techniques use visual information extracted from images to design a control law [14], [15], [16]; they offer a systematic way to control a robot using the information provided by one or multiple cameras [17]. Standard stereo sensors used for visual servoing have a limited view, which consequently limits their application range. Hence, planar mirrors have been used to enlarge the field of view of classic pinhole cameras [18] and for high-speed gaze control [19], [20]. A planar mirror is a mirror with a planar reflective surface: an image of an object in front of it appears to be behind the mirror plane. This image is equivalent to one captured by a virtual camera located behind the mirror, whose position is symmetric to that of the real camera. In our case, the camera's reflection in the planar mirror is used to track virtual feature points in the hidden vocal fold scene. For instance, by using mirror reflections of a scene, stereo images can be captured [21]. Tracking is the problem of estimating the trajectory of an object in the image plane as it moves around a scene. A tracker assigns consistent labels to the tracked objects in different video frames and, depending on the tracking field, can provide object-centric information such as the area or shape of an object. Simple algorithms for video tracking rely on selecting, in the first frame, the region of interest associated with the moving objects. Tracking algorithms can be classified into three categories: point tracking [22], [23], kernel tracking [24], [25], and silhouette tracking [26]. Occlusion can significantly undermine the performance of object tracking algorithms. Occlusion often occurs when two or more objects come too close and seemingly merge or combine; image processing systems with object tracking often wrongly track the occluded objects [27], and after occlusion the system may wrongly identify the initially tracked objects as new objects [28]. If the geometry and placement of static objects in the real surroundings are known, the so-called phantom model is a common approach for handling the occlusion of virtual objects. A method for detecting dynamic occlusion in front of static backgrounds is described in [29]; this algorithm does not require any previous knowledge about the occluding objects but relies on a textured graphical model of planar elements in the scene. Some approaches solve the occlusion problem using depth information delivered by stereo matching [30]. In our approach to the occlusion problem, we use the triangulation method, where we pay attention to the pixels that are well reconstructed when an image is reproduced and ignore the ones that are not well reconstructed.
This paper focuses on a conceptual method of servoing the laser to hidden parts of the vocal fold. Inspired by this clinical need for improved access and by previous works on mirror reflections, we propose an analysis of the anatomical constraints of the vocal fold, devise a conceptual design, and formulate a controller, which was evaluated experimentally on a tabletop setup. The first contribution is a method to access parts of the vocal fold workspace that are not directly visible during phonomicrosurgery, for instance the posterior side of the vocal fold, by seeing through an auxiliary mirror to overcome the limited field of view of the micro-cameras of a flexible endoscopy system, which was missing in [7], and by shooting the surgical laser through the same auxiliary mirror to access those invisible parts of the vocal fold workspace. The second contribution is to derive the control equations for automatically steering the laser through the auxiliary mirror to the hidden parts of the vocal fold by updating the controls in [9] and [10]. Through modelling, simulation, and experimentation, the addition of the auxiliary mirror is shown to have no impact on the controller, which can thus be used as is. Nonetheless, we took the opportunity of this study to derive a variant of the control in [9] based on the geodesic error (cross-product) rather than on the linear error. Figure 1 shows a sample of a simulated image of the vocal fold. The remainder of this paper is organized as follows. Section 2 gives a detailed description of the conceptual system design to access parts of the vocal fold. Section 3 deals with modelling the proposed system in both the monocular and stereoscopic cases to establish a controller. Section 4 focuses on the simulation results of the controller for both the monocular and stereoscopic cases. Section 5 presents the experimental validations performed on a tabletop setup.

System configuration

As illustrated in figure 2, the objective is to devise a method that improves access to hidden parts of the vocal fold. In our approach, we propose a system with two cameras to give a stereo view of and visual feedback on the scene, a laser source to provide the surgical laser needed for tissue ablation, illumination guidance, an auxiliary mirror guide, and an auxiliary mirror manipulator through which the laser is steered to hidden parts of the vocal fold. In practice, human tissues will never contact the designed micro-robotic device, just the endoscope outer shell, which can be readily sterilized and made biocompatible. The auxiliary mirror would be inserted at the beginning of the surgical process and remain stationary until the end of surgery. All those parts must be miniaturized and enclosed in a flexible endoscope during fabrication, which is out of this paper's scope; details on packing all those (miniaturized) hardware components into an endoscope can be found in [31].

System model for accessing hidden parts of the vocal fold

From the system configuration above, an enlarged view of the distal arrangement of the flexible endoscope, focusing on how to access hidden features of the vocal fold, is shown in figure 3.
The system has a micro-robot, a tip/tilt actuating mirror, to steer the laser through an auxiliary mirror so as to reach the hidden parts. The two cameras observe the same hidden scene through the mirror reflection, providing the surgeon with a clear view of the scene in which to define a trajectory that is then followed automatically by the surgical laser over those hidden parts.

Enlarged anterior, lateral and posterior views for accessing hidden parts of the vocal fold

The former configuration of figure 2 has a limited field of view, even on the anterior parts of the vocal fold, because of technical constraints (miniaturization for endoluminal systems, direct line of sight for extracorporeal systems). Thus, the proposed method for accessing hidden parts on the anterior side of the vocal fold is shown in figure 4. Based on the orientation and position of the auxiliary mirror, the surgical laser can be steered to all parts of the anterior side of the vocal fold, the surface of the vocal fold that is visible from the larynx; for instance, the laser can reach parts outside of the direct field of view (in yellow). As shown in figure 4, the surgical laser is first shot at the auxiliary mirror and then reflected towards the tissues located above the vocal fold in the larynx, such as the ventricular fold. Figure 4 also demonstrates that rarely reached portions of the vocal fold, namely the surface visible from the trachea, can be accessed by opening the vocal lips using forceps, exposing the backside so that the auxiliary mirror can be oriented and pushed in.

Epipolar lines in the right and left images, respectively

Mirror reflection

From a technical point of view, these control equations also differ from the already published ones [7], [9], [10] by the use of an alternative formulation of the perspective projection model and by the servoing of geodesic image errors instead of linear image errors. Consider the reflection of a 3D point X into its mirrored image X̄ through the auxiliary mirror plane π = (n, d), where n is the unit vector normal to the mirror plane and d is the distance from the reference frame origin to the mirror plane. Using homogeneous coordinates for the points and following [32], one has X̄ = D X, where D is the reflection matrix

D = [ I - 2nnᵀ   2dn ]
    [ 0ᵀ         1   ]

Implementing these equations depends on the chosen reference frame; they can thus be expressed either in the world frame or in a camera frame.
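As a minimal numerical sketch of this reflection operator (in Python, with assumed mirror plane parameters; not the paper's implementation):

import numpy as np

def reflection_matrix(n, d):
    # 4x4 homogeneous reflection through the plane {X : n.X = d},
    # with n a unit normal and d the plane's distance from the origin.
    n = n / np.linalg.norm(n)
    D = np.eye(4)
    D[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)
    D[:3, 3] = 2.0 * d * n
    return D

n = np.array([0.0, 0.0, 1.0])            # assumed mirror normal
d = 0.1                                  # assumed mirror distance (m)
D = reflection_matrix(n, d)
X = np.array([0.02, -0.01, 0.25, 1.0])   # homogeneous 3D point
X_bar = D @ X                            # mirrored (virtual) point
assert np.allclose(D @ D, np.eye(4))     # reflecting twice is the identity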
Camera projection based on a cross-product concept

When a camera captures an image of a scene, depth information is lost as objects and points in 3D space are mapped onto a 2D image plane. For the work in this paper, depth information is crucial, since the scene must be reconstructed from the information provided by the 2D image to know the distance between the actuated mirror and the scene, without prior knowledge of where they are. The perspective (pinhole) projection of a 3D point X onto an image point x is therefore stated as

x ≡ K I3×4 T X     (6)

where K represents the calibrated intrinsic camera parameters, I3×4 represents the canonical perspective projection in the form of a 3 × 4 identity matrix, and T represents the Euclidean 3D transformation (rotation and translation) between the two coordinate systems of the camera and the world through a mirror. The ≡ sign represents the depth loss in the projection, up to some scale factor. In practice, the ≡ sign can be removed through a division operation, which introduces non-linearity. Alternatively, using the cross product, we can turn the projection equation into a linear constraint equation, since the light ray emitted from the camera centre aligns with the light ray coming from the 3D point. If we treat the ray emitted from the camera centre as a vector u and the ray coming from the 3D point as a vector v, the cross product of the two 3D vectors gives another vector whose magnitude, |u||v| sin θ, equals the area of the parallelogram formed by the two vectors and whose direction is perpendicular to the plane enclosed by u and v, given by the right-hand rule. However, if the two vectors point in the same direction, as in our case, the angle between them is zero and the magnitude of the cross product is zero, since sin(0) = 0; the resultant vector is the zero vector:

u × v = 0

Hence, for notation simplicity, the constraint (6) rewrites more simply as the linear form (9):

x × (K I3×4 T X) = 0     (9)

Laser spot kinematics

The time derivative of (9) is considered in order to servo the spot position from the current position to the desired one while the camera and mirror remain stationary. Since x is a unit vector (i.e., ∥x∥ = 1), using (9) yields the spot kinematics (10), with λ > 0 the unknown depth along the line of sight passing through x.

Scanning laser mirror as a virtual camera

The scanning mirror is considered as a virtual camera with a virtual image plane. The mathematical relationship between it and the 3D spot on the reflected vocal fold is therefore established in (13), obtained from (6) with the image point replaced by m and with K = I3×3, since when using the mirror as a camera, focal length, optical centre, and lens distortion are no longer a concern, so K is taken to be the identity. Here m is the virtual projected spot on the mirror's virtual image plane, and T is the transformation matrix relating the micro-mirror frame to the world frame through the auxiliary mirror, hence constant. Differentiating (13) gives the velocity at which the laser is servoed from one point to another in the image.
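The linear form of the projection constraint can be checked numerically; the sketch below (with assumed intrinsics and point location, and the camera frame taken as the world frame) uses the skew-symmetric matrix form of the cross product:

import numpy as np

def skew(v):
    # Skew-symmetric matrix [v]x, so that skew(v) @ w == np.cross(v, w)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

K = np.array([[800.0, 0.0, 320.0],       # assumed intrinsic parameters
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
I34 = np.hstack([np.eye(3), np.zeros((3, 1))])
T = np.eye(4)                            # camera frame taken as world frame
X = np.array([0.05, -0.02, 0.5, 1.0])    # homogeneous 3D point

ray = K @ I34 @ T @ X                    # projection, up to scale
x = ray / np.linalg.norm(ray)            # image point as a unit vector
# x and ray are parallel, so the constraint holds without dividing by depth:
assert np.allclose(skew(x) @ ray, 0.0, atol=1e-6)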
To be virtual or not to be?

The overall static model, for both the laser steering system through the auxiliary mirror and a camera observing the laser spot through the same mirror, is given by the constraints in (6) and (13). This forms an implicit model of the geometry at play, from which one can, depending on what is known beforehand and what is needed, try to obtain the unknown values explicitly from the known ones. The easiest case is to find the laser direction and its spot projection in the image from a known 3D location of the spot and the 3D locations of the camera, the steering mirror, and the auxiliary mirror D. However, in practice, one would rather "triangulate through the mirror" the 3D spot from the laser orientation and the spot image projection; more useful still, one would like to steer the laser (i.e., change its orientation, and thus the spot) from an image-based controller (i.e., a desired motion of the spot in the image). The question, then, is whether one should explicitly reconstruct the 3D spot or whether the controller can be derived without this explicit reconstruction.

A large part of the answer lies in the auxiliary mirror location D. If it is known, then triangulation can potentially be done, but this imposes strong practical constraints. However, looking closely at the above equations and figure 4, one can remark that there exists a virtual spot location X̄ = D X which lies behind the mirror. Replacing D X by X̄ in (6) and (13) yields a solution independent of the auxiliary mirror location.

Of course, this simplification is only valid when both the laser and the camera reflect through the same mirror, forcing the user to check that the laser spot is visible in the image. It also reduces the calibration burden of determining the relative location between the steering mirror and the camera, since the steering mirror frame can arbitrarily be chosen as the world frame of the virtual scene. As a consequence, from a modelling point of view, working with the virtual scene reduces the problem to its core: as will be seen in the following sections, it allows deriving a controller without making an explicit triangulation, that is, without necessarily having sensors for the 3D spot. Consequently, placing the problem in the virtual space allows for a simple solution, independent of prior knowledge of the auxiliary mirror location, which just needs to be held stable during control so that the desired visual feature and the current one are geometrically consistent.
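The mirror-independence argument can also be illustrated numerically: with made-up camera, spot, and mirror-plane values, the straight line of sight towards the virtual spot X̄ = D X crosses the mirror plane at a point equidistant from the real and virtual spots, i.e., it is exactly the folded reflected ray:

import numpy as np

n = np.array([0.0, 0.0, 1.0]); d = 0.1   # assumed mirror plane n.X = d
C = np.array([0.0, 0.05, 0.4])           # camera centre (made-up)
X = np.array([0.03, -0.02, 0.35])        # real laser spot (made-up)
X_bar = X - 2.0 * (n @ X - d) * n        # virtual spot behind the mirror

# Intersection M of the straight line C -> X_bar with the mirror plane
t = (d - n @ C) / (n @ (X_bar - C))
M = C + t * (X_bar - C)
assert 0.0 < t < 1.0                     # the line crosses the mirror
# M is equidistant from X and X_bar: the straight ray C -> X_bar is the
# folded ray C -> M -> X, so observing the virtual spot directly is
# equivalent to observing the real spot through the mirror.
assert np.isclose(np.linalg.norm(M - X), np.linalg.norm(M - X_bar))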
Geodesic error

The geodesic error differs from the linear error in that the error reduction is performed along the surface of the unit sphere rather than within the image plane, as in linear error minimization. The geodesic error can be written as the cross product x × x*, where x is the detected position of the laser spot in the image and x* is the desired one, chosen arbitrarily by the user in the visual image; it represents the shortest arc between the two points and defines a rotation vector orthogonal to the arc plane. Since x is a unit vector, its derivative takes the form ẋ = ω × x, where ω is a pseudo-control signal on the sphere; replacing this form in (12) shows how the virtual 3D laser spot velocity Ẋ, to be controlled, is constrained.

Single-camera case of observing hidden portions of the vocal fold

We can effectively model and control the laser path with one camera, the actuating mirror, and the auxiliary mirror, first establishing the angular velocity of the actuating mirror that controls the orientation of the laser beam. The general solution to (22) is (23), in which one term can be interpreted as the motion of the spot along its line of sight (thus a variation of depth) that is not observable by the camera; it can be due to the irregular shape of the surface hit by the laser or to a specific motion of that surface. Observing this allows solving for the control from (23); substituting (25) then results in (26), which simplifies into (27), where the control gains can be tuned without explicit reconstruction of the depths. Again, the controller is independent of the mirror's position because both the image and the laser go through it. The unobservable term can be taken as zero unless one wishes to estimate and compensate for the surface shape and ego-motion. The relationship between the laser spot velocity and the angular velocity of the actuated mirror is given in [9] by (28); making the angular velocity the subject of the formula yields (29).

Figure 5. The system model workflow

Trifocal geometry

Let us now investigate the effect of using two cameras, in addition to the actuating mirror and the auxiliary mirror.

Single-camera and auxiliary mirror simulation results

In the simulation setup, a unit vector gives the direction of the laser beam, together with the shortest distance from the centre of the micro-mirror to the plane of the auxiliary mirror, the reflected unit vector of the beam, the shortest distance between the auxiliary mirror and the vocal fold plane, and the distance along the reflected laser beam. This simulation aims to validate the laser monocular visual servoing through an auxiliary mirror, controlled by (30).

Stereo-view imaging system and auxiliary mirror simulation results in a realistic case

The second simulation implies a stereoscopic imaging system: a second camera is added to the first simulation setup, and the control in (34) is applied. The obtained results are shown in figure 9.

Figure 10. Photograph of the experimental setup

Single-camera and auxiliary mirror experimental results

Using the setup shown in figure 10, with one camera, the monocular case was validated experimentally, and the results obtained in figure 12 were similar to the simulated case of section 4.

Stereo-view imaging system and auxiliary mirror experimental results

Experimental validation of stereo-view imaging was performed with the setup shown in figure 10. Both trajectories were straight, but in the right image the path did not reach the desired target; this could be due to differences in laser spot size, whereby the spots' centres of gravity moved slightly during control.
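As an illustration of the geodesic servoing idea underlying these control laws, the sketch below (assumed gain, simple Euler integration; it omits the mapping from the pseudo-control ω to mirror velocities in (28)-(30)) drives a unit image vector x to a desired x* along the great circle between them:

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def geodesic_control(x, x_star, k=1.0):
    # Pseudo-control omega on the sphere: with x_dot = omega x x, the spot
    # direction x rotates toward x_star along the connecting great circle.
    return k * np.cross(x, x_star)

x = normalize(np.array([0.1, 0.2, 1.0]))        # current spot direction
x_star = normalize(np.array([-0.2, 0.1, 1.0]))  # desired spot direction

dt = 0.01
for _ in range(2000):                            # simple Euler integration
    omega = geodesic_control(x, x_star, k=2.0)
    x = normalize(x + dt * np.cross(omega, x))

assert np.allclose(x, x_star, atol=1e-3)         # converges to the target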
Conclusion

The study shows that vocal fold accessibility is improved by seeing through a mirror and servoing the surgical laser to reach the hidden portions of the vocal fold, and that the mirror does not affect the controller. The derived control laws work on both 2D and 3D paths without any prior knowledge of the scene. They were successfully validated both in simulation and experimentally; in all cases, the laser steering control law operated accurately. The experimental results further demonstrated that the proposed control laws were accurate and fully decoupled, with exponential decay of the image errors. The next stage of this work will involve adapting the controller to operate under different conditions, for instance in the presence of perturbations, and experimenting on a vocal fold mock-up.

Figure 1. Simulated image of vocal fold anatomy

Figure 6. Model schematic. Three cameras with optical centres observe a 3D point through a mirror as a reflected point, which is projected as 2D points in the respective image planes. The fundamental matrices and epipolar lines relate the cameras and the actuated mirrors, and there are mathematical relations between the epipolar lines and the 2D point.

Figure 7. Simulated set-up. The point corresponds to the laser spot position on the vocal fold; a unit vector gives the direction of the laser beam, together with the shortest distance from the centre of the micro-mirror to the plane of the auxiliary mirror, the reflected unit vector of the beam, the shortest distance between the auxiliary mirror and the vocal fold plane, and the distance along the reflected laser beam. This simulation aims to validate the laser monocular visual servoing through an auxiliary mirror, controlled by (30).

Figure 8. The orange asterisk is the laser spot's initial position, the red plus is the desired spot location, and the magenta cross marks the geometric coherence. The trajectory marked with a blue line is the path followed by the steered laser in the image from the initial position to the desired place at the hidden parts. Figures 8 and 9 show that the laser beam's trajectory from the initial position to the desired position was straight; the error versus time plot in figure 9 converged to zero and, similarly, the mirror velocity had an exponential decay.
Figure 13(c). Error vs. time. Figure 13(d). Mirror velocity. In figures 13(c) and 13(d), showing error versus time for each image, both error components had exponential decay. Figures 14 and 15 show live video screenshots of laser servoing for the conducted experiments.

Table 1. List of symbols used in the paper (symbol: remarks).
- X: velocity of the 3D point.
- The unknown depth along the line of sight passing through x.
- The virtual projected spot on the mirror virtual image plane.
- T: transformation matrix relating the micro-mirror frame with the world frame through the auxiliary mirror.
- The velocity of the virtual projected spot on the mirror virtual
5,630.8
2022-03-24T00:00:00.000
[ "Physics" ]
Using Facebook for Qualitative Research: A Brief Primer

As Facebook continues to grow its number of active users, the potential to harness data generated by Facebook users also grows. As much of Facebook users' activity consists of creating (and commenting on) written posts, the potential use of text data for research is enormous. However, conducting a content analysis of text from Facebook users requires adaptation of research methods used for more traditional sources of qualitative data. Furthermore, best practice guidelines to assist researchers interested in conducting qualitative studies using data derived from Facebook are lacking. The purpose of this primer was to identify opportunities, as well as potential pitfalls, of conducting qualitative research with Facebook users and their activity on Facebook and provide potential options to address each of these issues. We begin with an overview of information obtained from a literature review of 23 studies published between 2011 and 2018 and our own research experience to summarize current approaches to conducting qualitative health research using data obtained from Facebook users. We then identify potential strategies to address limitations related to current approaches and propose 5 key considerations for the collection, organization, and analysis of text data from Facebook. Finally, we consider ethical issues around the use and protection of Facebook data obtained from research participants. In this primer, we have identified several key considerations that should aid health researchers in the planning and execution of qualitative studies involving content analysis of text data from Facebook users.

Introduction

Social media platforms provide an information-rich opportunity to reach diverse populations that would otherwise be difficult to identify. Facebook, in particular, is the most dominant player in the social media landscape. Over the past decade, the number of active Facebook users has grown from 145 million in 2008 to more than 1.2 billion in 2018 [1,2]. As of 2018, approximately two-thirds of US adults use Facebook [3]. In addition, about 75% of Facebook users visit the site at least once per day and spend upward of 50 min daily on Facebook [3,4], where they get entertainment, read news, communicate with friends and family, and exchange social support [5]. As a significant portion of individuals' social lives is conducted (and hence displayed and recorded) on Facebook, it is a potentially rich source of qualitative data for researchers [6]. Numerous studies ranging in topic from psychopathology [7,8] and chronic physical illnesses (eg, cancer or diabetes) [9,10] to substance use [11,12] have incorporated data from Facebook, recruited from and included Facebook users as study participants [13,14], or conducted behavioral interventions on the Facebook platform [12]. Despite the rising number of studies on Facebook, relatively little is understood about how qualitative data from Facebook users can best be captured and used for health research purposes. Individual and group interviewing, focus groups, individual and group ethnographic interviewing, and observational data are among the most common methods traditionally used to collect qualitative data [15-17].
These sources of qualitative data naturally allow researchers to unpack deep meaning within a select group of people [18], probe for underlying values, beliefs, and assumptions [19], and obtain more nuanced or novel information than that derived from other methods such as close-ended survey questions [19]. However, because of the nature of Facebook data, qualitative research methods may require additional adaptation to best capture the visual, virtual, and textual interactions on social media with accuracy [20]. In this primer, we explore the opportunities, as well as potential pitfalls, of conducting qualitative research with Facebook users and their activity on Facebook. Our focus here is purposefully narrow. We limit our approach to content analysis and user-generated text related to health topics on Facebook. We begin with an overview of the forms of qualitative data and data analysis best suited to the Facebook environment, focusing on text data generated by Facebook users. Then, we consider gaps in current qualitative methods based on the existing published literature. Finally, we present 5 key issues that must be addressed in a successive manner when conducting qualitative content analyses of health-related topics involving Facebook data, and we offer potential options to address each of these issues. Overview of Using Qualitative Data on Facebook Data obtained from Facebook users offer substantial opportunities for qualitative researchers. As described in Table 1, user-generated videos, images, reactions, and text are a rich source of qualitative data on Facebook. For the purpose of this paper, we focused on user-generated textual data. There are 3 primary types of user-generated textual data on Facebook: 1. Posts: A post is written by a Facebook user, and that post then appears on another Facebook user's timeline. A status update is a common type of post in the Facebook environment, which will appear in the news feed of a user's Facebook friends. A news feed is a list of updates from a user's Facebook friends that is intended to provide the user a quick update on what their Facebook friends have been doing on Facebook. 2. Comments: A comment is a response to a Facebook post or a response to another comment itself. 3. Messages: A message is privately sent from one user to another Facebook user, typically a Facebook friend. A message does not appear on a user's Facebook timeline or in their news feed. All 3 of these types of user-generated text on Facebook may be accompanied by image(s), video(s), and/or emoticon(s). An emoticon, or emoji, is a graphic facial expression that can appear embedded in text communication on Facebook and is primarily used to provide emotional information that would otherwise only be found in traditional face-to-face interactions (eg, tone of voice) [21]. Social media qualitative research methods can be described in 3 ways: active analysis, passive analysis, and research self-identification [22]. Active analysis on Facebook involves the participation of research members in communication with Facebook participants. For instance, Cheung et al [11] created a study Facebook group and invited participants to join. The research team member serving as the Facebook group moderator actively participated in generating content (ie, posts and comments) that aimed to stimulate engagement with study participants. Passive analysis on Facebook involves the study of information patterns observed on Facebook or the interactions between users in existing Facebook groups. 
For example, Kent et al [13] investigated public attitudes about obesity and cancer by performing a keyword search on Facebook to identify relational themes, grammatical elements, and valence of the sentiments contained in Facebook posts and associated comments. Finally, research self-identification is when researchers use Facebook as a research recruitment tool to gather participants for Web-based interviews, focus groups, or surveys. For example, Pedersen et al [14] designed 3 different sets of study advertisements that appeared on approximately 3.6 million targeted Facebook users' news feeds. By clicking on the study advertisements, Facebook users were redirected to a study survey and were given the option to participate in the study. To determine current approaches to the use of qualitative data on Facebook, we performed a literature search in April 2018 for papers that used qualitative methods to analyze user-generated Facebook text related to health topics (ie, any acute or chronic disease including substance abuse disorders).

Table 1. Facebook features and the data they display.
- Filters: user-generated and user-directed posts, comments, reactions, shares, photos, videos, tagged posts and photos, and when the participant added someone as a friend; displays public data.
- Timeline: user-generated and user-directed posts, comments, reactions, shares, photos, videos, tagged posts and photos, pages liked, and when the participant added someone as a friend; displays public and private data.

Gaps in Current Qualitative Approaches

Our review identified a number of limitations within the existing literature. First, most studies did not provide detailed descriptions of their methods [39,40]. In particular, description of data extraction methods was frequently missing [7,11,13,23,25,27-31,33,34,37]. Furthermore, there are few existing resources that offer guidance for researchers seeking to use Facebook for health-related topics. The lack of methodological descriptions and advice in the literature poses a barrier to researchers trying to replicate study results or apply the same methods in pursuit of novel research questions in the health domain. Second, none of the studies analyzed bidirectional interactions among participants and other Facebook users. Bidirectional interactions are social exchanges of user-generated and received text between Facebook users. Received text is text directed to a Facebook user, such as a friend's comment to that Facebook user's post (hereafter, user-directed text). These interactions are commonly displayed as a chain of communication on a user's timeline or news feed that exemplifies how individuals use and interact with others on Facebook. By collecting only user-generated text or user-directed text on Facebook, studies are only capturing one side of a Facebook user's interactions with other Facebook members. However, collecting bidirectional interactions provides more context for social exchanges on Facebook, which can assist in more meaningful interpretations of the data. Therefore, it is important to establish methods for researchers seeking to capture this type of information. Third, most studies that included either manual or machine-coding techniques lacked familiarization methods before coding [8,11-13,24-27,29-38]. Familiarization methods include researchers immersing themselves in the data before coding by actively reading the data to understand the depth and context of the content [41].
To conduct rigorous and trustworthy thematic analyses, it is vital to read through the entire dataset at least once before coding [41,42]. Owing to these limitations, in this paper, we identify and discuss 5 key issues in the process of conducting qualitative research using data obtained from Facebook. These issues are summarized in Textbox 1 and described in detail below. In addition, we use our own experience from a recent research project to illustrate 1 potential approach to handle each of these issues. Our experience derives from a study in which we used Facebook advertisements to recruit a sample of military veterans [43]. Study participants completed a Web-based survey about their psychiatric symptoms and social support, and a subgroup was invited to participate in an additional in-person study visit in which they provided access to some of their Facebook data. For qualitative analysis in this project using Facebook data, we used content analysis, which, for our study, was a more directed approach that allowed us to begin by identifying key concepts and variables as initial coding categories. Textbox 1. Key considerations for future studies using qualitative approaches for social media data. Step 1. What kind of Facebook user will be included in the study? • The method of recruitment of Facebook users will affect participants' characteristics and generalizability of results. • The degree of activity on Facebook by a study subject will impact the amount of data available for analysis. Step 2. What Facebook data will be analyzed? • Facebook contains a combination of public and private information about individual users. • Filters can be used to select desired variables and data about Facebook users. • It is helpful to predetermine a period of Facebook use to be included in data analysis. Step 3. How will the Facebook data be obtained? • Options include partnering with Facebook, collecting publicly available data, creating a research study-specific Facebook page or group, or downloading participants' Facebook data. • Each option has pros and cons related to the complexity of the process and comprehensiveness of data obtained. Step 4. How will the Facebook data be analyzed? • Depending on the size of the dataset, researchers may prefer a manual versus more automated approach to coding and data analysis. • Qualitative data analysis and other software can assist with the data analysis. • Consider the model of qualitative analysis used in the study. Step 5. How will participant's Facebook data be protected? • The Connected and Open Research Ethics is a Web-based resource [44] to help navigate ethical issues around social media research. • Common ethical issues include the following: who will informed consent be obtained from, how will data of research subjects be kept secure, and how will the privacy of research subjects be maintained. Step 1: What Kind of Facebook User Will Be Included in the Study? In deciding what kind of Facebook user will be included in the study, it is important to consider how participants will be recruited. For studies that involve delivery of an intervention through Facebook (ie, active analysis), the platform offers 2 main features that researchers can use to recruit and maintain participants: Facebook pages and Facebook groups. Facebook pages are public, whereas Facebook groups can be public, or private or secret. In public Facebook groups, only invited members can see content. 
However, in secret Facebook groups, only invited members can see content, and the group is hidden-it cannot be searched for, or found, using the Facebook search engine [45]. Facebook pages and all Facebook groups can be created to recruit and conduct an intervention. In addition, researchers can access existing public Facebook pages and groups comprising current members to collect data. However, these pages and groups cannot be tailored to a researcher's interventions. Furthermore, Facebook advertisements can be used to target a specific population by leveraging demographic profiles available on Facebook. Furthermore, Facebook advertisements can use additional information (eg, interests) added by a user to their profile. Some studies recruit both current Facebook users and other participants who are willing to open a Facebook account for the study [45]. In addition, it is important to consider the degree to which participants are regularly and actively using Facebook. Regular users will tend to have a richer record of their Facebook activity. That said, not all users of Facebook actively engage in behaviors that create a record of interaction on Facebook (eg, posting and commenting) [46]. Facebook users can be categorized into 2 types of users based on the frequency of engaging in these behaviors: active users and passive users. Active users contribute to Facebook interactions by posting and commenting frequently. Passive users tend to observe Facebook interactions and not actively contribute. For active analysis studies, both active and passive users can be considered for recruitment. Interventionists may consider designing posts to initiate interactions among participants, especially from passive users. In addition, studies intending to observe Facebook user's interactions with other users (ie, passive analysis) can use 2 public group features available on Facebook: Facebook pages and public Facebook groups. As these pages and groups are public, researchers are able to openly view all Facebook data without restrictions. As a result, researchers can search for an existing public page or group related to a health topic of interest and then collect the data presented within the page or group. Data found in public Facebook pages and groups can be from both active and passive users. Typically, there is a direct relationship between the number of members part of a Facebook page or group and the amount of data available. One drawback about using public Facebook pages and groups is that the pages and groups about a health topic of interest must already exist. Alternatively, passive analysis studies can recruit Facebook participants individually through Facebook advertisements. An advantage of this approach is the ability to continue an advertising campaign until enough participants and data are collected, whereas a disadvantage of it is the requirement for a nontrivial advertising budget. Paid advertisements on Facebook are also useful for studies seeking to recruit participants from Facebook to participate in interviews, focus groups, surveys, or other research activities (ie, research self-identification). Facebook advertisements can be used to target particular users using the methods described above. Facebook users can be directed to a study website when they click on the advertisement, which then can further describe the study and include Web-based informed consent. 
Furthermore, Facebook advertisements can record user actions such as advertisement clicks (ie, number of times the advertisement was clicked on) and comments on the post containing the advertisement. Finally, as with other Web-based studies in which in-person contact with a study participant does not occur, exclusion criteria should be carefully considered to reduce misrepresentation of participants and potentially counterfeit responders (ie, responders pretending to fit a certain demographic for study compensation). An Applied Example We used research self-identification methods to recruit participants through Facebook advertisements [43]. Advertisements contained a call to action to participate in a health research study. Study advertisements broadly targeted Facebook users in the United States of any age or gender who had interests relevant to military veterans. Advertisements were hosted by Facebook pages affiliated with our university. This allowed us to draw on the established base of Facebook users interested in and following our university on Facebook. To reduce misrepresentation of participants, we excluded individuals who completed the survey in less than 5 min, had a duplicate or multiple survey responses, or incorrectly answered military-related insider knowledge questions [14,47]. To help ensure study subjects had enough Facebook data to analyze, we chose to collect qualitative data from participants who reported using Facebook at least once a day. Step 2: What Facebook Data Will Be Analyzed? In deciding what Facebook data will be analyzed, it is critical to determine the setting in which the data will be collected. For active or passive analysis studies collecting data from public, private, or secret Facebook groups or pages, it is important to consider downloading individual Facebook user's profile information in addition to the information exchanged in groups or pages. A Facebook user's profile information shows how the user interacts in multiple Facebook settings compared with a singular setting (ie, a Facebook page or group). Therefore, collecting and analyzing data from a user's Facebook profile provides more context to how they interact, whom they interact with, and in which environments (ie, public or private) they are more active. Understanding how research participants interact on Facebook can be used to supplement the context of the responses and inform future intervention processes. In addition, given how expansive the amount of Facebook data can be, even just from a single Facebook user, it is vital to determine the scope of data that will be analyzed. As described in Table 1, Facebook features, such as Filters, allow data to be viewed in already separated Facebook variables such as user-generated data (ie, notes, posts tagged in, and timeline review). These filters can be manipulated to display specific data of interest. Although filters can help find user-generated and user-directed data, it is important to also capture these same data in the timeline. The timeline shows how Facebook users are interacting, which helps provide context when analyzing the data. Furthermore, it is also important to determine how long it takes to collect the Facebook data. Data collection time is dependent on how active the Facebook user is and, for pages or groups, how many users are part of a page or group. These factors can impact additional study procedures (eg, interviews) at the time of the Facebook data collection period. 
Our Experience and Applied Example In our study, we sought to capture all our veteran participants' written social interactions on Facebook. We did this by collecting user-generated and user-directed comments, status updates, and posts from the activity log and the timeline. The timeline was also included as it contains data from both public and private settings on Facebook. By collecting both user-generated and user-directed data, we were able to capture bidirectional interactions between study participants and other Facebook users within their social network. In addition, data were collected over a 4-week period around the time of the participants' survey completion. We decided to collect participant's Facebook data at the time of the in-person interview so that a research member could be physically present to assist a participant in the process of downloading his or her Facebook activity. After informed consent, the initial 10 min of the session were used to collect the participant's Facebook activity information, which was sufficient to collect users' Facebook data, ranging up to approximately 70 user-generated posts. Step 3: How Will the Facebook Data Be Obtained? Option 1: Partner With Facebook Facebook data can be obtained through a research partnership with Facebook. Kramer et al [48], supported by Facebook resources, collected posts and manipulated news feeds of 689,003 Facebook users over a 20-year period. Burke and Kraut [49], led by a Facebook researcher, collected user-directed comments, private messages, timeline posts, likes, and pokes, as well as user information such as number of profiles viewed, news feed stories clicked on, and photos viewed from 10,557 Facebook users. Some advantages of partnering with Facebook are that studies can have access to massive amounts of data including Facebook variables that are not shared with users or third parties [50]. In addition, one can leverage Facebook resources (ie, data processing systems) to track how much people are discussing specific topics of interest and the subsequent opinions of those topics expressed in everyday conversation. Such Facebook resources efficiently gather large-scale data in which data are retrieved almost instantaneously. However, a challenge of partnering with Facebook is meeting their collaborative requirements, such as finding a Facebook sponsor to lead the research effort, and the faculty principal investigator's institution paying up to 40% of overhead costs for a hosted researcher [51]. Therefore, this process can be resource intensive in terms of both time and financial investment by the partner researcher. Option 2: Publicly Available Data Active and passive analysis studies can obtain Facebook data through public Facebook pages and groups. There are several studies using extraction methods such as manual extraction (eg, copying and pasting data into a spreadsheet) or contracting through external models and third-party services for manual extraction. Abramson et al [9] copied and pasted each public timeline post from the Breast Cancer Organization page into a spreadsheet with the corresponding responses. Eghdam et al [8] used Netvizz version 1.25, a data collection software created by Facebook, to collect anonymous data from public Facebook groups. Kent et al [13] used a Web-crawling service that mined publicly available posts and comments from Facebook using keywords related to obesity. 
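The two selection ideas used in these examples, restricting posts to a predetermined window around survey completion and screening text for topic keywords, can be scripted in a few lines. The sketch below is purely illustrative: the record fields, date format, window length, and keyword list are assumptions made here and are not tied to any particular export tool or crawling service.

```python
from datetime import datetime, timedelta

# Hypothetical post records; a real study would load these from its own export.
posts = [
    {"author": "P01", "created": "2018-03-02 14:05", "text": "Started a new diet plan today"},
    {"author": "P01", "created": "2018-05-20 09:12", "text": "Great hike this weekend"},
]

KEYWORDS = {"diet", "obesity", "weight"}   # assumed keyword list for the health topic
survey_date = datetime(2018, 3, 15)        # assumed survey completion date
window = timedelta(weeks=4)                # predetermined study window around that date

def in_window(post):
    created = datetime.strptime(post["created"], "%Y-%m-%d %H:%M")
    return abs(created - survey_date) <= window

def keyword_hits(post):
    words = {w.strip(".,!?").lower() for w in post["text"].split()}
    return sorted(words & KEYWORDS)

for post in filter(in_window, posts):
    hits = keyword_hits(post)
    if hits:
        print(post["author"], post["created"], "matches:", hits)
```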
Furthermore, Kosinski et al [50] provide Pennebaker's Linguistic Inquiry and Word Count (LIWC), and the Apply Magic Sauce, a website developed by the University of Cambridge psychometrics center, [52] as an additional resource for data collection. An advantage of using public data is that there are a lot of data for a range of health topics, and informed consent by the participant is not required. However, the challenge of using data shared publicly could be biased because of social desirability influences and other censoring by a given participant. Studies suggest that both privacy concerns and the user's audience can impact self-disclosure on Facebook, especially when it comes to sharing health information [53][54][55][56][57]. Eysenbach and Till [22] recommend working with group moderators to develop an adequate plan for informing group members of the use of their data. Although they identify obtaining permission from the group moderator as insufficient on its own, group moderators have greater knowledge of their group members and may be able to provide important information on how to best obtain consent for use of data. Option 3: Create and Monitor a Facebook Page or Group In addition, for active analysis studies, Facebook data can be obtained by creating and monitoring a Facebook page or group. Beullens and Schepers [12] collected 2575 pictures and 92 status updates by creating a study Facebook profile and sending friend requests, including a study overview message, to 166 college students. Tower et al [38] collected post information by creating a Facebook group and inviting 198 nursing students to join the group through email. The invitation advised the group to post information related to their study. A faculty member initiated discussion in the Facebook group. The text and associated attributes were downloaded onto a spreadsheet. An advantage of creating and monitoring a Facebook page or group is that it allows a research team to customize a group specific to a particular health topic. Subsequently, targeted individuals can be invited to this page or group and be presented a set of specific questions/instructions to stimulate participant engagement. In addition, only group settings can be made private, which can create a more secure environment for participants to disclose personal information. However, a disadvantage of private groups is that there is a permanent setting that organizes user-directed posts such that the most recent interactions appear at the top of the group feed versus a chronological ordering of the post [45]. As a result, posts containing important content may be pushed to the bottom of the group feed because of frequent posting in the groups, thereby making it difficult for participants to find information posted by the groups interventionists [45]. In addition, although Facebook groups can be private or secret, they are still not the Facebook user's natural environment -that is, the social network comprising Facebook friends the user normally interacts with. Therefore, Facebook users recruited into an intervention conducted in a private or secret group may behave differently in groups created by researchers, especially when they know they are being observed by researchers [58]. 
Option 4: Private Messages Furthermore, for active analysis studies, Facebook data can be obtained by asking participants to copy and paste user-generated Facebook text (eg, text from timeline posts or private messages) and provide it to a research team member through a Web-based portal or through private messaging to a Facebook account created by the research team. Bazarova et al [37] collected 474 most recent status updates, timeline posts, and private messages by inviting 79 participants to copy and paste their data into a Web survey. An advantage of having users provide their Facebook data through the private messaging feature or a Web-based portal is that it creates a secure environment in which participants' Facebook data can be kept confidential from other Facebook users or study participants. However, one disadvantage of this particular method is that researchers would neither be able to observe passive interactions among a particular group of Facebook users nor observe interactions as a result of a proposed set of questions/instructions regarding health-related topics. Our Experience and Applied Example A fourth option, applicable to active and passive analyses and some research self-identification studies, is directly downloading participants' Facebook data during an in-person study visit. We chose this option because it was the only one that allowed us to download individual's Facebook profiles without establishing a partnership with Facebook. For instance, in our own study, we obtained Facebook data by downloading participants' Facebook activity information. During the in-person interview, users' Facebook activity log and timeline data were collected separately by study staff using the following steps: (1) ask participants to login to their Facebook account, (2) follow the steps described in Figure 1, (3) scroll backward on the selected page chronologically until 1-month period before the date of the survey; (4) save as an HTML file on OHSU Box (a cloud-based data storage service that complies with local security and regulatory policies), (5) open saved file with Safari to view extracted data, (6) log participants out, and (7) ensure that no username or password information was retained by making sure user login information was not saved by the browser. We noted some advantages of downloading participants' Facebook profile information, such as a participants' Facebook profile can provide insight to how individuals interact, who they interact with, and what environment (ie, public and private) they are more active in. This helped us understand how study participants interacted on Facebook. However, a challenge of downloading participants' Facebook profile information is that it requires participant consent, and it can be more difficult to collect massive quantities of private data because of the length of the collection period. Figure 1. Steps to access the timeline (eg, blue square) and activity log (eg, red squares) on Facebook. Step 4: How Will the Facebook Data Be Analyzed? Qualitative Facebook data are commonly analyzed using methods such as content analysis to assess a wide range of qualitative data or else constant comparison to identify themes [6]. In deciding how qualitative Facebook data will be analyzed, it is important to consider the quantity of the data as well as the qualitative approach being used. For active and passive analysis studies using larger datasets, it is preferable to analyze data using software programs. 
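Before such software can be applied to archives collected with the download procedure described in the applied example above, the saved HTML usually has to be reduced to plain text. The following sketch is a hypothetical illustration using BeautifulSoup: the file name is a placeholder, and the assumption that relevant text sits in leaf <div> elements must be checked against the actual export, since Facebook's markup changes over time.

```python
import csv
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Hypothetical file name; the archive saved during the study visit will differ.
with open("participant_01_timeline.html", encoding="utf-8") as fh:
    soup = BeautifulSoup(fh, "html.parser")

rows = []
for block in soup.find_all("div"):
    # Keep only "leaf" divs (no nested div) to avoid duplicating nested text;
    # the right selector depends on the actual export and should be inspected first.
    if block.find("div") is None:
        text = block.get_text(" ", strip=True)
        if text:
            rows.append({"text": text})

with open("participant_01_posts.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.DictWriter(out, fieldnames=["text"])
    writer.writeheader()
    writer.writerows(rows)

print(f"Extracted {len(rows)} text blocks for coding in a spreadsheet.")
```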
AlQarni et al [34] analyzed 1551 posts using predetermined themes, and further inductive codes were used to independently extract and analyze the Facebook posts to determine major content themes. Thematic analysis was performed using NVivo, a qualitative software used to code, store, and potentially exchange data with SPSS for further statistical analysis. Kramer et al [48] used LIWC (2007) software to analyze 689,003 posts to determine if the valence of the posts was positive or negative. Keller et al [32] used ATLAS.ti, a qualitative software used to code data, to code 1614 comments for major and minor themes. It is important to note that ATLAS.ti can be used to code HTML files of individual's Facebook downloads; however, this has not been done in social media qualitative research studies [59]. Instead, ATLAS.ti has been traditionally used to code Microsoft Word documents of transcribed interviews. Our Experience and Applied Example As our study contained a relatively small dataset (23 subjects with 201 posts and 424 comments), we opted to analyze data manually. User-generated text from status updates, posts, and comments and user-directed text from posts and comments from the HTML files were copied and pasted into an Excel spreadsheet and analyzed for markers of social support. Our codebook contained 3 different types of social support that have previously been described in the literature (emotional, instrumental, and informational) and a fourth category for other evidence of social support (eg, "Wow, that's a great joke"). In addition, we coded the valence of user-directed social support as positive, negative, or neutral. Before coding, each coder read over the entire dataset to familiarize themselves with the content of the data. The familiarization process helped lead to more meaningful interpretations of the data because we were able to easily provide context to each piece of text we coded. As is common in qualitative research, after an initial training period, 2 coders independently coded participants' data. Furthermore, each coder created a memo describing their experiences during the coding process. This highlighted the challenges and successes of the coding process, which guided conversations around any discrepancies. In addition, the memo process brought awareness to potential challenges of coding text on social media, which can be addressed early on for future social media qualitative work. Furthermore, the memo process also identified general themes that were prevalent in the data. Step 5: How Will Participants' Facebook Data Be Protected? It is important to highlight that Facebook research raises several ethical questions. Owing to the nature of studying Facebook communities, researchers can potentially violate the privacy rights of Facebook users. Facebook users that are members of public Facebook pages or groups do not expect to become research subjects nor do the Facebook friends of study participants (ie, nonparticipants). The boundary between private and public Facebook data may sometimes be unclear. The majority of Facebook users are aware that their data may not be private [22], especially in a public setting on Facebook. However, the literature regarding social media users' comprehension of privacy literacy is limited [60]. As a result, researchers should ensure that informed consent language is clear regarding how a participant's Web-based data will be used. 
Pilot testing of informed consent language may help ensure that the information presented is easily comprehensible for a broad range of populations. Regardless, it is important to maintain the safety and anonymity of individuals' Facebook information whether or not they are a research participant. In addition, it is important to note the potential ethical dilemmas associated with establishing a research partnership with Facebook. Facebook is a powerful company with a rich source of data; however, Facebook has received public scrutiny because of the misuse of its users' Facebook data. Therefore, the responsibility is placed on research teams to ensure that Facebook users' data are obtained ethically and protected. Arigo et al [61] recommend including research team members who are well versed in Facebook's corporate terms and conditions and privacy policies. It is strongly encouraged that research teams be knowledgeable about the peculiarities of Facebook before establishing a partnership, to assist in the development of research methodological procedures regarding data collection and privacy. As each institutional review board (IRB) will vary in its familiarity with social media research, we recommend closely consulting with professional and independent organizations (eg, Association of Internet Researchers Ethics Working Group Guidelines, The National Committee for Research Ethics, and The Humanities Research Ethics Guidelines for Internet Research) as well as Web-based resources such as the Connected and Open Research Ethics (CORE). CORE can provide assistance in how to address potential ethical issues for researchers and IRBs interested in social media research. Common ethical questions that have been raised on CORE include the following: (1) Who will informed consent be obtained from, and is informed consent required for nonparticipants on a research subject's account? (2) How will data from research subjects be kept secure on the social media platform? and (3) How will the privacy of research subjects be maintained? CORE has created a collaborative platform where researchers can exchange expertise and questions pertaining to social media research. Features such as the Resource Library, Q&A Forum, and the CORE Network provide scientists access to IRB-approved research protocols and consent forms and allow researchers to collaboratively discuss ethical design or potential social media strategies [44].

Our Experience and Applied Example

In our study, participants interested in an optional, in-person interview provided contact information, which study staff used to arrange the study visit. For individuals who were unable to come in person, we conducted interviews by phone but did not download their Facebook data. Overall, 2 separate informed consents were obtained: once online for those completing the survey and again in person for those sharing their Facebook data. During the informed consent process for those sharing their Facebook data, participants were informed that their timeline and activity log would be collected to observe their online social interactions and Facebook usage. In addition, participants were informed that their Facebook data would be labeled with a unique code to protect their identity. All study procedures were approved by the IRB of Oregon Health & Science University.

Limitations

There are several limitations to this study. First, this study represents 1 proposed framework. Additional validation of this framework among other experts would be a helpful next step.
Second, the scope of the study is limited. We primarily focused on content analysis of user-generated Facebook text related to health topics using a content analysis approach to qualitative analysis. Studies that intend to use other models of qualitative analysis may require somewhat different approaches to the use of data from Facebook. Nontext qualitative data from Facebook (eg, images, videos, and emoticons) also bear further examination. Third, because our key considerations are primarily directed toward health-related studies, it is unclear whether they are generalizable to other research topics that harness data from Facebook. Finally, our applied example did not address methods for collecting data from existing closed Facebook groups, although studies that did do so were identified in our literature review. Studies that involve interaction with Facebook group members require additional consideration, and future research could help elucidate this area by extending the work presented by Eysenbach and Till [22]. Conclusions Although there are an increasing number of studies that are using qualitative data obtained from Facebook users, there has been little published to date, summarizing the current state of this research. Our review of the literature and own experience conducting this type of research have led us to identify several key considerations for health researchers interested in conducting qualitative studies involving Facebook data. Our hope is that future research continues to refine and develop approaches to conducting research in this exciting area.
8,349.6
2019-02-11T00:00:00.000
[ "Computer Science" ]
Gonadal development and sexuality of Larkinia grandis (Arcida: Arcidae) inhabiting southeastern Gulf of California Larkinia grandis (Broderip & G.B. Sowerby I, 1829), an important fishing resource for Mexican communities, is an Arcidae clam. It is also considered a species with aquaculture potential. In this work we investigated the gonadal phases and sexuality in a population of L. grandis in the Gulf of California. Our findings support the hypothesis that there is one male per female in the population studied. It also documents that the shape, position and color of the gonads of L. grandis are consistent with observations in other Arcidae species. Additionally, five gonadal phases are differentiated and described in males and females (development, mature, spawning, post-spawning and resting), with a noticeable presence of brown cells during post-spawning and the onset of the resting phase, suggesting that those cells are involved in the reabsorption of remnants. Additionally, asynchronous gametogenesis in males, synchronic gametogenesis in females and batch spawning are defined. The results of this contribution can be used in the efforts to protect this bivalve. INTRODUCTION Larkinia grandis (Broderip & G.B. Sowerby I, 1829) (Mollusca: Bivalvia: Arcida: Arcidae) is a clam distributed from the Ballena Lagoon (Baja California, Mexico) to Tumbes (Northern Peru) (Coan and Valentich-Scott 2012), where it is known by the common name of "mangrove cockle" (García-Domínguez et al. 2008) or "pianguas" (Lucero-Rincón et al. 2012), respectively. This bivalve is found near the coastline, living in close relationship with the roots of the Rhizophora spp. mangrove, buried in the muddy sediment, or very rarely, half-buried or exposed (Fischer et al. 1995). Several members of the family Arcidae, including L. grandis, are commercially exploited on the Pacific coast Palacios 1983, Cruz 1987a). The clam L. grandis is caught along with other species of the same genus [Anadara mazatlanica (Hertlein & A.M. Strong, 1943), Anadara multicostata (G.B. Sowerby I, 1833), Anadara similis (C.B. Adams, 1852), and Anadara tuberculosa (G.B. Sowerby I, 1833)] in Mexico (CONAPESCA 2018). Generally, the data provided by the Mexican government does not distinguish among mangrove cockle species and fishery management does not consider the biological characteristics of each species when exploiting them commercially in Mexico. Interest in these clams has grown in recent years due to their use in handicrafts and for direct human consumption. Additionally, mangrove cockles have aquaculture potential in Mexico (Sotelo-Gonzalez et al. 2019), following the example of other countries (Broom 1985, Galdámez-Castillo et al. 2007. Studying the gonadal development and sexuality of wild mollusk populations of commercial importance helps to reveal their reproductive phases and to define their reproductive patterns (Bricelj et al. 2017). At the same time, it is a standard procedure to describe the changes in the tissues and cells of the gonad generated by the accumulation of energy and expulsion of gametes (Karray et al. 2015), and to explain the way in which organisms modulate the use of their reserves in relation to environmental variables (Boulais et al. 2017), under normal or anom-alous conditions. Furthermore, research on the sexual behavior of bivalves helps to understand the interactions between them and their environment and can provide information for the development of selective breeding programs (Breton et al. 2018). 
Previous studies on the gonadal development of L. grandis arrived at conflicting results. Four gonadal phases were identified in a population in Costa Rica (Cruz 1987a) and six phases in Nicaragua (Aguirre-Rubí 2017). Cruz (1987a) did not describe the resting phase, and Aguirre-Rubí (2017) generalized the gonadal phases for three species [L. grandis, A. tuberculosa and Polymesoda arctata (Deshayes, 1855)]. In addition, Cruz (1987a, 1987b) stated that L. grandis is a gonochoristic (dioecious) species, while Aguirre-Rubí (2017) did not define the sexuality of the three species, but documented one hermaphrodite organism (female with intersex) among 40 analyzed individuals, in which the male and female acini were separate. Additional information for other species of Arcidae is mentioned below. Broom (1983) identified six gonadal phases and one hermaphrodite individual in Anadara granosa (Linnaeus, 1758) from the west coast of West Malaysia; Broom (1985) defined that in A. granosa the sexes are undoubtedly separate; the gonadal status of A. granosa and Anadara antiquata (Linnaeus, 1758) was studied in central Java (Afiati 2007a, 2007b), and one study suggests that both species could be protandrous sequential hermaphrodites (Afiati 2007b); Jahangir et al. (2014) published four gonadal phases for A. antiquata from Pakistan, and only individuals with separate sexes were described. In Colombia, Manjarrés-Villamil et al. (2013) described the five gonadal phases and found 15 hermaphrodite individuals in A. similis, and mentioned that hermaphroditism in the species needs to be further studied. Ghribi et al. (2017) observed four phases in the gonads of 142 females and 42 males, and documented five cases of protandric hermaphroditism in Arca noae Linnaeus, 1758. The gonadal status of A. tuberculosa has been studied on the Pacific coast of Costa Rica (Cruz 1984), Mexico (Pérez-Medina 2005, García-Domínguez et al. 2008) and Colombia. These studies revealed differences in the number of gonadal phases and used different nomenclature. In Costa Rica, Cruz (1984) did not find evidence of sexual reversal in any specimen of A. tuberculosa. In Mexico, Pérez-Medina (2005) documented two hermaphroditic organisms of A. tuberculosa and defined that it is gonochoric, but with casual hermaphroditism. In contrast, Lucero-Rincón et al. (2013) determined that A. tuberculosa is a protandric hermaphrodite in the Colombian Pacific. This contribution describes the gonadal phases and sexuality in a wild population of L. grandis in the southeastern Gulf of California and provides additional observations about the sex ratio and the color and anatomy of the gonads.

MATERIAL AND METHODS

The clams were collected in the El Cohui estuary (25°26'-19°38'N; 105°48'-43°90'W) within the San Ignacio-Navachiste-Macapule lagoon system in the state of Sinaloa, Mexico (Fig. 1), where an important area for fishing of L. grandis is located. In total, 240 clams were collected by free divers, from August 2017 to July 2018. Each sample (n = 20 per month) was placed in a container with sea water and transported to the laboratory (30 minutes) for processing purposes. First, the length (mm) and weight (g) of each clam were registered; next, the shells were opened, and the soft tissues were removed. The soft tissues were macroscopically analyzed to observe external alterations, the appearance of the gonads and their location within the visceral cavity (Álvarez-Dagnino et al. 2017).
The soft tissues were fixed with Davidson solution, rinsed with distilled water to remove excess fixative and placed in 70% alcohol until dehydration (Álvarez-Dagnino et al. 2017). Then, a longitudinal section of the gonad tissue was dehydrated and embedded in paraffin (Buesa and Peshkov 2009). Histological sections of 3 µm thickness were obtained and stained using the Hematoxylin-Eosin-Floxin (HEF) technique (Humason 1972). The sex of individuals was identified from the histological sections. The sex ratio (number of males per female, n:1) was calculated by dividing the number of males by the number of females (Álvarez-Dagnino et al. 2017). The Yates-corrected Chi-squared test was used to compare the observed sex ratio with the expected value 1:1, using χ² with n-2 degrees of freedom and a significance level α = 0.05 (Zar 1996). The histological sections were analyzed qualitatively using an optical microscope (Zeiss model Axiostar, 10x, 40x and 100x) to identify the gonadal phases (development, maturity, spawning, post-spawning and resting) based on cellular and tissue characteristics and stages of gametogenesis, considering the criteria of Aguirre-Rubí (2017), Broom (1983) for A. granosa, Afiati (2007a) for A. granosa and A. antiquata, and Ghribi et al. (2017) for A. noae. Additionally, we found sub-phases of gonadic resting and described them based on differences in the disposition and thickness of the connective tissue and in the number of brown cells. The brown cells were identified according to Ghribi et al. (2017). These data were documented with a Nikon D5200 camera adapted to a microscope. The images were transferred and processed on a computer with the Sigma Scan Pro program (version 5.0, Systat Software, Inc., Richmond, CA, United States) (Álvarez-Dagnino et al. 2017) in order to measure the diameter of the oocytes (mean ± standard deviation, µm) of L. grandis. Only oocytes with a visible nucleus were measured and, due to their irregular shape, the area was measured first and then the diameter (D) was calculated assuming a circular form for each oocyte. A total of 1200 oocytes at different oogenesis stages were measured from 84 individuals.

RESULTS

All clams collected were adults. Their length and weight ranged from 44.57 to 142 mm and from 41.90 to 337 g, respectively. The male (n = 95) and female (n = 108) gonads were in different individuals in L. grandis, as in gonochoristic species, and the sex ratio was 0.88 males per female (0.88:1). This sexual proportion was not statistically different from the expected value 1:1 (χ²(203-2) = 12.07, p = 0.36), and neither were the sex ratios estimated for most months (χ²(~20-2) = 0.07-2.57, p = 0.1-0.81), with the exception of April 2018, when there were more males than females (3:1; χ²(20-2) = 7.14, p = 0.02) (Fig. 4). Also, 37 sexually indeterminate individuals were identified. The external appearance of the gonads of L. grandis showed sexual dichromatism when gametogenesis occurred. This was more noticeable in the mature phase. The gonad of males was cream-beige, and the gonad of females was orange-brown. The color tonality intensified as the gonad matured. The gonads were located within the visceral mass and occupied 20-40% of it (Figs 2, 3). Five gonadal phases were differentiated for males and females: development, mature, spawning, post-spawning and resting. For the latter, three sub-phases were identified. Immature gonads (no previous ripening) were not defined.
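The sex-ratio test and the oocyte-diameter conversion described in the methods above come down to a few lines of arithmetic. The sketch below is a generic Python illustration of the standard formulas (a Yates-corrected chi-square against an expected 1:1 ratio, and a diameter derived from area assuming a circular section); it is not the authors' own code, the example area value is arbitrary, and its output is not guaranteed to reproduce the reported statistics, which depend on exactly how the test was applied.

```python
import math

def sex_ratio(males, females):
    """Number of males per female (n:1)."""
    return males / females

def yates_chi_square(males, females):
    """Yates-corrected chi-square against an expected 1:1 sex ratio."""
    total = males + females
    expected = total / 2.0
    return sum((abs(obs - expected) - 0.5) ** 2 / expected
               for obs in (males, females))

def diameter_from_area(area_um2):
    """Oocyte diameter assuming a circular section: A = pi * (D/2)**2."""
    return 2.0 * math.sqrt(area_um2 / math.pi)

if __name__ == "__main__":
    print(f"sex ratio: {sex_ratio(95, 108):.2f} males per female")
    print(f"Yates chi-square (1:1): {yates_chi_square(95, 108):.2f}")
    print(f"diameter of an 850 um^2 oocyte: {diameter_from_area(850):.1f} um")
```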
At the beginning of the progression of the gonadal phases, the acini are roughly round or elliptical, increase in size and their walls become thinner (development and mature phases); then the acini acquire an irregular shape (after the spawning phase) and finally recover their roughly rounded or elliptical regular form (resting phase). The cell types and their quantity also change, due mainly to gametogenesis and reabsorption processes. The detailed description of the gonadal phases for males and females of L. grandis is in Table 1.

Table 1. Description of the gonadal phases of L. grandis.
- Development: (Fig. 5).
- Mature: The gonadic tissue has reached its maximum development; acini are large and distended. Males: the acini are large, completely filled and their lumens are barely appreciable; spermatids and spermatozoa are located towards the lumen of the acinus (Fig. 6). Females: acini completely filled with yolked oocytes, which have granular cytoplasm; yolked oocytes are rounded or polygonal; the acinus walls are very thin and there are no empty spaces in the acinus; few immature oocytes are attached to the acinus wall; oocyte diameter = 32.79 ± 3.41 µm (all stages) (Fig. 10).
- Spawning: Acini are almost empty. Spermatozoa at different development stages are dispersed within the acinus. Abundant brown cells are observed (Fig. 7).
- Post-spawning: Residual spermatozoa are in the acinus. Abundant brown cells are observed. The acinus walls are thicker (Fig. 8).
- Resting: The sex of L. grandis cannot be defined since differentiated sex cells are not observed. The acini are empty. The connective tissue that forms the acinus walls is abundant and noticeable. The acini are contracted and their lumens are narrow. The differences in the disposition and thickness of the connective tissue and in the number of brown cells allow three sub-phases of resting to be defined. Sub-phase 1: the acini are small and irregular in shape; many brown cells are attached to the connective tissue (Fig. 13). Sub-phase 2: the acini are elongated and bigger, and there is a smaller number of brown cells (Fig. 14). Sub-phase 3: the acini return to their roughly rounded or elliptical regular form and brown cells are scarce (Fig. 15).

DISCUSSION

Among the bivalves studied, the majority were gonochoric (Gosling 2015). This is consistent with the findings for the population of L. grandis in the southeastern Gulf of California, and also on the Pacific coast of Costa Rica, where hermaphrodite organisms were not found and the sexual proportion was close to one male per female (1:1) (Cruz 1987a, 1987b). Only in April 2018 did the studied population of L. grandis have more males than females, but the sex ratios estimated for the other months and the absence of hermaphrodites suggest that the deviation from the 1:1 sex ratio was casual. Aguirre-Rubí (2017) found one hermaphrodite specimen of L. grandis (2.5% of the total sample) on the Nicaraguan coast. In that publication, however, the sex ratio was not specified. Other Arcidae species have been defined as gonochoric, even though they have a low percentage of hermaphroditic organisms in the population, such as A. granosa (0.33%; Broom 1985) and A. tuberculosa (0.98%; Pérez-Medina 2005), because the percentage of hermaphroditism was considered low and the sexual proportion was close to 1:1. In these cases, hermaphroditism can be considered a casual phenomenon (Pérez-Medina 2005), and according to the observation of Aguirre-Rubí (2017), casual hermaphroditism could also occur in L. grandis. However, more research is needed on hermaphroditism in bivalves to improve our understanding of it (Breton et al. 2018). It is necessary to understand sexuality in species such as A. tuberculosa, which in different populations has been classified as gonochoric (Cruz 1984), gonochoric with casual hermaphroditism (Pérez-Medina 2005) and protandric hermaphroditism (Lucero-Rincón et al. 2013). These different sexualities have been defined according to different percentages of hermaphroditism, and some estimates of sex ratio and size differences between males and females that suggest sex change. Some questions that remain to be answered are: what percentage of hermaphroditism should be present in the population to rule out gonochoric sexuality? How could we define whether disparity in sex ratio and size between sexes is due to different longevity of females and males, as suggested by Cruz (1987b)? The criteria to define sexuality need to be improved and clarified. In this work, the shape and position of the gonads of L. grandis at the different gonadal phases are similar to those reported by Pérez-Medina (2005) and García-Domínguez et al. (2008) for A. tuberculosa, and by Manjarrés-Villamil et al. (2013) for A. similis. The gonad, regardless of sex, was located along the digestive gland. Karray et al. (2015) mentioned that gonad and digestive tissues in bivalve mollusks, such as the sand clam Cerastoderma glaucum (Bruguière, 1789), are intertwined to facilitate the flow of nutrients according to the energy demand during the reproductive phases, while Menezes-Tunholi et al. (2016) highlighted the metabolic role of the gonad-digestive gland complex of the gastropod Biomphalaria glabrata (Say, 1818). The tissue fusion described by both authors coincides with the conformation of the gonad-digestive gland complex in L. grandis. In bivalve mollusks, the color of the gonad is considered a characteristic of sexual differentiation; frequently, female gonads are orange, while male gonads are creamy white (Mikhailov et al. 1995). However, there are species in which the gonads are of the same color. For instance, Meléndez-Galicia et al. (2015) and Álvarez-Dagnino et al. (2017) observed that in the rock oyster [Crassostrea iridescens (Hanley, 1854)] and the callista clam [Megapitaria squalida (G. B. Sowerby I, 1835)], respectively, the gonads of both sexes were of the same color throughout the reproductive cycle. In the present study, the gonads of L. grandis presented sexual dichromatism when gametogenesis occurred, and the color tonality intensified as the gonad matured, similar to what has been observed in other bivalve species (Morriconi et al. 2002, Aragón-Noriega et al. 2007, Góngora-Gómez et al. 2016). Macroscopic dissection of L. grandis was not enough to distinguish the sex when the gonad is not mature, mainly because the gonadal tissue is very small and difficult to examine with the naked eye. Histological analysis was necessary to adequately describe the sex of L. grandis, as well as in other clams (Cruz 1987b, Juhel et al. 2003), and the development of the gonads. Five gonadal phases were differentiated in females and males of L. grandis (development, mature, spawning, post-spawning and resting). The resting phase was not previously described by Cruz (1987a), but in a complementary work, Cruz (1987b) mentioned one finding of an individual that could have been in the resting phase since it was different from the immature ones, although the tissue characteristics of both phases were not described.
Meanwhile, Aguirre-Rubí (2017) grouped resting and spent gonads in one phase and described them as inactive and undifferentiated, but tissue details were not documented because that work was focused on other objectives. Immature individuals were not found in the sample of the present work. The smallest organism was 44.57 mm in length, and according to Cruz (1987b), immature specimens are smaller than that, between 16.5 and 20 mm in length. Unlike the contributions mentioned above, this work describes the development process as a single phase, due to an overlap between the presence of previtellogenic and vitellogenic oocytes. While Cruz (1987a) documented the maximum maturity phase and grouped attributes of the spawning and post-spawning processes in one phase (spent), Aguirre-Rubí (2017) documented the mature, spawning, and post-spawning phases, similar to the present work, but without describing the tissue details and only mentioning general features. Despite the differences outlined above, the gonadal phases in L. grandis described by the present work are similar to the observations of Cruz (1987a) and Aguirre-Rubí (2017) for the species. The difference is that the present work gives more detailed descriptions and uses slightly different nomenclature (Table 2). Additionally, the findings of this work are consistent with the descriptions of the gonadal phases of other species of Arcidae, such as A. tuberculosa (Cruz 1984, Pérez-Medina 2005, García-Domínguez et al. 2008), A. antiquata and A. granosa (Jahangir et al. 2014), A. similis (Manjarrés-Villamil et al. 2013), and A. noae (Ghribi et al. 2017). It is important to note that there are different classifications of the gonadal phases even for the same species (Table 2), mainly due to the criteria applied by each researcher. During the post-spawning phase and the onset of the resting phase of L. grandis (sub-phase 1), the presence of brown cells was noticeable, and their amount was reduced towards the end of the resting phase (sub-phases 2 and 3). Brown cells have also been observed in the acini of ripe and partially spawned gonads of A. noae (Ghribi et al. 2017), in ripe and spawning gonads of A. tuberculosa, and in spawning and post-spawning gonads of A. similis (Manjarrés-Villamil et al. 2013). These cells were named brown secretion granules in A. tuberculosa and A. similis (Manjarrés-Villamil et al. 2013). In mollusks, brown cells (also known as rhogocytes) are characterized by phagocytosis (endocytosis), a role in metal ion metabolism, transport or storage of nutrients, synthesis or breakdown of respiratory pigments, a supportive-cell role, selective reabsorption, and other functions (reviewed by Haszprunar 1996). In L. grandis, their presence and abundance after spawning suggest that brown cells are involved in the reabsorption of remnants during gonad recovery (post-spawning phase); after the gonad is recovered, the abundance of brown cells decreases (resting phase). Different stages of sex cells were distinguished in the same acini of male gonads of L. grandis (spermatogonia, spermatocytes, spermatids and spermatozoa), as reported by Ma et al. (2017) for the scallop Chlamys farreri (Jones & Preston, 1904) and by Chatchavalvanich et al. (2006) for the freshwater mussel Hyriopsis bialatus Simpson, 1900. This asynchronous gametogenesis in males of L.
grandis occurs because different cohorts of developing gametes are present simultaneously, indicating that spermatozoa production is continuous during the development and mature gonadal phases in males. In contrast, female gonads presented synchronous gametogenesis, with only one or two gametogenic cell stages in a single acinus, suggesting that oocyte production is discrete and that it therefore takes time for the next cohort to be ready to be released. Since partially or totally empty acini were observed during the spawning phase of L. grandis, and in some cases gamete-filled acini, we suggest that this clam spawns in batches, a condition also reported by Pérez-Medina (2005), Manjarrés-Villamil et al. (2013) and Hernández-Hernández (2014) for A. tuberculosa, A. similis and A. multicostata, respectively.
MuscleJ2: a rebuilding of MuscleJ with new features for high-content analysis of skeletal muscle immunofluorescence slides

Histological analysis of skeletal muscle is of major interest for understanding its behavior in different pathophysiological conditions, such as the response to different environments or myopathies. In this context, many software programs have been developed to perform automated high-content analysis. We created MuscleJ, a macro that runs in ImageJ/Fiji on batches of images. MuscleJ is a multianalysis tool that initially allows the analysis of muscle fibers, capillaries, and satellite cells. Since its creation, it has been used in many studies, and we have further developed the software and added new features, which are presented in this article. We converted the macro into a Java-language plugin with an improved user interface. MuscleJ2 provides quantitative analysis of fibrosis, vascularization, and cell phenotype in whole muscle sections. It also performs analysis of the peri-myonuclei, the individual capillaries, and any staining in the muscle fibers, providing accurate quantification within regional sublocalizations of the fiber. A multicartography option allows users to visualize multiple results simultaneously. The plugin is freely available to the muscle science community.

Supplementary Information: The online version contains supplementary material available at 10.1186/s13395-023-00323-1.

The histological study of skeletal muscle is an efficient way to understand its pathophysiological state, especially in the context of myopathies, aging, or responses to exercise and regeneration. Histological analysis is particularly useful in establishing a diagnosis and understanding the progression of various pathological conditions or for evaluating potential therapeutic approaches. Different parameters of skeletal muscle sections are examined to generate quantitative measurements of specific readouts. For example, the fiber cross-sectional area (CSA) and Feret diameter of muscle fibers can be used to evaluate skeletal muscle atrophy/hypertrophy. However, the amount of histological detail that can be obtained from a skeletal muscle tissue slide is large and often underexploited, mostly due to the subjectivity and massive time consumption of manual feature assessment. Therefore, several software programs (Sup. Table 1) have been developed to automate the estimation of these different parameters, including MuscleJ [1], an automated ImageJ macro that we created to quantify multiple types of histological data from muscle immunofluorescence slide images.

Initially, MuscleJ was able to automatically extract the fiber CSA and Feret diameters, the number of centronucleated fibers, the number of centronuclei, the number of satellite cells and capillaries (initially called "vessels") per fiber, and the fiber typing [1]. MuscleJ begins its analysis with fiber segmentation, which then defines four regions of interest (ROIs): the ROI fiber (ROI F), the ROI centronucleated fibers (ROI CNF), the ROI satellite cells (ROI SC), and the ROI vessels (ROI V). Specific staining is then quantified, and the results are automatically stored in results files. This automated process enables the high-content analysis of raw immunofluorescence image batches. Since its development, MuscleJ has been used in many studies. However, user requests prompted us to implement additional functions, which we have bundled in a new plugin named MuscleJ2, which enables faster analysis.
Along with a new interface, additional extracted features include the quantification of peripheral myonuclei, the evaluation of vascularization, and the characterization of specific cells anywhere in skeletal muscle. We also quantify the fluorescence intensity of any immunolabeling within muscle fibers and have made it possible to analyze this staining in multiple ROIs. A fifth ROI corresponding to the region bordering the muscle fiber membrane was also added. Another new feature is the ability to quantify any staining of extracellular matrix (ECM) components, such as different types of collagen or laminin. We have improved the sensitivity of the software for different quantification workflows and added numerous new measurements to the table of results. In MuscleJ2, users can perform multioutput analyses and multicartographies to obtain a full characterization of skeletal muscle tissue. The plugin is freely available in a publicly shared space (https://github.com/ADanckaert/MuscleJ2/) and will be updated regularly.

Results

The user interface of MuscleJ2 is organized into five panels: Sample Data, Data Acquisition, Data Analysis by Section, Data Analysis by Fiber, and Data Cartographies (Fig. 1), which will be described in more detail below. Before starting a run on an image set, the user must organize the acquired images into different folders so that the images in a given folder have the same properties (same type of muscle, same pathophysiological state, same staining, same data acquisition), as explained in the online User Guide.

Sample data panel

Under physiological conditions, CSA is homogeneous across fiber regions, making it possible to use this parameter to discriminate between what can and cannot be labeled as a fiber. This is not the case when skeletal muscle is damaged, as in myopathies or after injury, where fiber size can be very heterogeneous. We have taken this point into account and have introduced an option, named Pathophysiology, where the user can choose between Healthy and Damaged fiber populations (Fig. 1). In damaged muscles, the heterogeneity of fiber CSA is increased, and MuscleJ2 flexibly considers a wider range of CSA measurements. The contribution of the Damaged option (selected in the Pathophysiology tab) is illustrated in Fig. S1, where the mouse tibialis anterior was partially injured, resulting in significant variability in fiber CSA between injured and uninjured parts. When Healthy is selected, MuscleJ2 excludes the largest and smallest fibers. When the Damaged option is selected, the range of accepted fiber CSA values is much wider, and all the fibers are taken into account (a minimal sketch of this filtering logic is given below). This option allows the users to adapt the algorithm according to their parameters of interest. Notably, even in fusiform muscles such as the tibialis anterior, not all myofibers are fully aligned with the longitudinal axis of the muscle, and some have a high pennation angle [2]. This led to the presence of nontransversal but extremely elongated fibers within cross sections (in parts of rat muscles) with a low circularity value, and these fibers were correctly excluded by MuscleJ2. There can also be variation in CSA values along the length of the muscle [3], which would require the analysis of multiple levels of cross sections for a better assessment of myofiber size variation.
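To make the fiber-selection rule concrete, here is a minimal Python sketch of such a CSA/circularity filter; it is not the plugin's Java source. The numeric bounds follow the values stated in the Methods ("User interface panel 1: sample data"), and the function and variable names are illustrative.

```python
import numpy as np

def filter_fibers(areas_um2, circularities, pathophysiology="Healthy"):
    """Keep fibers whose CSA and circularity fall within MuscleJ2-style
    bounds (an illustrative sketch, not the plugin's implementation)."""
    areas = np.asarray(areas_um2, dtype=float)
    circ = np.asarray(circularities, dtype=float)
    mean, std = areas.mean(), areas.std()
    if pathophysiology == "Healthy":
        lo, hi = 100.0, mean + 3 * std   # excludes the smallest and largest fibers
    else:  # "Damaged": wider CSA range keeps heterogeneous fibers
        lo, hi = 50.0, mean + 4 * std
    return (areas >= lo) & (areas <= hi) & (circ >= 0.45) & (circ <= 1.0)

# Example: the Damaged setting retains small regenerating fibers
areas = [60, 180, 950, 2400, 15000]
circs = [0.9, 0.8, 0.7, 0.6, 0.5]
print(filter_fibers(areas, circs, "Healthy"))   # the 60 µm² fiber is dropped
print(filter_fibers(areas, circs, "Damaged"))   # all fibers are kept
```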
In the Sample Data panel, the user can inform the plugin of the anatomical origin of the sections, i.e., from limb or diaphragm muscle (Fig. 1). This option was added because of the large difference between classical hind limb muscles and the diaphragm, the latter usually being cut in a folded state (Fig. S2). When the Diaphragm option is selected, MuscleJ2 does not fill in holes, to account for the actual surface of the tissue. As this type of skeletal muscle is studied with particular interest in pathological states [4,5], this option now offers the possibility of analyzing it with MuscleJ2.

It is now possible to analyze a section of skeletal muscle divided into several pieces in the image, whereas in the first version of MuscleJ, only the largest region was selected. This allows the analysis of different skeletal muscle subsections grouped on the same image, which is particularly useful for muscles with several heads, such as the quadriceps femoris or the gastrocnemius, which can be separated into several parts during cryosection preparation.

Data acquisition panel

We have developed an algorithm applicable to images obtained from a wider range of more recent equipment, which is why the selection of the Acquisition system (Apotome/Wide field/…) and the File format is no longer necessary. MuscleJ2 can easily work on different image formats (such as .czi, .lif, .tiff…) supplied by the majority of gold-standard image acquisition systems (Fig. S3). Importantly, image quality is a prerequisite for good analysis, and the acquisition system must be carefully selected before the batch experiments are performed.

In the Volume option, the user must inform MuscleJ2 whether the images contain a single Z or a stack of Z. When the Z-stack option is selected, MuscleJ2 will automatically perform a maximum intensity projection prior to any analysis (Fig. S3). Although MuscleJ2 is designed to work on whole skeletal muscle sections, there is a Scanned Area option (Entire section/Crop) in case the muscle section is not whole. However, the user must be careful when using the Crop option and ensure that the crop contains a minimum of 25% of the image as black background without tissue. This is essential for correct quantification. The Artefact Detection option, which was previously in the MuscleJ macro, has been incorporated into this section (Fig. 1). It allows the user to eliminate from the analysis any slides where the detected muscle fibers represent less than the indicated percentage of the total muscle surface.

A new panel with features related to the whole skeletal muscle section

In this third panel, named Data Analysis by Section, we have introduced new functionalities that do not refer to individual fibers but to the total surface of the skeletal muscle (Fig. 1). All these analyses are performed on whole-slide image sections or on representative parts of the image manually cropped by users. For these analyses, the definition of ROIs is not necessary, unlike the other functionalities of the Data Analysis by Fiber panel, described below. Consequently, laminin staining is not mandatory, and artifact detection is not associated with these options. It is therefore the responsibility of the user to ensure that the muscle sections are correctly detected and do not contain holes or folds. However, staining for ECM or any fiber marker (except nuclear markers) is necessary for MuscleJ2 to delineate the section contours and estimate the total surface area. The corresponding channel must be indicated in the Section Shape field of dialog box 2 (Fig. 1). This allows the quantifications of the different parameters to be related to the total surface of skeletal muscle.
In this panel, three new features have been developed:

ECM Area Detection (Fig. 2A)

The ECM forms a network of macromolecules and smaller components that fill the extracellular space and can be divided into two parts: the basement membrane, which surrounds the muscle fibers, and a more diffuse interstitial matrix. The basement membrane can be specifically detected using anti-laminin or anti-collagen IV antibodies, for example. Quantification of the ECM is particularly important in the context of myopathies and skeletal muscle regeneration studies, which require assessment of the area of fibrosis corresponding to modifications of the ECM, with accumulation of different components such as collagen I (reviewed in Loreti et al. [6]). Similarly, wheat germ agglutinin (WGA), a carbohydrate-binding protein conjugated to various fluorochromes, can be used for the global visualization of muscle ECM and fiber boundaries [7]. This provides rapid fluorescence staining with little background noise (Fig. 2A). Since WGA detects ECM by labeling sialic acid and N-acetylglucosamine residues contained in glycoproteins and glycolipids, it can also bind oligosaccharides contained in the cell membrane. Therefore, we do not recommend its use in conditions with large and multifocal myofiber necrosis areas and/or with immune infiltrates, such as the first few days after muscle injury (data not shown).

Because ECM staining is sufficient and is included in the algorithm to detect the muscle section, another channel for the Section Shape is not needed. We would like to emphasize that variations in ECM content can also be observed when tissue sections are obtained from different levels along the muscle length, since internal tendons may or may not be present [8]. In the "GlobalResults" file for this analysis, two outputs are reported: the ECM area (in µm2) and the percentage of the total section area accounted for by the ECM (Fig. 2A).

Vascularization (Fig. 2B)

The second feature of the panel is the assessment of the Vascularization of the skeletal muscle. In the original version of MuscleJ, the number of vessels was quantified and reported relative to their associated fibers [1]; such staining corresponds to capillaries. We now distinguish between total Vascularization, including all types of vessels without morphological criteria, and Capillaries (detailed below in the section Data Analysis by Fiber). Vascularization concerns all arteries or veins contained in the entire muscle section. This option measures the percentage of the total surface occupied by the vessels relative to the total section area of skeletal muscle. The number of vessels per mm2 is also provided in the result tables (Fig. 2B). Therefore, it is possible to perform an analysis of vascularization independently, without using fiber morphology (which does not need to be labeled). As for the ECM, endothelial cell staining (for example, with a CD31 antibody) is sufficient and is included in the algorithm to detect the muscle section; therefore, this channel can be used for the Section Shape.
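As a rough illustration of how these two outputs can be derived from binary masks, here is a Python sketch using scikit-image; the mask inputs, pixel size, and function name are assumptions made for the example, not MuscleJ2's internal implementation.

```python
import numpy as np
from skimage.measure import label

def vascularization_metrics(vessel_mask, section_mask, um_per_px):
    """Percentage of the section surface covered by vessels and vessel
    density per mm^2, computed from binary masks (illustrative only)."""
    vessel_in_section = vessel_mask & section_mask
    section_area_um2 = section_mask.sum() * um_per_px**2
    vessel_area_um2 = vessel_in_section.sum() * um_per_px**2
    n_vessels = label(vessel_in_section).max()      # connected components
    section_area_mm2 = section_area_um2 / 1e6
    return {
        "Vascularization surface (%)": 100 * vessel_area_um2 / section_area_um2,
        "Vessels per mm2": n_vessels / section_area_mm2,
    }
```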
Fig. 2 New functionalities of the plugin. A Immunostaining of skeletal muscle with WGA showing the extracellular matrix (ECM) in green (SB = 600 µm) and respective quantification with MuscleJ2 in the "GlobalResults" file. B Immunostaining of skeletal muscle with laminin (gray) and CD31 showing the endothelial cells in red (SB = 600 µm) and quantification of vessels and capillaries with MuscleJ2. Tables present the results obtained after selecting the options Vascularization (section Data Analysis by Section) and Capillaries (section Data Analysis by Fiber). The gray table presents the results obtained in the "GlobalResults" file, and the green table presents some of the results obtained in the "CapillaryDetails" file (SB = 600 µm). C Immunostaining of skeletal muscle with laminin (gray), DAPI (blue), and F4/80 showing the macrophages in red, and quantification of specific cells with MuscleJ2 (SB = 600 µm). The gray table presents the results obtained in the "GlobalResults" file, and the green table presents some of the results obtained in the "SpecificCells" file. Nucleus GC X and Y correspond to the coordinates of identified specific cells colabeled with DAPI. All areas are indicated in µm2.

Specific Cells (Fig. 2C)

Skeletal muscle tissue contains a variety of nonmyogenic cell types that are located between fibers and are neither capillary nor satellite cells, which are already tracked by MuscleJ. The third functionality is the characterization of these Specific Cells located anywhere in skeletal muscle. These may be, for example, resident stromal cells or infiltrating immune cells observed in pathological conditions or in injured tissues [9-11]. The name of the marker used to label specific cells is entered manually as Cell Marker in the second dialog box (Channel Information) (Fig. 1F). This name will then be reported in the final table of results (Fig. 2C). This cell-specific marker can label an antigen located in the cytoplasm, membrane, or nucleus. A nuclear DNA label is also needed to ensure that the detected staining identifies a true cell and not artifacts such as cellular debris or a nonspecific signal. However, because the nuclei of some cells may be imaged out of focus, the total number of specific cells, including those not counterstained with nuclear dye, is reported in the final "GlobalResults" file (Nb-Specific Cells and Nb-Specific Cells with nuclei) (Fig. 2C). In addition, MuscleJ2 provides information corresponding to the percentage of the area occupied by these cells (%Specific Cell Area), as well as the mean intensity of the signal in specific cells with nuclei (Intensity Mean) and their Area Mean (Fig. 2C).
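The gist of this quantification, counting segmented cells and retaining the nucleus-colabeled subset, can be sketched in Python with scikit-image as follows; the masks, names, and the simple overlap test are illustrative assumptions rather than the plugin's actual algorithm.

```python
import numpy as np
from skimage.measure import label, regionprops

def count_specific_cells(cell_mask, nucleus_mask, intensity_img):
    """Count segmented specific cells, flag those overlapping a DAPI-style
    nucleus, and report outputs analogous to the ones described above
    (a sketch, not MuscleJ2's code)."""
    total, with_nuclei, mean_intensities = 0, 0, []
    for region in regionprops(label(cell_mask), intensity_image=intensity_img):
        total += 1
        rr, cc = region.coords[:, 0], region.coords[:, 1]
        if nucleus_mask[rr, cc].any():        # cell body overlaps a nucleus
            with_nuclei += 1
            mean_intensities.append(region.mean_intensity)
    # % of the image area; the plugin relates this to the section area
    pct_area = 100 * cell_mask.sum() / cell_mask.size
    intensity_mean = float(np.mean(mean_intensities)) if mean_intensities else 0.0
    return total, with_nuclei, pct_area, intensity_mean
```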
Because each signal is different, MuscleJ2 provides the users with the raw data to allow them to set a personal threshold and filter their results based on their experience. In the final "SpecificDetails" file, the user can find the min and max Feret diameters, the coordinates (x, y) of the gravity center of the specific staining (only cells costained with DAPI), the nuclear center of gravity (x, y), and the intensity of the appropriate channel for each specific cell. As with all the other options, to allow viewing of the specific cells identified by MuscleJ2, their coordinates are saved in the dedicated ROI folder, and it is easy to return to any cell if needed. As an example, this new functionality was tested to detect F4/80-positive pan-macrophages (Fig. 2C) in a series of cross sections of regenerating muscle. Any validated antibody giving rise to a distinct signal in any cell in skeletal muscle can be used, offering a large panel of data analysis. In a set of images, it is possible to quantify several cellular markers, albeit one at a time, by running the batch of images for each specific marker. Since cells positive for multiple labels will share the same located nucleus, they can be quantified by merging (using open-access software such as R) all the "SpecificDetails" files for each muscle section on the nucleus gravity center (x, y) column (a minimal merge sketch is shown below).

All these novel functions are compatible with the other functions of the Data Analysis by Fiber panel.

New functionalities reported for muscle fibers

In this section, all the results are given per fiber, based on the laminin staining (or any equivalent staining used to identify myofibers). We have already described the different ROIs in the original version of MuscleJ [1], and they are conserved in this new version of the plugin. However, to be more precise, we have changed ROI V (vessels) to ROI Cap (capillaries), as explained previously. Moreover, we added a new ROI corresponding to the cellular membrane region of the fiber (ROI MB) (Fig. 3A). This new specific ROI MB has been designed to quantify fluorescence staining in sarcolemmal or subsarcolemmal regions, such as the dystrophin-glycoprotein complex, where mutations in the genes encoding its components can cause several muscular dystrophies.

We have also implemented new functionalities in this section.

Peri-myonuclei

Nuclei located inside myofibers are named "myonuclei" (Fig. 1). This novel functionality allows the quantification of the nuclei belonging exclusively to muscle fibers, independently of the Centro-Myonuclei function. In healthy conditions, these nuclei exhibit a peripheral location. Since skeletal muscle is a highly adaptable tissue, their number may vary and needs to be quantified for each fiber. Myonuclei can be labeled in vivo using a transgenic mouse strain expressing histones coupled to GFP specifically in myofibers [12] or by using an antibody against the centrosomal protein PCM1 [13]. While PCM1 can also be expressed by proliferating myoblasts and macrophages in damaged muscle [14], MuscleJ2 can specifically detect myonuclei based on their location in the ROI MB (Fig. S4). To be identified as peripheral myonuclei by MuscleJ2, nuclei must be colabeled with the myonuclei marker and a fluorescent DNA stain such as DAPI. This is different from Centro-Myonuclei detection, which uses ROI CNF and does not require colabeling, because the central location may be sufficient for classification as myonuclei. This analysis could be particularly useful for studying myonuclei modifications in response to exercise training and their controversial persistence during detraining (reviewed in Rhamati et al. [15]), which could vary according to fiber type, be associated with changes in nuclei [16], or be regulated by epigenetic modifications that could be investigated in situ with fluorescent labeling [17].
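Returning to the multi-marker quantification of Specific Cells, the merge suggested above (the authors propose R) can be sketched in Python with pandas; the column names follow the "Nucleus GC X/Y" outputs described in Fig. 2, while the coordinate-binning tolerance and the function name are assumptions for the example.

```python
import pandas as pd

def merge_marker_tables(df_a, df_b, tol_px=3):
    """Join two 'SpecificDetails'-style tables on the nucleus gravity-centre
    coordinates to find cells positive for both markers. Binning coordinates
    to a pixel tolerance is a simple stand-in for a proper nearest-neighbour
    match (illustrative only)."""
    def binned(df):
        out = df.copy()
        out["gx"] = (out["Nucleus GC X"] / tol_px).round().astype(int)
        out["gy"] = (out["Nucleus GC Y"] / tol_px).round().astype(int)
        return out
    # Rows present in both tables correspond to double-positive cells
    return binned(df_a).merge(binned(df_b), on=["gx", "gy"], suffixes=("_A", "_B"))
```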
Fig. 3 Measurement of Fiber Intensity by ROI. A Representation of the different ROIs in MuscleJ2. ROI F, ROI Fiber; ROI CNF, ROI Centronucleated Fiber; ROI SC, ROI Satellite Cell; ROI Cap, ROI Capillary; ROI MB, ROI Membrane. B Original image of skeletal muscle stained with dystrophin and corresponding cartographies representing the different ROIs obtained after MuscleJ2 analysis with the Fiber Intensity option. C For each fiber, the intensity of the staining and the percentage of positive pixels in each ROI are given. D Quantification of dystrophin staining in the different ROIs. The gray table presents the results obtained in the "GlobalResults" file, and the green table presents some of the results obtained in the "FiberDetails" file.

Capillaries

The option named Capillaries replaces the option Vessels of the original version of MuscleJ. This allows the user to analyze the capillaries associated with the fibers independently of the total vascularization of the muscle, which can now be assessed using the Vascularization option, as described above. Consequently, the new ROI Cap replaces the previous ROI V.

The "GlobalResults" file shows the number of fibers with capillaries and the total number of capillaries. The min and max Feret diameters, the gravity center coordinates (x, y), and the intensity of the appropriate channel for each capillary (Fig. 2B), as well as the parameter named Sharing Factor (SF), which represents the number of fibers around each capillary [17], are included in the "CapillaryDetails" file. In the "FibersDetails" file, the number of capillaries surrounding each fiber has been named capillary contacts to correspond to the commonly used terms [18,19].

New fiber type IIX and changes in fiber typing

In the previous version of MuscleJ, fibers expressing type IIX myosin heavy chain (MyHC) were detected indirectly as corresponding to unstained fibers. In MuscleJ2, a channel can now be selected to directly identify this additional adult MyHC. This allows more accurate detection of type IIX fibers and of hybrid myofibers expressing two or more isoforms [20]. This option is named Type IIX fibers (Fig. S5A). It allows, for example, investigation of hybrid myofiber transitions in disease or in response to exercise [20]. Specific labeling of fibers expressing MyHC IIX may be particularly useful for human muscle samples, because the type IIB isoform is not expressed and some antibodies may cross-react with other isoforms [21].
In addition, many changes were made to the fiber typing to improve this quantification (see "Methods"). The fiber-type analysis of the plugin has been optimized using a set of images from different users, in which type IIB or IIX fibers have, in most cases, a lower fluorescence intensity, probably due to the lower reactivity of the IgM subclass of these primary antibodies [22]. We would like to point out that the fiber type can vary along the same myofiber, as type IIA has been reported to be more abundant at the proximal extremity of the tibialis anterior in mice [3].

The thresholds are given in the "GlobalResults" file, along with the associated fiber type defined by MuscleJ2 (Fig. S5B). However, if staining problems are encountered, it is possible not to use the automatic classification performed by MuscleJ2 and to go back to the "FiberDetails" files to reclassify the fibers manually based on a user-defined threshold.

Fiber intensity by ROI (Fig. 3)

A myriad of fluorescent labels can be investigated in muscle fibers as part of skeletal muscle research. Fiber Intensity by ROI is a feature that allows quantification of any staining in muscle fibers (Fig. 1). Staining intensity is measured simultaneously in different areas of interest, since some markers may be heterogeneously expressed within the myofiber or at or below the cell membrane (sarcolemma). The results provide the intensity of the labeling and the %intensity positivity in the different ROIs (Fig. 3A-B). In the "GlobalResults" file, the average intensity of all segmented fibers is given for each ROI (ROIx Intensity Mean), as well as the associated standard deviation (ROIx Intensity StdDev).

For example, in regenerating or pathologic states, developmental isoforms such as embryonic and perinatal MyHC can be re-expressed [23]. The number of newly regenerated fibers re-expressing embryonic MyHC (MYH3-positive fibers) can now be quantified using this new function. Another example is the quantification of the percentage of dystrophin positivity in the different fiber ROIs, particularly in the ROI MB (Fig. 3C). In the "GlobalResults" file, MuscleJ2 indicates the mean intensity of staining for all the fibers based on the staining/background ratio. However, the user can decide to use a different threshold based on the results by working directly on the "FiberDetails" file (Fig. 3D).

Multiple analyses in the cartography section

All analyses carried out by MuscleJ2 can be visualized as cartographies in the Data Cartographies panel (Fig. 1). In addition to the five cartographies initially developed in MuscleJ to visualize the results of the analysis of Fiber Morphology, Centro Myonuclei Fibers, Satellite Cells, Vessels, and Fiber Type, we have added cartographies for Peri-myonuclei, Fiber Intensity, Specific Cell Localization, and in situ ECM Signal (Fig. 4A). MuscleJ2 also offers the possibility of adding a legend and a scale bar at different positions of the image, determined by the user. In addition, it is now possible for the user to select the channel on which the cartography will be drawn (in the second dialog box: Image used for cartographies). Furthermore, for the option Fiber Intensity by ROI, different cartographies were added to represent the MuscleJ2 results in the different ROIs. Another new option in this panel, named Multi-Cartography Montage, allows users to obtain a photo montage of all selected options in the Data Cartographies panel (Fig. 4B).
Generation of metadata files

After each run, MuscleJ2 generates a text file containing a summary of the options and selected analyses (Fig. S6). This file allows the user to easily retrieve the metadata associated with the performed analysis and is located in the result folder along with other files such as "GlobalResults" and "FiberDetails." In the latter, the user can access the details of the requested Analysis by Fiber. The user can therefore use the "ROI" file to review the identified fibers of the section and possibly manually delete some major aberrant fiber detections. However, we do not recommend adding new fibers manually, as this will add an additional source of variability, since the mode of quantification will differ from that performed automatically. The "GlobalResults" file averages all the fibers in the section. Compared to the original version of MuscleJ, it is no longer generated at the end of the run but is updated after each executed image, building the global results step by step, without losing data if the plugin unexpectedly stops before the end of the process.

Discussion

The development of MuscleJ2, running on ImageJ/Fiji, now significantly extends the functionality of the original macro to cover a wider range of possible quantification scenarios performed on fluorescence-labeled images of skeletal muscle sections.

While numerous other software programs have been developed since the publication of the original MuscleJ, most of them are complementary and focus on specific parameters (Sup. Table 1). One of the major advantages of MuscleJ2 over other comparable software is its capacity to handle a combination of different types of analysis. All the described functions can be used together to provide new information.

With the parallel development of new acquisition systems, which offer the possibility of working with more than four fluorescent markers, MuscleJ2 enables multioutput skeletal muscle analysis. For example, it is possible to perform multiple labeling with the detection of laminin and nuclei associated with three types of myosin heavy chains, requiring 5 detection channels (Fig. S4), or more complex multicolor staining with codetection of laminin, vessels, and two types of myosin heavy chains using one of the protocols described by Bailly et al. [24].

The developed plugin is easy to use, and the troubleshooting annex of the User Guide will be completed based on feedback received from users, listing the problems users encounter and their solutions. We will continue to implement new features in the plugin; these will be detailed on GitHub, where MuscleJ2 will be updated regularly, and we encourage users to keep up to date with the changes made in future versions.

Animals and tissue preparation

To validate and illustrate the new features of MuscleJ2, skeletal muscle sections were obtained from mice and rats. Animals underwent experimental procedures approved by local ethics committees for other projects in which tissue sampling had already been planned. Rats were euthanized by decapitation following isoflurane anesthesia. For both models, various muscle anatomic localizations were harvested, including the hind limb (tibialis anterior and gastrocnemius) as well as the diaphragm, and snap-frozen in liquid nitrogen-cooled isopentane. Samples were then stored at −80 °C before cryosectioning.
Immunofluorescence staining protocols

Immunofluorescence staining was performed on thin frozen sections of skeletal muscle (7 to 12 μm). Briefly, sections were rehydrated with 1X PBS and fixed for 10 min (mouse samples) or 20 min (rat samples) in 4% (w/v) paraformaldehyde (PFA) in 1X PBS. After washing in 1X PBS, the sections were permeabilized with 0.1% or 0.5% (v/v) Triton X-100 in 1X PBS and then blocked either in 1X PBS-5% horse serum (mouse samples), in 10% BSA (F4/80 staining in mouse samples), or in Emerald Antibody Diluent (no. 936B-08, Sigma; rat samples). Primary antibodies were incubated overnight (ON) at 4 °C. Hoechst H33342, WGA, and secondary antibodies were incubated for 45 min at room temperature. In rat samples, nuclei were stained with the DAPI contained in the mounting medium.

The specific protocols, products, antibodies, and markers used are described in Sup. Table 2. Most of the images were acquired at 10× and 20× magnification of the whole section using the Axio Scan.Z1 (Zeiss, Germany) on several imaging platforms, including one with 7 fluorescence channels provided by an LED light source (385 nm/430 nm/475 nm/555 nm/590 nm/630 nm/735 nm). Some acquisitions were performed on the NanoZoomer S60 equipped with 5 fluorescence channels (Hamamatsu, Japan) or on the epifluorescence microscope DM6000 (Leica, Germany) using a monochrome camera.

MuscleJ2 as a plugin in Fiji/ImageJ

As Java-based public domain software implemented as a plugin for ImageJ (NIH, Bethesda, MD, USA, https://imagej.nih.gov/ij/) or Fiji [25], MuscleJ2 benefits from the facilities offered by Fiji/ImageJ for image input/output and preprocessing. The Java source code was developed in the free Eclipse IDE for Java Developers environment (Version 4.23.0, www.eclipse.org) with ImageJ internal libraries. Thanks to the Java language, memory management has been optimized, and MuscleJ2 permits either a larger batch or a larger image size than the previous MuscleJ macro. Moreover, several features have been optimized to increase the speed of Analysis by Fiber; e.g., for morphological analysis, the mean time per section with MuscleJ2 is decreased by a factor of 2, and that for result cartographies by a factor of 10.

Hardware and software requisites

The plugin has been tested on different operating systems (OS) such as Windows 7, 8, and 10/MacOS Monterey up to 12.4/Ubuntu 20.04 with the following minimum computer requirements:

• RAM: 8 GB minimum, 16 GB highly suggested
• System type: 64-bit operating system

The Fiji/ImageJ environment is required, with a maximum memory setting fixed to 75% of the computer's total memory, and the Bio-Formats plugin (https://docs.openmicroscopy.org/) must be present in the Plugins menu. The plugin has been tested on the following software versions:

• Fiji/ImageJ version: from 1.51e to 1.53t
• Java version (64 bits): from Java 1.8.0-66 to Java 1.8.0-172
• Used plugins: Bio-Formats plugins (up to release 6.6)

For more information about MuscleJ2 plugin installation and preliminary requests before starting MuscleJ2, please refer to the User's Guide Chap. I "MuscleJ2 in the Fiji/ImageJ environment."

New implementations and improvements

The main entry point of MuscleJ2 is a graphical user interface organized into five panels: Sample Data, Data Acquisition, Data Analysis by Section, Data Analysis by Fiber, and Data Cartographies, involving the implementation or improvement of several functions related to these different panels.
User interface panel 1: sample data

Option Pathophysiology: The plugin measures the mean CSA of all the fibers of the section. From this average, only fibers with a circularity between 0.45 and 1 and a fiber area between 100 µm2 and the area mean + 3 × StdDev for the Healthy option, or between 50 µm2 and the area mean + 4 × StdDev for the Damaged option, are analyzed.

User interface panel 2: analysis by section

Option ECM Area Detection: To detect the area representative of the ECM, a threshold based on the Moments method is applied, followed by a light erosion filter.

Option Vascularization: After a series of pretreatments and an intensity histogram analysis to subtract the background and detect the real intensity on the appropriate channel, the vessel borders are delimited, and the total surface covered by vessels is calculated. The Vascularization surface (%) mentioned in the "GlobalResults" table corresponds to the ratio between the total surface covered by vessels and the total surface of the section.

Option Specific Cells: The algorithm first applies a series of pretreatments on the indicated channel to quantify both circular and irregularly shaped cells on the whole section. Then, the nuclei are localized on the appropriate channel, and MuscleJ2 checks whether they overlap with the specific cells previously segmented. At this step, there are two sets of specific cells, with or without nucleus gravity-center coordinates, as mentioned in the "SpecificDetails" file.

User interface panel 3: analysis by fiber

ROI MB definition: The ROI MB is defined as the space inside the fiber ROI corresponding to one-twentieth of the minimal (Min) Feret diameter of the ROI F.

Option Fiber Intensity by ROI: To track the real positive signal inside the ROIs, after background subtraction, an automatic threshold based on intensity histogram analysis is applied to attribute to each fiber a percentage of positive intensity, defined as the number of positive-intensity pixels divided by the surface of the ROI (%intensity positivity) (Fig. 3B).

Option Fiber typing: With this option, for all fibers detected by morphological analysis, the intensity mean (Mean IF) and its standard deviation are calculated per channel corresponding to a fiber type (I, IIA, IIB, or IIX). Based on intensity histogram analysis, different positivity thresholds, depending on the fiber type, have been defined as Mean IF + StdDev for type I and IIA fibers and as Mean IF for type IIB and IIX fibers (a sketch of this classification rule is given below).

User interface panel 4: data cartographies

Option Legend: The dimensions of the legend (width and height) are proportional to the original image size. For the fiber-type legend, only the hybrid types present in the section are mentioned.

Option Scale bar: With this option, four positions for the scale bar are possible: lower right, lower left, upper right, and upper left. By default, no scale bar is shown. The length of the scale bar is fixed to 300 µm for a whole section and to 100 µm for an image crop.

Option Multi-Cartography Montage: For this option, thanks to the ImageJ function "Make a montage…", an automatic montage of the requested cartographies is created with the following parameters: if the number of cartographies is higher than 3, a new line is created to produce a readable montage of reasonable size. A title corresponding to the analysis performed is written on each cartography. For more information on the features and options of the MuscleJ2 plugin, please refer to the User Guide Chap. II "How to launch MuscleJ."
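A minimal Python sketch of the fiber-typing rule described above follows; it assumes per-channel mean fiber intensities as input, and the handling of hybrid calls is an illustrative reading of the thresholds, not the plugin's exact code.

```python
import numpy as np

def classify_fiber_types(intensity_by_channel):
    """Assign fiber types from per-channel mean intensities using the
    thresholds described above (Mean_IF + StdDev for I and IIA; Mean_IF for
    IIB and IIX). Fibers positive on several channels keep all labels
    (hybrid fibers). Illustrative sketch only."""
    thresholds = {}
    for ftype, values in intensity_by_channel.items():
        v = np.asarray(values, dtype=float)
        extra = v.std() if ftype in ("I", "IIA") else 0.0
        thresholds[ftype] = v.mean() + extra
    n_fibers = len(next(iter(intensity_by_channel.values())))
    calls = []
    for i in range(n_fibers):
        positive = [t for t, vals in intensity_by_channel.items()
                    if vals[i] > thresholds[t]]
        calls.append("+".join(positive) if positive else "undetermined")
    return thresholds, calls

# Example with three fibers and illustrative intensities
fibers = {"I": [120.0, 30.0, 25.0],
          "IIA": [35.0, 140.0, 30.0],
          "IIX": [28.0, 32.0, 90.0]}
thr, calls = classify_fiber_types(fibers)
print(calls)  # -> ['I', 'IIA', 'IIX']
```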
Metadata and user choices by batch

A text log file is created per batch with the following nomenclature: YYYYMMDD_HHMMSS_imagefoldername_BATCH_LOG.txt. It contains the information associated with the performed analysis, i.e., the metadata selected by the user in the principal dialog box, but also the analyses performed, the channel attribution, and general information linked to the batch run.

Data analysis nomenclature

A global result text file is created per batch run with the nomenclature "ImageFolderName_GlobalResults_Listofanalysisperformed.txt", where "ImageFolderName" corresponds to the name of the image folder selected at the beginning of the batch run and "Listofanalysisperformed" corresponds to the abbreviations added at the end of the global result file name. This allows the user to associate a global result table with the analysis pipeline performed by batch run.

Output files

Each distinct cell type has its own file, such as "SatCellDetails" for satellite cells, "SpecificDetails" plus the fluorescent marker name for each labeling of specific cells, or "CapillaryDetails", which replaces the "VesselDetails" file of the previous version. All these files are provided for each image analyzed per batch run (for more information, see User Guide Chap. III "Description of result files by batch"). All fiber ROIs are automatically saved in the ROI folder with the extension "_xxROI.zip", including the ROI corresponding to the section shape but also the ROIs for specific cells as well as for satellite cells.

Fig. 1 Interface of the MuscleJ2 plugin. Screenshots of the plugin dialog boxes. A The main dialog box of MuscleJ2 is divided into five sections where the user must select from a drop-down menu or check boxes. The lowercase letters in red refer to dialog box 2, in which the channels and staining information must be indicated. B The Channel Information dialog box is used to indicate the channel number for each requested analysis. Depending on the analyses selected in the MuscleJ2 dialog box, the design of this dialog box changes. In the upper panel, the lowercase letters in red refer to the section Data Analysis by Section (a, b, c); in the lower panel, they refer to the sections Data Analysis by Fiber (d, e, f) and Data Cartographies (g).

Fig. 4 Novelty of the cartography section. A Representative images of the cartographies obtained for specific cells, ECM detection, and capillaries (SB = 600 µm). B Representation of the image obtained after selection of the multicartography option, in which different cartographies are assembled on the same image. In this example, the image was stained with dystrophin, and the results are represented in the different cartographies (SB = 300 µm).
Flavonoid Derivatives as New Potent Inhibitors of Cysteine Proteases: An Important Step toward the Design of New Compounds for the Treatment of Leishmaniasis

Leishmaniasis is a neglected tropical disease, affecting more than 350 million people globally. However, there is currently no vaccine available against human leishmaniasis, and current treatment is hampered by high cost, side-effects, and painful administration routes. It has become a United Nations goal to end leishmaniasis epidemics by 2030, and the multitarget drug strategy emerges as a promising alternative. Among multitarget compounds, flavonoids are a renowned class of natural products, and a structurally diverse library can be prepared through organic synthesis and tested for biological effectiveness. In this study, we synthesised 17 flavonoid analogues using a scalable, easy-to-reproduce, and inexpensive method. All synthesised compounds presented an impressive inhibition capacity against the rCPB2.8, rCPB3, and rH84Y enzymes, which are highly expressed in the amastigote stage, the target form of the parasite. Compounds 3c, f12a, and f12b were found to be effective against all isoforms. Furthermore, their intermolecular interactions were also investigated through a molecular modelling study. These compounds were highly potent against the parasite and demonstrated low cytotoxic action against mammalian cells. These results are pioneering, representing an advance in the investigation of the mechanisms behind the antileishmanial action of flavonoid derivatives. Moreover, the compounds have been shown to be promising leads for the design of other cysteine protease inhibitors for the treatment of leishmaniasis.

Introduction

Leishmaniasis comprises a group of vector-borne infectious diseases with a broad clinical spectrum. Classified as a neglected disease by the World Health Organisation (WHO), leishmaniasis affects more than 350 million people worldwide [1,2]. Although it represents a serious public health problem, there is still no vaccine for humans. In addition, the drugs used in therapy are expensive and highly toxic, they cause numerous side-effects, and the routes of administration are painful. Consequently, adherence to treatment is impaired, further strengthening the disease cycle, especially in developing countries [3]. The serious situation involving leishmaniasis resulted in the United Nations (UN) setting a goal in 2020 to combat this group of diseases: sustainable development goal (SDG) 3.3 aims to end the epidemics of several diseases, including neglected tropical diseases (NTDs), by 2030 [4]. This objective has made the discovery and development of new antileishmanial drugs that are more effective, cheaper, easily obtainable, and capable of being administered via alternative routes an urgent need. To achieve this goal, different drug discovery approaches need to be used. The multitarget drug strategy has emerged in the last few decades; this approach is based on the complexity of the pathologies and considers that single-target drugs are insufficient to achieve the desired therapeutic effects [5].
Recently, the multitarget drug strategy was reported as a tool to accelerate the discovery of safer, more active, and patient-compliant drugs for the treatment of leishmaniasis [6]. In countries rich in biodiversity, such as Brazil, the use of secondary metabolites from natural sources as new prototypes is a compelling alternative. Flavonoids stand out as they comprise one of the most diverse groups of secondary metabolites, marked by their wide distribution in plants and different therapeutic potentials [7]. These compounds are structurally versatile due to their chemical core and are considered important prototypes for the development of multitarget drugs (Figure 1). It is essential to note that flavonoids have shown in vitro and in vivo antileishmanial activity [8,9]. Flavonols such as quercetin and fisetin inhibit the arginase enzyme, as well as modulate the host's immune response against the parasite, resulting in low patient toxicity [8]. However, the process of isolating and purifying these compounds is expensive and time-consuming. These methodologies generally require high-cost equipment, result in low yields, and use toxic solvents, reducing the sustainability of the process. In line with our interest in discovering and developing new sustainable, efficient methodologies for biologically active compounds and a patient-compliant alternative for leishmaniasis treatment [10-17], our research group aimed to obtain natural-product-based bioactive compounds with multitarget properties. To achieve this purpose, we designed a scalable, easy-to-replicate, and inexpensive synthetic route to obtain flavonol and chalcone analogues. The antileishmanial activity was elucidated using in vitro and in silico methods, and the cytotoxicity was measured to determine the selectivity index (SI).

Chemistry

Our retrosynthetic analysis was based on green chemistry principles. Thus, we used mild reaction conditions and inexpensive catalysts and reagents (Figure 2). The chalcone analogue synthesis consists of a Claisen-Schmidt reaction, which is a condensation of aldehydes and carbonyl compounds, leading to α,β-unsaturated ketones in the presence of a base or Lewis acid [18]. In particular, the use of a base as a catalyst provides higher yields of flavonol-like compounds. The intramolecular H-bond of o-hydroxyacetophenones increases the acidity of the α-hydrogen and, in the presence of a base, aids in the generation of a strongly attacking enol anion [19]. Chalcone analogues were used in the synthesis of flavonol-like molecules through the Algar-Flynn-Oyamada reaction, in which a chalcone undergoes an oxidative cyclisation under alkaline conditions to form a flavonol [20].
To obtain inexpensive bioactive compounds, we selected low-cost benzaldehydes and acetophenones. The number of synthetic compounds and the structural variations were designed to be statistically relevant for the elucidation of the structure-activity relationship (SAR). Among the acetophenones, we also used halogenated o-hydroxyacetophenones, as these substituents are reported for their high and selective antimicrobial actions, including antileishmanial activity [21,22]. Previously described aspects of the synthetic process were reviewed to obtain moderate to high yields of the target compounds (Table 1). Our protocol can also be applied to significantly larger reaction mixtures, and this scale-up also gives good yields. Higher yields of chalcones were achieved when halogenated o-hydroxyacetophenones were used. Substituting a halogen atom in the phenolic ring results in an increase in the acidity of the hydroxyl group, which further favours the formation of the enol anion compared with unsubstituted o-hydroxyacetophenones [23].
Although chalcones exist as trans (E) or cis (Z) isomers, the E isomer is more stable from the thermodynamic perspective [24]. The configuration of the Z isomer is unstable as a result of the strong steric effects between the carbonyl group and the A-ring, making the E isomer the predominant configuration obtained in our study. The compounds were unambiguously characterised by NMR spectra. Chalcones 4a-4c presented most of the 13C-NMR signals as doublets due to 1J, 2J, and 3J coupling of fluorine (spin 1/2) with the respective carbons in the aromatic ring. Compared to the starting materials, the chalcones presented CH group signals at 7.30-7.90 ppm, with coupling constants (J) of 15.3 Hz, confirming the formation of the E isomer. These signals were not observed in the flavonol spectra. Furthermore, compounds f12a-f13c presented an OH group signal at 9.30-9.60 ppm, indicating the loss of the 2-hydroxy group of the starting material (chalcone), which is classically observed at a larger chemical shift due to the intramolecular H bond with the carbonyl group. The NMR data were compared with the literature, and the structures of the synthesised compounds were confirmed [25-37]. The spectra are available in the Supplementary Material.

The discovery and development of new antileishmanial drugs are usually directed through phenotypic or target-based approaches [2]. The target-based strategy relies on previous evidence of the action of compounds on specific pharmacological targets. Driven by advances in molecular biology and the urgency to discover new effective drugs, this approach has been the dominant tool in the last three decades [38]. However, most of the molecules designed by the target-based strategy only demonstrated antipromastigote effects, although the amastigote is the target form of the parasite, since this stage occurs in mammalian host cells [39]. Leishmania proteases stand out in this context. In particular, cysteine protease B (CPB) expression is elevated in the amastigote stage and plays an important role in the interaction between the parasite and its mammalian host [40]. The results of the inhibition of these proteases indicate their influence on macrophage infection and amastigote survival in host cells, as well as on modulation of the host's immune response [41]. Isolated bioflavonoids and their semisynthetic derivatives have demonstrated satisfactory activity against the isoforms rCPB2.8 and rCPB3 [42]; thus, these enzymes represent promising therapeutic targets for our study.

In the screening of compounds against rCPB3, it was found that the chalcone analogues 1b, 2a, and 2b reduced the enzymatic activity by 67.01%, 70.92%, and 62.55%, respectively, at a concentration of 5 µM. Flavonols were unable to inhibit the enzyme by more than 50% at concentrations of 1 µM or 5 µM (Figure 3B). In the inhibition of rH84Y with compounds at 5 µM, it was observed that chalcones 1b, 1c, and 2c reduced activity by 73.61%, 82.29%, and 70.77%, respectively, while the flavonoid f12c reduced activity by 56%. Only compound 2c was able to inhibit more than 50% at 1 µM (Figure 3C). Since all chalcone and flavonol analogues inhibited the three enzymes to some degree at a concentration of 5 µM, all were subjected to assays to determine their inhibitory potential (IC50) against rCPB2.8 and its isoforms.
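For readers unfamiliar with how IC50 values such as those reported below are extracted from dose-response data, here is a generic Python sketch using a Hill inhibition model fitted with scipy; it is a textbook approach under illustrative assumptions, not the assay-analysis pipeline used by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_inhibition(conc, ic50, hill):
    """Fractional enzyme activity remaining at inhibitor concentration `conc`."""
    return 1.0 / (1.0 + (conc / ic50) ** hill)

def fit_ic50(concs_uM, activity_fraction):
    """Least-squares fit of a Hill inhibition curve to dose-response data."""
    popt, _ = curve_fit(hill_inhibition, concs_uM, activity_fraction,
                        p0=[np.median(concs_uM), 1.0], maxfev=5000)
    return popt  # (IC50 in µM, Hill coefficient)

# Example with synthetic data generated around IC50 = 5 µM
c = np.array([0.5, 1, 2, 5, 10, 20, 50])
a = hill_inhibition(c, 5.0, 1.2)
print(fit_ic50(c, a))  # recovers approximately (5.0, 1.2)
```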
The difference in amino-acid sequence may have affected the inhibitory potential of 4c. The enzyme rH84Y differs from rCPB3 by a single amino-acid residue [43]. Since 4c was 8.74 times more potent in inhibiting rH84Y compared to rCPB3, it is possible to infer that the substitution of histidine for tyrosine was the variation that most affected the inhibitory potential of 4c. In the evaluation of the inhibitory capacity of flavonoids against rCPB2.8, the compounds f12a (IC50 = 4.72 ± 0.38 µM), f12b (IC50 = 5.23 ± 0.32 µM), and f13a (IC50 = 1.88 ± 0.07 µM) showed the highest inhibitory potentials. The methoxy group added to the C4 carbon in the f13a structure seems to potentiate the enzyme inhibition. On the other hand, the presence of dimethylamine on C4 in f13c disfavoured the inhibition of rCPB2.8 by 7.62-fold. Compounds f12a (IC50 = 7.71 ± 0.78 µM) and f12b (IC50 = 7.06 ± 0.63 µM) showed the best IC50 values on rCPB3. Again, it was found that the presence of dimethylamine did not favour rCPB3 inhibition, as compounds f12c (IC50 = 16.67 ± 2.53 µM) and f13c (IC50 = 29.75 ± 2.08 µM) were the least effective. Regarding the inhibition of rH84Y, the flavonoids f12a (IC50 = 3.85 ± 0.27 µM) and f12b (IC50 = 8.85 ± 0.33 µM) presented the best inhibitory potential, and compounds f12c and f13c had the lowest inhibitory capacity (Table 3). On the basis of these data, it can be observed that the presence of dimethylamine on the C4 carbon negatively impacts the inhibition of the three enzymes. The high inhibition capacity of all compounds against the three enzyme isoforms confirms their multitarget properties. Studies have shown the formation of larger lesions in BALB/c mice that received amastigotes expressing only CPB2.8 compared to those with amastigotes deficient in all three isoforms [44]. This result confirms that inhibition of all three isoforms is essential when developing a new effective antileishmanial compound based on the inhibition of the CPB enzymes.
Among the compounds under study, the chalcone 3c and the flavonols f12a and f12b were the most effective simultaneously against all enzymes tested, making them good candidates for prototypes. Therefore, these compounds were selected to evaluate the inhibition mechanisms of rCPB2.8, rCPB3, and rH84Y. By evaluating the inhibition mechanisms of compounds 3c, f12a, and f12b on rCPB2.8, we verified the parabolic profile of the slope vs. [inhibitor] and intercept vs. [inhibitor] replots (Figures S1D, S2D, and S3D, Supplementary Materials). This means that two molecules participate in the inhibition mechanism: initially, the first molecule binds, which may favour or impair the binding of the second. This molecular behaviour establishes the cooperativity as positive or negative, respectively. To determine the affinity constants of the compounds, it was necessary to perform the linearisation of the parabolas, obtaining the replots 1/KSlope vs. [inhibitor] and 1/KIntercept vs. [inhibitor]. The results of the inhibitory mechanism of rCPB2.8 by f12a demonstrated cooperative inhibition, as indicated by the parabolic replots. Their respective linearisation (Figure S2C-F, Supplementary Materials) allowed the determination of Ki = 12.4 ± 1.4 µM and αKi = 10.7 ± 1.1 µM, with α being ~1 when the standard deviation is considered. The first molecule of f12a therefore had the same affinity for binding to the free enzyme E as to the ES complex. The values of βKi = 3.65 ± 0.39 µM and γKi = 0.25 ± 0.03 µM defined β = 0.29 and γ = 0.02 (Table 4). Therefore, binding of the first molecule favoured the formation of the IEI and IESI complexes 3.4- and 50-fold, respectively. Parabolic replots were also observed for the inhibition of rCPB3 by compound 3c (Figure S4B, Supplementary Materials); the determined constants were Ki = 15.4 ± 3.0 µM, αKi = 141 ± 27 µM, βKi = 2.21 ± 0.16 µM (β = 0.14), and γKi = 0.77 ± 0.14 µM (γ = 0.05) (Table 4). The α value (α = 9.15) shows that 3c preferentially binds to the free enzyme. With the binding of the compound to the ES complex, forming ESI as defined by the γ factor, the formation of the quaternary IESI complex is favoured 20-fold. On the other hand, the formation of the EI complex by binding of the first molecule of 3c favours the formation of IEI sevenfold. Therefore, 3c presented a non-competitive inhibition mechanism with positive cooperativity, while f12a and f12b presented a simple linear non-competitive inhibition mechanism.
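As a quick arithmetic check, the cooperativity factors quoted above follow directly from the ratios of the fitted constants to Ki (reading βKi and γKi as β·Ki and γ·Ki, the usual convention):

```latex
% f12a on rCPB2.8:
\beta  = \frac{\beta K_i}{K_i} = \frac{3.65}{12.4} \approx 0.29 \quad (1/\beta \approx 3.4\text{-fold favouring of IEI})
\gamma = \frac{\gamma K_i}{K_i} = \frac{0.25}{12.4} \approx 0.02 \quad (1/\gamma \approx 50\text{-fold favouring of IESI})
% 3c on rCPB3:
\alpha = \frac{141}{15.4} \approx 9.15, \qquad
\beta  = \frac{2.21}{15.4} \approx 0.14 \;(1/\beta \approx 7), \qquad
\gamma = \frac{0.77}{15.4} \approx 0.05 \;(1/\gamma = 20)
```

These ratios reproduce the 3.4-, 50-, seven-, and 20-fold figures reported above.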
Molecular Modelling Study
The results on the inhibitory capacity and mechanism of action of the obtained chalcone and flavonol analogues against these isoforms are unprecedented. However, previous work in the literature may shed light on the potential binding mode of the compounds at the active site of the CPB isoforms. Leishmania mexicana type B cysteine proteases are cathepsin L-like enzymes. The inhibition of these enzymes by chalcones is already known. Studies by Raghav and Kaur found that the catalytic CYS 29 thiolate of cathepsin L was able to attack the nucleophilic sites of chalcones [45]. In other recent work, chalcones demonstrated in vitro antileishmanial activity on the amastigote and promastigote forms of L. infantum. The suggested mechanism of action was the inhibition of a pro-cathepsin L-like protease through the formation of hydrogen bonds between the amino acid TRP 151 of the active site and the carboxyl group of the chalcones [46]. Isolated flavonoids were also able to inhibit cathepsins L and B in previous studies [47]. Other classes of small molecules have also been described as inhibitors that target multiple cathepsin L-like cysteine proteases, some with overlapping antiparasitic activity [40]. Among them, vinyl sulphones have been shown to be highly potent and selective inhibitors of cathepsins L and B and are also considered antiparasitic prototypes [48,49]. Interestingly, a remarkable 3D similarity is demonstrated by the structural overlap of a crystallographic analogue of vinyl sulphone and compounds 3c, f12b, and f12c, suggesting a compatible binding mode at the active site of the enzyme (Figure 4A,B, respectively). In particular, the overlap with the vinyl sulphone was useful in understanding the non-competitive inhibition with positive cooperativity mechanism. Therefore, the crystal structure of a papain-like cysteine protease bound to the vinyl sulphone derivative [50] was used as a template in the homology modelling study. The isoforms rCPB2.8, rCPB3, and rH84Y have a small number of modifications in their active sites (Table S1, Supplementary Material). However, the few amino-acid variations between these isoenzymes are important in modifying substrate specificities [43]. This change may have modified either the catalytic or an allosteric site, even at a distance. This type of event was observed in substitutions distant from the active site that affect the catalytic activity of CheZ and the binding of CheYp, with possible propagation of structural or dynamic disturbance [51]. Interestingly, even with the amino-acid residue variations, chalcone 3c showed the same mechanism of action on the three isoforms. To investigate its possible binding mode at the active site of the enzymes, the energy value of each pose indicated by the 3D overlap at rCPB2.8 was calculated after geometry optimisation (Table S2, Supplementary Materials). The pose with the lowest potential energy was also used to investigate the intermolecular interactions at the active sites of rCPB3 and rH84Y. Despite having the same mechanism of action, the calculated binding free energy values of compound 3c at the catalytic site of the isoforms were remarkably different (Table 5). In particular, the chalcone had the most promising binding free energy result on rCPB3 (−24.70 kcal·mol−1). These values reinforce the hypothesis of a more favourable interaction between compound 3c and this isoform and corroborate the in vitro results. In contrast, the variation of amino-acid residues resulted in a different mechanism of action for flavonols f12a and f12b across the three isoforms.
This change is due to the difference in the negative charge distribution of these residues, which necessarily results in significant changes in the electrostatic potential on the surface of the isoenzymes, in addition to providing the parasite with a series of hydrolytic activities [43,52]. Therefore, the change in the electrostatic potential on the surface of the isoenzymes promoted changes in the inhibition mechanisms, as well as differences in the affinity constants. Following the simple linear non-competitive inhibition mechanism, the potential energy of the different positions of f12a at the binding site of rCPB3 was promising (Table S3, Supplementary Materials). The binding pose with the lowest potential energy was used to investigate the intermolecular interactions of the flavonol analogues at the active sites of rCPB3 and rH84Y, since the mechanism of action was the same for these two isoforms. As also occurred with compound 3c, the calculated binding free energy values of f12a at the binding site of the isoforms were notably different, with the lowest value for rCPB3 (−5.65 kcal·mol−1). This result corroborates the in vitro assays, which demonstrated a higher affinity constant for this isoform (Table 5). Despite the structural similarity, compound f12b had a lower binding free energy in the simulations of the active site of rH84Y (Table 5). The calculated energy values also demonstrated a great difference among the three isoforms (Table 5), which corroborates the considerably lower affinity constant for rCPB2.8. Analysing the output of the simulations of compound 3c, the variations of amino-acid residues at the binding sites of the isoforms resulted in a similar occupation of the binding pockets but noticeable differences in the intermolecular interactions (Figure 5A,C,E). At the binding site of rCPB2.8, 3c mainly made hydrophobic interactions (Figure 5B). However, the substitution of ASP 186 by ASN 186 resulted in the formation of a hydrogen bond, which was also observed with GLY 144 (Figure 5D). This strong intermolecular interaction was conserved at the active site of rH84Y; however, the hydrogen bond with GLY 144 was not observed at the catalytic site of this isoform (Figure 5F). The differences in intermolecular interactions, added to the binding free energy values, corroborate the affinity constants obtained by the in vitro assays.
At the active site of rCPB2.8, the molecules of flavonol f12a showed substantial occupation of the binding pockets and made hydrogen bonds with the amino-acid residue ASP 189 (Figure 6A,B). The number of hydrogen bonds increased at the binding site of rCPB3, since the compound made interactions with TRP 310 and GLY 191 (Figure 6D). Previous studies have already discussed the interaction between flavonoid derivatives and the GLY amino-acid residue of the cathepsin L catalytic site, as well as its importance in stabilising the active compound at the binding site of the enzyme [45]. The occupation of f12a at the binding pockets of rCPB3 and rH84Y was very similar (Figure 6C,E). This resemblance was reflected in the interactions with the amino-acid residues of the two isoforms (Figure 6F). The binding poses of f12b with the rCPB isoforms were very similar to those found for f12a and highly resembled the binding poses at each isoform, in line with the simple linear non-competitive inhibition mechanism (Figure 7A,C,E). At the active site of rH84Y, the proximity to the amino-acid residue GLY 191 (3.953 Å) may be related to a better interaction with this isoform (Figure 7F).
Antipromastigote Assay and Cytotoxicity Elucidation
Although the flavonoid derivatives have demonstrated promising enzyme-inhibitory potential, the development of a drug candidate for leishmaniasis treatment depends on several pharmacological aspects. Among them, the cytotoxicity of compounds is crucial for the discovery of a new antileishmanial prototype, since high toxicity still represents a serious limitation of the drugs used in current therapy [53]. The compounds need to be highly active against the Leishmania parasite while remaining safe to host cells. This pharmacological characteristic is measured by the SI, defined as the ratio of the 50% cytotoxic concentration in mammalian cells (GL50) to the half-maximum inhibitory concentration on the parasites (IC50). However, the determination of IC50 by in vitro screening tests is a challenge with the Leishmania parasite itself.
The Leishmania lifecycle requires the presence of a sand fly vector and a mammalian host, which results in two distinct morphological forms (promastigote and amastigote). Although the amastigote form is found in host cells and is considered the target form of the parasite, determination of its IC50 is a time-consuming and laborious procedure that is not suitable for a large-scale screening method [54]. In general, exploratory screening methods designed to accelerate the testing of many compounds are performed on the promastigote form [54,55]. Therefore, to determine the SI of all flavonoid derivatives, we used the half-maximum inhibitory concentration on promastigote forms. All 17 flavonoid derivatives had moderate to low solubility in water. This physical property represented an obstacle in determining the IC50 of the compounds, as the test occurred in an aqueous medium. Consequently, the highest concentration tested (10 µg/mL) was lower than that generally described in the literature (50 µg/mL) [56]. Compounds with IC50 greater than 10 µg/mL were considered nonactive. However, since even the highest concentration tested was lower than that reported for compounds considered active against the parasite, these molecules may not be excluded as potential antileishmanial prototypes. Among the tested molecules, the chalcone analogues stood out. Compounds 2a, 3a, and 4c were more active than the standard drug pentamidine (IC50 = 0.71, 0.60, and 0.50 µM, respectively). Meanwhile, f12c had the highest potency of the flavonol derivatives against the parasite (IC50 = 0.73 µM). The results indicated that the substitution of a chlorine atom in the phenolic ring of the chalcone and flavonol derivatives increased the activity against the Leishmania promastigote form (Table 6). Compounds 3a and 4c, the most active against the promastigote form, were also noncytotoxic against mammalian cells (Figure S10, Supplementary Materials), with an optimal selectivity index (SI > 1752.48 and 1443.10, respectively) when compared to first-line drugs such as pentamidine and amphotericin B [56] (Table 6). It is important to emphasise that all the compounds tested demonstrated low cytotoxicity to mammalian cells, resulting in high SI values. The lipophilicity (logP) and water solubility (logS) properties of the flavonoid derivatives were also measured by in silico analysis. The importance of these chemical characteristics was first discussed by Lipinski et al. through the publication of the rule of five. In that study, among the physicochemical characteristics of a set of standard drugs, clogP ≤ 5 was postulated as being necessary for an ideal prototype [57]. Later, the development of an in silico ADME tool, based on the analysis of more than 2000 standard drugs, indicated that more than 80% of the drugs on the market have an (estimated) logS value greater than −4 [58]. All 17 flavonoid derivatives had logS values close to −4 and logP ≤ 5, following the characteristics postulated by the rule of five. Although further studies against the amastigote form need to be carried out, the ADME results, added to the biological potential, indicate that all the synthetic flavonoids are drug-like and can be considered promising prototypes for the treatment of leishmaniasis.
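The logP and logS values above were obtained with SwissADME; as a rough local illustration of the same kind of rule-of-five check, the logP estimate can be reproduced with an open toolkit such as RDKit (our choice here, not the authors' tool). The SMILES string below is a generic 2'-hydroxychalcone scaffold used as a stand-in, not one of the paper's compounds:

```python
# Sketch of a local rule-of-five logP check with RDKit (illustrative only;
# the paper's values come from SwissADME, and this SMILES is a stand-in).
from rdkit import Chem
from rdkit.Chem import Crippen

smiles = "O=C(/C=C/c1ccccc1)c1ccccc1O"  # generic 2'-hydroxychalcone core
mol = Chem.MolFromSmiles(smiles)

logp = Crippen.MolLogP(mol)  # Wildman-Crippen logP estimate
print(f"logP = {logp:.2f}; rule-of-five logP <= 5: {logp <= 5}")
```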
1 LogP, octanol/water partition coefficient measured by SwissADME [59]; 2 LogS, expressed as log (g/100 g water), measured by SwissADME [59]; 3 IC50, half-maximum inhibitory concentration on promastigote forms; 4 n.a., not active (IC50 > 10 µg/mL); 5 GL50, concentration that inhibited cell growth by 50%; 6 SI (selectivity index), GL50 in mammalian cells/IC50 in extracellular promastigotes; 7 n.d., not determined. The data are representative of three independent experiments.
Conclusions
Our synthetic protocols confirmed that the methods are versatile, scalable, easy to reproduce, and inexpensive for obtaining high yields of flavonoid derivatives. The compounds demonstrated the multitarget properties intended by our study, inhibiting all tested rCPB isoforms of L. mexicana. All chalcone and flavonol analogues inhibited the three enzymes to some degree at a concentration of 5 µM and were subjected to assays to determine their inhibitory potential (IC50) against rCPB2.8 and the other isoforms. Regarding the activity of the chalcones, the presence of chlorine attached at carbon C3 or C4, fluorine at C3, or hydroxyl at C6 seems not to affect the inhibitory capacity of the compounds toward rCPB2.8. However, the presence of the 1,3-dioxolane group, in general, did not favour inhibition of this enzyme. Interestingly, the chalcones bearing a chlorine atom at C3 (3a and 3c) or a methoxy group at C4 (3a and 4a) had the lowest IC50 values on rCPB3, while compound 4c had the lowest IC50 on the rH84Y isoform. In the evaluation of the inhibitory capacity of the flavonoids, it was observed that the presence of dimethylamine on the C4 carbon negatively impacted the inhibition of the three enzymes. Among the analogues studied, compounds 3c, f12a, and f12b stood out for being effective against all isoforms simultaneously. Interestingly, the in vitro study of the mechanism of cysteine protease inhibition showed that small variations of amino-acid residues between the rCPB isoforms were able to change the mechanism of action and binding mode of the compounds. These findings were confirmed by the in silico investigation, which demonstrated the formation of strong intermolecular interactions between the compounds and the active site of each enzyme. The compounds were highly potent and demonstrated low cytotoxic action against mammalian cells, showing that the tested molecules are highly selective. In addition, the calculated logP and logS values showed that the compounds have ADME properties compatible with those observed in standard drugs. The antileishmanial activity of all flavonoid analogues still needs to be elucidated against the amastigote form in future studies. However, our results show important progress in the investigation of the antileishmanial action of synthetic flavonoid derivatives and reinforce their potential as prototypes for the design of other cysteine protease inhibitors for the treatment of leishmaniasis.
Chemistry
All reagents were purchased from Sigma-Aldrich® and were analytical grade, used without further purification. Reactions were monitored by TLC using Merck 60 F254 precoated silica plates, and spot visualisation was achieved with UV light (254-360 nm), molybdophosphoric acid (10% w/v), and a vanillin-sulphuric acid solution (0.5 g vanillin in 100 mL of sulphuric acid/methanol (40:10)). All products were purified by recrystallisation from ethanol (EtOH).
The solvents used in the reactions and recrystallisations were purified and dried according to procedures found in the literature [60]. A mixture of hexane and ethyl acetate in a 1:2 (v/v) proportion was used as the mobile phase to measure the retention factor (Rf) values of all purified compounds. All melting points were determined using a Quimis® (Brazil) model Q340S instrument. The 1H- and 13C-NMR spectra were recorded on a Bruker Avance DPX-300 or Bruker Ascend 500 spectrometer. Chemical shifts are reported as δ values (ppm) referenced to the residual solvent (CDCl3 at δ 7.24 ppm, DMSO-d6 at δ 2.50 ppm). Peak multiplicities are abbreviated as follows: s (singlet); d (doublet); dd (doublet of doublets); td (triplet of doublets); t (triplet); dt (doublet of triplets); m (multiplet). The coupling constants (J) are quoted in hertz and reported to the nearest 0.1 Hz.
General Procedure for the Synthesis of Chalcone-Like Compounds by the Claisen-Schmidt Reaction (1a-4c)
The synthesis procedure followed the Claisen-Schmidt reaction methodology described in the literature, with modifications [33]. An aqueous solution of NaOH (3 M, 1.6 mL) was added to a solution of the aromatic ketone (1 mmol) in EtOH. An ethanolic solution of the substituted benzaldehyde was then added dropwise to the reaction mixture. The mixture was stirred at room temperature for 24 h and then cooled. The reaction mixture was acidified with concentrated HCl (37%) to pH 2 in an ice bath under vigorous stirring. The precipitate formed was filtered, washed with cold water, and purified by recrystallisation from ethanol.
General Procedure for the Synthesis of Flavonol-Like Compounds by the Algar-Flynn-Oyamada Reaction (f12a-f13c)
In a round-bottom flask, an aqueous solution of NaOH (1 M, 2 mL) was added to 1 mmol of chalcone in EtOH (5 mL). The solution was cooled until an ice-cold suspension formed. An aqueous solution of H2O2 (35%, 250 µL) was added to the ice-cold suspension; the mixture was allowed to warm to room temperature and stirred for 1-2 h. Distilled water (3 mL) was then added. The reaction mixture was acidified with concentrated HCl (37%) to pH 2 in an ice bath under vigorous stirring. The precipitate formed was filtered, washed with cold water, and purified by recrystallisation from ethanol.
Screening the Inhibitory Activity of Compounds
The screening assays for the inhibitory activity of the compounds against the rCPB2.8, rCPB3, and rH84Y enzymes were performed using 100 mM sodium acetate buffer containing 5 mM EDTA, 100 mM NaCl, 0.01% Triton X-100, and 20% glycerol, at pH 5.5. Enzyme aliquots were pre-incubated with 5 mM DTT for 5 min at 37 °C. After checking the initial rate of the reaction corresponding to the control, the enzymatic rate was measured at two concentrations of the compounds, 1 µM and 5 µM. Enzyme activity was monitored by hydrolysis of the substrate Z-FR-AMC, measuring the fluorescence at λEx = 360 nm and λEm = 480 nm on a Hitachi F2700 spectrofluorometer and obtaining the rate values in UAF/min (arbitrary fluorescence units per minute).
Determination of IC50 Values for Inhibitors
Cysteine proteases rCPB2.8, rCPB3, and rH84Y were assayed in 100 mM sodium acetate buffer containing 5 mM EDTA, 100 mM NaCl, 0.01% Triton X-100, and 20% glycerol, at pH 5.5. The enzymes were pre-incubated in the presence of 5 mM DTT for 5 min at 37 °C in a 1 mL final volume with constant stirring.
Enzyme activities were monitored using the fluorogenic probe Z-FR-AMC (9.25 µM final concentration), and the fluorescence was monitored by spectrofluorometry using an F2700 fluorometer (Hitachi, Tokyo, Japan) set to λEx = 360 nm and λEm = 480 nm. The IC50 evaluation was performed using a progressive increase in the concentration of the compounds; the IC50 values were calculated by nonlinear regression, and the data were analysed with Grafit 5.0.13 software using Equation (1).
Enzyme Kinetics and Determination of the Mechanism of Inhibition
Studies of the inhibition kinetics of the cysteine proteases rCPB2.8, rCPB3, and rH84Y were performed at different concentrations of Z-FR-AMC in the presence and absence of the compounds, using 100 mM sodium acetate buffer containing 5 mM EDTA, 100 mM NaCl, 0.01% Triton X-100, and 20% glycerol, at pH 5.5. Aliquots of the enzymes were pre-incubated with 5 mM DTT for 5 min at 37 °C. For every kinetic measurement, the compounds were pre-incubated with each enzyme for 10 min before adding the substrate. All kinetic assays were performed in duplicate. Inhibition constants were determined using different equations, depending on the inhibition mechanism. The assumed KM values of rCPB2.8, rCPB3, and rH84Y for Z-FR-AMC were 3.23 µM, 2.99 µM, and 2.80 µM, respectively. The activity rate and substrate concentration data generated rectangular hyperbolic profiles that were linearised using the Lineweaver-Burk double-reciprocal plot. The replot profiles of the slope vs. [inhibitor] and intercept vs. [inhibitor] provided the Ki and αKi parameters, respectively. If the replots present a parabolic profile, the system involves the participation of a second molecule of the compound, which can bind the enzyme complexed with the first compound (EI), forming IEI, and bind to the ESI complex, forming IESI. Linearisation is then required, generating the 1/KSlope vs. [inhibitor] and 1/KIntercept vs. [inhibitor] replots according to Equations (2) and (3), where Ki is the inhibitory constant, [I] is the concentration of inhibitor, α is the factor of formation of the ESI complex (enzyme-substrate-inhibitor complex), β is the factor of formation of the IEI complex (inhibitor-enzyme-inhibitor complex), and γ is the factor of formation of the IESI complex (inhibitor-enzyme-substrate-inhibitor complex), according to the general mechanism (Figure 8). Furthermore, β and γ measure the cooperativity between the binding of the first and second inhibitor molecules to form IEI and IESI, respectively.
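The exact Grafit equation used for the IC50 determination is not reproduced in the text; a minimal sketch of an equivalent nonlinear regression, assuming a standard single-site dose-response model (our assumption, not necessarily the authors' Equation (1)) and hypothetical rate data, is:

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_response(conc, v0, ic50):
    # Standard single-site inhibition model: v = v0 / (1 + [I]/IC50).
    # A common stand-in; the exact Grafit equation is not shown above.
    return v0 / (1.0 + conc / ic50)

# Hypothetical rates (UAF/min) at increasing inhibitor concentrations (uM)
conc = np.array([0.1, 0.5, 1.0, 2.5, 5.0, 10.0, 25.0])
rate = np.array([95.0, 88.0, 78.0, 60.0, 45.0, 30.0, 15.0])

(v0, ic50), _ = curve_fit(dose_response, conc, rate, p0=[100.0, 5.0])
print(f"fitted v0 = {v0:.1f} UAF/min, IC50 = {ic50:.2f} uM")
```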
Molecular Modelling
Compounds 3c, f12a, and f12b had their 3D structures drawn using the program MarvinSketch 16.9.5 (ChemAxon Ltd., Budapest, Hungary). The optimisation was carried out using the PM7 semiempirical method incorporated in the software MOPAC2016 [61]. A pH of 7.4 was considered for the definition of charges. The three-dimensional structure of rCPB2.8 was obtained through the homology modelling methodology using the Swiss-Model program [62]. We used the 3D structure of a papain-like cysteine protease obtained from the Protein Data Bank (PDB ID: 1F2A) as a template [50] and the primary structure of rCPB2.8 as the target sequence. The choice of the crystal was based on its similarity with rCPB2.8, as well as that between the tested compounds and the crystallographic ligand. To determine the potential binding modes at the active site of rCPB2.8, different binding poses were obtained on the basis of the overlay between the tested molecules and the crystallographic ligand. For the compounds with non-competitive inhibition with a positive cooperativity mechanism, we manually added a second molecule in different positions. The 3D structures of rCPB3 and rH84Y were obtained according to the amino-acid residue differences at their active sites, as described in the literature, using the program UCSF Chimera [63]. All the binding poses were further optimised geometrically. Geometry optimisations were performed using the GROMACS 2018 package [64] and the CHARMM force field [65]. The ligand topology was obtained from the SwissParam server [66], and the properties of the solvent were mimicked with the TIP3P water model. A cubic box was used to guarantee a space of 1.2 nm between the protein and the box walls, and ions were added at physiological concentration (0.15 M) in order to neutralise the system charges. The energy optimisation steps were performed using the steepest descent algorithm followed by the conjugate gradient algorithm. The convergence criterion was a maximum force of 50 kJ·mol−1·nm−1 on the atoms. The potential binding energy was measured, and the binding mode with the best result was used to analyse the interactions with the binding site of all isoforms.
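A minimal sketch of the corresponding energy-minimisation parameters is shown below; only the two-stage steepest-descent/conjugate-gradient scheme and the force criterion come from the text, while every other value (step size, step count, and the kJ·mol−1·nm−1 unit for emtol) is an assumption on our part:

```
; em-steep.mdp - stage 1: steepest descent (sketch; values assumed)
integrator = steep
emtol      = 50.0      ; stop when max force < 50 kJ mol^-1 nm^-1 (assumed unit)
emstep     = 0.01      ; initial step size, nm (assumed)
nsteps     = 50000     ; upper bound on minimisation steps (assumed)

; em-cg.mdp - stage 2: conjugate gradient, restarted from the stage-1 output
; integrator = cg      ; same emtol, continuing from the steepest-descent minimum
```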
Cytotoxicity on Mammalian Cells
The cytotoxic effect of the test samples was evaluated on NIH/3T3 fibroblasts obtained from the Rio de Janeiro Cell Bank (Brazil). Cells were seeded in 96-well plates (5 × 10⁵ cells/well). After 24 h for cell attachment, the cells were incubated for 48 h with the test samples at 0.25-250 µg/mL, in triplicate. The tested compounds were dissolved in DMSO (Sigma-Aldrich® SP/Brazil) while ensuring that the final concentration of the latter (0.25% at the highest sample concentration) did not interfere with cell viability. Doxorubicin (0.025-25 µg/mL) was used as a positive control. After 48 h of exposure, cells were fixed by adding 20% trichloroacetic acid and were subsequently stained with sulphorhodamine B (0.1%) diluted in acetic acid [67]. Absorbance values were read on a PT-READER microplate instrument (Thermoplate®), and growth percentages were calculated according to procedures in the literature [68]. Cytotoxic activity was expressed as the concentration of drug that inhibited cell growth by 50% (GL50), which was determined by nonlinear regression using Origin 6.0 software (OriginLab). Statistical significance was analysed using an unpaired Student's t-test or a one-way analysis of variance. A p-value < 0.05 was considered statistically significant. In addition, the SI was calculated as the ratio between the cytotoxicity in NIH/3T3 cells (GL50) and the activity against the parasite forms (IC50).
Parasites
A standard strain of Leishmania (Leishmania) amazonensis (IFLA/BR/1967/PH8) was used for the evaluation of in vitro antileishmanial activity. The promastigote forms were grown in Schneider's insect medium (Sigma-Aldrich®, SP/Brazil) supplemented with 20% foetal bovine serum (Sigma-Aldrich®, SP/Brazil), 10,000 U/mL penicillin, and 10 mg/mL streptomycin (Sigma-Aldrich®, SP/Brazil). Parasites were routinely isolated from previously induced skin lesions in BALB/c mice and kept in axenic culture until the 20th serial passage.
Supplementary Materials: The following supporting information can be downloaded at www.mdpi.com/xxx/s1: Figures S1-S10; Tables S1-S3; Spectra 1-34. Figure S1: Determination of the affinity constants of compound 3c in the inhibition of rCPB2.8; Figure S2: Determination of the affinity constants of compound f12a in the inhibition of rCPB2.8; Figure S3: Determination of the affinity constants of compound f12b in the inhibition of rCPB2.8; Figure S4: Determination of the affinity constants of compound 3c in the inhibition of rCPB3; Figure S5: Determination of the affinity constants of compound f12a in the inhibition of rCPB3; Figure S6: Determination of the affinity constants of compound f12b in the inhibition of rCPB3; Figure S7: Determination of the affinity constants of compound 3c in the inhibition of rH84Y; Figure S8: Determination of the affinity constants of compound f12a in the inhibition of rH84Y; Figure S9: Determination of the affinity constants of compound f12b in the inhibition of rH84Y; Table S1: Differences of amino-acid residues at the active sites of the isoforms.
Numeral Systems Across Languages Support Efficient Communication: From Approximate Numerosity to Recursion
Languages differ qualitatively in their numeral systems. At one extreme, some languages have a small set of number terms, which denote approximate or inexact numerosities; at the other extreme, many languages have forms for exact numerosities over a very large range, through a recursively defined counting system. Why do numeral systems vary as they do? Here, we use computational analyses to explore the numeral systems of 30 languages that span this spectrum. We find that these numeral systems all reflect a functional need for efficient communication, mirroring existing arguments in other semantic domains such as color, kinship, and space. Our findings suggest that cross-language variation in numeral systems may be understood in terms of a shared functional need to communicate precisely while using minimal cognitive resources.
Classes of numeral system
Our definition of classes of numeral system largely follows that of Comrie (2013). Comrie draws a distinction between "restricted" numeral systems, which he defines as those that do "not effectively go above around 20", and other numeral systems, which cover a larger range, often through recursion. We took a language's numeral system to be approximate if the grammar or other description on which we relied for that language explicitly stated that the meanings of the numerals in the system are approximate or inexact. All such systems in our data were restricted in Comrie's sense. We took a language's numeral system to be exact restricted if the system covers a restricted range (again in Comrie's sense) but the description of the system did not explicitly state that the meanings were approximate or inexact; thus we assumed exactness unless there was evidence to the contrary. Finally, we took a language's numeral system to be recursive if the numeral system was listed by Comrie as having a base which can be used to recursively produce numbers through a much higher range. We classified each language based on the most fine-grained set of numeral terms available in the language, ignoring for now approximate terms in languages with an exact numeral system, e.g. "a few" in English. The classes of numeral system we consider do not perfectly partition the space of attested systems. For example, Comrie lists several extended body-part numeral systems, which use body parts beyond the 10 fingers to enumerate and can reach well above 20, and there are some restricted languages that use recursion within a limited range. However, these broad classes do pick out major types of numeral system.
Collection of numeral data
For each of the 24 languages listed by Comrie (2013), we attempted to consult the reference work that Comrie lists for that language. There are several languages that are listed by Comrie (2013) but for which that chapter provides no reference. We located alternative references for each of these languages, and for the additional languages we analyzed, as listed below:
Chiquitano (Chan 2014: Chiquitano-South America)
English (Eastwood 1994: 245-247)
French (Chan 2014: French-Indo-European)
Fuyuge (Bradshaw 2007)
Krenak (Chan 2014: Krenak-South America)
Mandarin (Ross 2014: 28-29)
Pirahã (Gordon 2004: p496)
Spanish (Chan 2014: Spanish-Indo-European)
!Xóõ (Chan 2014: Khoisan-Africa)
We added to this set Pica et al.'s (2004) description of Mundurukú.
Semantic primitives
We explain here several of the semantic primitives out of which we construct grammars. The primitive concepts c = 1, 2, or 3 are intended to capture the capacity for subitizing: the accurate estimation of small numbers up to about 3 (Revkin et al., 2008). The primitive g(x̂) is a Gaussian centered at position x̂ on a number line that scales in accord with the non-linguistic approximate number system, which obeys Weber's law; this primitive is intended to ground approximate numerals directly in that cognitive system. s(w, v) is a generalization of the standard successor function (successor(w) = m(w) + 1); it defines an interval that begins at m(w) + 1 and continues for some exact length that is specified by the form v, i.e. the interval [m(w) + 1, m(w) + m(v)]. Although in attested systems the length m(v) of this line segment is generally 1, the more general interval case is used for hypothetical numeral systems against which we compare attested ones. Finally, and again to support hypothetical systems, we also allow systems that are mirror-images of those definable in terms of these components: e.g. a standard one-two-many system would have numerals for 1, 2, and the range [3, 100], whereas its mirror-image would instead have numerals for the range [1, 98], 99, and 100.
Numeral system grammars
A typical reference work description for a given language's numeral system includes specification of the basic numerals (noncompositional forms, e.g. "one" to "twelve" in English), the bases of recursion if any (e.g. "ten" in English), and rules for composing higher numerals recursively out of the basic numerals and bases (e.g. "twenty-one" is defined as "two" times "ten" incremented by "one"). For each language, we translated such precise verbal descriptions of the numeral system into symbolic form, cast in terms of the semantic primitive components in Table 2, resulting in full numeral grammars as in Table 5. For all languages, we restricted the grammatical specification to cardinal numerals that cover the range 1-100. We also assumed that no language contains any gap in that numerical interval. For example, for languages that do not have numerals up to 100, e.g. restricted systems, and that contain a term that denotes "many", such as Pirahã, we defined the extension of such terms to run from the highest numeral preceding them up to 100; for languages that do not have a "many" term listed in the reference work we consulted, we created a "many" category to fill the gap between the highest-order numeral available in that language and 100, hence assuming additional complexity for that category. For languages that contain multiple forms that denote the same number(s), we took the simplest form for that numeral in specifying a grammar. In addition, we constructed each grammar such that the meaning of every numeral is specified either by primitive concepts or in terms of meanings of numerals that are already defined. The complexity of a grammar is defined as the total number of symbols needed to specify the entire grammar: the number of symbols needed to specify each rule, summed over all rules in the grammar. Consider the rule for the numerals 20...90 ("twenty"..."ninety") in the English grammar in Table 5, u'ty' =d m(u) × m('ten'). This rule has complexity 8, determined as follows: 1 symbol for the variable u, 1 for the form 'ty', 1 for the operator =d, 1 for the operator m(·), 1 for the variable u, 1 for the operator ×, 1 for the operator m(·), and 1 for the form 'ten'. The complexity of each other rule in the grammar is determined analogously, and the complexity of the grammar as a whole is the sum of the complexities of its constituent rules.
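A minimal sketch of this symbol-counting measure, assuming (our encoding, not the authors') that each rule is stored as a flat list of its symbols:

```python
# Minimal sketch of the symbol-counting complexity measure described above.
# The flat-list encoding of rules is our own assumption for illustration.

# English rule for "twenty"..."ninety":  u'ty' =d m(u) x m('ten')
rule_twenty_to_ninety = ["u", "'ty'", "=d", "m()", "u", "x", "m()", "'ten'"]

def rule_complexity(rule):
    """Complexity of one rule = number of symbols needed to state it."""
    return len(rule)

def grammar_complexity(rules):
    """Complexity of a grammar = sum of the complexities of its rules."""
    return sum(rule_complexity(r) for r in rules)

print(rule_complexity(rule_twenty_to_ninety))  # -> 8, as in the text
```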
Listener distribution
The listener distribution depends on the word w uttered by the speaker, and thus depends on the primitives in terms of which that word is defined. We consider here listener distributions for words grounded in the subitizing number system, the approximate number system, and exact numerosity.
Subitizing. If the speaker has produced a word w that is semantically grounded in the subitizing number system via a rule involving the primitive concepts 1, 2, or 3, we assume that the listener distribution takes the form:

p(i|w) = 1 if i = c, and 0 otherwise,    (1)

where c ∈ {1, 2, 3} is the exact numerosity named by w.
Approximate number system. If the speaker has produced a word w that is semantically grounded in the approximate number system via a rule involving the primitive g(x̂), we assume that the listener distribution takes the form:

p(i|w) ∝ exp( −(i − µw)² / (2σw²) ).    (2)

This formulation follows from p. S5 of the supporting online materials of Pica et al. (2004), who present it as a formalization of the cognitive representation of numerosity in the non-linguistic approximate number system, which obeys Weber's law. p(i|w) captures the listener's subjective degree of belief that the intended number is i, given that the speaker has produced word w. The category corresponding to w is represented as a normal distribution with mean µw = x̂ and standard deviation σw = v × µw, following a scalar variability model, where v is the empirically determined Weber fraction, which we take to be 0.31 in our analyses, following Piazza et al. (2013).
Exact numerosity. In contrast, if the speaker has used an exact number term w grounded in exact primitives such as s(·,·), we assume that the listener distribution is uniform over numbers in the named interval:

p(i|w) = 1/|w| if i lies in the interval named by w, and 0 otherwise,    (3)

where |w| is the number of integers contained in the exact interval named by the number word w. In the case of most attested systems, an exact numeral such as "nine" will pick out just a single integer, so that p(9|"nine") = 1/1 = 1. However, the formula also generalizes to hypothetical exact numerals defined as longer exact intervals of the number line.
Modeling Mundurukú naming data
We obtained Mundurukú number naming data from Pica et al. (2004). Specifically, for numerosities 1 to 15, we noted the fraction of times each numerosity i was named with a given Mundurukú word or locution w. We modeled this fraction p(w|i) using Bayes' rule: p(w|i) ∝ f(i|w)p(w), where the prior p(w) is given by the relative frequency of word w in the data, over all numerosities, and f(i|w) is given by Equation 1 if w is grounded in subitizing (which we assume for numeral categories that peak at 1, 2, or 3), or by Equation 2 if w is grounded in the approximate number system (which we assume for all other Mundurukú categories). We fit this model to the Pica et al. (2004) data by finding placements of the category means µw that minimize the mean squared error (MSE) between model and data. The model fit was very good (MSE = 0.002). The same model without subitizing yielded a three-fold increase in error (MSE = 0.006), and a variant of this model that was instead based only on exact numeral representation (Equation 3) performed much more poorly (MSE = 0.03). We illustrate these findings in Figure 1, where we took the Weber fraction to be 0.31. For the standard model, grounded in subitizing and approximate numerosity, we also assessed model performance under alternative values of the Weber fraction, specifically 0.25 (which yielded MSE = 0.0027) and 0.15 (MSE = 0.0096), illustrated in Figure 2. These findings suggest that the model of the approximate number system given by Equations 1 and 2 provides a reasonable basis for grounding approximate numeral systems.
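A compact sketch of this naming model (Equations 1 and 2 feeding Bayes' rule); the category means and word priors below are hypothetical placeholders, not the fitted Mundurukú values:

```python
import numpy as np

WEBER = 0.31               # Weber fraction used in the text
NUMS = np.arange(1, 16)    # numerosities 1..15, as in the Pica et al. data

def listener_subitize(c):
    """Equation-1-style listener distribution for a subitizing word naming c."""
    return (NUMS == c).astype(float)

def listener_approx(mu, v=WEBER):
    """Equation-2-style listener distribution: Gaussian, scalar variability."""
    sigma = v * mu
    p = np.exp(-((NUMS - mu) ** 2) / (2.0 * sigma ** 2))
    return p / p.sum()

def naming_probs(categories, priors):
    """Naming model p(w|i) proportional to f(i|w) p(w), normalised per i."""
    F = np.stack(categories)                 # words x numerosities
    joint = F * np.asarray(priors)[:, None]
    return joint / joint.sum(axis=0)

# Hypothetical 4-word system: exact 'one' and 'two' plus two approximate words
cats = [listener_subitize(1), listener_subitize(2),
        listener_approx(4.0), listener_approx(9.0)]
p_w_given_i = naming_probs(cats, priors=[0.4, 0.3, 0.2, 0.1])
print(np.round(p_w_given_i[:, 0], 3))  # naming distribution for numerosity 1
```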
Need probability
We estimated need probabilities by the normalized frequencies of English numerals in the Google ngram corpus (Michel et al., 2011) for the year 2000, smoothed with a power-law distribution (0.6182·x^−2.02; Pearson correlation with the unsmoothed data = 0.97). Both the use of a power law and the specific exponent we use here are broadly consistent with earlier studies (Dehaene & Mehler, 1992; Piantadosi, 2016). Figure 3 shows the raw and smoothed frequencies of English numerals on log-log scales. The Google ngram corpus is based on word and ngram frequencies in published books; numeral frequencies in spoken English also decay with increasing target number t (Leech et al., 2001).
Figure: Near-optimal tradeoff between communicative cost and complexity across attested numeral systems, compared with corresponding hypothetical approximate, exact restricted, and recursive systems.
Hypothetical numeral systems
We generated hypothetical numeral systems for each of the three major classes of system considered in this paper: approximate, exact restricted, and recursive.
Hypothetical approximate systems. To generate hypothetical approximate systems, we explored the space of possible approximate systems up to a maximum complexity of 200. Each such hypothetical approximate system is composed of some number k of numeral categories, represented either as primitive concepts 1, 2, or 3 (subitizing) or as Gaussians. For each k, we initialized a hypothetical system by placing k category centers (either means for Gaussians, or primitive concepts 1, 2, or 3) at random positions on the number line within the interval [1, 100]. We then repeatedly adjusted the placement of each category center by shifting it either to the left or to the right on the number line by step size 1, if that shift lowered the communicative cost at that complexity, until no further local optimization was possible. We ran this greedy procedure 100 times, from 100 different initialization states, to alleviate the problem of locally optimal solutions.6 We took all systems encountered during these optimization processes to be hypothetical approximate systems. The resulting grammar for each such hypothetical system was a list of Gaussians g(·) centered at these means, either combined with subitizing (e.g. Mundurukú) if numerals map to unique numbers up to 3, or without subitizing (e.g. Pirahã).
6 To ensure that our greedy procedure for exploring the space of hypothetical systems is valid, we also independently generated a more exhaustive set of systems in the range that accommodates the attested systems in our dataset. Specifically, we examined systems that have k = 2 through k = 20 numeral categories, and placed these categories at the lower end of the number line in the interval [1, 20]. We then enumerated all possible placements of k means for a k-term system in the interval [1, 20], producing 20^k systems for each k. This exhaustive procedure over a limited range yielded results similar to those from our greedy optimization procedure, which we extended over a wider range of complexities.
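A minimal sketch of this greedy procedure; the cost function is left abstract, and the toy cost below is a placeholder, not the paper's communicative cost:

```python
import random

def greedy_optimise(centers, cost, lo=1, hi=100):
    """Shift each category center left/right by step size 1 while that
    lowers the cost, until no single shift improves (a local optimum)."""
    centers = list(centers)
    improved = True
    while improved:
        improved = False
        for k in range(len(centers)):
            for step in (-1, +1):
                cand = centers[:]
                cand[k] = min(hi, max(lo, cand[k] + step))
                if cost(cand) < cost(centers):
                    centers, improved = cand, True
    return centers

# Placeholder cost: pulls 4 centers toward 1, 2, 3, 10 (illustration only).
toy_cost = lambda cs: sum(abs(c - t) for c, t in zip(sorted(cs), [1, 2, 3, 10]))
start = [random.randint(1, 100) for _ in range(4)]
print(sorted(greedy_optimise(start, toy_cost)))  # -> [1, 2, 3, 10]
```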
Hypothetical exact restricted systems. In the case of hypothetical exact restricted systems, we estimated the range of possible costs at each complexity (again up to complexity 200) by separately considering systems that should be expected to perform especially well, and systems that should be expected to perform especially poorly. Because of the shape of the need probability distribution, we expect good performance (low cost) for systems that assign a separate single numeral to each integer on the number line up to a numerical value k, and a terminal numeral that covers the remaining tail region up to 100. k is varied from 2 to 99, yielding systems of different complexities. Such systems place the only uninformative (and thus costly) large category in the least-weighted (high numerosity) part of the number line, and for this reason should perform well. We also considered mirror-images of these systems, which are expected to perform especially poorly, by analogous reasoning: these systems place numerals with large extensions at the beginning of the number line rather than at the tail.7 The grammar for each such hypothetical system was a list of successor functions s(·,·) with varying interval lengths, combined with subitizing up to 3, by analogy with the Kayardild grammar given in the main text. For each complexity, we assumed that the range of costs achievable by an exact restricted system was bounded by the high-performing and low-performing hypothetical systems that we considered at that complexity.
Hypothetical recursive systems. Finally, we generated hypothetical recursive systems by considering the full space of canonical base-n recursive numeral systems (Hurford, 1999) for n = 2 to 100. We took a canonical base-n system to be one in which there are distinct lexical items for the numerals 1 through n, and all numerals beyond that are constructed by generative rules according to recursive base-n patterns such as xn + y for some already-defined numerals x, y (Comrie, 2013). In these systems, all numerals correspond to specific integers. The English grammar provided in the main text is not perfectly canonical because the teens are part of a separate subsystem from other high numerosities.
Complexities of canonical recursive systems
The canonical numeral system we chose for recursive systems is as follows: up to the base, each numeral N is expressed as n, where n is the value of N if in the subitizing range, or s(n−1) otherwise. Beyond the base, two recursive rules were used: one forming multiples of the base and one adding a remainder to an already-formed numeral. It is implicit in this system that previously generated terms can be substituted into these rules, implying that higher numerical terms are built from lower-valued terms. Rules were generated for values 1-100. Canonical base 10 is provided as an example grammar in Table 1 (compare with the grammar for English in the main text), along with example grammars for base 5 (Table 2) and base 3 (Table 3). Figure 6 shows the complexities of canonical systems with different bases. We observed that the optimal system in this simulation has base 5. Although base-5 systems are attested in the world's languages (Epps et al., 2012), they are less frequent than the dominant base-10 systems (Comrie, 2013). This mismatch may be due to factors outside complexity that drive the dominance of base-10 systems. Understanding the complexities of different recursive systems in the world's languages is a topic for future research.
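For the exact restricted systems described above, the cost computation is easy to sketch: under the uniform listener distribution (Equation 3), each singleton numeral carries zero surprisal, and only the terminal "many" category contributes. Measuring communicative cost as the listener's expected surprisal is our assumption here, as is the base-2 logarithm:

```python
import numpy as np

def need_probability(n=100, a=0.6182, b=-2.02):
    """Power-law need probabilities from the text: p(i) ~ 0.6182 * i**-2.02."""
    p = a * np.arange(1, n + 1, dtype=float) ** b
    return p / p.sum()

def cost_exact_restricted(k, n=100):
    """Expected surprisal of a system with singleton numerals for 1..k and a
    terminal word covering (k+1)..n. Singletons contribute log(1) = 0; the
    terminal category contributes the log of its extension size (Equation 3)."""
    p = need_probability(n)
    return p[k:].sum() * np.log2(n - k)

for k in (3, 5, 10, 20):
    print(k, round(cost_exact_restricted(k), 3))
```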
7 To empirically verify that the region of the space we explored is valid, we also independently generated hypothetical systems exhaustively within the range of complexities that is attested across our language sample. We did so by exploring all possible partitions (from k = 2 through k = 20 numeral categories) of the interval [1, 20] and comparing the resulting systems against the attested ones. This exhaustive procedure over a limited range yielded results identical to those from the procedure described above, which we extended over a wider range of complexities.
Data and code availability
Data and code for the analyses that we reported are available at https://osf.io/jmrqw/?view_only=7fa3c3d085c743998cd8b1ebe92d74b4.
Comparing the Quality and Speed of Sentence Classification with Modern Language Models
After the advent of Glove and Word2Vec, the dynamic development of language models (LMs) used to generate word embeddings has enabled the creation of better text classifier frameworks. With the vector representations of words generated by newer LMs, embeddings are no longer static but are context-aware. However, the quality of results provided by state-of-the-art LMs comes at the price of speed. Our goal was to present a benchmark to provide insight into the speed-quality trade-off of a sentence classifier framework based on word embeddings provided by selected LMs. We used a recurrent neural network with gated recurrent units to create sentence-level vector representations from word embeddings provided by an LM, and a single fully connected layer for classification. Benchmarking was performed on two sentence classification data sets: The Sixth Text REtrieval Conference (TREC6) set and a 1000-sentence data set of our design. Our Monte Carlo cross-validated results based on these two data sources demonstrated that the newest deep learning LMs provided improvements over Glove and FastText in terms of weighted Matthews correlation coefficient (MCC) scores. We postulate that progress in LMs is more apparent when more difficult classification tasks are addressed.
Introduction
Recent years have seen substantial development in natural language processing (NLP), owing to the creation of vector representations of text called embeddings. The breakthrough language models (LMs) Glove [1] and Word2Vec [2] enabled the creation of static word-level embeddings, which have the same form regardless of the context in which the word is found. Over time, researchers developed LMs capable of creating vector representations of text entities of different lengths, e.g., by analyzing text at the character [3][4][5] or sub-word level [6]. Recent progress in this field is exemplified by the development of many LMs that create contextualized word embeddings, i.e., embeddings whose form depends on the context in which the token is found, as in Embeddings from Language Models (ELMo) [7], Bidirectional Encoder Representations from Transformers (BERT) [8], and Generalized Autoregressive Pretraining for Language Understanding (XLNet) [9]. The modern sentence classification approach relies on token-level vector representations provided by a chosen LM, from which sentence-level embeddings are created. LMs providing more sophisticated contextualized embeddings should increase the quality of sentence classification, but at the expense of speed. Maintaining a balance between quality and speed is considered important by many researchers, as evidenced by the struggles in developing efficient NLP frameworks [10][11][12]. However, the time costs of obtaining better results in downstream tasks remain unclear. Determining this information is not trivial because when a new LM is published, authors usually provide comparisons of quality but not information regarding the training time (TT) and inference time (IT). Comparing results from various frameworks can also be difficult because of the use of different methods for creating sentence embeddings from lower-level embeddings, e.g., by naively computing averages of word embeddings, by computing the element-wise sum of the representations at each word position [13], or by using a deep neural network [13], recurrent neural network (RNN) [14], bidirectional long short-term memory RNN (BiLSTM) [15], or hierarchy by max pooling (HBMP) [16] model. Therefore, two factors often change at one time and influence the quality of sentence-level embeddings, namely the LM providing token embeddings and the method of creating sentence-level embeddings. Another hindrance when comparing the quality of LMs is that authors often choose to publish evaluations carried out on a variety of data sets yet provide results for only the best model obtained after many training runs, without reporting relevant statistical information regarding the average models trained in their applied frameworks. In the present study, we compared the embeddings created by various LMs in a single sentence classification framework that used the same RNN to create sentence-level embeddings, to ensure that the overall performance was unbiased by the choice of algorithm for the creation of sentence-level embeddings. Additionally, to improve the comparability of the selected LMs, we performed the benchmarking analysis on the same data sets, namely The Sixth Text REtrieval Conference (TREC6) set [17] and our meetings preliminary data set (MPD) [18], consisting of 1000 labeled sentences, which is published along with this paper. Furthermore, we performed Monte Carlo cross-validation (MCCV) [19] to assess the quality of predictions and the TT and IT achieved with the selected LMs. All tests were conducted on the same computing machine. To further improve comparability, for all compared LMs, we used the same fine-tuning regimen developed during a preliminary study.
Text Classification
In this research, we used Flair v. 0.4.4 [11], an NLP framework written in PyTorch and published by the authors of the Flair LM [20]; this program enables the creation of text classification models based on various state-of-the-art LMs. With this framework, we instantiated sentence-level classifiers by using a unidirectional RNN with gated recurrent units (GRU RNN) with a single fully connected layer for final sentence classification. The GRU RNN's task was to create a constant-length, sentence-level vector representation from the token-level embeddings provided by the selected LMs. An overall scheme presenting this workflow is shown in Figure 1.
In the Flair framework, if we chose an LM enabling fine-tuning on the downstream task, e.g., BERT, the process of training the GRU RNN along with the classification layer automatically included fine-tuning of the LM. Appl. Sci. 2020, 10, x FOR PEER REVIEW 2 of 13 results in downstream tasks remain unclear. Determining this information is not trivial because when a new LM is published, authors usually provide comparisons of quality but not information regarding the training time (TT) and inference time (IT). Comparing results from various frameworks can also be difficult because of the use of different methods for creating sentence embeddings from lower-level embeddings, e.g., by naïvely computing averages of word embeddings or by computing the element-wise sum of the representations at each word position [13], or by using a deep neural network [13], recurrent neural network (RNN) [14], bidirectional long short-term memory RNN (BiLSTM) [15], or hierarchy by max pooling (HBMP) [16] model. Therefore, two factors often change at one time and influence the quality of sentence-level embeddings, namely the LM providing token embeddings and the method of creating sentence-level embeddings. Another hindrance when comparing the quality of LMs is that authors often choose to publish evaluations carried out on a variety of data sets yet provide results for only the best model obtained after many training runs without reporting relevant statistical information regarding the average models trained in their applied frameworks. In the present study, we compared the embeddings created by various LMs in a single sentence classification framework that used the same RNN to create sentence-level embeddings to ensure that the overall performance was unbiased by the choice of algorithm for the creation of sentence-level embeddings. Additionally, to improve the comparability of the selected LMs, we performed benchmarking analysis on the same data sets, namely The Sixth Text REtrieval Conference (TREC6) set [17] and our preliminary meetings preliminary data set (MPD) [18] consisting of 1000 labeled sentences, which are published along with this paper. Furthermore, we performed Monte Carlo crossvalidation (MCCV) [19] to assess the quality of predictions and the TT and IT achieved with the selected LMs. All tests were conducted on the same computing machine. To further improve comparability, for all compared LMs, we used the same fine-tuning regimen developed during a preliminary study. Text Classification In this research, we used Flair v. 0.4.4 [11], an NLP framework written in PyTorch and published by the authors of the Flair LM [20]; this program enables the creation of text classification models based on various state-of-the-art LMs. With this framework, we instantiated sentence-level classifiers by using a unidirectional RNN with gated recurrent units (GRU RNN) with a single fully connected layer for final sentence classification. The GRU RNN task was to create a constant-length, sentencelevel vector representation from token level embeddings provided by the selected LMs. An overall scheme presenting this workflow is presented in Figure 1. In the Flair framework, if we chose an LM enabling fine-tuning on the downstream task, e.g., BERT, the process of training the GRU RNN along with the classification layer automatically included fine-tuning of the LM. Selection of the LMs The LMs selected for testing are presented in Table 1. 
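This setup can be approximated in a few lines of Flair code. The sketch below is our minimal reconstruction, assuming the Flair 0.4.x API (WordEmbeddings, DocumentRNNEmbeddings, TextClassifier, ModelTrainer, and the bundled TREC_6 loader); it is not the authors' published code, and the parameter values anticipate the final regimen selected later in the paper.

```python
# Minimal sketch of the sentence classification framework described above,
# assuming the Flair 0.4.x API; values follow the final training regimen
# (patience = 20, lr = 0.1, min lr = 0.002, anneal = 0.5, batch = 8,
# hidden size = 256).
from flair.datasets import TREC_6
from flair.embeddings import WordEmbeddings, DocumentRNNEmbeddings
from flair.models import TextClassifier
from flair.trainers import ModelTrainer

corpus = TREC_6()                         # 5452 training / 500 test questions
label_dict = corpus.make_label_dictionary()

# Token-level embeddings from a chosen LM (here static GloVe vectors);
# the unidirectional GRU condenses them into one fixed-length sentence vector.
document_embeddings = DocumentRNNEmbeddings(
    embeddings=[WordEmbeddings('glove')],
    hidden_size=256,
    rnn_type='GRU',
    bidirectional=False,
)

classifier = TextClassifier(document_embeddings, label_dictionary=label_dict)

trainer = ModelTrainer(classifier, corpus)
trainer.train(
    'trec6-glove-gru',
    learning_rate=0.1,
    mini_batch_size=8,
    anneal_factor=0.5,
    patience=20,
    min_learning_rate=0.002,
    shuffle=True,
)
```

Swapping `WordEmbeddings('glove')` for a contextualized embedding class would reproduce the other LM variants compared in the benchmark.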
Selection of the LMs
The LMs selected for testing are presented in Table 1. We chose classic LMs that provide static word-level embeddings, including GloVe [1] and FastText [21]; a size-optimized LM called BPEmb [22] that operates on a sub-word level; and a variety of recently developed deep learning LMs that create high-quality contextualized embeddings. The complex architecture of recent models, such as BERT, Robustly Optimized BERT Pretraining Approach (RoBERTa), and XLNet, allows for choosing from many layers (heads) of the LM model that output usable token embeddings. The quality of these embeddings varies across the LM layers and even across language tasks [23]. For these LMs, we compared embeddings from the default output layer and mixed embeddings from all available output layers, as proposed in a technique called scalar mix [23].

Data Used for Benchmarking
To benchmark the selected LMs, we decided to address two sentence classification tasks. The first was defined by the TREC6 set, a data set consisting of 5452 training questions and 500 test questions divided into six classes. The task was to classify questions as being about abbreviation, entity, description or abstract concept, human being, location, or numeric values. The class definitions in the TREC6 set were not overlapping and generally constituted a relatively easy classification task. The second task was defined by the MPD, consisting of 1000 artificial sentences that appeared as if they could have been written by an anonymous attendee of a meeting and could have come from several-sentence comments posted after that meeting. The task was to classify whether the sentence was about the following: time and punctuality, performance assessment, subject of the meeting, recommendations, or technical issues. Because the definitions of the classes were somewhat overlapping, the data set was created by three researchers who wrote and labeled the sentences independently, and because each sentence was considered outside of a wider context, this classification task was considered difficult. Notably, both benchmarked data sets had imbalanced classes (distributions presented in Table 2).

Performance Metrics
For drawing conclusions and discussing the statistically significant differences, the benchmark relied on the Matthews correlation coefficient (MCC) introduced by Matthews [27], because it is known to be more informative than the F1 score (F1) and class-balanced accuracy (CBA) [28]. However, because F1 and CBA are widely used, we also computed them and included them in the full results published along with the MPD [18]. To address the speed of training and inference of the sentence classification framework, we report the time (hours) taken for framework training (TT) per model and the time (seconds) taken to classify the entire test set (IT).

Training and Testing Procedure
All computations were carried out on a single NVIDIA Titan RTX 24 GB RAM GPU. We carried out ten test runs on data splits created according to the MCCV method [29] to counter the high variance in results due to the small sizes of the data sets. The TREC6 training data were split into 80 percent training and 20 percent validation data, and tests were carried out on the set of 500 sentences officially labeled as the test set. With the MPD, the procedure involved randomly choosing 100 sentences for the testing of all LMs and, according to MCCV, splitting the remaining 900 sentences ten times into 80 percent training and 20 percent validation data.
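The MCCV procedure for the MPD can be sketched as follows. This is a simplified illustration using scikit-learn (an assumption on our part; the paper's own splitting code is published with the MPD), with `load_mpd` and `train_model` as hypothetical stand-ins for the data loader and the Flair training run.

```python
# Simplified sketch of the MCCV scheme for the 1000-sentence MPD: a fixed
# test set of 100 sentences shared by all LMs, then ten random 80/20
# train/validation splits of the remaining 900 sentences.
import numpy as np
from sklearn.model_selection import ShuffleSplit, train_test_split
from sklearn.metrics import matthews_corrcoef

sentences, labels = load_mpd()            # hypothetical loader for the MPD
sentences = np.asarray(sentences, dtype=object)
labels = np.asarray(labels)

pool_x, test_x, pool_y, test_y = train_test_split(
    sentences, labels, test_size=100, random_state=42)

mcc_scores = []
splitter = ShuffleSplit(n_splits=10, test_size=0.2, random_state=42)
for train_idx, val_idx in splitter.split(pool_x):
    # train_model is a hypothetical wrapper around the Flair training run
    model = train_model(pool_x[train_idx], pool_y[train_idx],
                        val_x=pool_x[val_idx], val_y=pool_y[val_idx])
    preds = model.predict(test_x)
    # sklearn's matthews_corrcoef implements the multiclass MCC; the paper
    # reports a weighted MCC variant alongside F1 and CBA
    mcc_scores.append(matthews_corrcoef(test_y, preds))

print(f"MCC: {np.mean(mcc_scores):.3f} +/- {np.std(mcc_scores):.3f}")
```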
Statistics
In this paper, the results are based on statistically significant differences in the assessed parameters. Our procedure for statistical analysis began with Shapiro-Wilk [30] testing for normality of the distributions, which led us to the conclusion that in most cases the compared distributions were normal. In the second step, we carried out a one-way ANOVA. If the ANOVA indicated statistically significant differences, we carried out a third step involving Tukey HSD multiple comparison tests to assess which trials had significant differences between them. For a test case in which the values of only one pair of parameters were compared, a Wilcoxon signed-rank test was used instead of a Tukey HSD test. The statistical analysis was carried out in Python3 (statsmodels version 0.10.1 and pingouin version 0.2.9 packages). Throughout the analysis, we used a significance threshold of p = 0.05. The full results of the statistical analysis were published along with the MPD [18].

Preliminary Study to Identify Framework Parameters and the Training Regimen
For a given LM, it is possible to select from a range of training parameter values that will affect both the training time and the final model performance. Generally, training a model for a small number of epochs will provide an advantage in TT but can cause under-fitting, i.e., have a negative influence on performance. If a model is trained for more epochs, the TT will increase and the final model performance can increase, but this is not guaranteed owing to the negative over-fitting phenomenon. Therefore, before benchmarking the effects of the chosen LMs on the quality and speed of the training and the sentence classification, we carried out a preliminary study to determine the impact of the hyperparameters and the training parameters of the framework on our classification tasks. Our work examined various parameters from a group of learning-rate-related parameters (a sketch after this list illustrates how they interact):
(a) Learning rate - the step size at each iteration while moving toward a minimum of a loss function [31]; we tested values of 0.3, 0.2, 0.15, 0.1, and 0.05.
(b) Initial learning rate - the learning rate that the model is trained with during the first epoch.
(c) Patience - the number of epochs the model is trained with a given learning rate after the last improvement of model performance and before annealing of the learning rate. For example, given a patience of 10, if the performance at the current learning rate improved only in the 3rd epoch, the training will proceed up to 3 + 10 epochs at that learning rate even without any further improvement. In our experiments, values of 5, 10, 15, 20, and 25 were tested.
(d) Annealing factor - the ratio by which the learning rate is decreased: annealing factor × previous learning rate = current learning rate. In our experiments, values of 0.3, 0.4, and 0.5 were considered.
(e) Minimal learning rate - a minimal threshold learning rate. When the learning rate computed for the next training step (current learning rate × annealing factor) falls below the minimal learning rate, model training is terminated. Our work experimented with values of 0.01, 0.008, 0.002, 0.001, and 0.0001.
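The following is a minimal sketch (our illustration, not the Flair source) of how parameters (a)-(e) interact during training; `model.train_one_epoch` is a hypothetical stand-in for one training epoch returning a validation score.

```python
# Sketch of the annealing logic controlled by the learning-rate parameters:
# patience epochs without improvement trigger annealing, and training stops
# once the learning rate falls below the minimal learning rate.
def train_with_annealing(model, initial_lr=0.1, patience=20,
                         anneal_factor=0.5, min_learning_rate=0.002):
    lr = initial_lr
    best_score = float('-inf')
    epochs_without_improvement = 0
    while lr >= min_learning_rate:
        score = model.train_one_epoch(lr)   # hypothetical training step
        if score > best_score:
            best_score = score
            epochs_without_improvement = 0  # an improvement resets patience
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            lr *= anneal_factor             # anneal once patience runs out
            epochs_without_improvement = 0
    return model                            # stops when lr < min_learning_rate
```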
Other considered parameters affecting the speed and quality of the model performance included: (1) the size of a mini-batch, i.e., the number of data instances provided at the same time on a model input during training, where values of 4, 8, 32, and 64 were considered; (2) the RNN hidden size, which defines the size of the vector representation of a sentence created by the RNN from token-level embeddings, where values of 2, 16, 128, 256, and 512 were considered; and (3) shuffling of the sentences provided to the framework during training (true or false). Other framework parameters were set to default, as proposed by the authors of the Flair framework. All trials during the preliminary study were computed with the FastText ("en-news") LM [21] with 1 million static word vectors. The initial study was carried out on both the TREC6 set and the MPD to demonstrate possible differences that could be attributed to the data set selection. Preliminary results from the TREC6 data set were considered decisive regarding the selection of training parameters for the final experiment, as this data set consisted of over five times more data instances. When assessing the results of our preliminary study regarding training-related parameters, we considered only the MCC scores and TT of the framework. However, because the hidden size can also affect the IT, in this case we also included it in the results. Because the purpose of the study was to compare various LMs under the same training conditions, which most likely do not provide the highest possible performance for all LMs, for the final experiment we selected training parameters that probably caused the models to be slightly under-fitted but provided a rather low TT. If one aimed for a training procedure that guarantees top performance, a much longer TT would have to be considered and the training regime fitted precisely to both the LM and the data set.

Principal Component Analysis for Visualization of the LMs' Quality
To visualize discrepancies between the baseline and state-of-the-art LMs, we carried out a two-component principal component analysis of sentence embeddings provided by the framework using the GloVe and RoBERTa large scalar mix LMs. The sentence-level vector representations were computed for all 1000 sentences from the MPD. Each sentence embedding was accompanied by its original class label, which allowed us to inspect how well each class was grouped by the analyzed LM.

Preliminary Study
Figure 2 shows that increasing the size of the sentence-level vector representation created by the RNN increased the TT. Simultaneously, for the short questions in the TREC6 data set, sentence embeddings as short as 16 elements enabled high performance. A higher dimensionality of sentence embeddings did not appear to improve the MCC score, since no statistically significant differences were found; however, owing to a relatively low increase in the TT, we used a hidden size of 256 to ensure that we provided sufficient space for the information present in the sentences.
Figure 3 illustrates the effect of the mini-batch size on the framework performance, given a time-consuming training regimen. For the given setup, there were no statistically significant differences in the MCC scores between the small mini-batch sizes of 4 and 8. However, the small mini-batch sizes were significantly superior to the higher values of 32 and 64. Additionally, the TT with a mini-batch size of 8 was significantly shorter than that with a batch size of 4. Therefore, we used a mini-batch size of 8. The full results of the statistical analysis have been published, along with the MPD and the Python code used in our research [18].

Table 3 depicts the influence of the training-rate-related parameters on the performance and TT of the adopted framework for the TREC6 set and the MPD. The performance discrepancies for the MPD and the TREC6 set measured in points of the mean MCC score were 0.05 and 0.036, respectively. When the mean TT was considered, the differences were up to 0.116 h for the MPD and 0.643 h for the TREC6 set. It can be observed that the relation of the TT achieved in various test runs was very similar in both data sets. This allowed us to hypothesize that the relation of the TT between test runs did not vary strongly with the choice of data set. However, this was not the case where the quality of the LMs was concerned. We believe that the proposed training regimes for the TREC6 data set, when applied to the MPD, caused each model to be equally well trained, with the single exception of a very short training scheme, p5lr.2mlr.008a.5. This was likely caused by the smaller size of the MPD, which allowed the LMs to fit the data in a shorter time.
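The significance statements above follow the procedure from the Statistics section. As an illustration only (assuming scipy and statsmodels, which the paper reports using, and synthetic score data standing in for the real MCCV results), the mini-batch comparison could be run like this:

```python
# Illustrative sketch of the statistical procedure: Shapiro-Wilk normality
# checks, one-way ANOVA, then Tukey HSD post-hoc tests at p = 0.05.
import numpy as np
from scipy.stats import shapiro, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical: ten MCCV MCC scores per mini-batch size
mcc_by_batch = {s: rng.normal(loc=m, scale=0.01, size=10)
                for s, m in [(4, 0.93), (8, 0.93), (32, 0.91), (64, 0.90)]}

for size, scores in mcc_by_batch.items():
    w, p = shapiro(scores)                 # normality of each group
    print(f"mini-batch {size}: Shapiro-Wilk p = {p:.3f}")

f_stat, p_anova = f_oneway(*mcc_by_batch.values())
if p_anova < 0.05:                         # post-hoc only if ANOVA significant
    values = np.concatenate(list(mcc_by_batch.values()))
    groups = np.concatenate([[size] * 10 for size in mcc_by_batch])
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```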
Based on this exploratory study, we decided to use a training regimen providing almost the best performance for both the TREC6 set and the MPD, namely p20lr.1mlr.002a.5. For the TREC6 set, the selected training procedure allowed for a 0.933 mean MCC score at a decent TT cost of a median of 0.269 h per trained model. Therefore, the configuration of the adopted framework in the final LM comparison was as follows: patience = 20, initial learning rate = 0.1, minimal learning rate = 0.002, annealing factor = 0.5, mini-batch size = 8, hidden size = 256, with example shuffling during training.

Table 3. Impact of the learning-rate-related parameters on the framework performance for the TREC6 set and the MPD, given a mini-batch size = 8, hidden size = 256, and shuffle = true. Compared variants were named with the use of abbreviations: p-patience, lr-initial learning rate, mlr-minimal learning rate, a-annealing factor. For example, p5lr.1mlr.0001a.5 stands for patience = 5, initial learning rate = 0.1, minimal learning rate = 0.0001, and annealing factor = 0.5. The final training regime selected for the final LM comparison is highlighted in bold.

Results of the Main Study
A comparison of the selected LMs in the previously described framework was carried out on the two data sets. The results obtained with the TREC6 data set are presented in Figure 5; the results obtained with the preliminary data set (MPD) are shown in Figure 6. The results presented in Figure 5 indicated that the baseline provided by static embeddings created by GloVe and FastText was high and reached a 0.933 ± 0.005 MCC score. Recent transformer-based models outperformed this baseline and achieved up to a 0.96 ± 0.006 MCC score for the BERT large uncased scalar mix version. Simultaneously, the MCC scores achieved by RoBERTa, XLNet, and variants of these models were unexpectedly worse than those of BERT for several possible reasons, such as incorrect training regimens for these LMs and data set properties. Generally, these results may indicate that, given the small differences in quality and the small data set size, a similar benchmark analysis on an n-times-larger data set would be needed to distinguish among very good models. Simultaneously, the results achieved on the MPD (see Figure 6) revealed a different picture.
A large and statistically significant discrepancy was observed between the MCC scores of the baseline LMs providing static word embeddings, such as GloVe (0.402 ± 0.051) or FastText (0.47 ± 0.047), and state-of-the-art context-aware LMs, such as RoBERTa (0.59 ± 0.026). Leveraging the scalar mix technique allowed us to use the output from all available layers of these complex models and thus achieve the best results in all tested model pairs. The default layer of XLNet achieved a very poor result of 0.377 ± 0.063, whereas the scalar mix version achieved 0.521 ± 0.038; the straightforward BERT scored 0.532 ± 0.043 and its scalar mix counterpart scored 0.596 ± 0.027; and the RoBERTa and RoBERTa scalar mix versions achieved 0.59 ± 0.026 and 0.603 ± 0.023, respectively. Therefore, the scalar mix technique appeared to be highly useful, in agreement with findings from Liu et al. [32]. However, the increase in quality came at the expense of both TT and IT. As expected, the IT was the lowest for LMs providing static embeddings, reaching as low as 0.111 ± 0.011 s per 100 sentences for the FastText LM. The highest inference time of 3.822 ± 0.015 s was obtained with the XLNet scalar mix LM. Additionally, the TTs of the contextualized LMs in their scalar mix versions were the highest, reaching 1.444 ± 0.175 h for the XLNet scalar mix LM, whereas the baseline GloVe-based framework required only 0.069 ± 0.011 h to train.

Figure 5. Comparison of framework performance with selected language models on the TREC6 data set. Explanation of the abbreviated names of LMs: S-scalar mix, L-large model version. For example, the notation RoBERTa LS stands for the RoBERTa large scalar mix model.

The results for the MPD also allowed us to visualize the differing quality with which various LMs grouped sentences by class. Figure 7 presents a visualization based on a principal component analysis, which indicates that the GloVe-based framework was unable to clearly divide the classes, whereas the RoBERTa large scalar mix LM was able to distinguish the groups with significantly higher quality.

Figure 7. Principal component analysis of MPD sentence embeddings created with the GloVe (left) and RoBERTa large scalar mix (right) LMs. Class labels are represented by colors: red-performance assessment, green-subject, blue-recommendations, black-technical, and magenta-time.
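A two-component PCA of this kind can be reproduced with scikit-learn and matplotlib. The sketch below is our illustration; `embed_sentences`, `mpd_sentences`, and `mpd_labels` are hypothetical stand-ins for the framework's sentence-embedding step and the MPD data.

```python
# Illustrative sketch of the two-component PCA visualization behind Figure 7.
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Hypothetical helper returning the GRU-produced sentence embedding
# (e.g., 256 floats) for each of the 1000 MPD sentences.
embeddings = embed_sentences(mpd_sentences)      # shape: (1000, hidden_size)
points = PCA(n_components=2).fit_transform(embeddings)

colors = {'performance assessment': 'red', 'subject': 'green',
          'recommendations': 'blue', 'technical': 'black', 'time': 'magenta'}
for label, color in colors.items():
    mask = [l == label for l in mpd_labels]
    plt.scatter(points[mask, 0], points[mask, 1], c=color, s=8, label=label)
plt.legend()
plt.show()
```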
Conclusions
A comparison of the results obtained for the TREC6 set and the MPD suggested that they differed strongly in the magnitude of the MCC scores. For the TREC6 set, almost all LMs achieved MCC scores greater than 0.9, whereas on the MPD, the highest values were only slightly above 0.6 and the lowest values were below 0.4. The finding that even LMs creating static embeddings achieved very good results on the TREC6 data set was due to the simplicity of the classification task. Consequently, there was little room for improvement for state-of-the-art LMs, making any conclusions regarding the quality of newer LMs difficult to draw. In contrast, the MPD was challenging, because using simple static word embeddings allowed for a 0.47 ± 0.047 MCC score at best. However, interestingly, the sophisticated context-aware LMs demonstrated their superiority on the MPD, since they were able to locate the necessary information in sentences full of contextual meaning and achieve scores of up to 0.603 ± 0.023. From these results, we postulate that for context-aware LMs to fully demonstrate their advantages, a sufficiently difficult task must be used. A similar phenomenon was encountered in a deep learning task from another domain, namely the detection of objects in image analysis. Owing to the progress in the quality of solutions offered by models using complex convolutional neural network architectures, the differences in performance on the once broadly addressed Pascal VOC data set became very small. Many models were able to achieve up to an 80 percent mean average precision score [33]. A solution was proposed by the MS COCO challenge [34], in which the metrics emphasized the need for more precise localization of detected objects, thus increasing the task difficulty. As a result, state-of-the-art models achieved a less than 50 percent mean average precision score in this challenge [35], leaving a lot of room for further improvement and demonstration of high-quality solutions. Unfortunately, owing to the very small size of the MPD, other explanations for this phenomenon cannot be excluded: some newer deep learning models might enable the framework to learn properly even given the small amount of data, or a framework based on baseline LMs might simply need more data. To rule out these possibilities, a larger yet still challenging data set would be required.

Study Limitations and Future Work
The presented comparison was carried out on two small data sets, thus resulting in a high standard deviation of the results. In assessing differences between models achieving similar median MCC scores, i.e., differing by less than 0.01 point on the TREC6 set, the standard deviation exceeding 0.01 points in some cases must be considered before drawing any conclusions. The TREC6 data set appeared to be insufficiently difficult for examining differences between state-of-the-art LMs. Therefore, our future work will aim at increasing the MPD size to rule out the negative effects of the small analyzed data sample and to allow us to draw stronger conclusions on a more challenging data set. Another limitation of the study was associated with our preliminary study, in which we decided to select one set of hyperparameters and training parameters for all selected LMs. This decision provided a common ground for the whole comparison. However, it also had a negative impact: the selected training regime was most likely favorable for some of the LMs, and we have no information about which ones. An opposing approach would be to search for training parameters separately for each selected LM. Pursuing this line of experimentation would lead to a situation where almost all LMs would have their own training regimes, and one would end up comparing "training-optimized" versions of each LM.
Such an approach would also have its imperfections; for example, it can be very time-consuming for state-of-the-art LMs, and in the end, would put the researcher in a situation of comparing LMs trained with entirely different procedures.
7,028
2020-05-14T00:00:00.000
[ "Computer Science" ]
Maternal Education and Their Offspring's Income in China : In this paper, I study the relationship between maternal education and offspring income, and whether females with higher education are more likely to cultivate offspring with higher education. Using the 2012 and 2014 China Labor Force Dynamics Survey, I find that a one-year increase in maternal education is associated with a 1.7% increase in children's income. In studying the mechanism, I suggest that maternal education improves offspring's income via improving children's education. Hence, I conclude that women with a strong individual educational background devote more time and energy to their children. Consequently, their offspring earn a higher income.

Introduction
In the past, people believed that women should not have the same educational opportunity as men because they were not responsible for making money and only took charge of domestic duties. This prejudice gradually formed into discrimination towards females. Education is, however, a fundamental right for all. A specific objective of my study is to determine whether females with higher education are more likely to cultivate offspring with higher education. Carneiro et al. (2012) pointed out in their study that parental education has a significant effect on children's human capital. They concluded that maternal education reduces the incidence of behavioral problems by 8.6% and grade repetition by 3.2% in their children. Also, more-educated mothers delay childbearing by 1 year on average, are more likely to be married, have significantly better-educated spouses, and have higher family incomes. Kalil et al. (2012) found that highly educated mothers spend an average of more than 50% of their time caring for children and also alter the composition of that time to suit children's developmental needs more than their less-educated counterparts. Those who have higher education are more likely to pay attention to "investing in children," by 42%. Additionally, Harding (2015) found that increases in maternal education were positively correlated with children's standardized cognitive scores, as well as with higher teacher-reported externalizing behavioral problems in 1st grade. Increases in externalizing behavioral problems were greater among children whose mothers had less than a college degree at baseline. According to Katherine et al. (2009), increases in maternal education are also associated with concurrent improvements in children's school readiness and language skills, by 12.8%. Prior studies focused predominantly on mothers' education and infant/baby/primary school development, which have relatively short-term effects. In reality, however, many mothers, especially economically and educationally disadvantaged mothers, returned to school after they gave birth to their children. In the present study, I use data from the 2012 and 2014 China Labor Force Dynamics Survey and investigate the effect of mothers' final educational level on their grown-up children's income. Preliminary results suggest that a one-year increase in maternal education is associated with a 1.7% increase in children's income. The mechanism study suggests that the improvement in offspring's education is the major channel, in that a one-year increase in maternal education is associated with a 0.305-year increase in children's education.

Data and Empirical Specification
In this study, I use the data from the China Labor Force Dynamics Survey in 2012 and in 2014. The total number of observations is 22,880. Table 1 summarizes the descriptive statistics.
From Table 1, we find that the average years of mother's education is only 2.8 years, suggesting that most of these mothers did not complete primary school. This is consistent with the social norm in Asian countries that females are commonly exempted from continuous and supported education.

Table 1. Descriptive Sample Statistics. Note: (1) Working experience is measured as age - years of education; (2) Income is measured as real income in RMB; (3) Years of education is 0 for no education, 6 for primary school, 9 for middle school, 12 for high school, 14 for some college, 16 for university, and 19 for masters and above; (4) Health index: healthy = 1 or 2; fair = 3; unhealthy = 4 or 5; (5) Social support connection: strong (30-50 people); fair (10-30 people); weak (1-10 people); none (0 people).

To investigate the relationship between maternal education and children's income, we use a baseline regression of the form

$$\ln Y_{i,j} = \beta_1 Edu_{i,m,j} + \beta_2 X_{i,j} + \gamma_j + \epsilon_{i,j}$$

where $\ln Y_{i,j}$ denotes the logarithm of the annual income of individual $i$ who lives in province $j$; $Edu_{i,m,j}$ denotes the final educational level of individual $i$'s mother $m$, measured as the total years of education the mother received; and $X_{i,j}$ denotes the control variables, which include demographic variables and productivity. For demographic variables, we include age, gender, health status, and social support. $\gamma_j$ is a province-level fixed effect; we add this fixed effect to eliminate the bias caused by provincial attributes, for instance, the education policy and income level of each province. $\epsilon_{i,j}$ is the error term. In this regression, the key coefficient is $\beta_1$. If $\beta_1$ is significantly positive, it indicates that maternal education has a positive influence on children's income, which is consistent with our original hypothesis.

Main Results
According to Column (5) in Table 2, we find that a one-year increase in maternal education is associated with a 1.7% increase in children's income. The results are robust in both magnitude and significance regardless of the controls added. We note that we do not add children's education to the regression because children's education is an outcome of maternal education and hence can be considered a bad control, as proposed by Angrist and Pischke (2009). Adding children's education to the regression would reintroduce selection bias. Consider a simplified version in which we divide children's education into two categories: high education and low education. If we control for children's education, then we are investigating the impact of maternal education on children's income holding the level of children's education fixed. This is inappropriate: for instance, if we focus on children whose educational level is high, then we are also very likely to focus on mothers whose educational level is high, which reintroduces selection bias.

Table 2. Years of Maternal Education and Children's Income. Note: (1) *** indicates significance at the 1% level, ** at the 5% level, and * at the 10% level; (2) exp = age - years of education - 16; (3) standard errors are robust standard errors.

From Table 2, we can draw the conclusion that the relation between maternal education and the offspring's income is significantly positive. Moreover, maternal education can be considered exogenous in this setting.
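A minimal sketch of this baseline specification follows, assuming the CLDS variables have been assembled into a pandas DataFrame; the column names are hypothetical, and the paper does not state which software was actually used.

```python
# Minimal sketch of the baseline regression: log annual income on years of
# maternal education, demographic controls, and province fixed effects,
# with robust standard errors. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv('clds_2012_2014.csv')      # hypothetical pooled CLDS extract

model = smf.ols(
    'ln_income ~ edu_mother + age + C(gender) + C(health) + C(social_support)'
    ' + C(province)',                        # C(province): province fixed effects
    data=df,
).fit(cov_type='HC1')                        # robust standard errors

# A coefficient of about 0.017 on edu_mother corresponds to the reported
# 1.7% income increase per additional year of maternal education. The
# mechanism regression in the next section simply replaces the outcome
# (ln_income) with children's years of education.
print(model.summary())
```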
First, there is no reverse causality, as children's income cannot affect maternal education. Second, the birth of a child can be considered randomized. Admittedly, children's intelligence might be a confounder here. It is reasonable to assume that maternal education and children's intelligence are correlated, and children's intelligence can affect their income as well. However, due to data limitations, I currently cannot add intelligence as a control to the regression. This remains a task for my future study.

Mechanism Study
Why would maternal education affect children's income? Magnuson (2007) showed that children of mothers with higher levels of education perform better academically in middle school because women with a strong individual educational background have more time and energy to invest in their children's education. Further, children's cognition regarding studies and self-improvement would be potentially and greatly influenced by their mothers' consciousness and knowledge. Therefore, we propose that there must be a relationship between maternal education and her children's education, as the children would reap the benefits of a human capital premium if their mothers have higher educational backgrounds, and they are also likely to have higher incomes in the workplace. To investigate the mechanism of how maternal education affects children's income, we use another regression:

$$Edu_{i,j} = \beta_1 Edu_{i,m,j} + \beta_2 X_{i,j} + \gamma_j + \epsilon_{i,j}$$

where $Edu_{i,j}$ denotes the years of education of individual $i$. Similarly, if $\beta_1$ is significantly positive, it indicates that maternal education has a positive influence on children's education, which can exactly explain how maternal education affects children's income.

Table 3. Years of Maternal Education and Children's Education. Note: (1) *** indicates significance at the 1% level, ** at the 5% level, and * at the 10% level; (2) exp = age - years of education - 16; (3) standard errors are robust standard errors.

According to Table 3, we find that when a mother receives 1 more year of education, her children receive 0.305 more years of education. The results are robust in both magnitude and significance regardless of the controls added. This shows that human capital is one critical mechanism through which maternal education can benefit children's income; thus, if a mother hopes for her children to receive higher education and then earn a higher income, she should first improve her own educational level.

Conclusion
This research was inspired by one of China's Inspirational Role Models of 2021, Zhang Guimei. As she has appealed to the public, women's education is tied not only to their own future, but also to the prospects of every family, and even to the flourishing and growth of a nation. The results of our research directly support this conclusion, as we observed that when maternal education increases, the offspring's educational level increases as well, and the relationship between maternal education and the offspring's income is also positive. We can therefore attest to this view: maternal education plays an important role in the development and achievement of children. Accordingly, the obsolete tradition that women are not supposed to pursue their education as much as possible is really a mistake. Women's education determines not only their own destiny, but also that of their offspring.
Inherent, genetic intelligence was also noted in the Mechanism Study as a variable that may greatly influence children's income, but owing to limited data, we cannot discuss it in this paper. It does, however, suggest a valuable direction for future research: whether an increase in maternal education also results in an increase in children's intelligence. We therefore conclude with the words of Zhang Guimei, "education for women can influence three generations of individuals", and hope this research inspires more reflection on and care for women's education.
2,385.6
2023-01-01T00:00:00.000
[ "Economics", "Education" ]
Chimpanzee communities differ in their inter- and intrasexual social relationships
Male and female human social bonding strategies are culturally shaped, in addition to being genetically rooted. Investigating nonhuman primate bonding strategies across sex groups allows researchers to assess whether, as with humans, they are shaped by the social environment or whether they are genetically predisposed. Studies of wild chimpanzees show that in some communities males have strong bonds with other males, whereas in others, females form particularly strong intrasex bonds, potentially indicative of cultural differences across populations. However, excluding genetic or ecological explanations when comparing different wild populations is difficult. Here, we applied social network analysis to examine male and female social bonds in two neighbouring semiwild chimpanzee groups of comparable ecological conditions and subspecies compositions, but that differ in demographic makeup. Results showed differences in bonding strategies across the two groups. While female–female party co-residence patterns were significantly stronger in Group 1 (which had an even distribution of males and females) than in Group 2 (which had a higher proportion of females than males), there were no such differences for male–male or male–female associations. Conversely, female–female grooming bonds were stronger in Group 2 than in Group 1. We also found that, in line with captive studies but contrasting research with wild chimpanzees, maternal kinship strongly predicted proximity and grooming patterns across the groups. Our findings suggest that, as with humans, male and female chimpanzee social bonds are influenced by the specific social group they live in, rather than by predisposed sex-based bonding strategies.

Research suggests that some nonhuman primate species exhibit differences in intrasex and intersex-based social bonding strategies across communities (Borgeaud et al., 2017; Davila-Ross et al., 2022; Stevens et al., 2007). Group differences in male and female social strategies appear to be particularly pronounced across chimpanzee populations, however. For instance, some research in Gombe National Park (Tanzania) and Kibale National Park (Uganda) shows that male–male chimpanzee social bonds are particularly strong compared with bonds among females and serve fitness benefits, including protection from other chimpanzee communities, increased status, siring offspring, boundary patrols, hunting cooperation, and food sharing between males (Feldblum et al., 2021; Gilby et al., 2013; Mitani, 2006, 2009; Mitani & Amsler, 2003), and can last over a decade (Bray & Gilby, 2020). However, other work, including studies in Taï National Park (Côte d'Ivoire) and Budongo Forest (Uganda), shows that females can also be highly social, especially with other females, forming long-term bonds, and display varying sociality across communities (Lehmann & Boesch, 2009; Newton-Fisher, 2006; Wakefield, 2013). Bonds among females may provide protection from male aggression and from dominance competition within communities (Newton-Fisher, 2006; Wakefield, 2013). Whether nonhuman primates that live in highly similar ecological environments show group-level variation in intrasex and intersex bonding strategies remains unclear. Research of this kind would shed light on the extent to which they are shaped by the social environment, in ways similar to humans, rather than being explained by ecological or genetic factors.
We therefore examined the influence of the social group on male and female social bonding behaviours by comparing chimpanzees of two social groups at Chimfunshi Wildlife Orphanage, Zambia. The two groups live in highly similar naturalistic environments and are comparable in their subspecies composition (Rawlings et al., 2014; van Leeuwen et al., 2012; van Leeuwen et al., 2018), meaning that ecological or genetic factors are unlikely to explain any cross-group differences in intra- and intersex bonding strategies. Previous research investigating male and female chimpanzee social bonding behaviours has generally focussed on single communities. In one exception, assessing long-term association patterns across five wild populations that differed in group size, sex ratio, and general demographic makeup, chimpanzees predominantly associated with same-sex partners (Surbeck et al., 2017). These findings are in line with other studies of single populations. For example, the male Ngogo chimpanzees (who have a high proportion of females) display close male associative bonds (Mitani & Amsler, 2003) and more frequent and successful cooperative behaviours (Mitani & Watts, 1999). Male presence is also suggested to reduce female aggression towards immigrating females, and males intervene in female–female aggression (Kahlenberg, Thompson, Muller, & Wrangham, 2008b). Social network analysis has also shown that as the Taï community group size decreased over time, females became more central to their group, ostensibly as competition and threat of aggression decreased (Lehmann & Boesch, 2009). However, the latter study only examined female sociality, meaning the role males played in such changes is unclear. Finally, other work suggests that social constraints and demographics, including group size, immigration of new group members, and differences in age and rank, impact chimpanzee social behaviours and bonding patterns, particularly alliance formation (Kahlenberg, Thompson, & Wrangham, 2008a; Mitani, 2006; Mitani et al., 2002). In sum, while these studies hint that chimpanzee bonding behaviours differ across populations, ruling out ecological or genetic explanations remains difficult when comparing different communities in the wild.

For a comprehensive assessment of social bonding, we applied social network analysis (SNA). SNA allows scientists to measure social group structures and is a robust quantitative approach for characterizing group social relationships at the group and individual levels (Puga-Gonzalez et al., 2019). SNA has previously been applied to describe the social relationships of several primate species, including humans (Dufour et al., 2011; Gradassi et al., 2022; Migliano et al., 2020; Pasquaretta et al., 2014; Puga-Gonzalez et al., 2019; Salali et al., 2016; Schel et al., 2013; van Leeuwen et al., 2018). We collected social network data based on proximity and grooming, which are widely used predictors of chimpanzee bonds (Díaz et al., 2020; Kanngiesser et al., 2011; Roberts & Roberts, 2016a; Schel et al., 2013; van Leeuwen et al., 2018; Wakefield, 2013). However, some studies have reported that proximity and grooming networks differentially predict other social behaviours, such as the successful transmission of information (Hasenjager et al., 2021; Hoppitt, 2017; van Leeuwen et al., 2020). Thus, including both measures allowed us to examine whether they similarly or differentially predicted male and female bonding strategies across the study groups.
It also allowed some comparisons with human social network studies, which use proximity and communication to measure association patterns (Guo et al., 2015; Migliano et al., 2020; Page et al., 2017; Van Cleemput, 2012). In addition, we examined the potential impact of kinship and age on associations within and across sex groups. Maternal kinship influences chimpanzee cooperation, affiliation, and prosociality (Clark, 2011; Langergraber et al., 2009; Samuni et al., 2021), and age-related differences have been shown to affect chimpanzee proximity and social behaviours (Benenson, 2019; Kawanaka, 1989; Mitani et al., 2002). Previous studies at Chimfunshi Wildlife Orphanage (CWO) have reported substantial group differences in chimpanzees' grooming behaviours (van Leeuwen et al., 2012), extractive foraging techniques (Rawlings et al., 2014), play vocalizations (Davila-Ross et al., 2011), and social dynamics more generally (van Leeuwen et al., 2018). The four main study groups at CWO show consistent differences in attributes of their sociality (e.g., co-feeding tolerance), with corresponding effects on behaviours known to affect fitness (van Leeuwen et al., 2021). As such, we conducted our study testing the hypothesis that the two largest groups of chimpanzees at CWO differed in their sex-specific sociality.

Methods
Subjects, study site, and data collection. Subjects were 61 chimpanzees housed in two groups at Chimfunshi Wildlife Orphanage (CWO), Zambia. Group 1 comprised 22 subjects: 11 males (mean age = 18.22 years, SD = 11.14) and 11 females (mean age = 17.82 years, SD = 9.70); Group 2 comprised 39 subjects: 10 males (mean age = 13.06 years, SD = 7.93) and 29 females (mean age = 17.59 years, SD = 8.38); see Table 1 for group demographics. Chimpanzees under 4 years of age were not considered in this study, as their location and behaviour were strongly contingent on their mothers'. The chimpanzees of Group 1 live in a 65-hectare enclosure and the Group 2 chimpanzees in a 72-hectare enclosure. The two enclosures are approximately 200 meters apart, formed of Miombo woodland, providing large, naturalistic environments separated by fencing. Data were collected in 2013 (July-September), between the hours of 06:30-18:00. For more details of the CWO chimpanzees and their environment, see Forrester et al. (2015), Rawlings et al. (2014), van Leeuwen et al. (2012), and van Leeuwen et al. (2018). Proximity data were collected by focal sampling individuals for 5 minutes and recording all individuals within 10 meters of the focal subject. Following Cronin et al. (2014) and Whitehead (2008), we took a 1/0 sampling-per-day approach to maximize data independence (i.e., if two individuals were observed associating once or more on the same day, the dyad scored 1; if not, the dyad scored 0). Focal order was randomized before each day of data collection, providing a balance between morning and afternoon data for individuals. There was a total of 460 focals for Group 1 (mean per individual = 20.91, SD = 2.76) and 845 focals for Group 2 (mean per individual = 20.12, SD = 0.53). We also constructed sociograms for both groups to visualize their respective network structures. For proximity data, we distinguished party co-residence (proximity to focal <10 meters) and direct association (proximity to focal <1 meter). For grooming data, we recorded each time a focal individual was involved in a grooming bout (either giving or receiving).
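The 1/0 per-day scoring rule can be made concrete with a short sketch. This is our illustration with hypothetical focal-follow records, not the authors' code (the paper's analyses were written in R):

```python
# Sketch of the 1/0 per-day association scoring: a dyad scores 1 for a given
# day if its members were observed within range at least once that day.
from collections import defaultdict

# (date, focal, individuals within 10 m of the focal during a 5-min follow)
focal_follows = [
    ('2013-07-01', 'A', {'B', 'C'}),
    ('2013-07-01', 'B', {'A'}),
    ('2013-07-02', 'A', {'C'}),
]

daily_association = defaultdict(set)   # date -> set of associated dyads
daily_seen = defaultdict(set)          # date -> set of identified individuals

for date, focal, partners in focal_follows:
    daily_seen[date].add(focal)
    daily_seen[date].update(partners)
    for partner in partners:
        # frozenset makes the dyad order-free, so (A, B) == (B, A)
        daily_association[date].add(frozenset((focal, partner)))

for date in sorted(daily_seen):
    print(date, sorted(tuple(sorted(d)) for d in daily_association[date]))
```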
Association measures
To assess social bond strength, association matrices based on the simple-ratio index were calculated. The simple-ratio index for a dyad AB is calculated as

$$SRI_{AB} = \frac{x}{x + y_A + y_B + y_{AB}}$$

where $x$ is the number of sampling periods in which A and B were observed associated; $y_A$ is the number of sampling periods with just A identified; $y_B$ is the number of sampling periods with just B identified; and $y_{AB}$ is the number of sampling periods with both A and B identified but not associated (Whitehead, 2008). As noted above, to optimize data independence, the sampling period was set to "date" (i.e., 24 hours). The association index score for each dyad is between 0 and 1 (0 = never observed together; 1 = always observed together). In Group 1, the number of dyads examined was N = 55, 121, and 55 for male–male, male–female, and female–female dyads, respectively. In Group 2, the number of dyads compared was N = 45, 208, and 488 for male–male, male–female, and female–female dyads, respectively.

Statistical analysis
Generalized linear mixed models (GLMMs) were used to examine whether the two study groups of chimpanzees differed in the relationship between dyad sex type (FF, FM, MM) and association index (simple-ratio association [SRA]; Hoppitt & Farine, 2018), while including maternal kinship and the age difference between dyad members as covariates. Specifically, we ran three GLMMs (Baayen, 2008) in the R statistical environment (Version 4.1.2; R Core Team, 2020). First, we modelled SRA based on party co-residence (i.e., proximity to focal <10 meters) with a beta error distribution and logit link function. Second, we modelled SRA based on direct association (i.e., proximity to focal <1 meter). Given that more than half of the resulting SRAs were 0, here we applied a hurdle approach in which we first modelled yes/no association with a binomial error structure and logit link function, and subsequently modelled the nonzero associations (henceforth 'magnitude') with a beta error distribution and logit link function. Third, we modelled SRA based on grooming associations (i.e., comprising both grooming given and received by and from the focal). Here, for the same reason, we applied the same hurdle approach. For kinship, we identified all individuals that were maternally related (binary coded yes/no), such that a mother and an offspring, and maternal siblings, would be coded as maternally related (grandmothers, 'aunts,' and 'uncles' were not). For Group 1, 17/231 dyads were maternally related, and for Group 2, 35/741 dyads were maternally related. Age differences between dyad members were calculated in years and months. The main fixed variable was dyad sex type in interaction with group, whereas maternal relatedness and age difference were entered as covariates. To account for non-independence of the response variable owing to repeated observations, we included both the focal and the partner (together making up the dyad) as random intercept variables. We applied a standard regression method capable of accounting for repeated measures of individuals as well as controlling for influential variables, while assessing the strength of the predictor variables on the response (Baayen, 2008; Bolker et al., 2009).
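Continuing the earlier sketch, the simple-ratio index can be computed directly from the daily records (again a Python illustration on toy data; the authors worked in R):

```python
# Sketch: SRI_AB = x / (x + yA + yB + yAB), with one sampling period per day.
from itertools import combinations

# Toy daily records in the shape produced by the previous sketch.
daily_seen = {'2013-07-01': {'A', 'B', 'C'}, '2013-07-02': {'A', 'C', 'D'}}
daily_association = {'2013-07-01': {frozenset('AB'), frozenset('AC')},
                     '2013-07-02': {frozenset('AC')}}

def simple_ratio_index(a, b, daily_seen, daily_association):
    x = y_a = y_b = y_ab = 0
    for date, seen in daily_seen.items():
        if frozenset((a, b)) in daily_association[date]:
            x += 1                       # both identified and associated
        elif a in seen and b in seen:
            y_ab += 1                    # both identified, not associated
        elif a in seen:
            y_a += 1                     # just A identified
        elif b in seen:
            y_b += 1                     # just B identified
    denom = x + y_a + y_b + y_ab
    return x / denom if denom else 0.0   # 0 = never together, 1 = always

individuals = sorted(set().union(*daily_seen.values()))
sri = {(a, b): simple_ratio_index(a, b, daily_seen, daily_association)
       for a, b in combinations(individuals, 2)}
print(sri)
```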
Given that we worked with observational data collected on different groups, with inherent biases regarding the selection and therefore the assessment of certain individuals (e.g., less neophobic individuals, or individuals with high gregariousness; Farine & Whitehead, 2015; Whitehead, 2008), we additionally treated the inputted datastream (i.e., the data used as the response for the GLMMs) in order to minimize the influence of such biases on the inferential framework (Farine & Aplin, 2019; van Leeuwen et al., 2019). This treatment has been proposed to benefit from permutations carried out before the data are condensed into network indices (Bejder et al., 1998), hence the name "prenetwork" or "datastream" permutations (Farine, 2017). The preferred relationships (based on the different input measures) were computed following standard social network methods (i.e., association indices; Whitehead, 2008), where we chose to use the currently most supported form, the simple-ratio index (Hoppitt & Farine, 2018). However, given that we were interested in which social and demographic variables determined these indices, we furthermore regressed them onto our variables of interest, specifically the dyad sex combinations. In order to obtain unbiased p values for the central question of whether the two groups of chimpanzees differed in the extent to which the dyad sex types associated, we applied datastream (aka prenetwork) permutations (n = 1,000; Farine, 2013) in which we randomly reassigned associations across the group members within a given day, while retaining the original frequency of associations per given day. The generated random networks were each analyzed with the same GLMMs as the original data (see above). We applied a model comparison between a full model including the interaction between dyad sex type and group, and a reduced model without the respective interaction yet with the main effects retained (Dobson & Barnett, 2018). For each iteration, we extracted the deviance difference between the models and compared these with the deviance difference of the original models (i.e., sum(Δdeviance ≤ Δdeviance_random)/1000) to obtain a p value for the respective interaction (henceforth "P_rand"). This approach was chosen to acknowledge the bias in observation effort due to certain focal subjects being more likely to be observed than others (e.g., owing to differences in enclosure usage). GLMMs were run using the R packages lme4 (Bates et al., 2015) and glmmTMB (Brooks et al., 2017). Separate dyad sex contrasts were analyzed with the emmeans package (Lenth, 2020). Sociograms were generated using the R package igraph (Csárdi & Nepusz, 2006). The generated sociograms depict the simple-ratio association indices, where the nodes represent individuals (red = females; blue = males) and the edges represent the dyadic tie strength based on the association data. Networks were laid out using the Fruchterman-Reingold weighted algorithm, which increases the uniformity of edge length and minimizes edge crossings. The graphs display communities generated by the spinglass algorithm (Reichardt & Bornholdt, 2006).
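The datastream permutation logic can be illustrated with a simplified sketch (ours, in Python rather than the R used by the authors, and with a generic test statistic standing in for the full GLMM deviance comparison):

```python
# Simplified sketch of prenetwork (datastream) permutations: within each day,
# associations are reassigned to random dyads while keeping that day's count
# of associations fixed; the observed statistic is then compared against the
# permuted distribution. A plain one-sided comparison replaces the paper's
# GLMM deviance-difference criterion.
import random
from itertools import combinations

def permute_day(dyads, individuals, rng):
    """Reassign one day's associations to random dyads, keeping the count."""
    pool = [frozenset(p) for p in combinations(individuals, 2)]
    return set(rng.sample(pool, k=len(dyads)))

def p_rand(observed_stat, statistic, daily_association, individuals,
           n_perm=1000, seed=1):
    """Fraction of permuted datastreams whose statistic is at least as
    extreme as the observed one."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n_perm):
        permuted = {date: permute_day(dyads, individuals, rng)
                    for date, dyads in daily_association.items()}
        if statistic(permuted) >= observed_stat:
            count += 1
    return count / n_perm
```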
Discussion
To better understand how group demographics impact sex differences in chimpanzee sociality, we provide an in-depth analysis of male and female social bonding in populations that share ecological conditions and do not differ genetically. The results showed that the dyad types bonded differently across the two chimpanzee groups, both in terms of party co-residence and grooming patterns. While female–female proximity associations were significantly stronger in Group 1 (which had an even distribution of males and females) than in Group 2 (which had a higher proportion of females than males), there were no such group differences for male–male or male–female associations. Conversely, female–female grooming bonds were stronger in Group 2 than in Group 1. These group differences cannot be explained by ecological or genetic influences, as the groups live in similar ecological environments and are comparable in their subspecies composition (Rawlings et al., 2014; van Leeuwen et al., 2012; van Leeuwen et al., 2018). Thus, we provide robust evidence that the social bonding of chimpanzees is shaped differently depending on the social group they live in. In turn, these results advance the debate regarding whether nonhuman primates show sex-specific, or more flexible, bonding behaviours (Bray & Gilby, 2020; Mitani, 2006, 2009; Surbeck et al., 2017; Wakefield, 2013) by directly comparing two groups of neighbouring chimpanzees in the same study, rather than carrying out indirect comparisons or comparisons of communities living in different locations.

Distal proximity (within 10 m) and grooming are different forms of bonding, potentially serving different functions, while both contributing to social cohesion. In chimpanzees, grooming between dyads has been associated with reduced aggression (Schel et al., 2013), coalition forming, post-conflict resolution, and agonistic support (Muller & Mitani, 2005; Schel et al., 2013), and has been argued to be an especially strong indicator of social bonding (Fedurek & Dunbar, 2009; Roberts & Roberts, 2016b). It is thus plausible that in Group 2, which was larger and had a high proportion of females, female–female dyadic grooming may serve to minimize intrasex aggression and competition and to facilitate stronger bonds. Indeed, in the Ngogo chimpanzees, which also have a high proportion of females compared with males, females form comparatively strong association bonds and cluster together (Wakefield, 2013). The finding that female–female dyads showed stronger proximity associations in Group 1 than in Group 2 may reflect a different strategy by females in this group. Previous research has shown that chimpanzees' distal proximity does not predict grooming patterns, which was suggested to reflect that grooming represents more targeted, richer bonding strategies, while distal proximity allows individuals to maintain a larger set of social relationships (Roberts & Roberts, 2016b). Thus, in Group 1, which was smaller and had a higher concentration of males, it is possible that the females used proximity to maintain relationships with most or all other females in the group, whereas females in the larger Group 2 used grooming to form particularly strong bonds with targeted other females. This in turn may suggest that different bonding strategies are differentially optimal in different social environments. Indeed, studies of social transmission have reported that proximity and grooming networks differentially predict the spread of information, where one is highly predictive of social transmission and the other is less so, or not at all (Hasenjager et al., 2021; Hoppitt, 2017; van Leeuwen et al., 2020). Future work could investigate how demographics, including female estrous cycles (Surbeck et al., 2021), may impact the function of social behaviours such as proximity and grooming, and in turn, the expression of group-specific bonding dynamics.
Maternal kinship was a strong predictor of both proximity and grooming patterns. This contrasts with work on wild chimpanzees, where females disperse from their communities. For example, in the Ngogo community, most female social bonds were outside of kinship lines (Langergraber et al., 2009) and kinship did not meaningfully impact male affiliation or cooperation patterns (Langergraber et al., 2007). Likewise, kinship did not predict reciprocal grooming in the Taï chimpanzees (Gomes et al., 2009). However, studies with captive chimpanzees appear to show stronger proximity bonds and grooming associations along kinship lines (Clark, 2011; Díaz et al., 2020; Kanngiesser et al., 2011). It is possible that in environments such as zoos and sanctuaries (like CWO), where there is no dispersal, mothers and their offspring form strong bonds into adulthood and, in turn, provide social support during conflicts or in cooperative contexts (Clark, 2011). Previously, researchers have discussed the role of group demographics in male and female social bonding based on indirectly comparing results drawn from one community in Africa, such as Gombe National Park or Kibale National Park, to data reported from other communities such as Budongo Forest or Taï National Park (Langergraber et al., 2009; Lehmann & Boesch, 2004; Mitani & Amsler, 2003; Newton-Fisher, 2006; Wakefield, 2013), or by comparing different communities across Africa (Surbeck et al., 2017). However, in such cases, ruling out factors such as ecological and genetic variation, among other explanations, remains difficult. Based on our findings involving chimpanzee groups in shared ecological environments and with comparable genetic composition, we conclude that male and female social bonds may be shaped by the social environment, in line with previous work on the CWO chimpanzees (Rawlings et al., 2014; van Leeuwen et al., 2012, 2014, 2018, 2019). It is important to note, however, that other factors we have not considered here, such as levels of within-group aggression and personality types (Massen & Koski, 2014; Rawlings et al., 2020), or polymorphic variation in receptor genes that are related to the expression of social behaviour in chimpanzees (Staes et al., 2014), may also impact bonding in chimpanzees. Future research could investigate how such variables influence associations within and between sex groups in these semi-wild groups as well as other chimpanzee communities. In addition, it is important to consider to what extent methodological differences across studies may impact results on social relationships. For example, the treatment of proximity measures differs between studies. Here, party co-residence was calculated as presence within 10 m of the focal individual. While some studies in the wild have also used this approach (Roberts et al., 2019), others have differed, using, for example, within 50 m (Langergraber et al., 2013; Rushmore et al., 2013) or simply within visual range of the focal individual (Wakefield, 2013). Likewise, as here, some studies have used focal follow protocols (Langergraber et al., 2013; Lehmann & Boesch, 2004; Rushmore et al., 2013; Schel et al., 2013; Wakefield, 2013), while others have also included group scan sampling to collect proximity data (Funkhouser et al., 2018). Although data from these approaches are correlated, group scan sampling has been shown to be slightly less accurate in predicting chimpanzee foraging behaviour (Gilby et al., 2010).
It is thus important to consider the methodological approaches taken when comparing across studies, and whether these may impact results. Further, although the chimpanzees at CWO live in large, naturalistic environments, systematic comparisons between sanctuary-living chimpanzees and wild communities are needed to examine whether, and how, the living environment impacts bonding strategies. In conclusion, we examined the social bonding strategies of sanctuary-living chimpanzees that are comparable in ecological and subspecies composition. Our findings on these strategies also add to an already large body of work showing that the CWO chimpanzees exhibit group differences in a range of domains including extractive foraging, play vocalizations, co-feeding tolerance, prosociality, and grooming behaviours. We conclude that male and female chimpanzee social bonding strategies are at least in part shaped by social factors, possibly culturally, in ways comparable to humans. Social bonding has played an essential role in human evolution, facilitating cooperation and maintaining cohesion in expanding group sizes, and our results shed light on how the social environment influences intra- and intersex/gender-based sociality.

Acknowledgements We thank the general manager and the Chimfunshi Research Advisory Board for logistical help and assistance throughout data collection at Chimfunshi Wildlife Orphanage. Thanks go to Hannah Roome for helpful comments on the manuscript.

Code availability The R code used to run the analyses is available upon request.

Conflicts of interest/Competing interests We declare no conflicts of interest.

Ethics approval The data collection at Chimfunshi was approved by the University of Portsmouth Psychology Research Ethics Committee and the Chimfunshi Wildlife Orphanage research committee and thus complies with all regulations regarding the ethical treatment of research subjects, including the American Association of Physical Anthropologists Code of Ethics as it pertains to human and nonhuman animals.

Consent to participate Data collection was observational and noninvasive.

Consent for publication This study was approved for publication by the Chimfunshi Research Advisory Board.
5,615.4
2023-02-01T00:00:00.000
[ "Biology", "Psychology" ]
The zebrafish (Danio rerio) anxiety test battery: comparison of behavioral responses in the novel tank diving and light–dark tasks following exposure to anxiogenic and anxiolytic compounds

Rationale Triangulation of approaches (i.e., using several tests of the same construct) can be extremely useful for increasing the robustness of findings and is widely used in behavioral testing, especially with rodents as a translational model. Although zebrafish are widely used in neuropharmacology research due to their high-throughput screening potential for new therapeutic drugs, behavioral test battery effects following pharmacological manipulations are still unknown. Methods Here, we tested the effects of an anxiety test battery and test time following pharmacological manipulations in zebrafish by using two behavioral tasks: the novel tank diving task (NTT) and the light–dark test (LDT). Fluoxetine and conspecific alarm substance (CAS) were chosen to induce anxiolytic- and anxiogenic-like behavior, respectively. Results For non-drug-treated animals, no differences were observed for testing order (NTT → LDT or LDT → NTT) and there was a strong correlation between performances on the two behavioral tasks. However, we found that under drug treatment, NTT/LDT responses are affected by test order depending on test time, with fluoxetine effects higher in the second behavioral task (6 min later) and CAS effects decreasing across time. Conclusions Overall, our data support the use of baseline behavior assessment using this anxiety test battery. However, when working with drug exposure, data analysis must carefully consider the time-drug-response relationship and data variability across behavioral tasks.

Introduction

In behavioral research, triangulation of approaches (i.e., using several tests of the same construct) can be extremely useful for increasing the robustness of the findings, and thus increase the confidence in the validity of the results (Stegenga 2009). In light of this, behavioral test batteries are widely employed, where animals are tested in multiple behavioral tasks either on the same day or across weeks, and the results are triangulated to gain a more robust operational definition of the target behavior (Paylor et al. 2006). While behavioral test batteries are common in rodent research, they have been less widely employed in zebrafish behavioral studies. Instead, there is a tendency to use a larger sample of animals. A drawback of this approach in zebrafish is that it increases the number of potentially unnecessary animals used in research (Born et al. 2017; McIlwain et al. 2001; Tammimäki et al. 2010). However, the systematic assessment of the impact of multiple behavioral tests on zebrafish performance in the assays is yet to be carried out. Anxiety is a transdiagnostic trait observed across many affective disorders, and understanding more about its underlying biology would assist in the development of novel or repositioned pharmacotherapeutics (Demetriou et al. 2021; Newby et al. 2015). To address this, zebrafish (Danio rerio) have been widely used in the translational neuroscience of affective disorders using anxiety as a core subject of investigation (Maximino et al. 2010b; Stewart et al. 2012). The two most commonly employed assays for studying anxiety-like behavior in zebrafish are the novel tank diving task (NTT) and the light–dark test (LDT).
The NTT and LDT have been extensively validated using drugs that induce anxiolytic and anxiogenic effects across species, including humans (Egan et al. 2009; Parker et al. 2012; Rosemberg et al. 2012). The NTT exploits the natural tendency of zebrafish to dive to the bottom of a novel environment, gradually exploring the top zone of the tank as they habituate to the environment (Levin et al. 2007). In the NTT, anxiety can be operationally defined in terms of (either) time spent in the bottom (↑ time = ↑ anxiety), in the top half (↑ time = ↓ anxiety), or top third (↑ time = ↓ anxiety) of a novel tank (Egan et al. 2009; Gerlai et al. 2000; Parker et al. 2012; Rosemberg et al. 2012). Similarly, the LDT evaluates the extent of a fish's natural tendency for scototaxis (aversion to bright areas and natural preference for the dark) in a novel environment (Blaser and Penalosa 2011; Facciol et al. 2017). In the LDT, anxiety is operationally defined either by time spent in the light portion (↑ time = ↓ anxiety) or by time spent in the dark compartment (↑ time = ↑ anxiety) (Facciol et al. 2019; Gerlai et al. 2000; Maximino et al. 2010a; Mezzomo et al. 2016). In both tasks, factors such as lighting, handling, pre-test housing, and the color of the tank play an important role in fish behavioral response (Parker et al. 2012). Song et al. (2016) evaluated the impact of carrying out the NTT and LDT as a test battery, and found no impact on baseline performance of their animals. In addition, they tested the impact of repeated testing following a mild (bright light) and strong (transportation in a car) stressor. They found no significant differences between the responses of the fish across the two test times, confirming (a) that fish showed very little evidence of a test battery effect, and (b) that this was stable even after a stress challenge. Despite these promising findings, what was not clear from Song et al.'s (2016) study was whether the test battery effect would be observed following pharmacological manipulations: this is critical to know, as zebrafish are commonly used for psychopharmacology experiments (Cassar et al. 2020; MacRae and Peterson 2015). In addition, and critically, previous studies did not examine individual fish performance across these two behavioral tasks. Because of this, it is not clear if the group effects were also observed in individual animals, or indeed, if individual animals show robust test-retest reliability across the battery. Here, we had three aims. First, we aimed to evaluate whether testing the same individuals on different anxiety-related tasks (NTT and LDT) would affect either their baseline behavior on the task or the effect size of two well-characterized anxiolytic and anxiogenic interventions (fluoxetine, and conspecific alarm substance; CAS) on their performance. Second, we aimed to examine correlations between NTT and LDT performance endpoints to better understand the value of test batteries for anxiety in zebrafish. Third, we looked at the effects of each drug after a time delay to determine whether any change is caused by the test battery or simply by elapsed time.

Animals and experimental design

Zebrafish (AB wild-type) were bred in-house and reared in standard laboratory conditions on a re-circulating system (Aquaneering, USA) on a 14-/10-h light/dark cycle (lights on at 9:00 a.m.), pH 8.4, at ∼28.5 °C (± 1 °C) in groups of 10 animals per 2.8 L. Fish were fed three times/day with a mixture of live brine shrimp and flake food.
All behavioral tests were performed between 10:00 and 15:00 h (Mon-Sun). Figure 1 depicts the experimental design. Adult zebrafish (4 mpf; 50:50 female:male ratio) were first transferred to 300-mL beakers containing either aquarium-treated water for 5 min (handling control), fluoxetine (100 µg/L; 30-min exposure), or conspecific alarm substance (CAS; 5-min exposure), and then transferred to the NTT, the LDT, or a new beaker (6-min time-delay groups) (see below). Animals were then immediately transferred to the second anxiety test (NTT or LDT, depending on the first task assessed). Animals from the time-delay groups were tested in only one behavioral task. Fluoxetine was obtained from Sigma-Aldrich (Dorset, UK). After behavioral recording, fish were euthanized using 2-phenoxyethanol from Aqua-Sed (Aqua-Sed™, Vetark, Winchester, UK). The required sample size of ~64 for each drug exposure (n = 16 NTT → LDT + n = 16 LDT → NTT + n = 16 time delayed + NTT + n = 16 time delayed + LDT) was calculated a priori following pilot experiments and the sample sizes previously used for testing drug effects in the NTT and LDT in our lab (d = 1.25, power = 0.8, alpha = 0.05). To ensure data reliability, two independent batches were tested (choosing n = 8 fish from several housing tanks for each batch). All behavioral testing was carried out in a fully randomized order, randomly choosing fish from one of four housing tanks for drug exposure followed by behavioral testing. After each behavioral trial, the water in the NTT, LDT, and beakers was changed. All experiments were carried out following approval from the University of Portsmouth Animal Welfare and Ethical Review Board, and under license from the UK Home Office (Animals (Scientific Procedures) Act, 1986) [PPL: P9D87106F].
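For reference, the a priori sample-size calculation above can be reproduced with a short R sketch. The exact software the authors used is not stated, so this is an assumed reconstruction using the pwr package; the simple two-group calculation returns a smaller n than the 16 fish per group actually used, which additionally buffers for drop-out and the factorial design.

library(pwr)

## d = 1.25, power = 0.8, alpha = 0.05, two-sided two-sample comparison.
res <- pwr.t.test(d = 1.25, power = 0.80, sig.level = 0.05,
                  type = "two.sample", alternative = "two.sided")
ceiling(res$n)      # minimum fish per group for a single two-group contrast
ceiling(res$n) * 4  # scaled to the four groups per drug exposure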
Conspecific alarm substance (CAS) extraction

CAS is a fear cue that has been successfully used to trigger stress-related responses at physiological and behavioral levels in different fish species (Abreu et al. 2016; Canzian et al. 2017; Fraker et al. 2009; Hall and Suboski 1995; Quadros et al. 2016; Speedie and Gerlai 2008; Wong et al. 2010). Briefly, CAS exposure was performed by individually exposing fish to 1.05 mL of CAS preparation in 300-mL beakers for 5 min. To obtain CAS, a phenotypically similar donor fish was killed using rapid cooling (submersion in 2 °C water). The epidermal cells were then cut with 10 shallow slices on both sides of the body using a razor blade. Ten milliliters of distilled water was then added into a Petri dish and mixed to fully cover the fish's body. All procedures were performed on ice and controlled to avoid drawing blood and any external contamination (Canzian et al. 2017; Egan et al. 2009; Quadros et al. 2016; Speedie and Gerlai 2008). After CAS exposure, fish were tested in the NTT → LDT or LDT → NTT order (counterbalanced 50:50).

Novel tank diving test (NTT)

Animals (n = 144) were placed individually in a purpose-built transparent tank (20 cm length × 17.5 cm height × 5 cm width) containing 1 L of aquarium water. Behavioral activity was analyzed using the Zantiks AD system's purpose-built NTT (Zantiks Ltd., Cambridge, UK) for 6 min (Egan et al. 2009; Parker et al. 2012; Rosemberg et al. 2012). The Zantiks AD system was fully controlled via a web-enabled device. The tank was separated into three virtual zones (bottom, middle, and top) to provide a detailed evaluation of vertical activity. The following endpoints were analyzed: distance traveled and time spent in the top zone.

Light-dark test (LDT)

The LDT was performed in a black tank (20 cm length × 15 cm height × 15 cm width) divided into two equally sized partitions, where half of the tank contained a bright white light and the other half was covered with a purpose-built black partition to avoid light exposure. Animals (n = 144) were placed individually into the behavioral apparatus and their activity was analyzed using the Zantiks AD system's purpose-built LDT equipment (Zantiks Ltd., Cambridge, UK) for 6 min to determine the time spent in the dark area (Maximino et al. 2010a; Mezzomo et al. 2016).

Fig. 1 Experimental design illustration showing the behavioral test battery of NTT followed by LDT or vice-versa, and the time-delay groups (6 min). For the fluoxetine and conspecific alarm substance (CAS) groups, animals were pretreated prior to behavioral assessment for 30 and 5 min, respectively.

Statistics

Normality and homogeneity of variances were ascertained by the Kolmogorov-Smirnov and Bartlett's tests, respectively. Control-group NTT and LDT data were analyzed using one-way ANOVA (baseline behavior NTT/LDT for 1st vs. 2nd vs. delay (6 min)). Two-way ANOVA with test order (two levels: 1st vs. 2nd tested in a new environment, or 1st tested vs. time delay) and substance exposure (between-subjects factor, three levels: control, fluoxetine, CAS) as fixed factors was used to compare anxiety endpoints (NTT: time spent in the top of the tank and distance traveled; LDT: time in the dark area). Results are expressed as means ± standard error of the mean (S.E.M.). Tukey's test was used for post-hoc analysis and all groups were compared with each other. Results were considered significant when p ≤ 0.05. Heat maps were used to summarize differences between groups in the NTT or LDT comparing the 1st, 2nd, and time-delayed groups. Pearson correlation analysis was used to assess the association between time spent in the top zone and in the lit area for the control, fluoxetine, and CAS groups, independently of testing order.
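The analysis pipeline just described can be sketched in R as follows. The data frame `fish` and its column names are hypothetical placeholders standing in for the study's actual data structure.

## One row per fish: time_top and dist (NTT), time_lit (LDT), plus factors
## order ("first"/"second") and drug ("control"/"fluoxetine"/"CAS").
bartlett.test(time_top ~ interaction(order, drug), data = fish)  # homogeneity

## Two-way ANOVA with test order and substance exposure as fixed factors,
## followed by Tukey's post-hoc comparisons among all groups.
fit <- aov(time_top ~ order * drug, data = fish)
summary(fit)
TukeyHSD(fit)

## Pearson correlation between the NTT and LDT endpoints, computed per
## exposure group and independently of testing order.
by(fish, fish$drug,
   function(d) cor.test(d$time_top, d$time_lit, method = "pearson"))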
Results

Figure 2 shows the distance traveled, time spent in the top zone, and time spent in the lit area for control zebrafish tested in both the NTT and LDT. For locomotion, no significant effect was observed for the distance traveled in the NTT (F(2, 45) = 0.1485; p = 0.8624). Similarly, no significant difference was observed for controls' time spent in the top zone (tested 1st vs. 2nd vs. delay (6 min); F(2, 45) = 0.02521; p = 0.9751) (Fig. 2A). Regarding animals' scototaxis, no significant difference was observed for animals tested in the light-dark test 1st vs. 2nd vs. delay (6 min) (F(2, 45) = 0.2282; p = 0.7969) (Fig. 2B).

CAS effects are decreased over time in a test battery and after time delay

CAS was used as an anxiogenic control, and its effects on anxiety-like behavior in the NTT followed by the LDT and vice-versa are depicted in Fig. 4. A significant interaction effect (test order × CAS exposure) was observed for the distance traveled (F(1,60) = 4.434; p* = 0.0394), but there was no main effect of test order (F(1,60) = 2.312; p = 0.1336) or CAS exposure (F(1,60) = 0.2362; p = 0.6287). However, no significant effect was observed through Tukey's post-hoc analysis. Regarding animals' time spent in the top zone, there was a significant main effect of CAS exposure (F(1,60) = 6.326; p* = 0.0146) independent of test order. No significant interaction between factors (F(1,60) = 0.5602; p = 0.4571) or test order effect (F(1,60) = 1.059; p = 0.3077) was observed for CAS exposure. A significant decrease in the time spent in the top zone was only observed for controls 1st vs. CAS 1st after post-hoc analysis (p* = 0.0484), and no significant difference was observed for controls 2nd vs. CAS 2nd (p = 0.3861) (Fig. 4A). Finally, although no interaction (test order × CAS exposure; F(1,60) = 0.03571; p = 0.8508) and no test order effect (F(1,60) = 0.6211; p = 0.4338) were observed for time spent in the lit zone, a significant CAS exposure effect was observed (F(1,60) = 18.47; p < 0.0001). Briefly, CAS exposure decreased the time spent in the lit area for CAS-exposed fish tested 1st and 2nd compared with their own controls (p* = 0.0124 and p* = 0.0257, respectively). When looking at the effect of time delay on CAS exposure (Fig. 4C and D),

Fig. 3 The effects of the behavioral test battery in the novel tank diving task (A) and light-dark test (B) of wild-type (WT) zebrafish acutely exposed to fluoxetine 100 µg/L. The effects of time delay in the novel tank diving task (C) and light-dark test (D) of WT zebrafish acutely exposed to fluoxetine 100 µg/L. Data are represented as mean ± S.E.M. and were analyzed by two-way ANOVA (test order and fluoxetine as factors), followed by Tukey's multiple comparison test (n = 16 per group).

Figure 6 displays intercorrelations between the endpoints for both behavioral measures, in the presence of CAS and fluoxetine and with no drug treatment. There was a strong positive correlation between the time spent in the top zone (NTT) and the time spent in the lit zone (LDT) for the no-drug-treated group (r = 0.6954; p**** < 0.0001; n = 32), and a moderate positive correlation for the fluoxetine group (r = 0.3736; p* = 0.0352; n = 32). However, there was no correlation between endpoints in the tests in the CAS-exposed animals (r = 0.0754; p = 0.6917; n = 32).

Discussion

Here, for the first time, we tested how using the same individuals in two anxiety-related tasks affects their behavioral responses to the protocols, both in the absence of drugs and following exposure to an anxiolytic and an anxiogenic compound (fluoxetine and CAS, respectively). Additionally, we examined whether introducing a time delay plays a role in the drug response of fluoxetine and CAS, or whether it is the test battery that increases or decreases animals' behavioral response in a second task.

Fig. 4 The effects of the behavioral test battery in the novel tank diving task (A) and light-dark test (B) of wild-type (WT) zebrafish acutely exposed to CAS for 5 min. The effects of time delay in the novel tank diving task (C) and light-dark test (D) of WT zebrafish acutely exposed to CAS for 5 min. Data are represented as mean ± S.E.M. and were analyzed by two-way ANOVA (test order and CAS as factors), followed by Tukey's multiple comparison test (n = 16 per group).

We also examined, for the first time, how individuals performed across the different tasks to better understand individual performance characteristics in the two tests. We found that parameters linked to anxiety-like behavior, such as the time spent in the top zone and the time spent in the lit area, are not affected by testing wild-type (WT) fish in the NTT followed by the LDT and vice-versa.
However, when animals are exposed to fluoxetine, a larger effect size was observed in the second test, independent of the task (i.e., NTT or LDT), which suggests that there is an impact of the time at which these fish were tested, rather than of multiple testing. This hypothesis was later confirmed by testing animals in the NTT or LDT after a time delay with no previous behavioral testing. In contrast to fluoxetine, CAS had its strongest effect when animals were tested immediately; however, similar to what was found for fluoxetine, the results were consistent, independent of the task. Moreover, lower correlation values were found when comparing the time spent in the top zone vs. the time spent in the lit area for both fluoxetine- and CAS-exposed animals, suggesting higher data variability when animals are tested in both behavioral tasks and these parameters are compared independently of testing order. The NTT and the LDT are widely used to assess anxiety-like behavior in zebrafish (Maximino et al. 2010b; Stewart et al. 2012). The comparison across tasks has been previously studied, with both tasks demonstrating good cross-test correlation in vivo and similar sensitivity to zebrafish anxiety-like states (Kysil et al. 2017). In rodents, behavioral test batteries are commonly used to study several behaviors, including anxiety-related tasks using the open field and light-dark transitions (Okuda et al. 2018). In zebrafish, studies have used a combination of social behavior, memory, and anxiety tests in order to examine inter-domain correlations, but also to minimize the use of animals (Fontana et al. 2020, 2021). Zebrafish have previously been shown to display a similar behavioral phenotype when tested in the NTT → LDT or LDT → NTT order (Song et al. 2016). In the same study, an acute stressor (30-min car transportation) increased anxiety-related patterns in both tasks (Song et al. 2016). These data suggest that a strong effect can still be observed when performing behavioral test batteries in zebrafish, with no impact of test order or of multiple testing by itself. Here, we found that the overall baseline response of our control WT zebrafish was kept the same across tasks when testing NTT and LDT, which supports the use of this species across anxiety-related tasks in order to reduce the number of animals used in research. However, when animals were acutely exposed to CAS or fluoxetine, different effects were observed when performing a behavioral test battery. The anxiolytic effect of fluoxetine was more pronounced in the 2nd behavioral task or after a 6-min time delay, independently of whether this was the NTT or the LDT. No significant differences were observed in the 1st behavioral task. Fluoxetine is a selective serotonin reuptake inhibitor (SSRI) commonly used to treat several psychiatric disorders in humans. The role of fluoxetine in zebrafish anxiety is somewhat controversial. For example, Stewart et al. (2011b) showed that fluoxetine had no effect on anxiety-like behavior at a concentration of 0.1 mg/L (the same concentration used here). However, these animals had an increased tendency to spend time in the top zone of the tank, which can be an indicator of decreased anxiety. The authors discussed that the lack of anxiolytic effects following acute fluoxetine in zebrafish contradicts clinical and rodent findings (Hascoët et al. 2000; Lightowler et al. 1994; Varty et al. 2002). Here, the anxiolytic effect of fluoxetine was only observed when fish were tested in the second task or after 6 min.
This could explain the data variability across papers, since we observed no effect when fish were immediately tested in the NTT or LDT. Altogether, these data suggest that there is a temporal delay in the effects of fluoxetine. The pharmacokinetics and pharmacodynamics of fluoxetine vary depending on the administration route and the time/duration of exposure (Caccia et al. 1990; Sawyer and Howell 2011). For example, a study with non-human primates showed that the peak of fluoxetine in serum is achieved at different times depending on the drug concentration (15 min for 31 ng/mL, 30 min for 70 ng/mL, and 60 min for 165 ng/mL). Meanwhile, its main metabolite, norfluoxetine, was only found at 120 min for all the doses tested, lasting up to 24 h after fluoxetine exposure (Sawyer and Howell 2011). Although fluoxetine has been commonly used as an anxiolytic drug in neuropsychiatric studies using zebrafish, time-dose-response studies looking at the effects of fluoxetine using water exposure as the administration route in this species are lacking. A recent study explored the long-term effects of fluoxetine on zebrafish behavior up to 28 days after acute exposure to different concentrations, and the authors found that fluoxetine effects vary depending on the time of behavioral testing (Al Shuraiqi et al. 2021). However, the short-term time-dose-response of fluoxetine in anxiety-related paradigms is still unknown, and characterizing it is an important step toward understanding data variability across labs and the mechanisms underlying fluoxetine-induced anxiolytic and anxiogenic phenotypes. Although a significant decrease in the time spent in the lit area could be observed when animals were tested 1st, 2nd, or after a time delay in the LDT, in the NTT no significant effects were observed between controls and CAS-exposed animals when tested 2nd or after a time delay. CAS is an effective acute stressor which is produced and stored in the epidermal "club" cells and is naturally released into the water after skin injuries provoked by predator attacks (Chivers and Smith 1994; Korpi and Wisenden 2001). The different concentrations and effects of CAS on zebrafish fear- and anxiety-related behavior were first described by Speedie and Gerlai (2008), and its anxiogenic effects are well characterized in behavioral neuroscience research. For example, in the light-dark test, zebrafish exposed to CAS for 5 min showed increased scototaxis (preference for dark areas) (Abreu et al. 2016; Quadros et al. 2016), a behavioral change often observed after exposure to anxiogenic drugs (Stewart et al. 2011a). Similarly, we found that CAS significantly decreased the time spent in the lit area and in the top zone, which indicates an increased "anxious" response. Interestingly, we found that this effect is attenuated in the second task only when the NTT is the second behavioral analysis in the test battery, whereas a strong effect is maintained across tasks for the LDT. Similarly, when considering time delay as a factor, no significant differences were observed for CAS in the NTT, but a strong effect was still observed in the LDT even after a time delay. However, in both behavioral tasks, the effect size of CAS-induced anxiogenic behavior is decreased when animals are tested after 6 min, suggesting that these effects may wane over time.
When looking at the correlation between these tasks, independent of test order, there was a strong positive correlation between the time spent in the top zone and the time spent in the lit area, making the behavioral phenotypes in these tasks comparable. Similarly, a previous study showed that these tasks have good cross-test correlation, with the NTT differing from the LDT only in terms of post-task cortisol responses, the NTT being correlated with higher stress-related responses (Kysil et al. 2017). In addition, we showed here that the correlation between NTT and LDT anxiety-related variables is not always strong, showing low values when animals are exposed to compounds such as fluoxetine and CAS. Although this could indicate that the data are less reliable when comparing the animals' responses in both tasks (NTT → LDT or LDT → NTT), the main effects of these drugs when tested first (CAS) or second (fluoxetine) were similar across behavioral tasks.

Conclusion

Overall, the use of a behavioral test battery for anxiety-like behavior can indeed influence behavioral responses when fish have previously been exposed to a chemical substance such as CAS or fluoxetine. However, our data indicate that the effects are not caused by the test battery per se but rather by the test time. For example, fluoxetine has stronger anxiolytic-like effects when animals are tested second or after a time delay, whereas CAS effects are stronger in the first behavioral task than in the second behavioral task or after 6 min. Importantly, WT behavior was not influenced by testing animals in both new environments. Our findings may be particularly important for the characterization of mutant lines, where a reduced number of animals could potentially be used to evaluate baseline behavior when there is no influence of drug exposure. However, further studies are still necessary to compare data between WT animals and genetically altered fish. Altogether, this supports the use of baseline behavior assessment using multiple tasks; however, researchers must carefully prepare their experimental design when testing drugs in a behavioral test battery, considering the drug's time-dose-response.
5,950
2022-01-01T00:00:00.000
[ "Biology" ]
Normal limit laws for vertex degrees in randomly grown hooking networks and bipolar networks

We consider two types of random networks grown in blocks. Hooking networks are grown from a set of graphs as blocks, each with a labelled vertex called a hook. At each step in the growth of the network, a vertex called a latch is chosen from the hooking network and a copy of one of the blocks is attached by fusing its hook with the latch. Bipolar networks are grown from a set of directed graphs as blocks, each with a single source and a single sink. At each step in the growth of the network, an arc is chosen and is replaced with a copy of one of the blocks. Using Pólya urns, we prove normal limit laws for the degree distributions of both networks. We extend previous results by allowing for more than one block in the growth of the networks and by studying arbitrarily large degrees.

Introduction

Several random tree models have been studied where at each step in the growth of the network, a vertex v is chosen amongst all the vertices of the tree, and a child is added to v. When the choice of v is made uniformly at random, these trees are called random recursive trees. When the choice of v is made proportionally to its degree deg(v), these trees are called random plane-oriented recursive trees. Both models are examples of preferential attachment trees, where the choice of v is made proportionally to χ deg(v) + ρ for real parameters χ and ρ (notice that a preferential attachment tree is a random recursive tree when χ = 0 and is a random plane-oriented recursive tree when ρ = 0). Pólya urns were used to prove multivariate normal limit laws for the degree distributions in all of these random tree models [9,10,6,4]. Asymptotic normality of degree sequences of similar types of preferential attachment models has also been established without the use of Pólya urns [11,12]. The process of adding a child to a vertex v in a tree can instead be thought of as taking the graph K 2 (two vertices joined by an edge) with one of the vertices labelled h, and fusing together the vertices v and h. Hooking networks are grown in a similar manner from a set of graphs C = {G 1 , G 2 , . . . , G m }, called blocks, where each block G i has a labelled vertex h i called a hook. At each step in the growth of the network, a vertex v called a latch is chosen from the network, a block G i is chosen, and the hook h i and the vertex v are fused together. A more precise formulation is laid out in Section 1.2.1. Several graphs can be thought of as hooking networks. Any tree can be grown as a hooking network with K 2 as the only block. A block graph (or clique graph) is a hooking network whose blocks are complete graphs, and a cactus graph is a hooking network whose blocks are cycles. We prove multivariate normal limit laws for the degree distributions of hooking networks as the number of blocks attached tends to infinity (see Theorem 1.3). We allow for a preferential attachment scheme for the choice of the latch (i.e., the latch v is chosen proportionally to χ deg(v) + ρ). We also assign to each block G i a value p i such that p 1 + p 2 + · · · + p m = 1, and choose the block G i to be attached with probability p i . Along with the results for degree distributions of the random tree models described above, Theorem 1.3 also generalizes other results on previously studied hooking networks.
Gopaladesikan, Mahmoud, and Ward [3] introduced blocks trees, which can be thought of as hooking networks grown from a set of trees as blocks, where the root of each block has a single child and acts as the hook. In their model, the latch is chosen uniformly at random at each step, and the block to be attached is chosen according to an assigned probability value. They proved a normal limit law for the number of leaves (vertices with degree 1) in blocks trees. Mahmoud [8] proved multivariate normal limit laws for the number of vertices with small degrees in self-similar hooking networks, which are hooking networks grown from a single block called a seed. Both the case where the latch is chosen uniformly at random and the case where the latch is chosen proportionally to its degree were studied in [8]. In the extended abstract [2], we presented a proof of multivariate normal limit laws in the specific case of hooking networks grown from several blocks when the choice of the latch as well as the choice of the block to be attached are made uniformly at random. The methods used to prove our results for hooking networks also apply to proving multivariate normal limit laws for outdegree distributions of bipolar networks (see Theorem 1.7). Bipolar networks are grown from a set C = {B 1 , B 2 , . . . , B m } of directed graphs, each with a single source N i : a vertex with zero indegree (deg − (N i ) = 0), and a single sink S i : a vertex with zero outdegree (deg + (S i ) = 0). At each step in the growth of the network, an arc (v, u) is chosen and is replaced with one of the blocks B i , by fusing N i to v and S i to u; see Section 1.2.2 for a more precise description. Previously, results were obtained for vertices of small outdegrees in bipolar networks grown from a single block, and where the arc (v, u) to be replaced is chosen uniformly at random [1]. We extend previous results by looking at bipolar networks grown from more than one block, by generalizing the choice of the arc to be replaced, and by studying arbitrarily large degrees.

Composition of the paper

The networks studied are described in more detail in Section 1.2. Alongside the descriptions of the networks, running examples of hooking networks and bipolar networks are described in Sections 1.2.1 and 1.2.2, respectively. Our main results are stated in Section 1.3. These include multivariate normal limit laws for the vectors of degrees of hooking networks and vectors of outdegrees of bipolar networks. The theory of generalized Pólya urns developed by Janson in [5], which is the main tool used in the proofs, is summarized in Section 2. The proofs of our main results are presented in Section 3. This is done in three steps. We start by describing how we study the vertices in our networks as balls in urns in Section 3.1. Properties of the intensity matrices for these urns are gathered in Section 3.2. In Section 3.3, we prove that the matrices studied in Section 3.2 are indeed the intensity matrices for the urns we are studying and, with the help of theorems proved in [5] and stated in Section 2, we finish the proofs of our main results.

The networks studied

In the growth of hooking networks and in the growth of bipolar networks, a vertex v is chosen at every step. The choice of the vertex v is made with probability proportional to χ deg(v) + ρ in the case of the hooking networks and proportional to χ deg + (v) + ρ in the case of the bipolar networks, where χ ≥ 0 and ρ ∈ R so that χ + ρ > 0.
Since these choices are made proportionally, without loss of generality we can limit the choice of χ to 0 or 1 (simply divide the numerators and denominators of (1) and (2) below by χ if this value is nonzero). When χ = 1 we let ρ > −1, while we let ρ be strictly positive when χ = 0 to avoid the cases where χ deg(v) + ρ ≤ 0 or χ deg + (v) + ρ ≤ 0 (from the descriptions below we see that the hooking networks studied are connected and so the vertex v has degree deg(v) > 0; we also see below that all vertices v that are candidates for being a latch in the bipolar networks studied satisfy deg + (v) > 0). For a positive integer k, we let w k := χk + ρ.

Hooking networks

Let C = {G 1 , G 2 , . . . , G m } be a set of connected graphs, each with at least 2 vertices, and each with a labelled vertex h i . We allow for the graphs to contain self-loops and multiple edges. The graph G i is called a block, and the vertex h i is called its hook. Each block G i is also assigned a positive probability p i such that p 1 + p 2 + · · · + p m = 1. For example, consider the set of blocks in Figure 1, with their hooks labelled and their probabilities written underneath.

Figure 1: A set of simple graphs as blocks

Let χ and ρ be real numbers satisfying the conditions set above. A sequence of hooking networks G 0 , G 1 , G 2 , . . . is constructed as follows: one of the blocks G i is chosen, and we set G 0 to be a copy of G i (the choice of the first block does not need to be done at random for our methods to work). The vertex H that corresponds to the hook of this first block copied to make G 0 is called the master hook of the hooking networks constructed afterwards; when all the blocks are trees the master hook acts as the root of the network. Recursively for n ≥ 1, the hooking network G n is constructed from G n−1 by first choosing a latch v at random proportionally to χ deg(v) + ρ amongst all the vertices of G n−1 , that is, with probability

w_{deg(v)} / Σ_{u ∈ V(G_{n−1})} w_{deg(u)},    (1)

where V (G n−1 ) is the vertex set of G n−1 . Once the latch is chosen, a block G i is chosen according to its probability p i . A copy of G i is attached to G n−1 by fusing together the latch v with the hook h i of the copy of G i ; that is, h i is deleted and edges are drawn from v to the former neighbours of h i . Figure 2 is a sequence of hooking networks constructed from the set of blocks in Figure 1 by taking a copy of G 3 and attaching copies of G 4 , then G 2 , and finally a copy of G 1 . The master hook of the network is labelled H, and at each step the vertex chosen to be the latch is denoted by *.

Figure 2: A sequence of hooking networks grown from the blocks G 1 , G 2 , G 3 and G 4 of Figure 1
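The growth rule just described depends on a block only through the degree of its hook and the degrees of its remaining vertices, so the degree dynamics can be simulated compactly. The following R sketch is ours, with hypothetical example blocks (not those of Figure 1, whose structure is not reproduced here).

## Simulate the degree sequence of a hooking network. Each block is summarized
## by hook = degree of its hook, rest = degrees of its other vertices, and
## all = degrees of every vertex (used for the initial copy).
simulate_hooking <- function(blocks, probs, n_steps, chi = 1, rho = 0) {
  deg <- blocks[[sample(length(blocks), 1, prob = probs)]]$all
  for (n in seq_len(n_steps)) {
    w <- chi * deg + rho                       # preferential-attachment weights, cf. (1)
    latch <- sample(length(deg), 1, prob = w)  # choose the latch
    b <- blocks[[sample(length(blocks), 1, prob = probs)]]
    deg[latch] <- deg[latch] + b$hook          # latch fuses with the hook
    deg <- c(deg, b$rest)                      # the other block vertices join
  }
  deg
}

## Hypothetical blocks: a path on 3 vertices hooked at an endpoint, and a
## triangle hooked at one of its (symmetric) vertices.
blocks <- list(list(hook = 1, rest = c(2, 1), all = c(1, 2, 1)),
               list(hook = 2, rest = c(2, 2), all = c(2, 2, 2)))
set.seed(42)
deg <- simulate_hooking(blocks, probs = c(0.5, 0.5), n_steps = 10000)
head(table(deg) / length(deg))  # empirical degree proportions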
Bipolar networks

Let B be a directed acyclic graph containing a unique source N called the north pole of B, a unique sink S called the south pole of B, and a directed path from every vertex v ≠ S in B to S. The methods presented here also apply to a more relaxed definition of bipolar directed graphs: connected directed graphs with a single source and a single sink. Let C = {B 1 , B 2 , . . . , B m } be a set of bipolar directed graphs, each with their north pole N i and south pole S i identified. Each B i is called a block, and is assigned a probability p i such that p 1 + p 2 + · · · + p m = 1. For example, consider the set of blocks in Figure 3, with their north and south poles labelled as well as their probabilities.

Figure 3: A set of bipolar directed graphs as blocks

Once again, we let χ and ρ be real numbers satisfying the conditions set at the beginning of this section. We choose a block B i and set the bipolar network B 0 to be a copy of B i (once again, the choice of the first block need not be made at random). The vertices corresponding to the north and south poles of B 0 serve as the master source N and master sink S respectively of the bipolar networks constructed afterwards. For n ≥ 1, the bipolar network B n is constructed from B n−1 in a manner similar to that of hooking networks. First, a latch v is chosen proportionally to χ deg + (v) + ρ amongst all the vertices in B n−1 that are not the master sink, that is, with probability

w_{deg+(v)} / Σ_{u ∈ V(B_{n−1}) \ {S}} w_{deg+(u)},    (2)

where V (B n−1 ) is the vertex set of B n−1 . Once the latch is chosen, one of the arcs (v, u) leading out of v is chosen uniformly at random amongst all the arcs leading out of v, and finally a block B i is chosen according to its probability p i . The arc (v, u) is deleted, and a copy of the block B i is added by fusing the north pole N i with v, and fusing the south pole S i with u. We never allow the master sink to be chosen as a latch (since it has no arcs leading out of it). Figure 4 is a sequence of bipolar networks constructed from the blocks in Figure 3. The master source N and the master sink S are labelled, and at each step, the latch v is denoted by *, and the arc (v, u) to be removed is dashed. Previously, Chen and Mahmoud [1] studied what they called self-similar bipolar networks. These are bipolar networks grown from a single bipolar directed graph as the only block. At each step in the growth of their networks, an arc (v, u) is chosen uniformly at random amongst all the arcs to be deleted before being replaced with a copy of the block. This is equivalent to choosing v proportionally to its outdegree deg + (v), and then choosing an arc (v, u) uniformly at random amongst all the arcs leading out of v. Therefore, the model of bipolar networks introduced here extends their model.

Main results

Before we state the main results, we need a useful definition. In the interest of brevity, the notation (out)degree is used in the following discussion, and is interpreted as degree for hooking networks and outdegree for bipolar networks. Depending on the set of blocks that are used to grow the hooking networks or bipolar networks, it is possible for some positive integers to never appear as the (out)degree of a vertex in the network, while some integers are only the (out)degree of at most one vertex at some point in the growth of the network. By ignoring these so-called nonessential (out)degrees, formally defined below, the proofs using Pólya urns are simplified. We also show by a simple argument below (see Proposition 1.2) that only the master hook or master source may have a nonessential (out)degree. Excluding this single vertex from the (out)degree distributions does not affect the asymptotic behaviour of these distributions.

Definition 1.1. Given a set C of blocks, a (strictly) positive integer k is called an essential (out)degree if with positive probability, there is some n so that the n-th iteration of the network grown out of C has at least two vertices with (out)degree k. A positive integer is called a nonessential (out)degree if it is not an essential (out)degree.

Remark 1.1. Our definition of essential (out)degrees differs slightly from the definition of admissible (out)degrees used in [1] and [8], where any (out)degree that may appear in the network is considered an admissible (out)degree.
In the example of hooking networks grown in Section 1.2.1 from the blocks in Figure 1, all of the hooks of the blocks have even degrees, and all other vertices in the blocks have odd degrees. As a result, during the growth of the hooking networks, only the master hook has even degree, while every other vertex has odd degree (as is evidenced by the hooking networks in Figure 2). In that case, the odd numbers are essential degrees, and the even numbers are nonessential.

Proposition 1.2. The only vertex in a hooking network (or bipolar network) that can have a nonessential (out)degree is the master hook (or master source) of the network.

Proof. We only prove the proposition for hooking networks; the argument is similar for bipolar networks. Suppose there is a positive probability that a vertex v which is not the master hook has degree k in the hooking network G n , and without loss of generality let n be the smallest number for which G n has a vertex v with degree k. We will show that with positive probability, another vertex that is not the master hook will have degree k in a later iteration of the hooking network. The vertex v first appears in the network as a non-hook vertex with degree k 0 of a newly added block; say the block was G i 0 and v is a copy of the vertex v 0 in G i 0 . If k 0 ≠ k, then that means hooks of other blocks were fused to v; say the first hook fused to v belonged to G i 1 , the second belonged to G i 2 , and so on until the last hook fused to v, which belonged to G i r (the last block added to create G n ). With positive probability, a copy of the block G i 0 is joined to G n by fusing the hook of G i 0 with a vertex that is not v, say the master hook. Let u be the newly added vertex in the hooking network that is a copy of v 0 in G i 0 . For j = 1, . . . , r, there is a positive probability that the block G i j is added to the hooking network G n+j by fusing the hook of G i j with u. In this case, u has degree k in G n+r+1 , and so there is a positive probability that 2 vertices (v and u) have degree k in G n+r+1 . Therefore, k is an essential degree. Also note that in the case of bipolar networks, only the master sink of the network has outdegree 0, and we therefore ignore this vertex completely.

Main results for hooking networks

Let C = {G 1 , . . . , G m } be a set of blocks, each with an identified hook h i , and let G 0 , G 1 , G 2 , . . . be a sequence of hooking networks grown from C, with the master hook of the network labelled H. We allow for the latches and the blocks added at each step to be chosen in the manner laid out in Section 1.2 (that is, with linear preferential attachment with parameters χ and ρ, and probabilities p i assigned to each block G i ). For a positive integer r, let k 1 < k 2 < · · · < k r be the first r essential degrees, and for a positive integer k define

f(k) := Σ_{i=1}^{m} p_i · |{v ∈ V(G_i) \ {h_i} : deg(v) = k}|,    (3)

g(k) := Σ_{i : deg(h_i) = k} p_i.    (4)

The value f (k) is the expected number of new vertices of degree k (that are not hooks) added at any step, and g(k) is the probability that the degree of the latch chosen at any step is increased by k after fusing with the hook of the newly attached block. For example, for the blocks in Figure 1 we have that f (1) = 2 and f (3) = 5/3, while g(2) = 1/3 and g(4) = 2/3. Define

λ_1 := Σ_{k ≥ 1} w_k f(k) + χ Σ_{k ≥ 1} k g(k).    (5)

The value λ 1 is the expected change in the denominator of (1) at each step in the growth of the hooking network. For our running example of hooking networks grown from the blocks in Figure 1, if we let χ = 1 and ρ = 0, then λ 1 = w 1 f(1) + w 3 f(3) + χ(2g(2) + 4g(4)) = 2 + 5 + 2/3 + 8/3 = 31/3. Let ν 1 := f(k 1 )/(λ 1 + w k 1 ), and define recursively for i = 2, . . . , r

ν_i := ( f(k_i) + Σ_{j=1}^{i−1} w_{k_j} g(k_i − k_j) ν_j ) / (λ_1 + w_{k_i}).    (7)
The value λ 1 ν i is the limit of the expected proportion of vertices with degree k i (see Remark 1.4 below). Let ν be the vector

ν := (ν_1 , ν_2 , . . . , ν_r).    (8)

For our running example of hooking networks grown from the blocks in Figure 1 with χ = 1 and ρ = 0, and if we let r = 3, then the first 3 essential degrees are 1, 3, 5 (recall that only odd numbers are essential in this example), and ν can be computed from (7) and (8). We have the following multivariate normal limit law for the degrees of hooking networks.

Theorem 1.3. Let X n = (X n,1 , X n,2 , . . . , X n,r ), where X n,i is the number of vertices with essential degree k i in G n , where G n is a hooking network grown from the set of blocks C using linear preferential attachment with parameters χ and ρ. Let λ 1 be defined as in (5) and let ν be the vector defined in (7) and (8). Then

n^{−1/2} (X_n − n λ_1 ν) →_d N(0, Σ)    (10)

for some covariance matrix Σ.

Remark 1.4. From (10), we see an immediate weak law of large numbers,

X_n / n →_p λ_1 ν.    (11)

Furthermore, since the number of blocks is finite and each block has a finite number of vertices, there is a constant C such that 0 ≤ X n,i ≤ Cn for all i = 1, 2, . . . , r and all n. Therefore, the random vectors X n /n are uniformly integrable which, along with (11), imply EX n /n → λ 1 ν. The convergence in (11) also holds almost surely (see Remark 2.3).

In some special cases, we can say even more about the convergence in (10). For each block G i , let E(G i ) be the set of edges of G i , and let

s_i := 2χ |E(G_i)| + ρ (|V(G_i)| − 1).    (12)

Corollary 1.5. Let X n = (X n,1 , X n,2 , . . . , X n,r ), where X n,i is the number of vertices with essential degree k i in G n , where G n is a hooking network grown from the set of blocks C using linear preferential attachment with parameters χ and ρ. Let λ 1 be defined as in (5), let ν be the vector defined in (7) and (8), and let s i be defined as in (12) for each block G i . Suppose that there exists a constant s so that s i = s for all blocks G i . Then the convergence (10) holds in all moments. In particular, n −1/2 (EX n − nλ 1 ν) → 0, and so nλ 1 ν in (10) can be replaced by EX n .

There are several cases where Corollary 1.5 applies. An obvious example is when there is only one block to choose from. Other examples include when χ = 0 and all the graphs have the same number of vertices, or when ρ = 0 and all the graphs have the same number of edges. To compare Theorem 1.3 with previous results on random recursive trees and preferential attachment trees, consider a hooking network grown from K 2 as the only block and where χ = 0 and ρ = 1; as discussed earlier this produces random recursive trees. In this case, f (1) = 1, g(1) = 1, and λ 1 = 1, and so for any positive integer r the vector ν = (ν 1 , . . . , ν r ) defined in (8) is given by ν i = 2^{−i} for i = 1, . . . , r. We see that Theorem 1.3 extends previous results on random recursive trees [9,6]. More generally, suppose that we look at a preferential attachment tree, where the latch v is chosen with probability proportional to χ deg(v) + ρ. We once again have f (1) = 1 and g(1) = 1, and we have that λ 1 = w 1 + χ = w 2 . We see that ν 1 = 1/(w 2 + w 1 ) and by following the recursion of (7) we see that for any i = 2, 3, . . . , ν i is given by

ν_i = (1 / (w_1 + w_2)) Π_{j=2}^{i} w_{j−1} / (w_2 + w_j).    (13)

In particular when χ = 1 and ρ = 0, then nλ 1 ν i = 4n / (i(i+1)(i+2)), and so we see that Theorem 1.3 extends previous results on random plane-oriented recursive trees [10,6], while (13) along with Theorem 1.3 is the result stated in [4, Theorem 12.2].
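The recursion above is easy to evaluate numerically. As a sanity check on (7) and (8) as reconstructed here, the following R sketch computes λ 1 ν i for preferential attachment trees (single block K 2 , so f(1) = 1 and g(1) = 1, with essential degrees k i = i) and compares the χ = 1, ρ = 0 case against the closed form 4/(i(i+1)(i+2)) quoted above.

nu_pa_tree <- function(r, chi = 1, rho = 0) {
  w <- function(k) chi * k + rho
  lambda1 <- w(1) + chi            # lambda_1 = w_1 + chi = w_2 for K_2
  nu <- numeric(r)
  nu[1] <- 1 / (lambda1 + w(1))    # nu_1 = f(k_1) / (lambda_1 + w_{k_1})
  for (i in 2:r)                   # degree i can only arise from degree i - 1
    nu[i] <- w(i - 1) * nu[i - 1] / (lambda1 + w(i))
  lambda1 * nu                     # limiting proportions of degree-i vertices
}

r <- 6
cbind(recursion   = nu_pa_tree(r),
      closed_form = 4 / ((1:r) * (2:(r + 1)) * (3:(r + 2))))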
Remark 1.6. In the literature on random recursive trees and preferential attachment trees, the choice of the latch is usually made proportionally to χ deg + (v) + ρ ′, where deg + (v) is the number of children of v. But we can simply let ρ = ρ ′ − χ to get the same model, and replace w k with w ′ k−1 = χ(k − 1) + ρ ′ so that (13) more closely resembles the statements of the previous results [9,10,6,4]. The only vertex where this does not translate is the root (or master hook) of the network, since deg(H) = deg + (H) in this case, but see Remarks 2.2 and 3.4 below for why this does not affect the limiting distribution.

Main results for bipolar networks

Let C = {B 1 , B 2 , . . . , B m } be a set of blocks each with a north pole N i and a south pole S i identified, and let B 0 , B 1 , B 2 , . . . be a sequence of bipolar networks grown from C, with the master source labelled N and the master sink labelled S. The latches v, arcs (v, u), and blocks B i are chosen in the manner laid out in Section 1.2 (by linear preferential attachment with parameters χ and ρ for the latch, uniformly at random amongst arcs leading out of v for (v, u), and according to its probability p i for B i ). For a positive integer r, let k 1 < k 2 < · · · < k r be the first r essential outdegrees. We introduce notation similar to that of the hooking network case. Again, recall that for a positive integer k we let w k := χk + ρ, and for a nonnegative integer k, define

f(k) := Σ_{i=1}^{m} p_i · |{v ∈ V(B_i) \ {N_i, S_i} : deg+(v) = k}|,    (14)

g(k) := Σ_{i : deg+(N_i) − 1 = k} p_i.    (15)

The value f (k) is the expected number of new vertices of outdegree k added at any step, and g(k) is the probability that the outdegree of a latch v is increased by k when (v, u) is replaced with a block (note here that g(0) ≠ 0 if there is a block whose north pole has outdegree 1). For the blocks of Figure 3 we have that f (1) = 1, f (2) = 1, and f (3) = 1/2, while g(0) = 1/2 and g(1) = 1/2. For a set of blocks C, define

λ_1 := Σ_{k ≥ 1} w_k f(k) + χ Σ_{k ≥ 0} k g(k).    (16)

The value λ 1 is the expected change in the denominator of (2) at each step in the growth of the bipolar network. For our running example of bipolar networks grown from the blocks in Figure 3, if we let χ = 0 and ρ = 1, then λ 1 = f(1) + f(2) + f(3) = 5/2. Let ψ 1 := f(k 1 )/(λ 1 + w k 1 (1 − g(0))), and define recursively for i = 2, . . . , r

ψ_i := ( f(k_i) + Σ_{j=1}^{i−1} w_{k_j} g(k_i − k_j) ψ_j ) / (λ_1 + w_{k_i}(1 − g(0))).    (18)

The value λ 1 ψ i is the limit of the expected proportion of vertices with outdegree k i (see Remark 1.8 below). Define

ψ := (ψ_1 , ψ_2 , . . . , ψ_r).    (19)

For our running example of bipolar networks grown from the blocks in Figure 3 with χ = 0 and ρ = 1, and if we let r = 3, then the first 3 essential outdegrees are 1, 2, 3, and ψ can be computed from (18) and (19). We have the following multivariate normal limit law for the outdegrees in the growth of bipolar networks.

Theorem 1.7. Let Y n = (Y n,1 , Y n,2 , . . . , Y n,r ), where Y n,i is the number of vertices with essential outdegree k i in B n , where B n is a bipolar network grown from the set of blocks C using linear preferential attachment with parameters χ and ρ. Let λ 1 be defined as in (16) and let ψ be the vector defined in (18) and (19). Then

n^{−1/2} (Y_n − n λ_1 ψ) →_d N(0, Σ)    (21)

for some covariance matrix Σ.

Remark 1.8. With the same reasoning as in Remark 1.4, we have a weak law of large numbers and a convergence of the means,

Y_n / n →_p λ_1 ψ and EY_n / n → λ_1 ψ.    (22)

The convergence in (22) also holds almost surely (see Remark 2.3).

Once again, we can say something more about the convergence in (21) in certain cases. For each block B i , let E(B i ) be the set of arcs of B i , and let

s_i := χ (|E(B_i)| − 1) + ρ (|V(B_i)| − 2).    (23)

Corollary 1.9. Let Y n = (Y n,1 , Y n,2 , . . . , Y n,r ), where Y n,i is the number of vertices with essential outdegree k i in B n , where B n is a bipolar network grown from the set of blocks C using linear preferential attachment with parameters χ and ρ. Let λ 1 be defined as in (16), let ψ be the vector defined in (18) and (19), and let s i be defined as in (23) for each block B i . Suppose that there exists a constant s so that s i = s for all blocks B i . Then the convergence (21) holds in all moments. In particular, n −1/2 (EY n − nλ 1 ψ) → 0, and so nλ 1 ψ in (21) can be replaced by EY n .

Remark 1.10.
We could choose to study the indegrees of bipolar networks instead, where the latch v is chosen with probability proportional to χ deg^−(v) + ρ, and the arc to be replaced with a block is chosen uniformly at random amongst the arcs leading into v (instead of leading out of v). The multivariate normal limit law for the indegree distribution of such networks is the same as that for the outdegree distribution of bipolar networks B_0, B_1, B_2, . . . grown in the manner laid out in Section 1.2.2 from the blocks C = {B_1, . . . , B_m}, where the arcs of B′_i are reversed to make B_i.

Pólya urns

A generalized Pólya urn process (X_n)_{n=0}^∞ is defined as follows. There are q types (or colours) 1, 2, . . . , q of balls and for each vector X_n = (X_{n,1}, X_{n,2}, . . . , X_{n,q}), the entry X_{n,i} ≥ 0 is the number of balls of type i in the urn at time n, starting with a given (random or not) vector X_0. Each type i is assigned an activity a_i ∈ R_{≥0} and a random vector ξ_i = (ξ_{i,1}, ξ_{i,2}, . . . , ξ_{i,q}) satisfying ξ_{i,j} ≥ 0 for i ≠ j and ξ_{i,i} ≥ −1. At each time n ≥ 1, a ball is drawn at random so that the probability of choosing a ball of type i is

a_i X_{n−1,i} / Σ_{j=1}^q a_j X_{n−1,j}.

If the drawn ball is of type i, it is replaced along with ∆X_{n,j} balls of type j for each j = 1, . . . , q, where the vector ∆X_n = (∆X_{n,1}, ∆X_{n,2}, . . . , ∆X_{n,q}) has the same distribution as ξ_i and is independent of everything else that has happened so far. We allow ∆X_{n,i} = −1, in which case the drawn ball is not replaced. The intensity matrix of the Pólya urn is the q × q matrix

A := (a_j E(ξ_{j,i}))_{i,j=1}^q.

By the choice of the ξ_{i,j}, the matrix αI + A has non-negative entries for a large enough α, and so by standard Perron-Frobenius theory, A has a real eigenvalue λ_1 such that all other eigenvalues λ ≠ λ_1 satisfy Re λ < λ_1. The following assumptions (A1)-(A7) are used in [5]. In the interpretation of balls in an urn, the random vectors ξ_i and ∆X_n are integer-valued. However, for our applications, this is not necessarily the case, which is why our assumption (A1) below takes a slightly different form from the standard assumption (A1) in [5], taking instead the form discussed in [5, Remark 4.2] (note the indices of the variables in (A1) below). A type i is called dominating if, in an urn starting with a single ball of type i, there is a positive probability that a ball of type j can be found in the urn at some time, for every other type j. If every type is dominating, then the urn and its intensity matrix A are irreducible.

(A1) For each i, either (a) there is a real number d_i > 0 such that X_{0,i} and ξ_{1,i}, ξ_{2,i}, . . . , ξ_{q,i} are multiples of d_i, and

In the Pólya urns we use, it is obvious that (A1) and (A2) hold. Our intensity matrices are also irreducible, and so (A5) and (A6) hold trivially, while the Perron-Frobenius theorem along with irreducibility guarantees that (A3) and (A4) hold. Our urns always have balls of positive activity, and so (A7) holds by the irreducibility of the urns.

(ii) Suppose further that, for some c > 0, a · E(ξ_i) = c for every i = 1, . . . , q. Then the covariance matrix is given by Σ = cΣ_I, where Σ_I is defined in (25).

(iii) Suppose that (ii) holds and that the matrix A is diagonalizable, and let {u′_i}_{i=1}^q and {v′_i}_{i=1}^q be dual bases of left and right eigenvectors respectively, i.e., u′_i · v′_j = δ_{ij}. Then the covariance matrix Σ is given by (27), where B is defined in (24).

Remark 2.2. So long as (A5) is satisfied, the initial configuration X_0 of the urn has no effect on the limiting distribution.
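To make the urn dynamics concrete, here is a minimal simulation sketch of a generalized Pólya urn as defined above. It is not from the paper: the function names, the toy replacement vectors, and the activities are our own illustrative choices, with `draw_xi` standing in for a sampler of the replacement law ξ_i.

```python
import numpy as np

def simulate_urn(x0, activities, draw_xi, steps, seed=None):
    """Simulate a generalized Polya urn with the given activities.

    x0      : initial ball counts per type (length q)
    draw_xi : function i -> replacement vector distributed as xi_i
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    a = np.asarray(activities, dtype=float)
    for _ in range(steps):
        p = a * x
        i = rng.choice(len(x), p=p / p.sum())   # draw proportionally to a_i x_i
        x = x + draw_xi(i)                      # replace per the law of xi_i
    return x

# Toy two-type urn (illustrative only): deterministic replacement vectors.
xi = [np.array([1.0, 1.0]), np.array([2.0, 0.0])]
final = simulate_urn([1.0, 1.0], [1.0, 1.0], lambda i: xi[i], steps=10_000)
print(final / final.sum())   # empirical proportions of the two types
```

Running such a simulation with the replacement vectors of Section 3 gives a quick empirical check of the limiting proportions λ_1 ν or λ_1 ψ.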
Proofs

We start by setting up Pólya urns so that balls in the urn correspond to vertices in the growth of our network. Next, we prove important properties of the intensity matrices associated with these Pólya urns. Finally, the pieces are placed together to prove our main results.

Vertices as balls

In this section, we outline how we use the evolution of generalized Pólya urns to describe the evolution of the degree distributions in the networks that we study. Throughout the section the notation (out)degree is used so that the discussion applies to both types of networks simultaneously. Recall that Theorem 1.3 and Corollary 1.5 apply to degrees of hooking networks, while Theorem 1.7 and Corollary 1.9 apply to outdegrees of bipolar networks.

We start by first looking at an urn with infinitely many types. We assign a type to each (out)degree in the network so that a ball of type k represents a vertex of (out)degree k. We initiate each network by choosing a block from the list of blocks. This corresponds to starting a Pólya urn with a ball of the matching type for the (out)degree of each vertex in the block. In the evolution of the network, when a block is attached, this corresponds to choosing a ball in the urn of type corresponding to the (out)degree of the latch v and replacing it with a ball representing the new (out)degree of v, along with balls representing the (out)degrees of the rest of the vertices of the newly attached block. Since a latch of (out)degree k is chosen at random proportionally to w_k = χk + ρ, all balls of type k have activity w_k in the Pólya urn, so that a ball of type k is chosen at random proportionally to its activity w_k.

The Pólya urn described above has infinitely many types, and so Theorem 2.1 does not apply. Therefore, we would like to instead use an urn with finitely many types in the same manner as is done in [6] and [4]. The urn is replaced with the following Pólya urn: let d be a positive integer corresponding to the largest (out)degree we wish to study in this instance of the model. A new ball of special type * with activity a_* = 1 is introduced, and for every k > d, each ball of type k is replaced with w_k balls of special type *. In this way, the probability of choosing a ball of special type in the new urn is equal to the probability of choosing a ball of type greater than d in the old urn. If a latch v with (out)degree k ≤ d is chosen, and a block is attached so that v now has (out)degree k + j > d, then the ball of type k is removed and w_{k+j} balls of special type are added. If instead v has (out)degree k > d and a block is attached so that the (out)degree of the vertex is now k + j, then the ball of special type that was chosen is placed back in the urn, along with χj balls of special type.

The final change we make to our urn is to represent the master hook of the hooking network or the master source of the bipolar network, say with (out)degree k, by w_k balls of special type in our urn. This guarantees that all types of balls in the urn that are not of special type correspond to (out)degrees that are essential; recall from Definition 1.1 that a positive integer k is an essential (out)degree if there is a positive probability that at some point in the growth of the network at least two vertices have (out)degree k, and recall from Proposition 1.2 that only the master hook of the hooking network or the master source of the bipolar network may have a nonessential degree.
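The passage above describes a replacement scheme; the following sketch (our own illustration, not the authors' code) phrases the truncated urn's bookkeeping as a single update function. The names `urn_update`, `counts`, and `special` are assumptions, `w` is the weight function w_k = χk + ρ, and the driver that grows the network and chooses latches is left abstract.

```python
def urn_update(counts, special, k_old, k_new, new_degrees, d, w, chi,
               latch_special=False):
    """Apply one block attachment to the finite-type urn state.

    counts       : dict, essential (out)degree k <= d -> number of balls
    special      : number of balls of special type *
    k_old, k_new : latch (out)degree before/after the attachment
    new_degrees  : (out)degrees of the other new vertices of the block
    latch_special: True if the latch is the master hook/source or has
                   (out)degree > d (it is then represented by special balls)
    """
    if not latch_special:
        counts[k_old] -= 1                  # the latch's old ball is removed
        if k_new <= d:
            counts[k_new] = counts.get(k_new, 0) + 1
        else:
            special += w(k_new)             # w_{k+j} balls of special type
    else:
        special += chi * (k_new - k_old)    # chosen * ball returned, chi*j added
    for k in new_degrees:                   # balls for the block's new vertices
        if k <= d:
            counts[k] = counts.get(k, 0) + 1
        else:
            special += w(k)
    return counts, special
```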
For a positive integer d, the possible types of balls present in the urn are exactly the essential (out)degrees less than or equal to d, together with a ball of special type *. In our intensity matrix, we can then omit the rows and columns corresponding to types that are never present in the urn. By restricting to essential (out)degrees, it can be verified that every ball in the urn is now of dominating type. No matter the initial network (or initial configuration of the urn), there is a positive probability that a ball representing a vertex with the essential (out)degree k will be present in the urn. Therefore the urn (and its intensity matrix) is irreducible. As discussed in Section 2, it is easy to verify that the assumptions (A1)-(A7) are satisfied for irreducible urns. To avoid confusion, we label the type of a ball with the (out)degree of the vertex it represents. We illustrate how to calculate the intensity matrices for the urns associated with our running examples of hooking networks and bipolar networks given in Section 1.2.

A Pólya urn for our running example of a hooking network

Consider the blocks in Figure 1, and a sequence of hooking networks grown from these blocks. Let's look at the instance of the model where the choice of a latch is made proportionally to its degree (i.e., when χ = 1, ρ = 0 and so w_k = k). Suppose we look at vertices with degrees less than or equal to 5. As discussed after the definition of essential degrees (Definition 1.1), the essential degrees for these hooking networks are the odd numbers; and so 1, 3, 5 are the essential degrees less than or equal to 5. The images in Figure 5 illustrate the possibilities for replacing a ball of type k, corresponding to attaching a block to a latch with degree k. The probabilities in the figure are the probabilities p_i assigned to the blocks in Figure 1.

[Figure 5: The replacements of a ball of type k in a hooking network grown from the blocks in Figure 1.]

The intensity matrix for this urn has 4 rows and columns: one each for balls of type 1, 3, 5, and the last row and column for balls of special type *. Let's consider what happens when a block is attached to a latch with degree 1; this corresponds to choosing a ball of type 1. The probability that the block G_1 is attached is 1/6. The hook of G_1 has degree 2 and the two other vertices have degree 1. The ball of type 1 is removed and replaced with a ball of type 3 (the new degree of the latch v) along with two new balls of type 1. Performing similar calculations for the other blocks with the help of Figure 5, we obtain the expected replacement vector E(ξ_1). Recall that the rows and columns for nonessential degrees are removed, and so the first row represents balls of type 1, the second row balls of type 3, the third balls of type 5, and the final row balls of special type *.

Now consider what happens when a ball of type 3 is chosen, i.e., if a vertex v with degree 3 is chosen as a latch. If a hook with degree 4 is attached to v, the degree of v is increased to 7. Recall that we instead place w_7 = 7 balls of special type when this happens. Performing similar calculations as above with the help of Figure 5 yields E(ξ_3), and performing similar calculations when a ball of type 5 is chosen gives E(ξ_5). Finally, let's consider attaching a block to a vertex of degree greater than 5, or to the master hook of the network. In either case, this corresponds to choosing a ball of special type.
If the hook of the block G_i attached has degree two, then the ball of special type is replaced along with another 2χ = 2 balls of special type, while 4χ = 4 balls of special type are added if the hook has degree 4. Therefore, we can calculate E(ξ_*) for the special type * in the same way. The activities for the types are w_1 = 1, w_3 = 3 and w_5 = 5 for types 1, 3, 5 respectively, while the special type * has activity 1 (as discussed earlier). The intensity matrix A consists of Eξ_1, 3Eξ_3, 5Eξ_5 for the first 3 columns, and Eξ_* for the last column. One can verify that the eigenvalues of A are λ_1 = 31/3 and −1, −3, −5, and we see that λ_1 is what was calculated in (6). By Theorem 2.1, we have a multivariate normal limit law. One can also verify that the right eigenvector v_1 of A associated with λ_1 satisfying a · v_1 = 1, where a = (1, 3, 5, 1) is the vector of activities, restricted to its first 3 entries, is exactly the vector ν calculated in (9); and so by Theorem 2.1, Theorem 1.3 is true in this particular case.

A Pólya urn for our running example of a bipolar network

Now consider the blocks of Figure 3 and a sequence of bipolar networks grown from these blocks. Let's look at the instance of the model where the choice of the latch is made uniformly at random (i.e., when χ = 0, ρ = 1, and so w_k = 1). All positive integers are essential outdegrees. The images of Figure 6 illustrate the possibilities for replacing a ball of type k, corresponding to choosing a latch v with outdegree k and one of the arcs leading out of v uniformly at random. The probabilities in the figure are the probabilities p_i assigned to the blocks in Figure 3.

[Figure 6: The replacement of a ball of type k in a bipolar network grown from the blocks in Figure 3.]

Suppose we look at vertices with outdegrees less than or equal to 3. We can calculate the intensity matrix in the same way as the intensity matrix for the hooking network example above. The main difference in this case is that there is a positive probability that the outdegree of a latch v is not changed. For example, if a ball of type 2 is chosen, that is, if a latch v with outdegree 2 is chosen, then with probability 1/2 the outdegree of v is not changed after the block B_1 is attached. In this case, the ball of type 2 is replaced in the urn, along with 2 balls of type 1 and one ball of type 3. We can calculate the remaining replacement vectors accordingly. For the urn in this case, a vertex with outdegree greater than 3 is represented by a single ball of special type *. The eigenvalues of the resulting intensity matrix A are λ_1 = 5/2 and −1/2, and we see that λ_1 is precisely what was calculated in (17). The right eigenvector v_1 of A associated with λ_1 whose entries sum to 1, restricted to its first 3 entries, is exactly the vector ψ calculated in (20). The multivariate normal limit law claimed by Theorem 1.7 holds by Theorem 2.1 in this case.

Properties of the intensity matrices

Recall that w_k = χk + ρ. Let A = (a_{ij})_{i,j=1}^{r+1} be the (r + 1) × (r + 1) matrix with entries given in (30), where f(k) was introduced in (3) and (14), g(k) was introduced in (4) and (15), and k_1, . . . , k_r are essential degrees. We prove properties of A that are useful in the proofs of our main results. From Theorem 2.1, we see that to prove our main result, we need to prove properties of the eigenvalues and eigenvectors of A. The eigenvalues and eigenvectors of A depend on properties of the values f(k) and g(k). These properties are gathered in the following proposition.
Proposition 3.1. With f(k) defined in (3) and (14), and g(k) defined in (4) and (15), the following properties hold:

(F) If k ≤ k_r and k ≠ k_i for all i = 1, . . . , r, then f(k) = 0.
(G1) Σ_{k≥0} g(k) = 1.
(G2) If k ≤ k_r and g(k − k_j) ≠ 0 for some essential (out)degree k_j ≤ k_r, then k ∈ {k_1, . . . , k_r}.

Proof. In the interest of space, the proposition is proved for both hooking networks and bipolar networks simultaneously. The notation (out)degree is used, and is interpreted as degree for hooking networks and outdegree for bipolar networks. If f(k) ≠ 0, then there is a positive probability that at some step in the growth of the network, a new vertex (that is not the master hook or the master source) appears with (out)degree k. By Definition 1.1 and by Proposition 1.2, k is an essential (out)degree in this case, and so if k ≤ k_r, then k ∈ {k_1, . . . , k_r}, proving that (F) holds. The property (G1) holds since Σ_{k≥0} g(k) = p_1 + · · · + p_m = 1, where p_i is the probability of the block G_i or B_i. As for the property (G2), assume that g(k − k_j) ≠ 0 for some essential (out)degree k_j ≤ k_r. Since k_j is an essential (out)degree, there is a positive probability that some vertex v (that is not the master hook or the master source) has (out)degree k_j. By definition, there is a probability of g(k − k_j) that the (out)degree of v is increased to k if a hook is fused to v. Therefore, there is a positive probability that there is a vertex with (out)degree k, and so k is an essential (out)degree, again by Definition 1.1 and Proposition 1.2. If k ≤ k_r, then k ∈ {k_1, . . . , k_r}, and so (G2) holds.

Let λ_1 be the value defined in (5) and (16). We calculate the eigenvalues of A in the following lemma.

Lemma 3.2. The matrix A has eigenvalues λ_1 and −w_{k_i}(1 − g(0)) for i = 1, . . . , r.

Proof. We can calculate the eigenvalues of A directly. For any λ, look at the matrix A − λI. For each i = 1, . . . , r, add w_{k_i} times row i to row r + 1 of A − λI to get the matrix A′_λ. Using properties (F) and (G2), along the (r + 1)-th row of A′_λ, the j-th entry for j = 1, . . . , r is given by (31), while the (r + 1)-th entry is given by (32). Next, subtract w_{k_j} times column r + 1 from column j in A′_λ for every j = 1, . . . , r to get the matrix A″_λ. The j-th entry, for j = 1, . . . , r, of the (r + 1)-th row of A″_λ is then 0, so the (r + 1)-th row of A″_λ is (0, . . . , 0, λ_1 − λ). For every i, j ≤ r, the i,j-th entry of A″_λ is simply a_{ij} − w_{k_j} f(k_i) when i ≠ j, and a_{ii} − λ − w_{k_i} f(k_i) on the diagonal, where a_{ij} is given in (30). Therefore, A″_λ is an (r + 1) × (r + 1) matrix whose upper r × r block is lower triangular. Since the determinant of a matrix is unchanged by adding one row to another or by subtracting a column from another, both A − λI and A″_λ have the same determinant. We can calculate the determinant of A″_λ by expanding along the bottom row, and since the upper r × r matrix of A″_λ is lower triangular, we see immediately that A has characteristic polynomial

(λ_1 − λ) ∏_{i=1}^{r} (−w_{k_i}(1 − g(0)) − λ),

from which we can read off the eigenvalues stated in the lemma.

Lemma 3.3. The vector v_1 defined in (37) is the unique right eigenvector of A associated with λ_1 that satisfies a · v_1 = 1.

Proof. We verify that v_1 is a right eigenvector of A associated with λ_1. We can look instead at A′_λ, which was introduced in the previous proof. Since only row operations were used to get from A − λI to A′_λ, we get that (A − λI)v_1 = 0 if and only if A′_λ v_1 = 0. We therefore need only verify that A′_{λ_1} v_1 = 0 (where all instances of λ are replaced with λ_1). Along the (r + 1)-th row of A′_{λ_1}, for any j = 1, . . . , r, the j-th entry is given by (31) with λ replaced by λ_1, which is exactly (34), and so is equal to 0 by the calculations performed above. From (32), the (r + 1)-th entry in the (r + 1)-th row is simply λ_1 − λ_1 = 0. Therefore, the last row of A′_{λ_1} is all zeros, and the (r + 1)-th entry of the vector A′_{λ_1} v_1 is 0.
The top r × (r + 1) submatrix of A′_{λ_1} is the same as the top r × (r + 1) submatrix of A − λ_1 I. After rearranging the equality (35) and recalling the entries a_{ij} of A from (30), we see that for i = 1, . . . , r, the i-th entry of the vector A′_{λ_1} v_1 is also 0. Since λ_1 has algebraic (and geometric) multiplicity 1, v_1 is the unique vector satisfying the statement of the lemma.

Proofs of main results

Recall the definitions of f(k) from (3) and (14), and g(k) from (4) and (15), for a set of blocks C. Recall also that w_k = χk + ρ. Let k_1 < · · · < k_r be the first r essential (out)degrees for hooking networks or bipolar networks grown from C. We now prove Theorem 1.3, the multivariate normal limit law for the degrees of hooking networks. Our main results for bipolar networks can be proved in a very similar manner, and we only outline the differences in the proofs.

Proof of Theorem 1.3. We look at two cases: when a block is attached to a latch that is not the master hook of the network and has degree less than or equal to k_r, and when a block is attached to a latch of degree greater than k_r or to the master hook of the network. Recall that the master hook of the network is represented by balls of special type in the urn.

Case I: Let k_j ≤ k_r be an essential degree and suppose that at some step in the growth of the network a vertex v is chosen as a latch, where deg(v) = k_j and v is not the master hook of the network. Suppose a block is attached to v. This corresponds to choosing a ball of type k_j. Let k_i ≤ k_r be an essential degree. Other than the latch, the expected number of new vertices of degree k_i added to the network is equal to f(k_i). If k_i > k_j, the probability that the degree of v is increased to k_i is equal to the probability of choosing a block whose hook has degree k_i − k_j, which is exactly g(k_i − k_j). For k_i, k_j ≤ k_r and with E(ξ_{k_j,k_i}) being the expected change in the number of balls of type k_i in the urn when a ball of type k_j is chosen, the arguments above show that

E(ξ_{k_j,k_i}) = f(k_i) + g(k_i − k_j) − δ_{ij},

where g(m) is interpreted as 0 for m ≤ 0. For every k that is an essential degree greater than k_r, balls of special type are added instead of balls of type k. By a similar argument as above, the expected number of new balls of special type added corresponding to vertices of degree k when a latch of degree k_j is chosen is w_k(f(k) + g(k − k_j)). Summing over all essential degrees k greater than k_r, the expected number of balls of special type added when a ball of type k_j is chosen is

E(ξ_{k_j,*}) = Σ_{k > k_r} w_k (f(k) + g(k − k_j)).

Case II: Now suppose at some step the latch v is either the master hook of the network or that deg(v) > k_r. In either case this corresponds to choosing a ball of special type in our urn; recall that the master hook is represented by balls of special type. Suppose that a block is attached to v. For an arbitrary essential degree k_i ≤ k_r, the expected number of new vertices added with degree k_i is f(k_i). Therefore, with E(ξ_{*,k_i}) being the expected number of balls of type k_i added when a ball of special type is chosen,

E(ξ_{*,k_i}) = f(k_i).

For any k ≥ 1, the probability that the degree of v is increased by k is g(k). In this case, the ball of special type is placed back in the urn along with χk new balls of special type. For any k > k_r, the expected number of new vertices with degree k is once again f(k). Therefore, summing over all values of k, the expected change in the number of balls of special type in the urn is

E(ξ_{*,*}) = Σ_{k ≥ 1} χk g(k) + Σ_{k > k_r} w_k f(k).

Let E(ξ_{k_j}) := (Eξ_{k_j,k_1}, . . . , Eξ_{k_j,k_r}, Eξ_{k_j,*}) for j = 1, . . . , r,
and for the special type * let E(ξ_*) := (Eξ_{*,k_1}, . . . , Eξ_{*,k_r}, Eξ_{*,*}). The activity of each ball of type k_j ≤ k_r is w_{k_j}, and the activity of the ball of special type * is 1. The intensity matrix is therefore the matrix A whose columns are w_{k_j} E(ξ_{k_j}) for j = 1, . . . , r and whose (r + 1)-th column is E(ξ_*). This is precisely the matrix given in (30), with g(0) = 0. The vector v_1 defined in (37) with g(0) = 0 and restricted to the first r entries is exactly the vector ν defined in (8). Theorem 1.3 now follows immediately from Lemma 3.3 and Theorem 2.1.

Proof of Corollary 1.5. Every time a new block G_i with hook h_i is attached to the hooking network by fusing h_i with the latch v, any new vertex u of G_i added to the network is represented either by a ball of type deg(u) (with activity χ deg(u) + ρ) or by χ deg(u) + ρ balls of special type (with activity 1). As for the latch v, one of the following cases applies:
• a ball of activity χ deg(v) + ρ is removed and replaced with a ball of activity χ(deg(v) + deg(h_i)) + ρ,
• a ball of activity χ deg(v) + ρ is removed and replaced with χ(deg(v) + deg(h_i)) + ρ balls of special type (with activity 1), or
• an additional χ deg(h_i) balls of special type are added.
In any case the change in the total activity of the urn is

χ Σ_{u∈V(G_i)} deg(u) + ρ(|V(G_i)| − 1) = 2χ|E(G_i)| + ρ(|V(G_i)| − 1),

where the last equality holds thanks to the handshaking lemma (the sum of the degrees in a graph is twice the number of edges). Suppose that all the s_i are equal. The change in total activity is then equal at every step, independent of which block is attached. Therefore, the corresponding urn is balanced. By [7, Remark 1.9], the urn satisfies the conditions of [7, Theorem 1.1], and so by Remark 2.4, Corollary 1.5 holds.

Theorem 1.7 and Corollary 1.9 are proved in a similar manner to the two proofs above. We therefore omit the details, and only specify where the proofs differ.

Proof of Theorem 1.7. The probability that the outdegree of a latch v is increased by k is now the probability of choosing a block whose north pole has outdegree k + 1 (since an arc is removed from v when a block is attached). This probability is exactly how g(k) was defined. If a north pole has outdegree 1, then the outdegree of v is not changed, and so the probability that the outdegree of v is unchanged is g(0). With similar arguments as in the proof of Theorem 1.3, we can calculate the intensity matrix. The only differences between the intensity matrix for bipolar networks and that for hooking networks are the first r diagonal entries, which are w_{k_i}(f(k_i) + g(0) − 1) for i = 1, . . . , r in the case of bipolar networks. The value E(ξ_{*,*}) is the same as before since χk g(k) = 0 when k = 0. Since g(0) ≤ 1, each eigenvalue λ ≠ λ_1 is non-positive, and so is less than λ_1/2. The vector v_1 defined in (37) restricted to the first r entries is exactly the vector ψ defined in (19), and the result now follows just as in the proof of Theorem 1.3.

Proof of Corollary 1.9. Since an arc is removed at each step, the total change in activity when block B_i is attached is (by a similar argument to the proof of Corollary 1.5)

χ(|E(B_i)| − 1) + ρ(|V(B_i)| − 2).

If all the s_i are equal for every block, then once again the urn is balanced and Corollary 1.9 holds by [7, Theorem 1.1] and Remark 2.4.

Remark 3.4. From Remark 2.2 we know that the initial configuration of our urn does not affect the limiting distribution. This means that we may let the original block used to make G_0 or B_0 be chosen at random, or be deterministic.
It also means that if we wanted to change the probability of choosing the master hook of a hooking network or the master source of a bipolar network, we could simply change the number of balls of special type at the beginning of the urn process.

Remark 3.6. Furthermore, if χ > 0, then the values w_k = χk + ρ are all different, and so by Lemma 3.2, all of the eigenvalues of A are different. In this case, the matrix A is diagonalizable, and so Theorem 2.1 (iii) applies and Σ can be calculated from (27). The diagonalizability of A does not hold in general; see, for example, the matrix A of (29).
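As a numerical companion to Remark 3.6, one might check diagonalizability and extract the Perron eigenpair of a concrete intensity matrix with standard linear algebra. The sketch below is illustrative only: `perron_eigenpair` is our own name, and the 2 × 2 matrix is a toy placeholder rather than an intensity matrix from the paper.

```python
import numpy as np

def perron_eigenpair(A, activities):
    """Largest-real eigenvalue of A and right eigenvector scaled so a.v = 1."""
    eigvals, eigvecs = np.linalg.eig(A)
    i = int(np.argmax(eigvals.real))
    lam = eigvals[i].real
    v = eigvecs[:, i].real
    v = v / (np.asarray(activities) @ v)    # normalize so that a . v = 1
    # A is diagonalizable iff its eigenvectors span the whole space.
    diagonalizable = np.linalg.matrix_rank(eigvecs) == A.shape[0]
    return lam, v, diagonalizable

# Toy 2x2 matrix (a placeholder, not an intensity matrix from the paper).
A = np.array([[0.0, 2.0],
              [1.0, 1.0]])
print(perron_eigenpair(A, [1.0, 1.0]))   # lambda_1 = 2 for this toy matrix
```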
Determination of the Reaction Rate Controlling Resistance of Goethite Iron Ore Reduction Using CO/CO2 Gases from Wood Charcoal

In the present work, an attempt is made to use non-contact charcoal in the reduction of run-of-mine goethite ore at heating temperatures above 570 °C. The reduction mechanism was adopted following Levenspiel's relations for the shrinking core model at different stages of reduction. The non-contact charcoal reduction approach is adopted to maximize the benefit of using CO/CO2 gases from charcoal for reduction without the need for beneficiation and concentration. The rate-controlling steps for the reduction kinetics of average particle sizes of 5, 10, 15, and 20 mm at 570, 700, 800, 900, and 1000 °C were studied after heat treatment of the ore-wood charcoal at a total reduction time of 40 min using an activated carbon reactor. Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray (EDX) analyses were carried out to investigate the spectrometric phase change and the metallic components of the ore sample after reduction, respectively. Average metallic iron contents of 56.6, 60.8, and 61.7% and degrees of metallization of 91.62, 75.96, and 93.6% were obtained from the SEM/EDX analysis of the reduced ore samples at reduction temperatures of 570, 800, and 1000 °C, respectively. The results indicate a tendency for high carbon deposition at the wustite stage of the reduction process at the lowest temperature of 570 °C and a residence time of 10 min. This study demonstrates that diffusion through the ash layer is the controlling resistance of the overall reduction process.

Introduction

It is well established that iron is the fourth most abundant element (Okoro et al. [1]) and the most abundant rock-forming element, constituting 5% of the earth's crust (Kiptarus et al. [2]); it is explored worldwide for engineering applications. To acquire iron metal in its elemental form, the impurities must be removed from its ore via a chemical reduction process, since iron is rarely found in its free state (Alamsari et al. [3]; Babich et al. [4]). The need to extract it from its oxide form and convert it to its pure metallic form through direct or indirect reduction of iron oxide has become a crucial research concern in recent years [5]. Aside from the many conventional techniques for carrying out reduction reactions on sourced iron ore, the associated reduction reaction kinetics using natural reductants (coal, charcoal, etc.) are attracting intense research interest [6,7]. Owing to the challenges associated with acquiring coking coal at many iron ore mines, experimental investigations of the kinetics of direct reduction of iron ore using reducing gases from charcoal as the reducing agent have become essential. Such attempts are necessary to bypass the coking-coal Blast Furnace (BF) process. The BF is the most popular reduction process for iron ore worldwide [8]. The blast furnace is a plant used for iron ore reduction by charging iron ore with metallurgical coke (Heikkila et al. [8], Cecca et al.
[9]) together with limestone for the removal of impurities. The high operating cost of the BF further hinders the use of the BF process [10,11]. Following from the above, it is vital to have a basic understanding of the mechanism involved in the sulfur-free direct reduction process using charcoal as the reductant. Chemical governing equations are required to estimate the reduction time steps of the ore, rate of reaction, reaction control time, conversion factor, swelling extent, degree of metallization, and other relevant parameters at the selected time-step intervals and operating temperatures.

The kinetics of the reduction of iron ore generally involve a study of the rate of conversion of iron oxide to metallic iron by the removal of oxygen, as the chemical reaction rate increases with temperature; in indirect reduction processes, iron is reduced in its solid state, the maximum temperature is lower than the melting temperature, and the reaction rates are slower. In the direct reduction of iron ore, the mechanisms are complex because the oxide undergoes a series of stepwise changes before conversion is complete [12]. The slowest step in the process determines the overall reaction rate; this is often referred to as the rate-controlling step. Dydo et al. [13] reiterate that the reduction of hematite iron ore using CO/CO2 at low reduction temperatures can follow the schemes Fe2O3 → Fe3O4 → Fe (below 570 °C) and Fe2O3 → Fe3O4 → FeO → Fe (above 570 °C). The mechanism of Fe2O3 → Fe3O4 reduction necessitates the transformation of an oxygen sublattice combined with the dislocation of iron atoms. The second step, Fe3O4 → Fe reduction, entails nucleation of the metallic Fe phase. The third step, Fe3O4 → FeO reduction, does not require any transformation of the oxygen sublattice; it can take place at temperatures above 570 °C. As a result of the reverse disproportionation reaction, the reduction mechanism becomes more intricate, proceeding in a two-step sequence, such as FeO → Fe3O4 → Fe, or even a three-step sequence, such as FeO → Fe3O4 → FeO → Fe, at temperatures higher than 570 °C [14]. Although the reduction behaviors of iron oxide are similar for all characterized iron ores, they are strongly influenced by the particle size, crystallinity, and the temperature-, time-, and rate-control-dependent reduction conditions [15].

In Jozwiak et al. [16] and Kowitwarangkul et al. [17], the reduction kinetics of iron ore lumps by H2 and CO mixtures of different chemical compositions are reported. Their studies revealed that the reaction rate is linearly proportional to the reactant gases (H2 and CO). Mania [18] and Mania [19] carried out experimental analyses of the reduction kinetics of different types of samples, such as pellets, fines, and powders, by thermo-gravimetric analysis (TGA). The influence of iron oxide density on the reduction extent and reduction rate of hematite ore using the H2-CO gas mixture as the reducing agent was investigated by Levenspiel [20] and Kumar et al.
[21]. The outcome of these studies describes how iron ore pellets prepared at different reduction temperatures (700 °C and 950 °C) often contain some carbon deposition [18,22]. Reduction kinetics of iron ore using coal and charcoal placed simultaneously in an externally heated cylindrical container positioned in a muffle furnace have also been studied; reduction temperatures ranging from 850 to 1000 °C were used to study the potential of coal and charcoal as reductants [19,23]. Despite these research efforts, the literature neither explains how to ascertain the rate-controlling resistance of the reduction process, nor gives a detailed description of the stage-by-stage reaction kinetics of the direct reduction of iron ore samples. Therefore, this paper presents a kinetic study of the reduction of a selected goethite iron ore lump by CO/CO2 gases from wood charcoal using a locally fabricated reactor (activated carbon furnace) under isothermal conditions for a specified reduction time. The reduction kinetics of the goethite hematite ore are also extensively investigated by visual inspection using appropriate kinetic equations for solid-solid reactions. Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray (EDX) analyses are used to determine the rate-controlling resistance of the reaction, following the experimental procedure described below.

Experimental Procedure and Sample Preparation

In the present work, three sets of commercially obtained goethite iron ore lumps of various weights and sizes were used. Each set of samples was categorized into Set-I (5-9.99 mm), Set-II (10-14.99 mm), and Set-III (15-20 mm), as shown in Figure 1. The initial sizes and weights of the iron ore samples were measured individually using a digital Vernier caliper and a weighing balance, following Babalola et al. [30] and Omole et al. [31]. The composition of the commercially acquired goethite hematite ore lumps was analyzed using an X-ray diffractometer, with the goethite ore chemical formulas Fe2H4O5, Fe1.698O3Sn0.228 and Fe2O3·H2O·xH2O, Fe1.698Sn0.228O3 [32]. The particle chemical compositions by weight are shown in Table 1. The particle size determines the gas-solid contact surface area; similar work [33] indicates that the larger the area, the higher the attainable degree of reduction. The Activated Carbon Furnace used in the present work consists of hollow chambers with a rectangular cross-section and convenient spacing, which readily contain the molded-clay crucibles holding the samples to be heat-treated in the reactor. A reasonable amount of charcoal was placed in the hollow chamber where heat was generated before the reduction times of 10, 20, 30 and 40 min were measured, under airtight conditions at a specified temperature, based on the Gibbs free energy temperature profile for hematite reduction to wustite (Ogbezode et al. [32], Mousa et al. [34]), using a digital stop-watch. The properties of the charcoal mixed with the ore are not described in the present work, but the effect of carbon monoxide penetration into the different goethite iron ore sample sizes was investigated. The reduced ore samples were also characterized using SEM/EDX analysis to ascertain the degree of metallization and the microstructural elemental composition of the iron ore constituents [35].
The reduced samples were investigated in their lump form after measurement of the weight and size (radius) of each sample using the digital Vernier caliper and weighing balance, respectively. The iron ore, in the right proportion by weight (grams), was placed into the reactor while the reactor was heated to temperatures of 570, 700, 800, 900, and 1000 °C for partial reduction time steps of 10, 20, 30, and 40 min. The heated ore lump samples were cooled under atmospheric conditions, and the size (mm) and weight (grams) measurements were retaken using the same measuring instruments. The partially reduced samples with the longest reduction time of 40 min at 570, 800 and 1000 °C were taken to the laboratory, where X-ray Diffraction (XRD), Scanning Electron Microscopy (SEM), and Energy Dispersive X-ray (EDX) techniques were used to reveal the microstructural phases formed at these temperatures, and to further study the variation in the relative composition of the phases in the three different shapes of iron ore-charcoal composite lumps before and after the reduction experiment, as shown in Figures 16-20. The kinetics of the reduction of the specimens are examined for the different residence time steps of the particles; the rate of reaction, conversion factor, reaction control time, degree of metallization, and swelling extent of the reduced ore are progressively determined using the kinetic equations described below. The ratios of the partial reduction time to the complete reduction time for the fractional radius of reduction were ascertained.

Degree of Reduction and Swelling Extent

The reaction kinetics used in the present work are the shrinking core model equations, as illustrated in Equations (1)-(11). The kinetic model equations derived from the existing literature of Alamsari et al. [3], Levenspiel [20], and Srinivasan [36] show the relationship between the residence time, reaction control time, and conversion factor for various shapes of direct reduced iron (DRI) particles in the shrinking-core model. All the reduced hematite ore samples maintained their original lump shape, and the degree of reduction and the swelling extent were calculated using the following formulas, as deduced by Prakash [37] and Yunyun [38]:

Degree of reduction (%) = (W0 − Wt)/W∞ × 100,

where W0 is the initial weight of the sample, Wt is the weight of the sample at time t, and W∞ is the theoretical weight of the oxygen present in the sample for complete reduction;

Swelling extent (%) = (Vt − V0)/V0 × 100,

where V0 is the initial volume of the pellet/lump ore and Vt is the volume of the ore lump after reduction for a given time.

Kinetics and Mechanism of Reduction

Developed according to Levenspiel's relations for the shrinking core model, the five steps occurring in succession for fluid-solid and solid-solid reactions are illustrated in Figure 2. For consistency, the description of the five steps involved in the reaction kinetics is adopted as follows:
Step 1: Diffusion of gaseous reactant A through the film surrounding the particle to the surface of the solid.
Step 2: Penetration of A through the blanket of ash to the surface of the unreacted core.
Step 3: Reaction of gaseous reactant A with the solid at the reaction surface.
Step 4: Diffusion of gaseous products through the ash layer back to the exterior surface of the solid.
Step 5: Diffusion of gaseous products through the gas film back into the main body of the fluid.

In some situations, some of these steps do not exist. For example, if no gaseous products are formed, steps 4 and 5 do not contribute directly to the resistance of the reaction. Additionally, the resistances of the different steps usually vary greatly from one to another. In such cases, the step with the highest resistance is considered to be the rate-controlling resistance. In this treatment, conversion equations are formulated for spherical particles in which steps 1, 2, and 3, in turn, are the rate-controlling stages. The analysis is then extended to non-spherical particles and to situations where the combined effect of these three resistances must be considered.

Diffusion through Gas Film Controls

Attention is focused on the unchanging exterior surface of a particle. The stoichiometry of the reaction takes one of the forms

A(fluid) + bB(solid) → fluid products,
A(fluid) + bB(solid) → solid products,
A(fluid) + bB(solid) → fluid and solid products.

Integrating the rate expression from 0 to t and letting τ be the time for complete conversion of the particle (at which the unreacted-core radius rc = 0), the radius of the unreacted core in terms of the fractional time, or conversion factor, for complete conversion is given as:

t/τ = 1 − (rc/R)^3 = XB.

Diffusion through Ash Layer Controls

Here, the resistance to diffusion through the ash layer is assumed to control the rate of reaction. The relationship between the time and the radius of the ore with a constant heat flux is obtained by integrating across the ash layer from R to rc, taking the limits rc = R at t = 0 and rc = 0 at the complete conversion time t = τ, while the other variables remain constant. The required fractional time/conversion relation is expressed as:

t/τ = 1 − 3(rc/R)^2 + 2(rc/R)^3 = 1 − 3(1 − XB)^{2/3} + 2(1 − XB).

Chemical Reaction Controls

Consider the chemical reaction itself as the controlling resistance, based on the unreacted core unit surface for the stoichiometry. Integrating, and letting τ be the required time for complete conversion (rc = 0), the decrease in radius, or increase in fractional conversion, is given as:

t/τ = 1 − rc/R = 1 − (1 − XB)^{1/3}.

Equations (5), (8) and (11) correspond to gas film diffusion control, ash layer diffusion control, and chemical reaction control, respectively; they are the relations used to determine which resistance between the ore particles and the CO/CO2 gases controls the overall rate. Here rc is the radius of the unreacted core, R is the radius of the particle, t is the residence time, τ is the reaction control time (the time for complete conversion), t/τ is the time fraction, and XB is the conversion of the solid.

Thus, the derived kinetic reduction equations expressed above are used to analyze the relationship between the time and radius of the reduced iron ore to ascertain the rate-controlling step at each stage of the reduction process, as culled from Levenspiel [20] and Srinivasan [36].
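To make the three time-conversion relations easy to apply, the following sketch (ours, not part of the original study) evaluates the Levenspiel shrinking-core expressions for a particle of unchanging size; `t_over_tau` and the regime labels are assumed names.

```python
import numpy as np

def t_over_tau(XB, regime):
    """Fractional time t/tau for a given solid conversion XB (0..1)
    under each rate-controlling resistance of the shrinking core model."""
    XB = np.asarray(XB, dtype=float)
    if regime == "gas_film":
        return XB
    if regime == "ash_layer":
        return 1 - 3 * (1 - XB) ** (2 / 3) + 2 * (1 - XB)
    if regime == "reaction":
        return 1 - (1 - XB) ** (1 / 3)
    raise ValueError(f"unknown regime: {regime}")

# Example: fractional time needed to reach 50% conversion in each regime.
for regime in ("gas_film", "ash_layer", "reaction"):
    print(regime, float(t_over_tau(0.5, regime)))
```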
Method of Data Analysis of the Reduced Hematite Product

The method of data collection and analysis used in this research involves visual inspection of the reduced ore lumps, which entails measuring the initial and final weights and particle sizes. Other reduction parameters, such as the manually measured volume change, the swelling index, and the percentage degree of reduction, were obtained and entered into Microsoft Excel 2010 for plotting of graphs and of the variations of other relevant parameters related to the objectives of this work.

The microstructures and chemical compositions of the samples were investigated using Scanning Electron Microscopy (SEM) equipped with Energy Dispersive X-ray (EDX) spectroscopy. Combustion tests were applied for the analysis of the carbon contents in a number of ore lumps reduced by H2-CO mixtures from the charcoal at a specified preheating temperature. X-ray powder diffraction was used for the identification of the different phases in the samples and their empirical chemical compositions.

Effect of Reduction Time on Conversion Factor

The reaction rate conversion factor, defined as the ratio between the reaction residence time and the reaction control time, is used to determine the rate-controlling stage (i.e., gas film, ash layer, or chemical reaction control) of the reduction of iron oxide to metallized iron, as shown in Figures 3-6. The figures show that the rate of conversion increases with the reaction residence time for all sizes of iron oxide undergoing reduction in the furnace. This is due to an increase in gaseous diffusion into the hematite ore sample as it stays longer in the reactor, irrespective of the size of the iron oxide and the furnace temperature. Consequently, the effect of the conversion factor on the reaction residence time shows that the rate of conversion of hematite to magnetite to wustite to iron is fastest at the gas film stage and slowest under ash layer control. This implies that the rate of formation of reducing gas films around the iron ore inside the reactor is fastest compared with the rate of carbon deposition on the outer surface of the ore, irrespective of the size of the lump ore and the preset temperature of the furnace. Thus, the formation and infiltration of reducing gas (CO/CO2 and H2) around the lump surface may occur much faster than the formation of the ash layer and the chemical reaction stage, as shown in Figures 3-6.
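Building on the sketch above, one plausible way to identify the rate-controlling stage from measurements is to fit each regime's time-conversion law to (t, XB) pairs by least squares and compare residuals. The data points below are invented for illustration and do not come from the paper.

```python
import numpy as np

# Hypothetical (time_min, conversion) measurements -- illustrative only.
t = np.array([10.0, 20.0, 30.0, 40.0])
XB = np.array([0.18, 0.31, 0.42, 0.50])

def g(XB, regime):
    """Conversion function g(XB) such that t = tau * g(XB) in each regime."""
    if regime == "gas_film":
        return XB
    if regime == "ash_layer":
        return 1 - 3 * (1 - XB) ** (2 / 3) + 2 * (1 - XB)
    return 1 - (1 - XB) ** (1 / 3)   # chemical reaction control

best = None
for regime in ("gas_film", "ash_layer", "reaction"):
    y = g(XB, regime)
    tau = (t @ y) / (y @ y)          # least-squares tau for a line through 0
    sse = float(np.sum((t - tau * y) ** 2))
    best = min(best or (sse, regime), (sse, regime))
    print(regime, "tau =", round(tau, 1), "min, SSE =", round(sse, 2))
print("best-fitting regime:", best[1])
```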
Effect of Reduction Time on Reaction Control Time

The reduction time is the specified time for which the ore lump samples are exposed to the reducing gases in the furnace at the specified firing temperatures; it is also known as the reaction residence time. Equations (5), (7) and (10) are used to calculate the reaction control time, i.e., the time required for the complete conversion of hematite to metallic iron, and thereby to identify the rate-determining stage of the reaction (gas film, ash layer, or chemical reaction control), as shown in Figures 7-9 for iron ore samples of sizes 10-14.99 mm and 15-20 mm at firing temperatures of 570, 800, and 1000 °C. The graphical illustrations (Figures 7-9) show that the total reaction control time increases with the reduction time at all firing temperatures except 800 and 1000 °C. A steep slope is observed at 570 °C. This indicates the easy bypass of the wustite stage during the reduction of ore samples at firing temperatures below 570 °C, with little or no carbide formation or carbon deposition at that temperature. This is likely to be more pronounced when iron ore lumps of intermediate diameter (10-15 mm) are used.

Effect of Firing Temperature on Degree of Reduction and Swelling Extent

The reduction of iron oxide or any ferruginous raw material with a reducing material involves stepwise reactions, as the oxygen atoms undergo severe rearrangement, which increases the volume by more than 25 percent. Furthermore, the study of the kinetics of reduction based on phase-boundary reaction control, as expressed in the shrinking core model equations described earlier in terms of the reacted sample fraction XB, reveals the rate-controlling stage of the process with respect to the firing temperature at which a greater degree of reduction can be attained with minimal swelling of the ore lump sample. The change in volume fraction and reacted ore fraction can also be envisaged under isothermal/non-isothermal conditions regardless of the reduction time. The results are therefore interpreted for each rate-controlling stage in turn.

The Gas Film Control

When the resistance of the gas film controls the reaction, the concentration profile for the gas-phase reactant is as shown in Figure 1. The mass transfer of the boundary-layer gas film becomes negligible in the reaction of solids with a gas stream flowing above critical gas velocities. Different kinetic models can be used for the various reduction mechanisms, whose graphical illustrations are based on the effect of the rate conversion factor on the degree of reduction of the iron ore samples at specified firing temperatures, as shown in Figure 10.
The results in Figures 10 and 11 reveal that, for the smallest iron ore lump samples (5-9.99 mm), the rate conversion factor under the gas film condition increases with the degree of reduction and the swelling extent at any firing temperature. The other firing temperatures maintain stabilized conditions regardless of the increase in their degree of reduction and swelling extent, while the behavior differs at 700 and 570 °C, respectively. For the medium-sized samples (10-14.99 mm), as shown in Figure 10, the firing temperature required for a sustainable increase in the degree of reduction is 700 °C, while subjecting the ore lump sample to temperatures above 800 °C could generate an abnormal swelling index of 30% or more. The highest degree of reduction, of more than 40 percent, is observed at 570 °C, as shown in Figure 11, with an increase in the swelling extent at 1000 °C. This indicates that at the wustite stage the ash layer formation is more conveniently bypassed for larger ore samples (15-20 mm) than for the smaller samples, due to the large contact surface area of the ore sample. This, in turn, aids the diffusion of the reductant gas (CO-H2) mixture into the system for more effective penetration of the gases into the ore lump samples at a low firing temperature.

The Ash Layer Control

The effect of ash layer control on the rate of reduction of iron oxide is based on the phase-boundary change of the hematite sample using CO/CO2 and H2 gases as reductants; these have shown an appreciable increase in the degree of reduction and swelling extent of the samples [32,37]. It is also worth noting that the results illustrated in Figures 13 and 14 indicate an apparent increase in the degree of reduction with the rate conversion factor. This implies that the rates of reaction increase linearly with the hydrogen contents of the gaseous mixtures. The presence of large CO/CO2 contents in the charcoal (the reaction being conducted under airtight conditions) may cause an apparent decrease in the rates of reaction, as all reduction experiments with CO and H2-CO mixtures are accompanied by carbon deposition, which usually results in ash layer formation on the reduced iron ore samples; this lowers reducing-gas penetration, thereby producing a low degree of reduction, a high swelling extent, and incomplete reduction.

Figure 13 also affirms that the large-size (15-20 mm) samples possess the highest degree of reduction at the lowest firing temperature of 570 °C, with the highest swelling index at 1000 °C. This may be due to the large reducing-gas contact surface area of the ore sample. It also establishes a relationship between the degree of reduction and the ore sample size, as the most sustainable firing temperature for a progressive increase in the degree of reduction with minimal swelling extent (less than 20%) is likely to be achieved at this stage. Figures 12 and 13 also show a progressive tendency for an increase in the degree of reduction at 700 °C, with reduction remaining stabilized at the other firing temperatures; an abnormal swelling extent of 50% and above at 1000 °C for the small-size (5-9.99 mm) ore samples; and a degree of reduction greater than 20% at 700 °C with a swelling index of 35% or more for the medium-size (10-14.99 mm) ore samples, as obtained experimentally. This indicates a tendency for large carbon deposition on the reduced samples in the small size range (5-9.99 mm) of iron ore lumps.
The Chemical Reaction Control

Where a direct chemical reaction controls the rate of reduction of the iron oxide sample, it was noticed that the inlet reducing gas (CO/H2) encounters little or no restriction from the ash layer region around the iron ore, which implies that the use of a catalyst, such as quicklime, may not be necessary, as the in-gas penetration suffers minimal restriction due to the absence of carbide formation in the process (i.e., no carbon deposition). Figures 13 and 14 show a progressive increase in the degree of reduction where a chemical reaction controls the rate of the reduction process; the sustainable temperatures that guarantee progressive reduction were observed at 570 and 700 °C. This attests to the fact, previously mentioned, that the wustite stage, where the ash layer reaches its highest concentration, is easily bypassed at a low firing temperature with less carbide formation on the reduced iron ore sample. The size range of the iron oxide also determines its tendency for a faster degree of reduction, as iron ore sizes of more than 15 mm can easily be metallized to iron when heat-treated at 700 °C or less. Thus, a firing temperature above 700 °C can generate an abnormal swelling index above 50% where a chemical reaction controls the rate of reaction, regardless of the size of the iron ore samples (5-9.99 mm). This is due to the presence of carbon deposition and ash layer formation, which may slow down the reaction rate and degree of reduction of the process. However, the swelling extent of the sample decreases with the ore sample size and increases with the firing temperature.

Correlation between Degree of Reduction and Swelling Index

The relationship of the swelling index (%) and the degree of reduction (%) to the reduction time is shown in Figures 16-20. From the figures, it can be seen that an abnormal swelling extent (25-30%) is observed at around 15-45% reduction at firing temperatures of 800, 900 and 1000 °C, which led to the disintegration of the reduced ore sample after the total reduction time of 40 min. The iron particles are shown in Figures 16 and 17 for the FeO → Fe reduction step at 800 and 900 °C, respectively. However, the swelling was highest in the reduced iron ore lumps observed at 900 and 1000 °C, respectively, which is quite contrary to similar reports in the existing literature. Thus, reduction carried out at high temperatures is expected to come with high carbon deposition on the ore sample, which may likely lead to crack formation in the samples, as shown in Figures 16a and 17a. The SEM/EDX analysis in the present work was done to identify the microstructural segregation in the components of the hematite ore lump sample, as well as the presence of silicon in the different shapes and sizes of the reduced iron ore lumps. The SEM/EDX analysis presents the spectra of the lump-shaped ore-silicon composites reduced at 570, 800 and 1000 °C, as shown in Figures 18-20. It is worth noting that the microstructural spectra of virtually all the tested samples show a high percentage of silicon (39.20, 29.59 and 33.90%, respectively). This connotes the presence of a slag region on the reduced ore surface, where the reduced iron oxide is mixed with silicate. It is also observed that the silicon present in the spectra of the fired ore-charcoal mixtures at 570 and 800 °C is as high as 39.20% and 29.59%, respectively. This also implies some significantly
unreduced oxide, especially at 800 °C, due to the presence of a slag zone regardless of the size range of the reduced ore sample; some micro-spots in the ore-charcoal lump-shaped composite can also be identified. Such spots contain only metallic iron, being devoid of slag after the reduction time steps of 10, 20, 30 and 40 min, as shown in Figure 20.

Effect of Rate-Controlling Resistance on Reduced Samples

The respective percentages of iron in the metallic composition, as derived from Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray (EDX) spectrometry, are illustrated in Figures 17-20. It can be deduced from the figures that the reaction control stages, or step-by-step mechanism of the reduction process, can be fully ascertained based on two factors: the increase in weight due to carbon deposition, and the incomplete/partial reduction of the samples. It can be deduced from Figure 21 that the higher infiltration of CO/CO2 gases into the hematite lump sample produces large amounts of oxide that become entrapped in the iron crystal layers, a direct cause of the incomplete reduction of the iron oxide when heat-treated in the reactor. The sharp drop in the degree of metallization of the reduced ore samples is best observed at the intermediate temperatures (i.e., 700, 800, 900 °C) of the reduction process. This indicates a tendency for high carbon deposition at the wustite stage of the reduction process, coupled with the possibility of high oxygen-silicon content in the remains of the heat-treated reduced iron ore samples at the lowest temperature and residence time of 570 °C and 10 min, respectively.

Conclusions

In this study, non-contact direct reduction using wood charcoal fines was carried out at heating temperatures above 570 °C. Levenspiel's relations for the shrinking core model were used to describe the reduction kinetics and mechanism. The reaction kinetics involve diffusion through the gas film, diffusion through the ash layer, and the chemical reaction itself as the rate-controlling resistances, based on the unreacted core unit surface. The rate-controlling steps for the reduction kinetics of average particle sizes of 5-20 mm at 570-1000 °C were studied after heat treatment of the ore-wood charcoal in an activated carbon reactor at a total reduction time of 40 min. The conclusions include:
1. The adapted kinetic model provides a good basis for describing the degree of ore reduction and the swelling extent of the reduced iron ore in terms of rate contact and residence time of reaction.
2. This work showed the ash layer to be the controlling resistance for the reduction of goethite iron ore, with the possibility of high carbon deposition or carbide formation alongside the reduced iron ore. The foregoing discussion indicates an incomplete reduction of the direct reduced iron ore. The most convenient firing temperature to sustain the controlling resistance of ash layer formation in this work is 700 °C, at which a uniform increase in the degree of reduction of the reduced samples with a steady swelling index is most likely to be obtained.
3.
3. The degree of metallization was found to be enhanced as the CO/CO2 composition, reduction firing temperature, and reduction time increase. However, a reduction firing temperature of more than 1000 °C is prone to the formation of undesirable sticky iron (whiskers).
4. This study revealed that an increase in firing temperature, as well as in reducing time, increases the degree of reduction and swelling extent of the reduction process.
5. It was established that an increase in the fixed carbon content of the reducing gases increases the degree of reduction and swelling extent, as a final degree of metallization of more than 90 percent was achieved in the overall reduction process at 570 °C.

Figure 2. The shrinking core model showing the contact surface area of the reduction reaction [3,20].
Figure 10. Effect of particle firing temperature and gas film rate control stage of reduced iron ore lump sizes ranging 10-14.99 mm. (a) Degree of reduction; (b) the swelling extent.
Figure 11. Effect of particle firing temperature and gas film rate control stage of reduced iron ore lump sizes ranging 15-20 mm. (a) Degree of reduction; (b) the swelling extent.
Figure 12. Effect of particle firing temperature and ash layer rate control stage of reduced iron ore lump sizes ranging 10-14.99 mm. (a) Degree of reduction; (b) the swelling extent.
Figure 16. Reduction and SEM micrographs of Set-III hematite lump sample 15-20 mm at reduction time 40 min for 800 °C: (a) cracks and disintegration; (b) external surface structure; (c) internal surface structure.
Figure 21. Degree of metallization as a function of reduction gas firing temperature at a residence time of 40 min.
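The conclusions above rely on Levenspiel's shrinking core model; for reference, the standard conversion-time relations for a spherical particle of unchanging size are given below in textbook form (not necessarily the authors' exact notation), with X the fractional conversion of the oxide and τ the time for complete conversion under the given controlling step:

```latex
\begin{align*}
\text{gas-film diffusion control:}   &\quad \frac{t}{\tau} = X \\
\text{ash-layer diffusion control:}  &\quad \frac{t}{\tau} = 1 - 3(1-X)^{2/3} + 2(1-X) \\
\text{chemical-reaction control:}    &\quad \frac{t}{\tau} = 1 - (1-X)^{1/3}
\end{align*}
```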
7,536.2
2021-03-01T00:00:00.000
[ "Materials Science" ]
Fabric Defect Detection Using L0 Gradient Minimization and Fuzzy C-Means
In this paper, we present a robust and reliable framework based on L0 gradient minimization (LGM) and the fuzzy c-means (FCM) method to detect various fabric defects with diverse textures. In our framework, the L0 gradient minimization is applied to process the fabric images to eliminate the influence of background texture and preserve sharpened significant edges on fabric defects. Then, the processed fabric images are clustered by using the fuzzy c-means. Through continuous iterative calculation, the clustering centers of fabric defects and non-defects are updated to realize the defect region segmentation. We evaluate the proposed method on various samples, which include plain fabric, twill fabric, star-patterned fabric, dot-patterned fabric, box-patterned fabric, striped fabric and statistical-texture fabric with different defect types and shapes. Experimental results demonstrate that the proposed method has a good detection performance compared with other state-of-the-art methods in terms of both subjective and objective tests. In addition, the proposed method is applicable to industrial machine vision detection with limited computational resources.

Introduction
Fabric defect detection plays a crucial role in automatic inspection in textile production processes. However, traditional fabric defect detection often depends on human inspection, and quality control often relies on the experience of specialized workers. Human workers are prone to fatigue and boredom due to the repetitive nature of their tasks [1]. Thus, human inspection has limitations in terms of accuracy, coherence, and efficiency when detecting defects. Since fabric textures are complicated (including plain weave fabric, knitted fabric, twill fabric, laces, and patterned fabric) [2], fabric colors are variable, and the contrast between fabric defects and background is low, generalized defect detection is highly challenging. Currently, automated defect detection methods based on machine vision have drawn much attention. Gaussian mixture entropy modeling [3] and the wavelet transform [4] were used to detect defects in simple plain and twill fabric images via transformation and reconstruction processes. However, most of these methods were designed for the simplest plain and twill fabrics and cannot be effectively applied to complicated patterned fabrics, such as dot-patterned, star-patterned and statistical-texture fabrics. The entropy-based automatic selection of the wavelet decomposition level (EADL) [5] method and the automatic band selection method [6] achieved defect detection in statistical and structural textures. Bollinger bands (BB) [7] and image decomposition (ID) methods [8] have been shown to perform robustly for dot-patterned, star-patterned and box-patterned fabrics. However, it remains unknown whether these two methods can be used for plain, twill, and statistical-texture fabrics. In a preliminary evaluation, the BB and ID methods failed to recognize some defective samples. As shown in Figure 1, the BB and ID methods are weak at differentiating defects with directional features. These methods achieve good results on a certain texture, but it remains challenging to robustly and accurately handle a fabric defect image that has a complicated pattern texture, low contrast between defect object and background, various colors, and a low signal-to-noise ratio. To address these problems, we present a novel method based on L0 gradient minimization (LGM) and fuzzy c-means (FCM), which provides a new perspective for the detection of fabric defects.
Usually, a defect-free fabric image in industrial products has a consistent texture, and a defective image can be considered as comprising defective structure information and texture information. In our work, we first use the LGM method to filter the input image to eliminate the influence of texture information on fabric defects. Then, the filtered results, containing just the defective information, are segmented by applying the FCM. The proposed method can handle defects in plain fabric, twill fabric, star-patterned fabric, dot-patterned fabric, box-patterned fabric, striped fabric, and statistical-texture fabric (as shown in Figure 2). The remainder of this paper is organized as follows. Section 2 briefly discusses some related works. Section 3 presents the proposed method. The experimental results and discussion are given in Section 4. Conclusions and future work are presented in Section 5.

Related Works
Plain and twill fabric detection methods can be classified into five categories: spectral [9,10], learning [11,12], statistical [13][14][15], model-based [16,17], and structural methods [18,19]. A spectral method based on the wavelet transform [20] achieved 97.5% detection accuracy with five known defect types and 93.3% detection accuracy (a slight drop) with three unknown defect types in an evaluation. A statistical method applied gray relational analysis with co-occurrence matrix (CM) features [21] on Jacquard fabric images, reaching 94% detection accuracy for 50 defective samples.
A learning method via a three-layer back-propagation neural network and thresholding of the image analysis [22] was tested on the same kind of fabric; it achieved 94.38% accuracy, using 240 samples of the four defect classes. The limitations of their method include a long training time, because of the large number of inner layers, and the danger of over-training. In addition, a model-based approach using the Gaussian mixture model [23] was successfully applied to Brodatz mosaic image segmentation and fabric defect detection. Structural approaches based on the normalized cross-correlation algorithm [24] obtained a high detection success rate of 95% on twelve defective plain and twill fabric images. In general, research on plain and twill fabric inspection has achieved fruitful results; however, these methods were not thoroughly evaluated on complicated patterned fabrics and statistical-texture fabrics. Work on the defect detection of complicated patterned fabrics has been increasing during the last decade. The Bollinger bands (BB) and regular bands (RB) [25] methods employed the regularity property of the patterned texture to carry out defect detection on dot-, box- and star-patterned fabrics, obtaining accuracy rates of 98.59% (167 defect-free and 171 defective images) and 99.4% (80 defect-free and 86 defective images), respectively. The wavelet-preprocessing golden image subtraction (WGIS) method [20] achieved 96.7% accuracy on 30 defect-free and 30 defective patterned images by using a golden image to perform moving subtraction of each pixel along each row of every wavelet-pre-processed tested image. The ID method [8] obtained detection accuracies ranging from 94.9% to 99.6% for dot- (110 defect-free and 120 defective samples), star- (25 defect-free and 25 defective samples) and box-patterned fabrics (30 defect-free and 26 defective samples), decomposing a fabric image into structures of cartoon (defective objects) and texture (repeated patterns). A recent Elo rating method [26] achieved an overall 97.07% detection success rate based on the databases of [8]. However, it remains unknown whether these patterned fabric defect detection methods can also be applied to twill, plain, and statistical-texture fabrics. In our work, we take full consideration of the original fabric images, which can be seen as defective structure information and texture information. Texture information often hampers fabric defect detection, so LGM is used to filter the images to remove the texture information. In this way, defects can be quickly located and segmented in the filtered fabric images. The proposed method can detect defects in plain fabric, twill fabric, star-patterned fabric, dot-patterned fabric, box-patterned fabric, striped fabric, and statistical-texture fabric.

Methods
In this section, the procedures of the proposed LGM and FCM algorithms are described in detail. Figure 3 shows an overview of the presented method. It consists of two steps: firstly, the L0 gradient minimization is applied to eliminate the influence of the background texture on fabric defects; then, fuzzy c-means clustering is used to determine whether each pixel is defective.

Texture Removal by the L0 Gradient Minimization (LGM)
Due to its complexity, background texture information often increases the challenge of fabric defect detection. The L0 gradient minimization method [27] is widely used to smooth texture information. It is often adopted for filtering the image while preserving edge features.
The L0 gradient minimization method enhances the significant edge portions of the image by increasing the steepness of the transition portions while removing the low-amplitude detail portions. Inspired by the L0 gradient minimization, we apply it to remove the background texture of fabric. As shown in Figure 4a, the fabric defect sample has three different directions of texture, visible in the mesh diagram of Figure 5a. After smoothing via LGM, the unimportant background texture of the fabric is removed, as shown in Figure 4b. Notice that the high-contrast edges on the defect are preserved, and the defect feature is more prominent, as shown in Figure 5b.
In order to illustrate the method clearly, we briefly summarize the theory of the L0 gradient minimization model. Let I be the input fabric image and S its smoothed output; \partial_x S_p and \partial_y S_p are the partial derivatives of the processed image in the x and y directions at pixel p, respectively, and the gradient of image S at pixel p is denoted by \nabla S_p = (\partial_x S_p, \partial_y S_p)^T. The L0 gradient objective function, in its half-quadratic splitting form, is defined as:

\min_{S,h} \sum_p \left( (S_p - I_p)^2 + \beta \, \| \nabla S_p - h_p \|_2^2 \right) + \lambda \, C(h), \qquad C(h) = \#\{\, p : |h_{x,p}| + |h_{y,p}| \neq 0 \,\},

where λ is a non-negative parameter, which directly controls the weight of the smoothing term, β is an automatically adapting parameter, and h is an auxiliary variable approximating the gradient \nabla S. By alternately computing h and S, the output result is obtained. Figure 6 shows results obtained using the LGM method (for hole, slack end, cotton ball and wool defects) with their corresponding mesh diagrams. It is observed that the output result retains the defect information and removes the background texture information.
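For reference, the alternating scheme described above can be sketched in a few lines. The following is a minimal NumPy sketch for a grayscale image with circular boundary handling; the parameter defaults (λ, the β growth factor κ and its cap) are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def l0_smooth(img, lam=0.02, beta_max=1e5, kappa=2.0):
    """Minimal sketch of L0 gradient minimization via half-quadratic splitting."""
    S = img.astype(np.float64).copy()
    H, W = S.shape
    # Frequency responses of circular forward-difference operators
    fx = np.zeros((H, W)); fx[0, 0] = -1; fx[0, -1] = 1
    fy = np.zeros((H, W)); fy[0, 0] = -1; fy[-1, 0] = 1
    Fx, Fy = np.fft.fft2(fx), np.fft.fft2(fy)
    FI = np.fft.fft2(S)
    grad2 = np.abs(Fx) ** 2 + np.abs(Fy) ** 2
    beta = 2.0 * lam
    while beta < beta_max:
        # h-subproblem: hard-threshold the gradients (this realizes the L0 term)
        gx = np.roll(S, -1, axis=1) - S
        gy = np.roll(S, -1, axis=0) - S
        mask = (gx ** 2 + gy ** 2) < lam / beta
        hx = np.where(mask, 0.0, gx)
        hy = np.where(mask, 0.0, gy)
        # S-subproblem: quadratic, solved in closed form in the Fourier domain
        num = FI + beta * (np.conj(Fx) * np.fft.fft2(hx) + np.conj(Fy) * np.fft.fft2(hy))
        S = np.real(np.fft.ifft2(num / (1.0 + beta * grad2)))
        beta *= kappa
    return S
```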
Fuzzy C-Means Clustering Algorithm (FCM)
The output results via LGM were obtained in the previous section, and the defect information they contain is more obvious and easier to process. The FCM algorithm is then applied to segment the defects. The FCM algorithm [28,29] is a fuzzy, unsupervised clustering algorithm; its classification capacity is flexible and it is simple to implement. The FCM algorithm defines an objective function representing the sum of the squared weighted distances from each pixel in the target image to each cluster center:

J = \sum_{j=1}^{c} \sum_{i=1}^{n} \mu_j(x_i)^m \, \| x_i - c_j \|^2,

where c is the number of clusters, n is the number of pixels in the image, \mu_j(x_i) is the membership degree of the i-th pixel x_i belonging to the j-th class, with 0 \le \mu_j(x_i) \le 1 and \sum_{j=1}^{c} \mu_j(x_i) = 1, m > 1 is the fuzziness exponent, and c_j is the j-th cluster center. The objective function J is minimized when each pixel is close to its own cluster center according to the membership-weighted Euclidean distance, while its distance from the other cluster centers is as large as possible. The basic principle of the FCM algorithm is to find a set of suitable cluster centers and a membership matrix such that the objective function J takes its minimum value min(J). When calculating the minimum value of the objective function, the membership matrix and the cluster centers are continuously updated according to Equations (3) and (4) until the minimum value is obtained:

\mu_j(x_i) = \left( \sum_{k=1}^{c} \left( \frac{\| x_i - c_j \|}{\| x_i - c_k \|} \right)^{2/(m-1)} \right)^{-1}, \quad (3)

c_j = \frac{\sum_{i=1}^{n} \mu_j(x_i)^m \, x_i}{\sum_{i=1}^{n} \mu_j(x_i)^m}. \quad (4)

When the objective function attains its minimum, the membership degrees are retained and each pixel of the target image is clustered. In the iterative process, an appropriate iteration termination condition should be selected, otherwise the ideal segmentation result cannot be obtained. When the FCM algorithm converges, it yields distinct clustering centers for the defect information and for the normal fabric information (a minimal code sketch of these update rules is given below).

Experimental Results and Discussion
The testing code was implemented under MATLAB version R2014B. The proposed method was carried out on a standard workstation equipped with an Intel Core i5-4460 3.2 GHz CPU with 8 GB of main memory, an NVIDIA GeForce GT 745 graphics card and Windows 8.1. In our work, we used fabric defect images from the automation laboratory sample database of Hong Kong University, the TILDA Textile Texture Database and Guangdong Esquel Textiles, with a resolution of 600 dpi, scanned by a Canon Scanner 9000F. The images have a size of 256 × 256 pixels and an 8-bit grey level. Various fabric images (including plain fabric, twill fabric, star-patterned fabric, dot-patterned fabric, box-patterned fabric, striped fabric, and statistical-texture fabrics) were used for evaluating our method.

Parameter Setting
The parameter λ plays a key role in detecting defects accurately in our method. The influence of λ on the detection results is shown in Figure 7, where the best filter results using LGM are marked with a red block. From Figure 7, it can be seen that if λ is set too small, the noise and background texture are almost unchanged; on the contrary, if λ is set too large, the defect is smoothed away. If λ is set moderately, the defect area is easily distinguished from the surrounding background in the subsequent segmentation, and we can obtain a better detection result, as shown in Figure 7. To verify the impact of the value of λ on various fabric types, λ was set to 0.008, 0.015, 0.02, 0.03, 0.04, 0.07, and 0.08, respectively, and the detection accuracies for the seven fabric types are presented in the bar chart of Figure 8. The experiments proved that λ should be selected between 0.008 and 0.08 to meet the requirements of the various fabric types.
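As referenced above, a minimal sketch of the FCM update rules of Equations (3) and (4) follows. This is a NumPy illustration, not the authors' MATLAB implementation; the fuzziness exponent m, tolerance and iteration cap are assumed values:

```python
import numpy as np

def fcm(pixels, c=2, m=2.0, tol=1e-5, max_iter=100, seed=0):
    """Minimal fuzzy c-means sketch. pixels: (n,) or (n, d) array of features."""
    x = pixels.reshape(len(pixels), -1).astype(float)      # (n, d)
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                                     # enforce sum_j mu_j(x_i) = 1
    centers = None
    for _ in range(max_iter):
        um = u ** m
        centers = um @ x / um.sum(axis=1, keepdims=True)   # Eq. (4): cluster centers
        d = np.linalg.norm(x[None, :, :] - centers[:, None, :], axis=2)  # (c, n)
        d = np.fmax(d, 1e-12)                              # guard against zero distances
        inv = d ** (-2.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=0)                      # Eq. (3): memberships
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return centers, u

# Usage: two-class segmentation of an LGM-smoothed grayscale image S
# labels = fcm(S.ravel(), c=2)[1].argmax(axis=0).reshape(S.shape)
```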
As shown in Figure 8a,c, when λ is set between 0.07 and 0.08, the plain fabric and star-patterned fabric are excessively smoothed, so the defects can no longer be detected accurately. Figure 9 shows the results of plain fabric defect detection; it can be seen that the position and shape of the fabric image defects have been successfully detected. Furthermore, nine twill fabric defects were tested, and the defect detection results are shown in Figure 10. The fabric defects were well detected, which reveals that the method can detect twill fabric defects. Star-patterned defects, such as linear defects and blob-shaped defects, are also successfully segmented using the proposed method, as shown in Figure 11. The results highlight the utility of our technique for accurate defect detection and segmentation. Our method can also detect box-patterned fabric defects, including broken ends, thick bars, and thin bars, as shown in Figure 12. Figure 13 shows a representative set of different types of defects in dot-patterned fabric obtained from a group of images. In addition, even in a complicated background with a pattern, striped fabric, and statistical textures, our method can also outperform the results of other methods, as shown in Figures 14 and 15.
It can be seen that the proposed method can detect a variety of fabric samples with different defect types, shapes and textured backgrounds. We further provide the computational time complexity of the proposed method, which is shown in Figure 16. From the trend of the fitted curve, the time complexity is O(N^2), where N is the size of the image.

Qualitative Comparison
We compared our method with other state-of-the-art detection methods, including the EADL method [5] and the automatic band selection method [6], as well as with detection carried out by human inspectors (the defects are marked by experienced factory workers). In each process, we used the parameters suggested in the original papers and followed the instructions provided in the authors' code distributions. The comparison results are shown in Figure 17, where the original input fabric defect images of the seven texture types are given in Figure 17a. From top to bottom, they are plain fabric, twill fabric, star-patterned fabric, box-patterned fabric, dot-patterned fabric, statistical-texture fabric, and striped fabric, respectively. The columns in Figure 17b-e show the detection results of our method, the ground-truth images from segmentation carried out by human inspectors, the method of Tsai [6], and the EADL method. It is found that the detection results of the EADL method [5] are located more accurately than those of the automatic band selection method [6]. Figure 17b shows the detection results of our method; it can be seen that they are consistent with those carried out by human inspectors and outperform the other methods.
Quantitative Comparison
Besides the visual qualitative comparisons, we also carried out quantitative comparisons. We adopted the intersection-over-union (IOU) to quantitatively evaluate the performance of the different methods. For the segmentation task, IOU is defined as IOU = TP/(TP + FN + FP). The ideal case for the IOU is a ratio of 100%; it is usually stipulated that when the IOU value is greater than 50%, the detection is considered correct (which is also the criterion adopted in our detection). Figure 18 illustrates the ACC, TPR, FPR, PPV, and IOU results for the plain fabric, twill fabric, star-patterned fabric, dot-patterned fabric, box-patterned fabric, striped fabric, and statistical-texture fabric. Our method achieves most of the highest scores in the 35 testing items, and is better than the other methods in Figure 18a,c-e.
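For reference, the metrics reported in Figure 18 can be computed from the pixel-level confusion counts. The following sketch uses the standard definitions (the text only states the IOU formula explicitly, so the others are the usual textbook forms), assuming binary defect masks:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-level metrics from binary defect masks (1 = defect, 0 = background)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "ACC": (tp + tn) / (tp + tn + fp + fn),   # overall accuracy
        "TPR": tp / (tp + fn),                    # true positive rate (recall)
        "FPR": fp / (fp + tn),                    # false positive rate
        "PPV": tp / (tp + fp),                    # precision
        "IOU": tp / (tp + fn + fp),               # intersection over union
    }
```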
As shown in Figure 18a, the ACC values obtained by the proposed method for the plain fabric dataset, twill fabric dataset, dot-patterned fabric dataset, box-patterned fabric dataset, and statistical-texture defect dataset are 96.99%, 98.40%, 98.87%, 96.13%, and 97.63%, respectively. As shown in Figure 18b, the proposed method obtains the lowest TPR value for the star-patterned fabric defect detection. Figure 18c shows that the FPR values of our method are almost always smaller than those of the other approaches. Figure 18d shows that our method obtains the highest PPV value for the twill fabric, box-patterned fabric, striped fabric, and statistical-texture fabric. Figure 18e clearly indicates that our method provides the optimal IOU for all types of fabric. Furthermore, it can be observed that the proposed method obtains higher ACC, PPV, and IOU values, and lower TPR and FPR values. These results verify the effectiveness of our proposed method, which performed better than the EADL method [5] and the automatic band selection method [6]. Considering the fact that captured fabric images are often affected by noise, light intensity, and blurring, we analyzed the robustness of each method under different conditions. Table 1 shows the detection results of the different methods (the proposed method, the EADL method [5], and the automatic band selection method [6]) in noisy, luminously intense, and blurry conditions. According to Table 1, when the signal-to-noise ratio (SNR) decreases gradually, the ACC and IOU remain at a high level; in particular, when SNR = 10 dB, the ACC remains around 0.85. This shows that the proposed method is robust when dealing with noise. In addition, we found that when the luminous intensity decreases or increases by 20%, the ACC remains above 0.90. When increasing the blur with a radius of 20, the ACC and IOU also remain at a high level. The computational comparison is shown in Table 2, which reports the average computational time (in seconds) of the four methods while processing plain, twill, star-patterned, dot-patterned, box-patterned, striped, and statistical fabrics. As can be seen from Table 2, our method is faster than the automatic band selection method [6] and human inspectors. Even though the EADL method is faster than our method, it has fatal limitations: it cannot segment defects, as shown in Figure 17d, and cannot maintain high accuracy, as shown in Figure 18a. In terms of the average calculation speed, the method proposed in this paper takes less time than the other methods when detecting various types of textured fabrics. In addition, our method performed better than Pedro [5] in terms of TPR, FPR, and IOU.

Conclusions
We have proposed a novel method based on LGM and the FCM for fabric defect detection across a wide variety of textures.
Extensive experimental results demonstrate that the proposed method can detect and segment fabric defects from a broad range of fabric defect datasets: plain fabric, twill fabric, star-patterned fabric, dot-patterned fabric, box-patterned fabric, striped fabric, and statistical-texture fabric, with different defect types and shapes. It achieves more accurate defect detection than other state-of-the-art competitors. Despite the effectiveness of the proposed method for fabric images with complicated patterns, its computational time for detecting defects is still high. Our future work will be to improve our algorithm to reduce the computational time for a real-time fabric defect detection system.
7,642.6
2019-08-26T00:00:00.000
[ "Computer Science", "Engineering", "Materials Science" ]
Recurrent Neural Network for Human Activity Recognition in Embedded Systems Using PPG and Accelerometer Data
Photoplethysmography (PPG) is a common and practical technique to detect human activity and other physiological parameters and is commonly implemented in wearable devices. However, the PPG signal is often severely corrupted by motion artifacts. The aim of this paper is to address the human activity recognition (HAR) task directly on the device, implementing a recurrent neural network (RNN) in a low cost, low power microcontroller, ensuring the required performance in terms of accuracy and low complexity. To reach this goal, (i) we first develop an RNN, which integrates PPG and tri-axial accelerometer data, where these data can be used to compensate motion artifacts in PPG in order to accurately detect human activity; (ii) then, we port the RNN to an embedded device, Cloud-JAM L4, based on an STM32 microcontroller, optimizing it to maintain an accuracy of over 95% while requiring modest computational power and memory resources. The experimental results show that such a system can be effectively implemented on a constrained-resource system, allowing the design of a fully autonomous wearable embedded system for human activity recognition and logging.

Introduction
Human activity recognition (HAR) using wearable sensors, i.e., devices directly positioned on the human body, is one of the most popular research areas, focusing on automatically detecting what a particular human user is doing based on sensor data. To this end, photoplethysmography (PPG) is an optical technique commonly employed in wearables and other medical devices to measure the change in the volume of blood in the microvascular tissue. Light is emitted from a dedicated device and then reflected and absorbed at different rates during the cardiac cycle. The reflected light is read by a photo-sensor to detect those changes, and the output from this sensor can then be processed to obtain a valid heart rate (HR) estimation. Because PPG is a noninvasive method for HR estimation with respect to electrocardiography (ECG) and surface electromyography, requiring simpler body contact at peripheral sites on the body, such sensors are increasingly used in wearable devices, such as smart watches, as the preferred modality for HR monitoring in everyday activities. However, accurate estimation from the PPG signal recorded at the subject's wrist while the subject performs various physical exercises is often a challenging problem, as the raw PPG signal is severely corrupted by motion artifacts (MAs). These are principally due to the relative movement between the PPG light source/detector and the wrist skin of the subject during motion. In order to reduce the MAs, a number of signal processing techniques based on data derived from different sensor types, especially accelerometer data, have proven to be very useful [12][13][14]. In smartphones and smart watches, built-in triaxial accelerometers are probably the most widespread sensors that can be used for activity monitoring. Because smartphones and smart watches have become very popular, data-fusion techniques combining PPG and acceleration data can be used to provide accurate and reliable information on human activity directly on such devices [15,16]. PPG sensors alone are not usually applied in HAR classification since they are not designed to capture motion signals, as opposed to inertial measurement units (IMU), typically comprising accelerometers and gyroscopes.
However, using a PPG sensor for HAR presents several advantages [17]: (i) wearable devices are becoming ubiquitous and almost always embed a PPG sensor, so it makes sense to exploit the information that it can provide, as it comes at no additional cost to the user of one of these PPG-enabled smartwatches or wristbands; (ii) the PPG sensor can either be used alone when other HAR sensors are unavailable, or combined with them to augment recognition performance; and (iii) this sensor can be used to monitor different physiological parameters (heart rate, blood volume, etc.) in one solution. For these reasons, we chose to also employ the PPG signal to predict human activities. HAR can be treated as a pattern recognition problem, and in this context, machine learning techniques have proven particularly successful. Due to recent advancements in deep learning, these methods can be categorized into two main approaches: (i) conventional machine learning techniques, and (ii) deep learning-based techniques. In the first category, various machine learning methods, such as k-Nearest Neighbors (k-NN), Support Vector Machines (SVM), Gaussian Mixture Models (GMM), Hidden Markov Models (HMM) [18], Random Forests (RF) [19], and Molecular Complex Detection methods (MCODE) [20], have been adopted. Furthermore, recent advancements in machine learning algorithms and portable device hardware could pave the way for the simplification of wearables, allowing the implementation of deep learning algorithms directly on embedded devices based on microcontrollers (MCUs) with limited computational power and very low energy consumption, without the need for transferring data to a more powerful computer for elaboration [36,37]. In recent years, edge computing has emerged to reduce communication latency, network traffic, communication cost, and privacy concerns. Edge devices are resource-constrained devices and cannot support high computation loads. As previously mentioned, various machine learning methods and DNN models have been developed for HAR in the literature. In particular, deep learning algorithms have shown high performance in HAR, but these algorithms require heavy computation, making them inefficient to deploy on edge devices. To our knowledge, there are still few works that have addressed this problem specifically for the HAR classification task [36,37], as most have tested the DNN architectures on high performance processor units [17,28,[38][39][40]. Thus, the main goal of this paper is to prove that the proposed RNN can be implemented in a low cost, low power core, while preserving good performance in terms of accuracy. To reach this goal, we proceed as follows:
• We design an RNN using PPG and triaxial accelerometer data in order to detect human activity, using a publicly available data set for its design and testing. The design and hyper-parameter optimization is performed on a computer architecture.
• After the RNN has been designed, we investigate the porting and performance of the network on an embedded device, namely the STM32 microcontroller architecture from ST, using their "STM32Cube.AI" software solution [41]. This framework allows the porting of a pre-built DNN, converting it to optimized code to be run on the constrained hardware platform.
• When porting the RNN to the embedded system, we show how the network can be simplified to better fit the microcontroller's limited resources.
In particular, it is demonstrated that the input data can be downsampled to a significant degree, while maintaining good accuracy and requiring fewer hardware resources for the implementation. The rest of the paper is organized as follows. In Section 2, we summarize the related work and state the motivations for our work. Section 3 summarizes the basic concepts of RNNs. Section 4 describes the data set adopted in the experiments and the data pre-processing applied to improve the RNN performance. Section 5 reports the details of the proposed RNN architecture, with a description of the main features, hardware and software used to implement this network on the low-power, low-cost Cloud-JAM L4 board (STM32L476RG microcontroller). Finally, the experimental results are presented in Section 6.

Related Work
Nowadays, deep learning techniques have brought great improvements in signal recognition/classification and object detection. In References [42,43], automatic target detection and recognition in infrared images based on a CNN is studied. In Reference [44], a robust multi-camera multi-player tracking framework is presented. In this system, the player identity, which is commonly ignored in existing methods, is specifically considered, using a deep player identification model for player identification, 2D localization and segmentation based on a Cascade Mask R-CNN model. In this section, we provide a summary of the most recent deep learning techniques adopted for the classification of the PPG signal. Heart Rate Variability (HRV) is the continuous fluctuation of the period length between cardiac cycles, which can be used for the diagnosis of cardiovascular diseases, such as myocardial infarction and cardiac arrhythmia. In Reference [45], an RNN based on bidirectional long short-term memory (biLSTM) is introduced for accurate PPG cardiac period segmentation, deriving three important indexes for HRV estimation. biLSTM is an improved version of long short-term memory (LSTM), which receives forward and backward feature inputs in order to gain information behind and ahead of a specific sample point. In the study [46], a new hybrid prediction model is proposed that combines ECG and PPG signals with an RNN to estimate blood pressure continuously; a biLSTM is used as the input hidden layer to look for contextual features both forward and backward, while a rectified linear unit (ReLU) layer is selected as the last hidden layer. In Reference [47], different CNN architectures for PPG-based heart rate estimation are investigated. To train the network, an end-to-end learning approach is adopted that takes the time-frequency spectra of synchronized PPG and accelerometer signals as the input and provides the estimated heart rate as the output. A deep learning model for heart rate estimation using a single-channel wrist PPG signal is proposed in [48]. The model contains three components: a CNN, an LSTM, and a fully connected network (FCN). The input data, segmented into eight windows of 1 s duration, are passed to the CNN-LSTM feature extractor by performing five parallel convolutions, thereby providing diverse feature representations of the input signal at various receptive fields. In Reference [17], a novel method combining convolutional and recurrent layers is adopted to extract meaningful features from the PPG to predict human activities.
The convolutional layers are set as feature extractors and provide abstract representations of the three CS, RS and MS data streams in feature maps, while the recurrent layers model the temporal dynamics of the activation of the feature maps.

Brief of RNNs
While traditional neural networks are characterized by complete connections between adjacent layers, recurrent neural networks (RNNs) can map target vectors from the entire history of previous inputs. The structure of an RNN is shown in Figure 1. In this architecture, each node produces a current hidden state h_t and output o_t by using the current input x_t and the previous hidden state h_{t-1} as follows:

h_t = f(W x_t + V h_{t-1} + b_h), \qquad o_t = f(W_o h_t + b_o),

where W and V are the weights for the hidden layers in the recurrent connections, W_o are the output weights, b denotes the bias for the hidden and output states, and f is an activation function. Although an RNN is very effective in modeling the dynamics of a continuous data sequence, it may encounter the problem of gradient disappearance and explosion [49] when modeling long sequences. In order to overcome this issue, Hochreiter et al. [50] propose a variant type of RNN, based on the LSTM, which combines learning with model training without additional domain knowledge. The structure of the LSTM unit is shown in Figure 2. The following equations show, in the standard LSTM formulation, the long-term and short-term states and the output of each layer at each time step:

i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i),
f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f),
o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o),
c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c),
h_t = o_t \odot \tanh(c_t),

where i_t, f_t and o_t are the input, forget and output gate activations, c_t is the long-term (cell) state, h_t is the short-term (hidden) state and output of the layer, \sigma is the logistic sigmoid, \odot denotes element-wise multiplication, and the W, U and b terms are the trainable weights and biases of each gate.

Data Set
We used a recent data set that is publicly available [51], which includes PPG and tri-axial accelerometer data from seven different subjects performing five series of three different activities (resting, squat, and stepper). The four signals are simultaneously acquired with a sampling frequency of 400 Hz and include a total of 17,201 s of recording data. The seven adult subjects include three males and four females aged between 20 and 52 years. The PPG and accelerometer signals were recorded from the wrist during voluntary activity, using the Maxim Integrated MAXREFDES100 health sensor platform. This platform integrates one biopotential analog front-end solution (MAX30003/MAX30004), one pulse oximeter and heart-rate sensor (MAX30101), two human body temperature sensors (MAX30205), one three-axis accelerometer (LIS2DH), one 3D accelerometer and 3D gyroscope (LSM6DS3), and one absolute barometric pressure sensor (BMP280). In particular, the PPG signals were acquired at the ADC output of the photodetector with a pulse width of 118 µs, a resolution of 16 bits and a full-scale range of 8192 nA, illuminated by the green LED. The three-axis accelerometer values correspond to the MEMS output with a 10-bit resolution, left-justified, ±2 g scale and axes oriented as shown in Figure 3, with z pointing toward the experimenter's wrist. For the data acquisition, the following measurement set-up was followed, as shown in Figure 4: (1) positioning of the sensor directly on the wrist; (2) insertion of the sensor inside a specific weight-lifting bracelet, adjustable by a hook-and-loop closure, with optimal elastic characteristics that make it particularly suitable to guarantee perfect adherence of the sensor device to the skin surface; (3) verification of correct wearing, as a loss of adhesion at the skin-device interface would add high-frequency noise to the acquired signals, making them unusable; (4) use of the sensor with the cable in "tethered" mode, where the cable comes out from the rear end of the band, thus still guaranteeing freedom of movement.
Of the data set, the first five subjects were used for the training phase, while the last two subjects were left for the final testing.

Data Pre-Processing
The PPG and accelerometer data from every single recording session are combined to obtain a series of four-dimensional input data. A preliminary cleaning of the data is performed due to the presence of occasional spikes, including NaN points, probably caused by glitches in the communication channel during acquisition. These are always single points, so they can easily be fixed in software by interpolating the two adjacent points. This cleaning is performed on the five training subjects only, to improve the training process. Data from the two test subjects are left unaltered, to account for transmission errors in real-life applications and to avoid adding overhead to a possible embedded implementation (tests on the computer have shown this to make no difference to the results). The data are then split into partially overlapping windows of the same size. The choice of window size and overlapping is explained in detail in Section 4.3. Before feeding the neural network with the resulting inputs, preliminary tests have shown that some basic normalization of the data is needed for PPG to achieve acceptable results. It has already been mentioned that PPG is extremely sensitive to movement. As an example, Figure 5 shows PPG data from a single subject performing five series of the same exercise. It can be seen that the signal varies greatly not only between series, but also in the short term during the same recording. To better isolate the PPG signal trend from the motion artifacts, we apply statistical standardization to the data; that is, we scale the data so that the resulting mean and standard deviation are 0 and 1, respectively, according to the following formula:

x' = \frac{x - \mu}{\sigma},

with µ and σ being the original mean and standard deviation, respectively. In order to ensure that the data can be processed in real time when porting the RNN to the embedded system, µ and σ are computed independently for each window of the incoming data, and so is the standardized signal. Standardization thus transforms each input signal window into another vector of the same length but with a predefined mean and variance. Moreover, per-window standardization has the added benefit of partly compensating for rapid signal variations between windows. The results of this per-window standardization are shown in the same Figure 5, where the right panel shows standardized data for 1200-sample windows with no overlapping. The non-overlapping output windows are simply juxtaposed on the graph for ease of representation. On the other hand, accelerometer signals are more regular than PPG, suffering only from the low-magnitude noise that is intrinsic in accelerometers. Figure 6 shows, as an example, the accelerometer data from the recordings of a single subject in one activity. The only issue that must be addressed is that the data generally have an approximately constant offset, due to the projection of gravitational acceleration across the three spatial axes. Being practically random for the purpose of data analysis, this offset is removed by subtracting the mean value from the data:

x' = x - \mu,

with µ being the original mean. Moreover, as can be seen in the same figure, the offset can change abruptly during the same exercise, due to the subject unconsciously changing position. So again, we choose to subtract the mean value in single data windows, individually.
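A minimal sketch of this per-window pre-processing (PPG standardization and accelerometer mean removal) could look as follows; function and variable names are illustrative, not taken from the published code:

```python
import numpy as np

def preprocess_window(ppg, acc):
    """Per-window pre-processing: ppg is (w,), acc is (w, 3)."""
    # PPG: statistical standardization -> zero mean, unit standard deviation
    ppg_std = (ppg - ppg.mean()) / ppg.std()
    # Accelerometer: remove the per-window mean offset on each axis
    acc_zero = acc - acc.mean(axis=0)
    return np.column_stack([ppg_std, acc_zero])   # (w, 4) network input
```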
The resulting processed signals for the same data are shown in the right panel of the same Figure 6. While this procedure may not be optimal for the few data windows crossing an offset change, it is computationally lightweight enough to be implemented in real time in an embedded system and, as the figure shows, it results in a good filtering of the signal. Preliminary tests have shown that normalization of the acceleration values according to their standard deviation has a negative effect on the final accuracy, with the normalization layer of the RNN itself leading to better results.

Data Downsampling
The original sample rate of the data (400 Hz) can impose a significant load on the processor and memory of an embedded device. Moreover, previous works show that the classification of human activity does not require high sample rates [52]. For this reason, a crucial part of the work is examining varying degrees of downsampling of the original signals to find an optimal combination of accuracy and performance on constrained hardware platforms. To efficiently downsample the data, we chose not to use resampling algorithms that require digital filters, which would add a significant computational cost when implemented in the final embedded system. We instead used a simple decimation procedure in which 1 out of M samples is retained, discarding the rest. This leads to sample rates corresponding to integer decimation factors only. Mathematically, this is equivalent to transforming the original signal x[n] into a new signal y[n] such that:

y[n] = x[nM],

with M being the decimation factor. A new RNN must be built and trained for every sample rate, because the size of the network layers depends on the size of the input data windows. In the rest of the paper, when talking about the number of samples in data windows, we will always refer to the samples before downsampling, in order to avoid confusion.

Data Windowing
The window length and overlapping are important hyper-parameters in neural networks, as well as in other machine learning algorithms [53,54]. With w being the number of samples in a window and o the number of overlapping samples between adjacent windows, the n-th data window corresponds to samples in the following range:

[n(w - o), \; n(w - o) + w - 1],

with n ≥ 0. To find the best combination for our particular network, we conducted a series of tests with various values of the two parameters. It is common practice, when training a neural network, to further split the training data into two sets: data actually used to fit the network weights, and validation data to monitor the performance of the network during the various training epochs. Since the number of different subjects in the data set is small and different subjects inevitably have substantial differences in their data, the statistical distribution of the data might not be uniform enough, so choosing a single partition of training and validation data might not lead to representative results. We therefore decided to adopt a cross-validation strategy; that is, for every window length and overlapping combination we trained five networks, isolating a different subject for validation each time (the network architecture is explained in detail in Section 5). The resulting accuracy of every combination was then computed as the average of the maximum accuracy obtained on the validation data in every test during the training epochs.
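For concreteness, the decimation and windowing steps described above can be sketched as follows; names and defaults are illustrative:

```python
import numpy as np

def decimate(x, M):
    """Keep 1 out of M samples (no anti-aliasing filter, as described in the text)."""
    return x[::M]

def windows(x, w=1200, overlap=0.5):
    """Split a (N,) or (N, channels) signal into overlapping windows of w samples."""
    o = int(w * overlap)                  # e.g., 600 samples for 50% overlapping
    step = w - o                          # window n starts at n * (w - o)
    n_win = (len(x) - w) // step + 1
    return np.stack([x[n * step : n * step + w] for n in range(n_win)])
```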
Since this cross-validation process is quite time-consuming, we examined a limited combination of parameters in the neighborhood of what was already tested in [53], with a downsampling factor of 10. As can be seen in Table 1, the best accuracy was reached with a window of 1200 samples (before downsampling), corresponding to 3 seconds, and 50% overlapping. The final network used in the rest of the article was trained with the mentioned windowing parameters, using all 5 training subjects (no validation data).

Data Augmentation

Since the numbers of inputs belonging to the three different activities are not equally represented, the network might end up being biased towards a specific class. A simple technique to address this problem is oversampling [55], a form of data augmentation where the data from classes with fewer occurrences are duplicated as needed, so that the data used for training are more uniformly distributed among the different classes. ("Oversampling" in this context must not be confused with data resampling in the time domain, which is performed independently.) Table 2 shows the number of input windows for the 3 classes, limited to the 5 subjects used for training, before and after data augmentation. To summarize, Table 3 shows the number of inputs of the 7 subjects, before and after the oversampling applied to the first 5 ones. Oversampled data were used to train the final network.

RNN Architecture

The RNN used in this paper is depicted in Figure 7. It is based on an architecture commonly used with time-based sensor data [54][55][56], consisting of a combination of fully connected layers and LSTM cells. Input data are assembled from PPG and the three acceleration axes, resulting in four-dimensional time series. Data are then fed to the network in windows of size w × 4, with the parameter w being the size in time points of a single data window, as described in Section 4.3. The first layer is a fully-connected one (dense), with the purpose of identifying relevant features in the input data. In this layer, the generic n-th neuron produces an output value y_n, according to the x_1, ..., x_m inputs to the layer and the w_nj neuron weights associated with every input. Specifically,

y_n = φ( Σ_j w_nj x_j + b_n )

where φ is an activation function and b_n is a bias value. Next, there is a batch normalization layer, which normalizes the mean and standard deviation of the data globally, operating on single batches of data as the training progresses. For every input data batch x, its output is the following:

y = γ (x − x̄) / sqrt(σ² + ε) + β

where x̄ and σ² are the mean value and variance of the data batch, respectively, γ and β are internal trainable parameters of the layer, and ε is a small constant added for numerical stability. The core of the recurrent neural network is then represented by three cascaded LSTM layers, whose internal architecture was briefly explained in Section 3. Each one is followed by a dropout layer that randomly discards a part of the input to reduce overfitting. Finally, there is a fully-connected layer of size 3 that, together with the Sparse Categorical Cross-entropy loss function assigned to the network, performs the classification into one of the three classes. The loss function, or cost function in the more general terms of optimization problems, represents the error that must be minimized by the training process. The specific representation of the error depends on the particular function assigned to the network.
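Putting the described layer sequence together, a minimal Keras sketch might look as follows. This is our own reconstruction, not the authors' published code: the layer width of 32 is taken from Table 4 as reported below, while the dropout rate and activation functions are our assumptions.

    import tensorflow as tf
    from tensorflow.keras import layers

    def build_model(w, n_features=4, units=32, dropout=0.2):
        """Sketch of the described RNN: Dense feature layer -> BatchNorm ->
        3 x (LSTM + Dropout) -> Dense(3) softmax classifier."""
        return tf.keras.Sequential([
            # Dense on a (w, n_features) input acts per time step.
            layers.Dense(units, activation="relu", input_shape=(w, n_features)),
            layers.BatchNormalization(),
            layers.LSTM(units, return_sequences=True),
            layers.Dropout(dropout),
            layers.LSTM(units, return_sequences=True),
            layers.Dropout(dropout),
            layers.LSTM(units),          # last LSTM returns only its final state
            layers.Dropout(dropout),
            layers.Dense(3, activation="softmax"),  # one output per activity class
        ])

    model = build_model(w=120)  # e.g. 1200-sample windows after decimation by 10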
For the Categorical Cross-entropy function, the error is as follows:

E(w) = −(1/N) Σ_{i=1}^{N} log ŷ_i[y_i]

that is, the negative mean log-probability assigned to the true class, where w is the set of model parameters, e.g., the weights of the RNN, N is the number of input test features, and y_i and ŷ_i are the true and predicted classes, respectively, expressed numerically. The intermediate layers have size 32; this hyper-parameter was determined experimentally, starting with a larger value and decreasing it until the accuracy started to vary significantly. Table 4 shows the details of the individual layers. The RNN, as built in this configuration, has 25,283 trainable parameters.

Hardware and Software

For the first part of the design and hyper-parameter optimization, the RNN was developed with TensorFlow 2.4.1 and Keras 2.4.0. The network and the related algorithms were initially developed on the Google Colaboratory platform; the final computations were then performed on a computer with an Intel Core i7-6800K CPU, 32 GiB of RAM and an NVIDIA GeForce GTX 1080 GPU. For the embedded part, we tested the RNN on a Cloud-JAM L4 board (https://www.rushup.tech/jaml4, accessed on 1 June 2021), which, thanks to its small form factor and integrated Wi-Fi, can represent a valid prototyping base for a wearable system. While it features a set of inertial and environmental sensors, it is not a complete system with a PPG sensor and the other needed features. Nevertheless, it allows testing the RNN on real hardware and evaluating its performance in terms of memory and execution time, should a full-featured monitoring system be designed. The classification of test data is done in real time by providing input data to the board from the test set via a serial interface. This also ensures reproducibility of the results with respect to the other tests. The porting of the neural network to the STM32 architecture is made possible by a software framework from ST, named "STM32Cube.AI" [41] (current version 6.0.0), integrated in the STM32Cube IDE. The software is a complete solution to import a Keras (or other) model, analyze it for compatibility and memory requirements, and convert it to an optimized C implementation for the target architecture. The generated network can then be evaluated with test input data, both on the computer and on the actual device, to obtain various metrics, such as execution time, number of specific hardware operations and accuracy. All the software developed for this article is publicly available at https://github.com/MAlessandrini-Univpm/rnn-ppg-har, accessed on 1 June 2021, published in July 2021.

Experimental Results

The final RNN was tested on both the computer and the MCU, with several decimation factors. For every factor, the network was trained and tested with the following parameters:

• Windows of 1200 samples (before decimation) and 50% overlapping.
• Data augmentation applied.
• Five subjects used for training, with no further split for validation.
• Test performed on the last two subjects, not involved in training.
• A total of 100 training epochs.

In addition to the other hyper-parameters already discussed, the number of epochs was chosen experimentally by examining the training accuracy and loss value during the training stage. Figure 8 shows the progress of accuracy (estimated on the training material itself) and loss with respect to the training epochs for the network with no downsampling (original data at 400 Hz). It can be seen that at about 100 epochs, the values reach convergence.
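In Keras terms, training under this setup reduces to a compile-and-fit pair; a sketch continuing the model defined earlier (the optimizer and batch size are not specified in the text and are our assumptions, while the loss and the 100 epochs follow the setup above):

    model.compile(
        optimizer="adam",  # optimizer choice assumed, not stated in the paper
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    # X_train: array of shape (n_windows, w, 4) of oversampled training windows;
    # y_train: integer class labels in {0, 1, 2}. Both are placeholders here.
    history = model.fit(X_train, y_train, epochs=100, batch_size=64)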
Table 5 shows the accuracies and resource usage obtained by the training and the final test for the various RNNs, for both the computer and the MCU. On the computer side, the reported times are the total times for the training and test stages, respectively. On the embedded system, every RNN requires a given amount of flash and RAM memory, reported by the framework during the initial analysis. Flash memory requirements do not depend on the sample rate, but only on the network architecture, namely, the quantity of weights and other parameters that are read-only values after the training is done. As shown, the amount of flash memory required is well below the available quantity. RAM memory, on the other hand, is more limited (96 KiB in this case) and its usage is strongly dependent on the size of the input data (and so on the sample rate). Moreover, part of the RAM is needed by the program besides the data structures belonging to the RNN. It can be seen that not all the configurations can fit in RAM; combinations that would require more than 100% of RAM could not be executed on the MCU. (An alternative practice to fit a DNN model to a constrained architecture is converting it to the TensorFlow Lite format. Unfortunately, the current STM32Cube.AI version, 6.0.0, does not support some specific operations generated by the TensorFlow Lite converter for our model.) Timing results are computed by running the RNN on the actual device (see Section 5.1). A dedicated firmware application is provided by the framework; the IDE tool can communicate with such an application on the board, send it the test data to make it run the neural network inference on the hardware and, finally, obtain the statistics on performance. Every time a different model is used, a series of operations is needed: generating the code, compiling it, programming it into the MCU flash memory and, finally, running the test. Presumably due to a limitation of the firmware validation application, the program stops working if the input and reference data provided are too large, so it was not possible to use the full test data (consisting of more than 2000 rows); a subset of the data (100 rows) had to be used. Since the application reports the average time needed for every inference, the timing results are still meaningful. Indeed, the reported test time for the MCU is the average time for a single data input. Regarding the accuracy, to have a meaningful comparison with the results on the computer using the full test data, we referred to the validation performed by the toolkit on the computer; this uses the same C code generated for the MCU and so is expected to provide equivalent numerical results. The CPU percentage usage was computed as the ratio between the average inference time reported by the validation application and the duration of a data window (3 s), multiplied by a factor of two to account for the 50% overlapping of the data windows. This parameter gives an estimate of the capability of the embedded system to handle the data classification in real time and of the CPU time remaining for other concurrent activities. The table also reports the number of MACC operations, in rounded thousands, required for a single inference. It can be seen from the results that the accuracy does not decrease when downsampling the data down to 10 Hz (in fact, it actually increases), corresponding to a CPU usage of 10%, leaving plenty of execution time for other concurrent activities, or alternatively, allowing a reduction of the CPU clock frequency to achieve lower power consumption.
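The CPU-usage figure follows from simple arithmetic; in this sketch the inference time is an illustrative value, not a measurement from the paper:

    def cpu_usage(inference_time_s, window_s=3.0, overlap_factor=2):
        """Fraction of CPU time spent on inference: one inference per window,
        doubled because 50% overlap yields two inferences per window duration."""
        return inference_time_s / window_s * overlap_factor

    print(f"{cpu_usage(0.15):.0%}")  # a hypothetical 150 ms inference -> 10%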
Note that the CPU usage does not include data pre-processing, that is, normalization of the mean value and/or standard deviation (see Section 4.1), which would be needed if the data were acquired in real time. Those operations are much simpler than the RNN inference, and so should not add significant overhead. It can also be seen that the accuracies achieved by the MCU implementation are identical to the ones obtained on the computer. This is presumably due to the differences between the two models being relatively small: apart from the limited precision of the microcontroller FPU (32 bits), the model does not require further compression or quantization to fit on the embedded system. Figure 9 shows the confusion matrix from the classification of test data in the same setup. It can be seen that the squat and stepper activities are the ones suffering from the larger mistake rates, while the resting activity is recognized correctly in 98% of the cases. This may be due in part to the amount of original input data being substantially smaller for the squat and stepper activities with respect to resting. In the current setup, the accuracy of the testing stage reaches a maximum of 95.54% for a decimation factor of 40. While splitting the data set into five training subjects and two testing subjects is a natural choice, the limited size of the data set can lead to a bias in the results, according to the chosen partition. Moreover, it can be seen from Table 5 that, by increasing the decimation factor, the difference between the training and testing accuracies increases. To test the effect of such a bias, we repeated the previous tests with a leave-one-subject-out cross-validation strategy. This means testing seven models for every experiment, in which six subjects are used for training and one (different each time) for testing. Table 6 shows the test accuracy for this setup, averaged over all the models. Since reducing the test material with respect to training can increase the overfitting effect, we repeated the tests with 50 epochs in addition to 100. It can be seen that in this configuration the accuracies are significantly lower. Again, this can be explained by the data set being of limited size, so that a single subject may not be representative enough to be used for testing. Indeed, if one examines a single case more closely, for example, the one at decimation 40 and 100 epochs, which results as the best one in Table 5, it can be seen that a few subjects can negatively influence the average results, while most of them have accuracies similar to the better ones reported earlier. This is shown in Table 7. This, again, confirms that the limited size of the data set can limit the generality of the results, producing a strong bias according to the subject partition. A wider data set could solve these kinds of problems and provide more general results; this can be a subject for future work in this field. Table 8 reports a list of the state-of-the-art works related to HAR in terms of the employed algorithm, type of signal, data set used for experimentation, number of classes for each data set, hardware used for testing, and performance.
The metrics commonly used to evaluate the validity of HAR algorithms are accuracy and F1 score: accuracy is the ratio of the sum of true positives (TP) and true negatives (TN) to the total number of records; the F1 score is an evaluation of the test's accuracy calculated as a weighted average of precision and recall, where precision is defined as TP/(TP + FP), with FP being false positives, and recall as TP/(TP + FN), with FN being false negatives. By making a comparison with the methods present in Table 8, an evaluation of the contribution of the proposed work can be made. Regarding the data, accelerometer and gyroscope signal sources are the most commonly used in the state of the art, since these signals are simple to acquire. Accordingly, many works focused on the popular and publicly available UCI HAR data set, which contains six activities (walking, walking upstairs, walking downstairs, sitting, standing, lying down). Data sets containing PPG signals, however, are relatively less common and more limited in the number of represented activities. PPG remains an interesting signal source because a PPG sensor is already embedded in smartwatches and wristbands and can either be used alone, when other HAR sensors are unavailable, or combined with them to improve recognition performance; moreover, this sensor can be used to monitor different physiological parameters in one device. Finally, as can be seen, the results obtained with the proposed method are in line with those of the state of the art, especially considering the few works that have experimented with an implementation on microcontrollers.

Conclusions

In this paper, an RNN was built for human activity recognition, using PPG and accelerometer data from a publicly available data set. The RNN was then ported to an embedded system based on an STM32 microcontroller, using a specific toolkit for porting the network model to the mentioned architecture. The results show that an accuracy of more than 95% is achieved in the classification of test data, and that the sample rate of the acquired data can be reduced down to 10 Hz while maintaining the same accuracy. This, in turn, allows the network to be run on the embedded device using modest hardware resources, paving the way to a fully autonomous activity classifier implemented as a wearable embedded device, using commonly available and cheap microcontrollers.
Charting Galactic Accelerations: When and How to Extract a Unique Potential from the Distribution Function

The advent of datasets of stars in the Milky Way with six-dimensional phase-space information makes it possible to construct empirically the distribution function (DF). Here, we show that the accelerations can be uniquely determined from the DF using the collisionless Boltzmann equation, provided the Hessian determinant of the DF with respect to the velocities is non-vanishing. We illustrate this procedure and requirement with some analytic examples. Methods to extract the potential from datasets of discrete positions and velocities of stars are then discussed. Following Green & Ting (arXiv:2011.04673), we advocate the use of normalizing flows on a sample of observed phase-space positions to obtain a differentiable approximation of the DF. To then derive gravitational accelerations, we outline a semi-analytic method involving direct solutions of the over-constrained linear equations provided by the collisionless Boltzmann equation. Testing our algorithm on mock datasets derived from isotropic and anisotropic Hernquist models, we obtain excellent accuracies even with added noise. Our method represents a new, flexible and robust means of extracting the underlying gravitational accelerations from snapshots of six-dimensional stellar kinematics of an equilibrium system.

INTRODUCTION

In galactic astronomy, a fundamental problem is to extract the underlying gravitational potential from the kinematics of a tracer population. If stars are moving on circular orbits in a spherical potential, then matching the centrifugal force to the gravitational one gives the rotation curve, and by extension the potential. Elaborations of this basic idea to stellar streams have proved to be one of the most powerful methods available to us today (e.g., Lynden-Bell 1982; Johnston et al. 1999; Bowden, Belokurov & Evans 2015; Erkal et al. 2019; Malhan & Ibata 2019). If the stellar population is not kinematically cold, the traditional way in which the problem is tackled is via the Jeans equations (Binney & Tremaine 2008, chap. 4). Given measurements of the second velocity moments and the density of the tracer population, the Jeans equations can be solved to yield the potential. There are numerous applications of this method, both to the Milky Way (e.g., King et al. 2015; Bowden, Evans & Williams 2016; Nitschai, Cappellari & Neumayer 2020) and to external galaxies (e.g., Cappellari 2008; Walker et al. 2009). Some studies have instead worked directly with the distribution function, fitting some assumed parametric form to the observed stellar data (e.g., Binney & Piffl 2015; Williams & Evans 2015; Posti & Helmi 2019). More rarely, the distribution function is constructed directly from the data, as in Kuijken & Gilmore (1989)'s numerical Abel inversion of the vertical tracer density. This, though, relies on the assumption that the vertical and in-plane dynamics are decoupled, and so is not of general applicability. However, the advent of the Gaia satellite (Gaia Collaboration 2016) has made possible the empirical construction of the full phase-space distribution function for stellar populations in the Milky Way, and perhaps even for some of its satellite galaxies. The data now comprise the full positions and velocities of many millions of stars. The process of averaging to obtain the second velocity moments does not do justice to the richness of the data.
Green & Ting (2020) recently raised the possibility of direct determination of the gravitational potential from the distribution function using the collisionless Boltzmann equation itself. This is the continuity equation satisfied by the distribution function in the six-dimensional phase space of positions and velocities. At every location in physical space, the collisionless Boltzmann equation provides a single constraint on the three unknown components of the gravitational force. Thus, it is unclear if the identification of a stationary distribution function is sufficient to specify uniquely the gravitational potential (modulo an additive constant). So, the first aim of our paper is to establish the conditions under which the potential can be uniquely recovered, given the distribution function. The second aim of our paper is to provide a working algorithm to extract the potential. Whereas Green & Ting (2020) proposed a neural network, we instead utilize an efficient and accurate semi-analytic method, based on a direct solution of the collisionless Boltzmann equation. We demonstrate the efficacy of our method on mock datasets sampled from isotropic and anisotropic distribution functions of galaxy models, including the effects of errors.

THE COLLISIONLESS BOLTZMANN EQUATION AND THE POTENTIAL

Here, we address the theoretical question that underlies all this work: namely, when is the potential uniquely specified by the distribution function? We prove a uniqueness theorem in Section 2.1 subject to certain conditions, and investigate the instances when the conditions are violated in Section 2.2.

Uniqueness theorem

If F(p; x) is a phase-space distribution function (DF) in equilibrium in the static potential Φ(x), then it is an integral of motion of the Hamiltonian

H = (1/2) Σ_{j,k=1}^{3} g^{jk} p_j p_k + Φ(x).

Here p = (p_1, p_2, p_3) is the momentum conjugate to the coordinate set x = (x_1, x_2, x_3), with metric coefficients g_{ij} and inverse g^{ij}. A mathematical representation of F being an integral of motion is given by the vanishing Poisson bracket of the integral F with the Hamiltonian H: namely,

{F, H} = 0. (1)

Considered as a partial differential equation for F, this is equivalent to the (time-independent) collisionless Boltzmann equation (CBE). Since equation (1) is a linear homogeneous equation for F, any function of solutions is also a solution. That is, the CBE only describes a (necessary) condition for the DF to be stationary and cannot uniquely determine the DF for any given potential. In fact, physical considerations make it obvious that many different DFs can indeed be in equilibrium with the given potential. On the other hand, if a stationary DF is known, the CBE may also be interpreted as a partial differential equation for the potential. Here the question is whether the given DF (or, more generally, an integral of motion) can determine a unique potential through the CBE. The CBE is linear in Φ (albeit non-homogeneous) and so there exists a gauge freedom such that, if Φ_0 is a particular solution, the function Φ_0 + G(I), where G(I) is an arbitrary function of a particular solution I of the homogeneous counterpart, also satisfies the same inhomogeneous differential equation. However, the potential Φ = Φ(x) is a function of only the configuration-space coordinates, whereas the CBE is a partial differential equation in phase space.
In other words, we must only consider the solutions that are also constant along any direction in momentum space; that is, the solution must also be subject to the constraints ∂Φ/∂p_1 = ∂Φ/∂p_2 = ∂Φ/∂p_3 = 0. Are these then sufficient to uniquely determine the potential Φ for the given DF? Let us suppose that a DF F(p; x) is known to be stationary in the potential Φ_0(x). Then it follows that

{F, H_0} = 0, with H_0 = (1/2) Σ_{j,k} g^{jk} p_j p_k + Φ_0(x). (2)

If there exists another potential Φ in which the same F is also a stationary DF, the potential Φ satisfies the CBE with F in equation (1), or equivalently equation (2) but with Φ_0 → Φ. Eliminating the common terms between the two CBEs, we can construct a homogeneous linear partial differential equation for the difference Φ − Φ_0:

Σ_i (∂F/∂p_i) ∂(Φ − Φ_0)/∂x_i = 0. (3)

Here Φ − Φ_0 is a function of only the real-space coordinates (x_1, x_2, x_3), whereas F is in general a function of phase space. Thus, taking the partial derivative with respect to each of the momentum components results in the set of three differential equations:

Σ_i (∂²F/∂p_j ∂p_i) ∂(Φ − Φ_0)/∂x_i = 0, for j = 1, 2, 3. (4a)

Since the Hessian matrix [∂²F/∂p_i ∂p_j] is real symmetric, it is diagonalizable, at least locally, by a point-wise orthogonal transformation. In the local coordinates diagonalizing the Hessian (in which ∂²F/∂p_i ∂p_j = 0 for i ≠ j), equations (4a) reduce to

λ_i ∂(Φ − Φ_0)/∂q_i = 0 (no summation).

Therefore, if λ_i = ∂²F/∂p_i² ≠ 0 for a direction in the transformed coordinates, then ∂(Φ − Φ_0)/∂q_i = 0 along the conjugate coordinate direction associated with the non-zero eigenvalue λ_i. If m is the rank (i.e. the number of non-zero eigenvalues) of the Hessian, the difference Φ − Φ_0 is consequently an arbitrary function of 3 − m functionally-independent functions q_j = q_j(x_1, x_2, x_3), which are the coordinate functions corresponding to the eigenvectors associated with the null eigenvalues. In particular, if the Hessian determinant is non-vanishing, i.e.

det[∂²F/∂p_i ∂p_j] ≠ 0, (5)

then λ_i ≠ 0 for all i and m = 3. Solving equations (4a) as a series of linear equations for ∂(Φ − Φ_0)/∂x_i then results in Φ = Φ_0 + C, where C is an arbitrary constant; that is, the potential Φ(x) satisfying the CBE for a given DF, if it exists, is essentially unique up to an additive constant (resulting in the identical gravitational acceleration field). In other words, the non-vanishing Hessian determinant of equation (5) is a sufficient condition for the uniqueness of the potential for a given stationary DF.

2.2 Are there physical DFs that do not specify a unique potential?

If the Hessian [∂²F/∂p_i ∂p_j] is singular, there exists a local momentum-space coordinate system (p̃_1, p̃_2, p̃_3) such that the directional derivative of F in a fixed coordinate direction must be constant in momentum space. That is to say, the singularity condition indicates that at least one eigenvalue, which is the second-order partial derivative in the corresponding coordinate direction, must be zero (i.e. λ_j = ∂²F/∂p̃_j² = 0 for some j). Since the coordinates can be chosen to be orthogonal so that all the second-order cross partial derivatives vanish (∂²F/∂p̃_i ∂p̃_j = 0 if i ≠ j), there then exists a coordinate system in which all the second derivatives involving one particular coordinate are zero (i.e. ∂²F/∂p̃_i ∂p̃_j = 0 for all i and some j). Therefore the directional derivative of F in that coordinate direction must be constant in momentum space; that is, ∂F/∂p̃_j = k_0(x) for some j. In the original coordinates, this implies that Σ_{i=1}^{3} k_i ∂F/∂p_i = k_0, where the k_i are constants in momentum space (though they may be functions of the real-space position) and at least one of {k_1(x), k_2(x), k_3(x)} is nonzero.
In fact, if there are two or more distinct potentials satisfying the CBE with the given DF, equation (3) further indicates that there exists {k_1, k_2, k_3} such that k_1² + k_2² + k_3² ≠ 0 and (k_1 ∂/∂p_1 + k_2 ∂/∂p_2 + k_3 ∂/∂p_3)F = 0. In other words, if the function F is an integral of motion in two (or more) distinct potentials (distinct in the sense of generating different gravitational accelerations), then there exists a fixed direction (k_1, k_2, k_3) in momentum space that is tangent to the level surfaces of the DF everywhere in momentum space. However, the integral curve of a constant vector is a straight line, and momentum space is topologically equivalent to R³. Consequently, all the level surfaces of F have infinite extent and the inverse image under F of any real interval in momentum space cannot have compact support (unless empty). That is to say, such a function F is not integrable and cannot be a physical DF. In light of this, we argue that the unique determination of the potential is a property related to the global behaviour in momentum space. That is to say, the CBE only describes the balance amongst the gradients of the DF and the external acceleration field in the local neighborhood of a fixed phase-space location, whilst the external gravitational acceleration is shared across the whole momentum space at a fixed real-space position. By joining all the constraints on the acceleration field coming from the CBE at different momentum-space locations (but at a fixed real-space position), we can narrow down to the unique acceleration. This fact is also demonstrated by the examples presented in the following section (Sect. 3), where a unique potential actually follows from insisting that the CBE holds for all values of the momentum components.

EXAMPLES

To gain insight into the steps needed to extract a unique potential from the CBE, we first look at some analytic examples.

Ergodic distributions: a unique potential

We start by examining the case of an ergodic DF F = f(E) in a fixed potential Φ_0(x), where E = (1/2)v² + Φ_0 is the specific energy and is known as a function of the phase-space coordinates. Here, no further assumption is made on the self-consistency of the system, and so the potential need not be spherically symmetric (cf. An, Evans & Sanders 2017). In Cartesian coordinates, the CBE is then reducible to the differential equation on the difference between any two possible potentials:

f′(E) Σ_i v_i ∂(Φ − Φ_0)/∂x_i = 0.

Assuming that the DF itself is not constant, that is, f′(E) ≠ 0, then in order for this to hold everywhere in phase space, ∂(Φ − Φ_0)/∂x_i = 0 for all i. Therefore Φ = Φ_0 + C and the potential is unique (up to an additive constant).

Separable potentials with third integrals

If there exists a DF of the form F = f(J), where J is a quadratic function of the momenta, then the resulting CBE in equation (1) reduces to a cubic polynomial equation in the p_i. This is of course the old "ellipsoidal hypothesis" (see Chandrasekhar 1939; Camm 1941; Evans & Lynden-Bell 1991, and references therein). Assuming that the DF is stationary, the CBE should hold for any p_i, and so the coefficients of all the monomial terms (p_i p_j p_k, p_i p_j, p_i, etc.) must vanish identically. It is then found that the coefficients of the cubic and quadratic terms respectively only involve the tensor K_ij and the vector X_i, and the first-order partial differential equations resulting from setting them to zero restrict the possible forms for K_ij and X_i (An 2013, and references therein). However, if the DF is already given and known to be stationary, these conditions must hold automatically.
On the other hand, setting the coefficients of the linear terms to zero results in the set of three differential equations

Σ_j K_ij ∂Φ/∂x_j = ∂ξ/∂x_i. (9)

If ξ(x) is known, these can be considered as coupled differential equations for the potential Φ. Provided that the matrix [K_ij] is invertible (here also note that K_ij = ∂²J/∂p_i ∂p_j), equation (9) can be uniquely solved for ∂Φ/∂x_i, so that

∂Φ/∂x_i = Σ_j (K⁻¹)_ij ∂ξ/∂x_j, (10)

where [K⁻¹] is the inverse of [K_ij]. In other words, if the local DF that is a function of a non-degenerate quadratic form of the canonical momenta is stationary, the gravitational acceleration is uniquely specified in the neighborhood. As a concrete example, suppose that there exists a stationary DF of the form F = f(J), where J (whose explicit expression involves the constants a and k) is the third integral of the Kuzmin (1956) disc potential in cylindrical polar coordinates (R, φ, z), and ℓ = |ℓ| = (ℓ·ℓ)^{1/2} is the magnitude of the specific angular momentum. Here, ℓ = x × ẋ = (R ê_R + z ê_z) × (Ṙ ê_R + R φ̇ ê_φ + ż ê_z), and so it follows that the CBE reduces to a polynomial identity in the momenta (eq. 12). Since this holds for all (p_R, p_φ, p_z), we have ∂Φ/∂φ = 0 together with a pair of linear relations for the potential gradients (eq. 13), where we have used ∂|z|/∂z = z/|z| (NB: ∂ξ/∂z does not exist at z = 0). If a ≠ 0, we can solve equation (13) for ∂Φ/∂R and ∂Φ/∂z (eq. 14), which satisfies the compatibility condition; equation (14) can then be directly integrated to yield a unique solution (eq. 15), which recovers the axisymmetric potential of the Kuzmin disc up to an additive constant C.

Integrals of motion due to the symmetry of the potential

Let us consider the DF F = f(ℓ_z), where ℓ_z = ℓ·ê_z is the component of the specific angular momentum in a fixed (say, Cartesian z) direction. Technically, any such function cannot be integrable over the whole phase space and so is unphysical. Nevertheless, the CBE merely requires F to be an integral of motion, and so is still applicable. Provided f′(ℓ_z) ≠ 0, this implies that ∂Φ/∂φ = 0, the general solution of which is any axisymmetric potential; that is, an arbitrary function Φ = Φ(R, z) of the two coordinate functions R and z. Also note that ∂F/∂p_R = ∂F/∂p_z = 0 indicates that the only non-zero component of the Hessian [∂²F/∂p_i ∂p_j] is ∂²F/∂p_φ², and so it follows that the rank of the Hessian is 1 as long as ∂²F/∂p_φ² = f″(ℓ_z) ≠ 0. The result is independent of the choice of coordinates, although the calculation may be more complicated. For example, in Cartesian coordinates, ℓ_z = x v_y − y v_x, and the corresponding Hessian may be computed explicitly (eq. 17); unless f′(ℓ_z) = 0, we then have a homogeneous first-order linear partial differential equation for Φ(x, y, z) (eq. 18). Utilizing standard techniques such as the method of characteristics, its general solution is found to be Φ = Φ(x² + y², z), which is again an arbitrary axisymmetric function. Similarly, if a stationary DF (or rather an integral of motion) of the form F = f(ℓ²) is available, the CBE in the canonical phase-space coordinates (p_r, p_θ, p_φ; r, θ, φ) inherited from the spherical polar coordinates (r, θ, φ) is reducible to a constraint on the angular gradients of the potential (eq. 19b). If f(ℓ²) is a non-constant integral of motion, equation (19b) should hold everywhere in phase space (i.e. for any p_θ and p_φ), and so ∂Φ/∂θ = ∂Φ/∂φ = 0. Hence the general solution is any spherically symmetric potential. As for the rank of the corresponding Hessian, we observe that the rank of the matrix [∂²ℓ²/∂p_i ∂p_j] is 2 (independent of the coordinate system), with the radial vector being the eigenvector associated with a null eigenvalue (note ∂ℓ²/∂p_r = 0 in spherical polar coordinates). In addition, the radial vector is also in the null space of the matrix [(∂ℓ²/∂p_i)(∂ℓ²/∂p_j)], thanks again to ∂ℓ²/∂p_r = 0.
Hence, for any f(ℓ²), the radial vector is in the null space of the Hessian matrix. In other words, the Hessian is singular and its rank is at most 2. Since any axisymmetric or spherical potential admits the integral of motion ℓ_z or ℓ², it is not an unexpected result that F = f(ℓ_z) or f(ℓ²) only constrains the associated symmetry of the potential and cannot specify a unique potential. The above examples however demonstrate that such integrals of motion also fail the necessary condition of having a non-singular Hessian in momentum space. Furthermore, we also observe that f(ℓ_z) and f(ℓ²) are not actually integrable in momentum space. That is to say, f(ℓ_z) is independent of p_R and p_z, but both components are unbounded, and so the integral of any non-negative f(ℓ_z) over the whole momentum space is infinite (unless it is identically zero). A similar argument can also be made for f(ℓ²) and the component p_r. We have argued in Section 2.2 that this is not an accident, but that there is a logical connection between the singular Hessian and the non-integrability.

ALGORITHMS FOR EXTRACTING THE GRAVITATIONAL ACCELERATION

Suppose that a stationary DF F is known. How then can we extract the gravitational accelerations? First consider the CBE in arbitrary curvilinear orthogonal coordinates (in which the line element is ds² = h_1² dx_1² + h_2² dx_2² + h_3² dx_3²), rearranged to be

Σ_i (1/h_i) (∂F/∂v_i) (∂Φ/∂x_i) = S, (22a)

where v_i = h_i ẋ_i is the velocity component projected onto the orthonormal frame, and S (defined in eq. 22b) collects the remaining terms of the CBE, which are built from the phase-space gradients of F and the velocities but do not involve the potential. Next, let us observe that ∇Φ is constant over all velocity-space points at a fixed real-space position. Hence, the subset of equations (22a) sampled over the range of velocity space at a fixed position results in an over-determined (assuming there are more than three sampling points) system of linear equations for (∂Φ/∂x_1, ∂Φ/∂x_2, ∂Φ/∂x_3). Technically, we only need samples at three different velocity-space points so as to uniquely determine the local gravitational acceleration, provided that the three vectors ∇_v F at the three sampled points (where ∇_v = (∂/∂v_1, ∂/∂v_2, ∂/∂v_3) is the gradient operator in velocity space) are mutually linearly independent. In fact, the non-singular Hessian of F as discussed in Section 2 guarantees the existence of such three points in velocity space (and so is a sufficient condition for the unique determination of the potential). On physical grounds, the over-determined system of equations (22a) resulting from more than three velocity-space points at a single spatial location should be consistent and must possess a unique solution. However, due to uncertainties in the data, the exact solution may not necessarily be found with the actual set of equations in practice. Instead, the problem should be approached by methods such as least squares: that is, minimizing

χ² = Σ ς⁻² [ Σ_i (1/h_i)(∂F/∂v_i)(∂Φ/∂x_i) − S ]², (23)

where S is as defined in equation (22b), and the summation is over a suitably-chosen sample of velocities with weights ς⁻². Finding the extrema with respect to ∇Φ = (∂Φ/∂x_1, ∂Φ/∂x_2, ∂Φ/∂x_3) is then equivalent to solving the set of linear equations

Σ_j A_ij ∂Φ/∂x_j = b_i, where A_ij = Σ ς⁻² (1/h_i h_j)(∂F/∂v_i)(∂F/∂v_j) and b_i = Σ ς⁻² (S/h_i)(∂F/∂v_i), (24)

which is basically the set of standard normal equations. Therefore, provided the matrix [A_ij] defined as in equation (24b) is invertible, the ∇Φ that minimizes equation (23) at a given position can be found through a matrix inversion. Alternatively, one may also attempt to minimize equation (23) summed over data points ranging over a region of space, in order to obtain the potential as an optimizing functional solution.
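In Cartesian coordinates (h_i = 1), the normal-equation solve of equation (24) amounts to a small weighted least-squares problem per position. A minimal NumPy sketch under these assumptions, with arrays of sampled DF gradients at a single position (names are ours):

    import numpy as np

    def local_acceleration(v, gradF_x, gradF_v, weights=None):
        """Solve the over-determined CBE system for grad(Phi) at one position.

        Cartesian sketch: each velocity sample k gives
            gradF_v[k] . grad(Phi) = v[k] . gradF_x[k]   (the source term S).
        v, gradF_x, gradF_v: arrays of shape (K, 3); weights: optional (K,).
        """
        S = np.einsum("ki,ki->k", v, gradF_x)               # S_k = v_k . grad_x F_k
        W = np.ones(len(S)) if weights is None else weights
        A = np.einsum("k,ki,kj->ij", W, gradF_v, gradF_v)   # normal matrix A_ij
        b = np.einsum("k,ki,k->i", W, gradF_v, S)           # right-hand side b_i
        return np.linalg.solve(A, b)    # grad(Phi); the acceleration is -grad(Phi)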
In principle, this region-wise minimization can be done with a suitably-chosen parametric function for the potential or non-parametrically (pixelized or otherwise), which is closer to the implementation proposed by Green & Ting (2020) to recover the potential. After reconstructing the DF from the discrete dataset via normalizing flows, Green & Ting (2020) characterized the potential as an optimized feed-forward neural network minimizing a cost function, which is defined similarly to equation (23) but with the absolute value instead of the square, and which also includes a penalty for negative density. This procedure combines the determination of the local accelerations and their integration into the potential as one single optimization problem. Nonetheless, the actual physical constraints due to the CBE are in the form of an algebraic relation on the local acceleration, and so the measurements of the accelerations at different spatial locations should in principle be independent (except for possible systematic correlations relating to the determination of the DF).

EFFECTS OF DISEQUILIBRIUM

If the stellar system is not in equilibrium, its DF F(p; x; t), by definition, is no longer an integral of motion. Provided that collisional effects are negligible, the evolution of the DF is still governed by the CBE, but the CBE must now include explicit time dependence: D_t F = ∂_t F + {F, H} = 0, where D_t F is the (Lagrangian) phase-space convective derivative and ∂_t F = ∂F/∂t is the (Eulerian) time rate of change of F at a fixed phase-space coordinate, whilst {F, H} is the same as in equation (1). We observe that the argument in Section 2.1 still holds for the time-dependent CBE as long as ∂_t F is also a known quantity. In particular, equation (22a) maintains the same form but the right-hand side additionally includes the ∂_t F term (S → S + ∂_t F), and so the determination of the acceleration is still possible if ∂_t F is known throughout phase space. However, ∂_t F is impossible to measure directly within a practical time scale barring a few exceptional situations; by contrast, if ∇Φ is known independently, ∂_t F may instead be determined using the CBE. If ∂_t F is considered unknown, the system of equations (22a) becomes under-constrained and the problem is technically insoluble without some additional restrictive assumptions on the behaviours of ∇Φ or ∂_t F. Nevertheless, we may still infer effects due to the system not being in equilibrium. If the time derivatives are neglected when not warranted, that will introduce a systematic bias. Notably, the linear system of equations (22a) would then not necessarily be consistent even if all the phase-space derivatives of F were known exactly. Whilst equation (24) still has a unique solution despite the system of equations (22a) being inconsistent, the resulting solution is actually offset by the "sample average" of ∂_t F. That is to say, if ∂Φ_s/∂x_i is the solution of inverting equation (24) with ∂_t F = 0 (whereas ∂Φ/∂x_i is the true gravitational acceleration component), then

∂Φ_s/∂x_i = ∂Φ/∂x_i − B_i, with B_i = Σ_j (A⁻¹)_ij Σ ς⁻² (∂_t F) (1/h_j)(∂F/∂v_j), (25)

where (A⁻¹)_ij is the matrix element of the inverse of [A_ij] in equation (24b). This follows from the fact that ∂Φ/∂x_i is actually the solution of equation (24) with S → S + ∂_t F. If we insert the solution (eq. 25) back into equation (22a) and consider the departure from equality at each sample point, the residual at each sample point reduces to the sum of the time derivative ∂_t F and the projection of the bias (i.e. B_i) onto ∇_v F.
We note that the B_i are unknown but fixed constants, and so the last term may also be considered as ∇_v F projected onto a fixed (albeit unknown) direction, which behaves in a predictable systematic pattern. Consequently, it would be a smoking gun for a system in disequilibrium if the observed residual at each sample point exhibited a systematic behaviour over velocity space not consistent with a projection of ∇_v F onto a fixed direction.

IMPLEMENTATION

Given a known DF, equation (24) furnishes us with a way to calculate gravitational accelerations, under the assumption of equilibrium. We now wish to test this technique on a mock dataset. Here, we demonstrate a complete pipeline from a six-dimensional (6D) stellar kinematics dataset to a map of accelerations. This will necessitate an additional step in the procedure, that is, obtaining the underlying DF of the data. Whereas a conventional approach might assume a parametric form for the DF, we instead follow Green & Ting (2020) and construct a non-parametric DF directly from the data. Our method can thus be summarized: (i) employing a normalizing flow technique, we reconstruct a non-parametric DF from the mock data; (ii) with this reconstructed DF in hand, we exploit eq. (24) to calculate accelerations. This exercise serves mainly as a proof of concept. In a subsequent paper (Naik et al., in prep.), we shall apply the same methodology to local stellar kinematics, with a view towards mapping the acceleration field (and thence the distribution of matter) in the solar neighbourhood. It is worth noting that an acceleration field calculated with our method is not guaranteed to be physical, in the sense that it might show negative divergences (i.e. negative mass densities) or non-zero curls (i.e. a non-conservative force). We view this feature as an advantage: the existence of such non-Newtonian accelerations can serve as a valuable post hoc test of our method. If they are found to be robust, they might hint at disequilibrium features or non-gravitational forces (even modified gravity). On the other hand, the requirements of non-negative divergence and vanishing curl can be imposed a priori if so desired, by adding penalty terms to the loss function used to train the normalizing flow. Such non-Newtonian accelerations are then still possible in principle, but heavily suppressed.

Ergodic models

We consider a simple galaxy halo model in which the DF self-consistently generates both the potential and the density. We generate a mock 6D dataset using this DF, and then attempt to derive the underlying acceleration field from the mock data. For this model, we adopt the spherical Hernquist (1990) profile, specified by the potential-density pair

Φ(r) = −GM/(r + a),  ρ(r) = (M/2π) a / [r (r + a)³], (27)

where M and a are respectively the galaxy mass and scale radius. The isotropic (ergodic) DF for this model is given analytically as a function of ε = −Ea/(GM) ≥ 0 (here, −E is the specific binding energy of a star; eq. 28). In this case, the phase-space gradients of F are determined solely by the gradients of the energy E. A visualization of the isotropic DF, for M = 10¹⁰ M_⊙ and a = 5 kpc, is given in the left-hand panel of Figure 1. There is a clear curve above which the DF is everywhere zero: viz. the escape velocity v_esc = √(2GM/(r + a)). With this DF, we employ an MCMC technique to sample a mock 6D dataset with 10⁶ stars. For this, we use the affine-invariant ensemble sampler implemented in the software package emcee (Foreman-Mackey et al. 2013). A density plot of this mock dataset is shown in the second panel of Figure 1.
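Producing such a mock catalogue takes only a handful of emcee calls. The following sketch is our own illustration: the log-density uses the standard Hernquist (1990) isotropic DF (the paper's eq. 28) up to its normalization constant, and the walker initialization and chain lengths are illustrative choices, not values from the paper.

    import numpy as np
    import emcee

    G, M, a = 4.3e-6, 1e10, 5.0   # kpc (km/s)^2/Msun, Msun, kpc

    def log_df(theta):
        """log of the isotropic Hernquist DF, up to an additive constant.
        theta = (x, y, z, vx, vy, vz) in kpc and km/s."""
        pos, vel = theta[:3], theta[3:]
        r = np.linalg.norm(pos)
        E = 0.5 * np.dot(vel, vel) - G * M / (r + a)
        q2 = -a * E / (G * M)       # dimensionless binding energy, 0 < q2 < 1
        if q2 <= 0.0 or q2 >= 1.0:
            return -np.inf          # unbound (or formally invalid) state
        q = np.sqrt(q2)
        f = (1 - q2) ** -2.5 * (3 * np.arcsin(q)
             + q * np.sqrt(1 - q2) * (1 - 2 * q2) * (8 * q2**2 - 8 * q2 - 3))
        return np.log(f) if f > 0 else -np.inf

    ndim, nwalkers = 6, 64
    # Rough, overdispersed but bound starting points (illustrative scales).
    p0 = np.random.normal(scale=[3, 3, 3, 30, 30, 30], size=(nwalkers, ndim))
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_df)
    sampler.run_mcmc(p0, 20000, progress=True)
    samples = sampler.get_chain(discard=2000, thin=10, flat=True)  # mock 6D data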
From this mock dataset, we now want to learn the underlying DF by means of a normalizing flow technique (Rezende & Mohamed 2015). Normalizing flows are a relatively new probability density estimation technique, and the basic principle behind them is rather straightforward: a simple base distribution such as a Gaussian is subject to a series (or "flow") of complex (but bijective and invertible) transformations into a target distribution. The parameters of these transformations are then optimized so as to give a target distribution that closely resembles the data. More detailed descriptions of the technique are given in the article by Rezende & Mohamed (2015) first describing normalizing flows, and the recent review articles by Kobyzev, Prince & Brubaker (2020) or Papamakarios et al. (2021). Despite taking a single Gaussian as the starting point, a flow with sufficiently flexible transformations (and sufficiently many of them) is able to mimic arbitrarily complex, multimodal data distributions. In practice, even rather minimalist flow architectures are capable of achieving great complexity (see e.g., Kingma & Dhariwal 2018, for an impressive application of flows in image generation). Another class of density estimation technique capable of emulating arbitrarily complex datasets is kernel density estimation. The advantages of flow-based techniques over kernel-based techniques are two-fold. First, flows are less susceptible to over/under-fitting data (Both & Kusters 2019). The second advantage is more context-dependent. Kernel-based techniques typically require no training beyond simply loading the kernels into memory, and perhaps some tuning of the kernel-width parameter. However, given a dataset of size N, evaluating the kernel density PDF then essentially requires the computation of N kernel functions, which can be costly as N grows large. Flows do require a training procedure, the cost and duration of which depend on the flow architecture and the size and complexity of the dataset in question. However, given a trained flow, evaluating the PDF is then a mere matter of computing a single Gaussian and a small number of transformations, regardless of N. In summary, kernel densities are cheap to train but expensive to evaluate, while flow densities are expensive to train but cheap to evaluate. In our context, we need to train a density estimator only once to learn the DF, but would then like to evaluate it many times, e.g., for the sums in equation (24). This would therefore suggest flows over kernels. Another notable aspect of normalizing flows is that the target distribution is guaranteed to be a well-behaved probability distribution, i.e. positive everywhere and normalized to unity. The positivity requirement is met straightforwardly by working in log-space, but the normalization requirement is more exacting: it restricts the space of usable transformations to bijective and invertible functions. This space is then restricted further by the desire for computational efficiency. Different normalizing flow techniques differ primarily in the details of these transformations, as well as the base distributions and flow architectures. We differ from Green & Ting (2020) in that we employ "masked autoregressive flows" (MAFs; Papamakarios, Pavlakou & Murray 2017). This choice is motivated by the benchmarking of a number of normalizing flow algorithms.
We train an ensemble of 30 MAFs, each with 8 transformations along the flow, each transformation being a neural network with one hidden layer of 64 units. We use the implementation of MAFs in the publicly available software package nflows. The MAFs are trained on the mock data, and thus learn a non-parametric DF that closely resembles the data. This learned DF is shown in the third panel of Figure 1. It is worth emphasizing that, whilst this plot is in two dimensions, the MAFs are trained using 6D data and learn a 6D DF. The plotted values here are taken from a 2D slice through this 6D DF, with y = z = v_y = v_z = 0 (so that x = r and v_x = v_r). The rightmost panel of Figure 1 shows fractional residuals, i.e. F_model/F_exact − 1. Encouragingly, the residuals are less than 5 % throughout most of phase space. In other words, our algorithm is successfully able to reproduce the isotropic Hernquist DF.

Figure 4. The anisotropic Hernquist DF, projected into (v_r, v_t) space at fixed position (r = a). The four panels carry the same meanings as their isotropic analogues in Fig. 1, although some differences are discussed in the text. The normalizing flow technique is also successful at recovering the anisotropic Hernquist DF, albeit with larger residuals than in the isotropic case.

One apparent qualification to this success is the region near the v_esc curve, where the DF is consistently overestimated. The v_esc curve represents a hard edge in the Hernquist DF, and even very flexible non-parametric density estimation schemes can struggle to reproduce such a hard edge. However, this need not be a cause for concern, for the following reason: if we progress to step (ii) of our method and attempt to derive the acceleration at a given spatial location using this learned DF, the right-hand side of equation (24) requires us to choose a number of points in velocity space. At this stage, we are free to choose whichever velocities we like, and we can thus choose to steer well clear of this region near v_esc, which we term a "zone of avoidance". Of course, in real-world applications, one might not know the exact value of v_esc, but one can always make an educated guess (e.g., Williams et al. 2017; Deason et al. 2019). Equation (24) requires the spatial and velocity derivatives of the DF to calculate accelerations. We therefore check whether our technique accurately recovers not just the DF, but also its derivatives. Here, a compelling benefit of the normalizing flow technique is that the learned DF is everywhere exactly differentiable, irrespective of the complexity of the flow architecture. Thus, we can efficiently calculate exact derivatives, obviating the need for potentially noisy finite difference schemes. Figure 2 compares the first derivatives ∂F/∂x and ∂F/∂v_x of the exact and reconstructed DFs, evaluated on a 2D (x, v_x) plane in phase space. Inspecting the residuals in the lower panels of Figure 2, it is apparent that the MAFs are rather successful at accurately recovering the gradients of the DF; the residuals are less than 10 % throughout most of phase space. As seen in Figure 1, there is a problematic region of larger residuals near v_esc. In addition to this, two more such regions are apparent. First, the ∂F/∂v_x residuals grow rather large in the immediate vicinity of v_x = 0. This is the peak of the 1D v_x distribution, and so the nearby gradients are small and susceptible to mis-estimation. Second, the ∂F/∂x residuals show similar issues around x = 0.
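A single member of such an ensemble can be assembled from nflows building blocks; the sketch below follows the stated hyper-parameters (8 autoregressive transforms, 64 hidden units), while the optimizer, learning rate, and the interleaved permutations are our own assumptions. The closing lines illustrate the exact-derivative property: gradients of the learned log-DF come directly from automatic differentiation.

    import torch
    from nflows.flows.base import Flow
    from nflows.distributions.normal import StandardNormal
    from nflows.transforms.base import CompositeTransform
    from nflows.transforms.autoregressive import MaskedAffineAutoregressiveTransform
    from nflows.transforms.permutations import ReversePermutation

    def build_maf(features=6, hidden=64, n_layers=8):
        """MAF over 6D phase space: 8 masked affine autoregressive transforms,
        interleaved with permutations, on a standard-normal base distribution."""
        transforms = []
        for _ in range(n_layers):
            transforms.append(ReversePermutation(features=features))
            transforms.append(MaskedAffineAutoregressiveTransform(
                features=features, hidden_features=hidden))
        return Flow(CompositeTransform(transforms), StandardNormal([features]))

    flow = build_maf()
    optimizer = torch.optim.Adam(flow.parameters(), lr=1e-3)  # assumed settings

    def train_step(batch):  # batch: tensor of shape (N, 6) of phase-space points
        optimizer.zero_grad()
        loss = -flow.log_prob(batch).mean()  # maximum likelihood
        loss.backward()
        optimizer.step()
        return loss.item()

    # Exact derivatives of the learned DF via autograd (note dF = F d(log F)).
    x = torch.randn(10, 6, requires_grad=True)
    logF = flow.log_prob(x).sum()
    (grad_logF,) = torch.autograd.grad(logF, x)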
The same arguments about small gradients being prone to mis-estimation hold here, perhaps exacerbated by the power-law cusp in the Hernquist model. For calculating accelerations, the first problem can be avoided as in the v_esc case, i.e. by sampling velocities that avoid the region around v_x = 0 (and likewise v_y, v_z). However, in the second region, around x = 0, the residuals appear to be consistently large throughout velocity space, suggesting that our calculated accelerations at these very small radii will be biased. With these points in mind, we now progress to step (ii) of our method, and derive accelerations from our learned DF using equation (24). Here, we take 50 points along the x-axis, and at each of these points we sample 10³ velocities for the sums on the right-hand side of equation (24). We perform this sampling by calculating the escape speed v_esc at each spatial point, then uniformly sampling 10⁴ speeds between 0 and 0.9 v_esc. Random directions are then chosen from the unit sphere. Finally, we randomly subsample 10³ velocities from this set, avoiding the region around v_i = 0. After performing this sampling, we have 10³ points in phase space at which we evaluate equation (24) for each spatial location. The results are shown as the "Isotropic, σ = 0" curve in Figure 3. It is clear that the method derives the accelerations in the isotropic Hernquist model very well. The fractional residuals shown in the lower panel indicate an accuracy everywhere at the level of 3 %.

Anisotropic models

We repeat this exercise using a simple anisotropic DF for the Hernquist model (Baes & Dejonghe 2002; Evans & An 2005, 2006), given by equation (29). Now, the DF depends on the magnitude of the specific angular momentum ℓ = r v_t (here v_t² = v_θ² + v_φ²) as well as on the (dimensionless) binding energy ε. As before, we sample one million positions and velocities from this DF, then feed these data to an ensemble of MAFs. Figure 4 is the anisotropic analogue of Figure 1, and shows the exact DF, a density plot of the mock data, the learned DF and the fractional residuals. As the DF is not isotropic, we do not show the DFs projected into (r, v) space, but rather into (v_r, v_t), i.e. radial versus tangential velocity space, at fixed position (r = a). Consequently, the "Data" panel does not show the full dataset as in Figure 1, but only the stars within a small radial slice around r = a. The residuals in the anisotropic DF are generally larger than in the isotropic case, but nonetheless reasonably small, ∼5-10 %. Moreover, there seems to be an additional zone of avoidance here, beyond those already discussed in the isotropic case, around v_t = 0. The source of the large errors here can be seen directly from the form of the DF (29): v_t = 0 means ℓ = 0, so the DF diverges. The probability distribution remains well-behaved, but the MAFs nonetheless struggle to reproduce the sharp rise in probability density at small v_t. Despite these foibles, the accelerations are still well recovered in the anisotropic case. These are shown as the points labelled "Anisotropic, σ = 0" in Figure 3. Indeed, the residuals here are comparable to the isotropic case. One aspect of our procedure worth emphasizing is that the successful calculation of accelerations relies on a judicious choice of velocity samples, steering clear of the "zones of avoidance" in which the DF and its gradients are poorly estimated. We have seen above that the existence and locations of these zones can vary from context to context, and so it might be difficult to know a priori where they are for any given real stellar population.
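The velocity-sampling recipe just described (speeds uniform up to 0.9 v_esc, isotropic directions, rejection of samples near v_i = 0) can be sketched as follows; the small-velocity threshold vcut is an illustrative value of ours, not one quoted in the text:

    import numpy as np

    G = 4.3e-6  # kpc (km/s)^2 / Msun

    def sample_velocities(r, M=1e10, a=5.0, n_draw=10_000, n_keep=1_000, vcut=5.0):
        """Draw speeds uniformly in [0, 0.9 v_esc] with directions from the unit
        sphere, then subsample n_keep draws, rejecting any with |v_i| < vcut
        (vcut, in km/s, stands in for the 'zone of avoidance' near v_i = 0)."""
        v_esc = np.sqrt(2 * G * M / (r + a))
        speeds = np.random.uniform(0, 0.9 * v_esc, n_draw)
        dirs = np.random.normal(size=(n_draw, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # isotropic unit vectors
        v = speeds[:, None] * dirs
        keep = v[np.all(np.abs(v) > vcut, axis=1)]
        return keep[np.random.choice(len(keep), n_keep, replace=False)]

    v_samples = sample_velocities(r=5.0)  # 10^3 velocities at r = a = 5 kpc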
Not knowing these zones in advance is a potential drawback to our method, but it can be readily circumvented by performing tests on mock datasets.

Effect of errors

As a final test, we assess the potential impact of observational errors by adding Gaussian noise to the isotropic dataset, at the 1 % and 10 % levels. The results of this trial are also shown in Figure 3, alongside the original results for the noiseless dataset. Based on this test, it appears that random errors of this magnitude have no appreciable adverse impact on the calculation of accelerations, with residuals still at the percent level. The application of our method to real data is therefore unlikely to be limited by statistical error. Going beyond our simple test, there is a natural way to propagate observational errors in our method: when training an ensemble of MAFs on the data, each MAF could be provided with a slightly different dataset from which to learn, generated from a different realisation of the error distribution. Each member of the ensemble will then have a different learned opinion about the acceleration at a given spatial location, and the spread of these values will incorporate the observational errors.

CONCLUSIONS

The phase-space distribution function (DF) for the stars in the Milky Way is an obvious way to organize the new datasets comprising nearby stars with full six-dimensional phase-space coordinates. One question that follows is what information the DF actually contains about the overall properties of the Galaxy. We have proved that, if the stationary DF of a population is known locally in the neighborhood of a fixed real-space position, then the gravitational acceleration at that location can be uniquely determined from the phase-space gradients of the DF, using the collisionless Boltzmann equation (CBE) under the assumption of dynamical equilibrium. A sufficient condition for this to be true is that the Hessian of the DF with respect to the momenta does not vanish (see eq. 5). In practice, once the CBE is set up locally at more than three independent phase-space points sharing the same real-space coordinates, we have an over-determined system of linear equations for the potential gradients, which can be solved via techniques such as least squares and the normal equations. A practical prescription for how to do this is provided in equation (24). In light of this finding, we address the question of how to empirically reconstruct a DF suitable for local measurements of the gravitational acceleration. Recent developments in machine learning techniques offer great promise in this regard. In particular, Green & Ting (2020) proposed that the DF of stars can be reconstructed from samples of discrete positions and velocities via the method of normalizing flows, and that the underlying potential can be recovered from this empirical DF. We examine this suggestion by devising tests derived from isotropic and anisotropic Hernquist models, using masked autoregressive flows to build the DF. Once built, direct solution of the over-constrained linear equations for the accelerations (eq. 24) is highly efficient, and preferable to the use of a neural network (cf. Green & Ting 2020). The accelerations are everywhere well reproduced with samplings of ∼1000 velocities at any given position. One caveat here is the existence of regions of velocity space in which the DF is poorly estimated, which need to be avoided in the sampling. Tests with the addition of Gaussian noise at the 1 % or 10 % level suggest that the method is stable against errors of this magnitude.
There are a number of evident applications of this method, some of which we are actively pursuing. For example, if we reconstruct the velocity distributions of a homogeneous (in equilibrium) stellar population in the solar neighbourhood from the sample of nearby stars (e.g., Gaia Collaboration 2021), it is possible to measure the local gravity at the Sun's position due to the Galactic potential (Naik et al., in prep.). This has implications both for the measurement of the local dark matter density and for tests of alternative theories of gravity. Equally, the method is potentially applicable to datasets of Milky Way halo stars to measure the mass of the Milky Way and its escape speed. One assumption underlying the implementation of our method is that of dynamical equilibrium. Incorrectly assuming ∂F/∂t = 0 leads to an additive bias in the derived accelerations that is linear in ∂F/∂t. In addition, disequilibrium can manifest itself through the system of equations (22a) sampled at many different velocity-space positions being inconsistent with a single value of ∂Φ/∂x_i (after accounting for observational uncertainties), or equation (24) resulting in different values of the acceleration for distinct choices of samples. There is now a significant body of evidence suggesting the existence of disequilibria in the Milky Way disc (e.g., Antoja et al. 2018; Schönrich & Dehnen 2018; Salomon et al. 2020), which will need to be carefully considered in future applications of our technique to local stellar kinematics. Banik, Widrow & Dodelson (2017) find the bias in inferred accelerations to be at the 10 % level if such systematic perturbations are ignored. So, it is interesting to explore whether the pattern of residuals at a sampling point has a systematic behaviour over velocity space that may be a tell-tale signature of departures from equilibrium (cf. Li & Widrow 2021, for a somewhat similar idea). It is also worth remarking that the first step of our outlined procedure, i.e. learning the DF with normalizing flows, is entirely assumption-free. Given this learned DF, one could then study the non-equilibrium structures themselves. These non-equilibrium structures imprinted in the stellar kinematics are much more than merely sources of systematic error: perturbations to a system can reveal insights about the system itself. For example, Widmark et al. (2021) have shown that the shape of the Gaia phase spiral can be used to constrain the local gravitational potential. To summarize, our method bypasses many of the assumptions that have been traditionally adopted in studies of galactic dynamics, and represents an efficient, flexible, and data-driven means of extracting underlying gravitational accelerations from snapshots of stellar kinematics. ACKNOWLEDGEMENT We thank Gregory Green and Yuan-Sen Ting for useful discussions, as well as the anonymous referee for a very useful report. APN and CB are supported by a Research Leadership Award from the Leverhulme Trust. CB is also supported by a Royal Society University Research Fellowship.
10,845.8
2021-06-10T00:00:00.000
[ "Physics" ]
Trace Amine-Associated Receptor 1 Trafficking to Cilia of Thyroid Epithelial Cells Trace amine-associated receptor 1 (rodent Taar1/human TAAR1) is a G protein-coupled receptor that is mainly recognized for its functions in neuromodulation. Previous in vitro studies suggested that Taar1 may signal from intracellular compartments. However, we have shown Taar1 to localize apically and on ciliary extensions in rodent thyrocytes, suggesting that at least in the thyroid, Taar1 may signal from the cilia at the apical plasma membrane domain of thyrocytes in situ, where it is exposed to the content of the follicle lumen containing putative Taar1 ligands. This study was designed to explore mouse Taar1 (mTaar1) trafficking, heterologously expressed in human and rat thyroid cell lines in order to establish an in vitro system in which Taar1 signaling from the cell surface can be studied in future. The results showed that chimeric mTaar1-EGFP traffics to the apical cell surface and localizes particularly to spherical structures of polarized thyroid cells, procilia, and primary cilia upon serum-starvation. Moreover, mTaar1-EGFP appears to form high molecular mass forms, possibly homodimers and tetramers, in stably expressing human thyroid cell lines. However, only monomeric mTaar1-EGFP was cell surface biotinylated in polarized human thyrocytes. In polarized rat thyrocytes, mTaar1-EGFP is retained in the endoplasmic reticulum, while cilia were reached by mTaar1-EGFP transiently co-expressed in combination with an HA-tagged construct of the related mTaar5. We conclude that Taar1 trafficking to cilia depends on their integrity. The results further suggest that an in vitro cell model was established that recapitulates Taar1 trafficking in thyrocytes in situ, in principle, and will enable studying Taar1 signaling in future, thus extending our general understanding of its potential significance for thyroid autoregulation. Introduction Cilia of thyroid epithelial cells are involved in the regulation and maintenance of thyroid homeostasis and intact follicle structure [1][2][3][4][5][6]. Thyrocytes in well-polarized states expose one primary immotile cilium per cell that is identified by the cilia marker acetylated alpha-tubulin [1,2,4]. The primary cilium extends from the apical surface of polarized thyroid epithelial cells, e.g., in confluent cultures of Fisher rat thyroid (FRT) cells. Upon long-term FRT cell culture, follicle-like structures (FLS) are formed whereby thyrocytes build a monolayer around an extracellular lumen into which cilia extend [1]. Hence, cilia of cultured thyrocytes in vitro mimic the in situ-localization of primary cilia at the apical surface of thyrocytes in the sphere-like follicles of thyroid tissue [1,2,4]. Alterations of cilia or changes in their frequency are indicative of thyroid diseases, ranging from dysfunctional thyroid states to neoplastic pathologies [2,7]. While such correlations of thyroid pathologies with altered cilia length and numbers are important as diagnostic criteria in thyroid disease, little is known about the molecular mechanisms that connect cilia with altered thyroid states. To this end, we proposed a thyroid autoregulatory mechanism that encompasses cilia as sensory extensions of thyrocytes probing the molecular state of the thyroid hormone (TH) precursor protein, thyroglobulin, which is stored in the thyroid follicle lumen [1,4,5]. 
The trace amine-associated receptor 1 (TAAR1 in human, mTaar1 in mouse, rTaar1 in rat), a G protein-coupled receptor (GPCR), has been suggested as the ciliary molecule that senses the state of luminal thyroglobulin, thereby enabling thyroid function by initiating or terminating its proteolytic utilization for TH liberation [1,5]. It is of note that mTaar1 and the basolateral GPCR thyroid-stimulating hormone (TSH) receptor co-regulate thyroid function in vivo [8]. TAAR1 has been identified to be susceptible to activation by a variety of biogenic amines [9][10][11][12]. Attempts to understand the physiological role of TAAR1, its trafficking and subcellular localization have been challenged by the protein's weak cell surface expression in vitro [13], and the difficulty in achieving stable TAAR1 expression in heterologous systems [11]. Nonetheless, the limited in vitro studies available assume that TAAR1/Taar1 retains an intracellular localization. The exact transport pathways, however, as well as the main subcellular TAAR1 localization along the secretory route, and whether dimer or oligomer formation, either with itself or other GPCRs, is required for productive transport to the cell surface remain an important field for investigations. We have previously shown that, at steady state, Taar1 localizes to compartments of the secretory pathway and, prominently, to the cilia of mouse and rat thyrocytes [1,4]. Using rat thyroid epithelial cell lines, we further showed that cell surface expression of rTaar1 in vitro depends on intact cilia, reminiscent of Taar1 s in situ localization in rodent thyroid tissue [1,4]. Since previous studies by us and others are further suggestive of an essential role of cilia in thyroid function regulation in man, mouse and rat, it is particularly important to better understand TAAR1/Taar1 trafficking to the cilia of thyroid epithelial cells [2,4,5,7,8]. The present study was designed to test the proposal of mTaar1 being transported along the secretory pathway in a heterologous system of stable mTaar1 expression. To this end, a construct coding for mTaar1 tagged with enhanced green fluorescent protein (EGFP) on its C-terminus was used to stably express mTaar1-EGFP in normal thyroid epithelial (Nthy-ori 3-1) and papillary thyroid carcinoma (KTC-1) cell lines, bearing characteristics of non-and well-polarized thyroid epithelial cells, respectively [14]. The subcellular localization of mTaar1-EGFP and its transport pathways were investigated in these cell lines at steady state and in pulse-chase experiments. The results show that mTaar1-EGFP reaches spherical structures at the apical plasma membrane of thyrocytes, referred to as procilia. When ciliogenesis was promoted by serum-starvation, mTaar1-EGFP was transported to elongated structures, co-stainable with the cilia markers acetylated α-tubulin or ARL13B, in both mTaar1-EGFP stably expressing cell lines. Thus, the presence of cilia in KTC-Z, the stably mTaar1-EGFP expressing and well-polarized human thyrocytes, promotes mTaar1-EGFP trafficking to this specific cell surface localization and maintains it at the cellular appendages. These results corroborate our previous findings with mouse and rat thyrocytes in situ and in vitro. However, transient mTaar1-EGFP expression in cilia-bearing FRT cells results in endoplasmic reticulum (ER) retention, thereby hindering mTaar1-EGFP's transport to the apical plasma membrane. 
Hence, we further aimed at delineating possible co-trafficking partners of the same GPCR family in thyrocytes. Taar proteins have been classified into three phylogenetic subgroups [15]. Consequently, Taar5 and Taar8b were picked as representatives of the two other phylogenetic subgroups, besides Taar1. Contrary to Taar1, both Taar5 and Taar8b were previously shown to reach the cell surface in transiently expressing HEK 293T cells [16,17]. Interestingly, co-expression of mTaar1-EGFP and the related hemagglutinin (HA)-tagged mTaar5, HA-mTaar5, promotes Taar1's ability to reach cilia in transiently co-expressing polarized FRT cells. Therefore, we propose that oligomerization of mTaar1 occurs early in the secretory pathway and promotes Taar1 trafficking to cilia of thyroid epithelial cells. Materials and Methods All studies were performed in the S1 and S2 laboratories of Jacobs University Bremen as registered with the Authorities of the City State of Bremen (Senatorin für Gesundheit, Frauen und Verbraucherschutz der Hansestadt Bremen, Bremen, Germany) under registration numbers 513-30-00/2-15.32 and 517/2-15.43 to K.Br. and S.Sp., respectively, as the responsible project leaders. The pHA-mTaar1 plasmid was employed as a template to amplify the mouse Taar1, Taar5 or Taar8b cDNA sequence, omitting the stop codon, while providing overhangs complementary to the XhoI and BamHI restriction sequences to enable ligation into the pEGFP-N1 (Clontech, Heidelberg, Germany) expression vector using T4 DNA ligase (EL0011, Thermo Scientific, Schwerte, Germany). The resultant plasmid coded for a chimeric protein with full-length mouse Taar1, covalently linked to the EGFP tag by a 12-amino acid long spacer peptide linker (pmTaar1-EGFP). The sequence was confirmed using standard pEGFP-N1 forward and reverse primers at Eurofins Genomics (Ebersberg, Germany). Similarly, the sequence coding for full-length mTaar1 minus the stop codon was cloned into a modified puc2CL6Ipwo lentiviral vector [18,19] at XhoI and AgeI sites of insertion (5 -end and 3 -end, respectively), to obtain a construct coding for the chimeric protein consisting of full-length mTaar1, covalently linked to EGFP by a 12-amino acid long spacer peptide linker (mTaar1-EGFP in puc2CL6Ipwo). Sequences were confirmed at Eurofins Genomics (Ebersberg, Germany) using oSF031Fwd (5 -CGGCGCGCCAGTCCTCCG) and oSF031Rev (5 -TAGACAAACGCACACCGG) sequencing primers. All cell lines were incubated at 37 • C and 5% CO 2 in a moisturized atmosphere, unless otherwise indicated. For trafficking studies, cell lines were grown on sterile coverslips until confluent, then incubated overnight at 18 • C in Gibco's CO 2 -independent culture medium (18045, Thermo Fisher Scientific, Schwerte, Germany), supplemented with 10% FBS and 1 µg/mL puromycin, and shifted to 37 • C subsequently for the indicated time periods. Cells were fixed in 4% paraformaldehyde (PFA) in 200 mM HEPES, pH 7.4, at t = 0 min, 15 min, 30 min, 45 min, 1.0 h, 1.5 h, 2.0 h, 3.0 h and 4.0 h, respectively, post-temperature shift, and immunolabeled with compartment-specific markers, as described below. For experiments on cilia markers, cell lines were serum-starved for 48 h in order to promote ciliogenesis before fixation in 4% PFA for 20 min at room temperature and in ice-cold methanol for 5 min at −20 • C, and immunolabeling, as described below. Henceforth, the acronyms KTC-Z and Nthy-Z will be used when referring to transduced, mTaar1-eGFP-expressing KTC-1 and Nthy-ori 3-1 cells, respectively. 
KTC-Z and Nthy-Z cells were cultured in RPMI 1640 medium (Lonza, Verviers, Belgium) supplemented with 10% FBS, in the presence of penicillin and streptomycin. When cells were thawed from frozen stocks, transduction efficacy was controlled by FACS and cells were eventually re-selected using complete culture medium supplied with 1 µg/mL puromycin. Cytochemistry and Indirect Immunofluorescence Following fixation, cells were washed 3 × 5 min by incubation with CMF-PBS and blocked in 3% bovine serum albumin (BSA; 3854, Carl Roth, Karlsruhe, Germany) in CMF-PBS for 60 min at 37 • C. The cells on coverslips were mounted with embedding medium consisting of 33% glycerol, 14% Mowiol in 200 mM Tris-HCl, pH 8.5 (Hoechst AG, Frankfurt, Germany). The slides were analyzed by confocal laser scanning microscopy using Argon and Helium-Neon, or diode lasers (LSM 510 Meta; Carl Zeiss Jena GmbH, Jena, Germany; LSM 980 with Airyscan 2 and Multiplex; Carl Zeiss Microscopy GmbH, Oberkochen, Germany). Images were obtained at a pinhole setting of 1 Airy unit and at a resolution of 1024 × 1024 pixels or using high-resolution Airyscan modes. Micrographs were analyzed with the LSM 510 software, release 3.2 (Carl Zeiss Jena GmbH, Jena, Germany) and with the LSM 980 ZEN 3.2 software (Carl Zeiss Microscopy GmbH, Oberkochen, Germany). Cell Lysate Preparation, SDS-PAGE and Immunoblotting Following washing in ice-cold PBS, cells were scraped off the 10 cm Petri dishes and collected in 500 µL lysis buffer, consisting of 50 mM Tris (pH 6.8) with 0.2% Triton-X 100 (TX-100) and supplemented with protease inhibitors (0.2 µg/mL aprotinin, 10 µM E-64 and 1 µM pepstatin A and 2 mM EDTA). The cell lysates were incubated for 1 h at 4 • C with constant rotation, and cleared by centrifugation for 10 min at 10,000× g at 4 • C. The supernatants were collected and protein content was determined according to the Neuhoff assay [25]. Cell Surface Biotinylation and Streptavidin Pull-Down Experiments Cell surface biotinylation was performed according to a modified protocol described elsewhere [28]. In brief, KTC-1, KTC-Z, Nthy-ori 3-1 and Nthy-Z cells were cultured in biotin-free medium (DMEM supplemented with 10% FBS and 1 µg/mL puromycin for transduced "Z" cells) continuously for 14 days prior to commencing the experiments. Cells were grown in 10 cm Petri dishes until~70%-90% confluent. The cells were then washed in cold PBS 2 × 30 min and incubated with 200 µg/mL biotinamidohexanoic acid 3-sulfo-N-hydroxysuccinimide ester sodium salt (B1022, Sigma-Aldrich, Steinheim, Germany) in PBS for 1 h at 4 • C with gentle shaking. Non-biotinylated controls were incubated in parallel in PBS only. Then, cells were briefly rinsed in PBS, and washed with 10 mM L-lysine (L5501, Sigma-Aldrich, Steinheim, Germany) in PBS solution 4 × 10 min to quench unbound biotin. Finally, the cells were incubated in lysis buffer (50 mM Tris, pH 6.8, with 0.2% TX-100, containing protease inhibitors as specified above), and collected in 2 mL microcentrifuge tubes to complete cell lysis and protein extraction, as described above. Cell lysates were subsequently used for SDS-PAGE and immunoblotting (see above). For streptavidin pull-down, cells were homogenized in cold homogenization buffer (250 mM sucrose, 20 mM HEPES, 1 mM EDTA, pH 7.4), supplemented with protease inhibitors as specified above, using a hand-held homogenizer at 500 rpm for 2 × 30 s, on ice. Homogenates were cleared by centrifugation for 15 min at 10,000× g at 4 • C. 
The supernatant was collected and protein content was determined according to the Neuhoff assay [25]. Streptavidin-precipitation was carried out using the µMACS streptavidin kit (130-074-101; Milteny Biotech, Bergisch-Gladbach, Germany) according to the manufacturer's protocol. Cold µMACS streptavidin MicroBeads solution was added to the cell homogenates in a ratio of 1:3 on ice and mixed by slowly pipetting up and down. The µ-column was placed in the magnetic field of the µMACS separator and prepared by rinsing it with 100 µL equilibration buffer prior to protein application, followed by two rinsing steps with 100 µL homogenization buffer (without proteinase inhibitors). The magnetically labeled complexes, i.e., streptavidin MicroBeads precipitates out of whole cell homogenates, were applied onto the top of the column matrix and washed 4 times with 100 µL washing buffer to remove non-specifically bound molecules. Elution of target molecules bound to the biotinylated probe was performed by adding 150 µL sample buffer without DTT (non-reducing) directly onto the top of the column matrix. The eluted proteins were heated for 5 min at 95 • C and stored at −20 • C until separated by SDS-PAGE. Molecular Mass Calculation The predicted molecular masses of mouse Taar1, the chimeric mTaar1-EGFP, and human TAAR1 were calculated using the SIB Swiss Institute of Bioinformatics ExPASy "Compute pI/Mw tool" (https://web.expasy.org/compute_pi/, accessed on 18 November 2017). Constructs and Cell Lines Used for Stable and Transient Expression in Thyrocytes In Vitro The mTaar1 is a 332-amino-acids long, 7-transmembrane GPCR, with extracellular Nterminus and a cytoplasmic C-terminal tail ( Figure 1A). In the present study, mTaar1-EGFP was studied upon stable expression in human KTC-1 or Nthy-ori 3-1 cells. In addition, N-terminally HA-tagged mTaar1, mTaar5, or mTaar8b were transiently co-expressed in rat FRT cells. A schematic diagram highlighting the position of either tag relative to the protein's transmembrane orientation is given ( Figure 1B). To test the hypothesis of mTaar1 trafficking in human thyrocytes, stable mTaar1-EGFP expression was favored over transient expression because we reasoned that cells require translation of sufficiently high enough protein amounts to facilitate transport to the cell surface, and to enable performing biochemical analyses. This approach was realized by transducing human KTC-1 and Nthy-ori 3-1 thyroid cell lines to express mTaar1-EGFP. Nthy-ori 3-1 is a well-studied human thyroid follicular epithelial cell line that retains functional differentiation, enabling iodide-trapping and thyroglobulin secretion [22]. In contrast, KTC-1 are functionally poorly differentiated papillary thyroid carcinoma cells that, despite not expressing TSH receptors, thyroid peroxidase (TPO) and sodium iodide symporter (NIS), retain high transcript levels of TG, TTF-1 and paired box gene 8 (PAX8) relative to other thyroid cancer cell lines [20,29,30]. Additionally, KTC-1 cells maintain epithelial polarity, supported by the prevalence of tight junction proteins, such as claudin-1, E-cadherin and occludin in the lateral plasma membrane of the cells [14,21]. Both cell lines were therefore regarded suitable to be employed as models for functionally differentiated vs. polarized, structurally differentiated human thyrocytes. 
Chimeric mTaar1-eGFP Is Abundant in High Molecular Mass Form in KTC-Z and Nthy-Z Cells, but Primarily Monomeric Chimeras Reach the Surface of KTC-Z Cells Proteins of whole cell lysates of stably mTaar1-EGFP-expressing KTC-Z and Nthy-Z, vs. the non-transduced KTC-1 and Nthy-ori 3-1 controls, respectively, were separated by SDS-PAGE and immunolabeled with GFP-specific antibodies to determine the molecular mass of expressed protein. The predicted molecular mass of mTaar1 equals 37.6 kDa, and mTaar1-EGFP is 65.8 kDa, disregarding potential post-translational modifications like through usage of N-glycosylation sites. Similarly, the predicted molecular mass of human TAAR1 is 39.1 kDa. However, immunolabeling revealed anti-GFP positive bands prominently at an apparent molecular mass of 157 kDa and 282 kDa in KTC-Z and Nthy-Z lanes only ( Figure 2). The said molecular masses represent an average of apparent molecular mass values, determined from the exponential equation of retardation factor (Rf) values plotted against the molecular masses of the protein ladder, thereby, potentially representing dimeric and tetrameric forms of mTaar1 (Table 1). This suggests that mTaar1-EGFP exists in SDS-resistant high molecular mass forms, which are not likely resulting from polyubiquitination as no band ladders in 9-kDa spaced pitches are seen. Additionally, a band at~27 kDa was seen in mTaar1-EGFP-expressing cells only, corresponding to the size of EGFP (Figure 2), indicating cleavage of the EGFP tag upon mTaar1-EGFP degradation. Moreover, a band at 52 kDa was identified in Nthy-Z and KTC-Z cell lysates, only, suggesting degradation products of the chimeric protein ( Figure 2). Additionally, cell surface proteins of KTC-Z and Nthy-Z cells, as well as of the non-mTaar1-EGFP-expressing KTC-1 and Nthy-ori 3-1 controls, were subjected to biotinylation under endocytosis-blocking conditions at 4 • C. Proteins in lysates from cell surface biotinylated and non-biotinylated control cells were separated by SDS-PAGE and blots were incubated with HRP-conjugated streptavidin to detect biotinylated proteins. The results show that the proportion of biotinylated proteins corresponding in size to the 282 kDa and 157 kDa mTaar1-EGFP tetramer and dimer, respectively, were prevalent in KTC-Z and Nthy-Z cells ( Figure 3A,C). However, the abundance of endogenous biotinylated proteins made any further interpretation difficult. Therefore, an alternative experimental approach was chosen to verify mTaar1-EGFP's cell surface localization, namely, streptavidin precipitation was performed on cell surface biotinylated lysates of mTaar1-EGFP expressing KTC-Z and Nthy-Z cells, as well as their non-expressing controls ( Figure 3B,D). Subsequent anti-GFP immunoblotting was used to identify cell surface-biotinylated forms of mTaar1-EGFP. The results were not fully conclusive for Nthy-Z preparations because many protein bands were identified as streptavidin-precipitated cell surface-biotinylated proteins that were also recognized in the Nthy-ori 3-1 control preparations by the anti-GFP antibodies ( Figure 3D, lanes 1, 3, 4). These were in the range of~40-~130 kDa in Nthy-ori 3-1 controls ( Figure 3D, lanes 3, 4) but they did not include the suspected di-or tetrameric forms of mTaar1-EGFP, which were detectable in streptavidin precipitates from Nthy-Z ( Figure 3D, lane 1). 
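The apparent molecular masses quoted above (157 kDa and 282 kDa) were read off a calibration of the protein-ladder masses against retardation factors (Rf). A minimal sketch of such a calibration is given below; the ladder values are purely illustrative placeholders, since the actual Rf readings are not reported in the text.

```python
import numpy as np

# Hypothetical ladder: retardation factors (Rf) and known masses in kDa.
# Real values would be read from the gel; these numbers are illustrative only.
rf_ladder   = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.85])
mass_ladder = np.array([250., 150., 100., 70., 50., 25.])

# log10(mass) is approximately linear in Rf, i.e. mass = 10**(m*Rf + c),
# which is the "exponential equation" used to convert band positions to masses.
m, c = np.polyfit(rf_ladder, np.log10(mass_ladder), deg=1)

def apparent_mass(rf):
    """Apparent molecular mass (kDa) of a band at retardation factor rf."""
    return 10 ** (m * rf + c)

print(apparent_mass(0.2))   # e.g. a band migrating near the top of the gel
```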
Specificity of the approach was shown for KTC-Z and KTC-1 preparations because no bands were present for the transduced, non-biotinylated KTC-Z cells ( Figure 3B, lane 2) or non-transduced KTC-1 cells ( Figure 3B, lanes 3, 4), as expected. The cell surface-biotinylated, transduced KTC-Z cells revealed the 65.8 kDa monomeric form of the mTaar1-EGFP protein in the anti-GFP immunoblots of streptavidin precipitates, while the di-and tetrameric forms were identified as traces only ( Figure 3B, lane 1). It is of interest to note that the reverse approach, namely, using anti-GFP antibodies for the precipitation of cell surface biotinylated mTaar1-EGFP forms and their identification on streptavidin blots was not productive with the antibodies used (data not shown). The results point to the notion that it is mainly monomeric mTaar1-EGFP that reaches the surface of KTC-Z cells in sufficiently high enough amounts to become detectable biochemically ( Figure 3B, lane 1). Since biochemistry was not conclusive for cell surface expression of mTaar1-EGFP in transduced Nthy-Z cells, we next used microscopical inspection to visualize mTaar1-EGFP transport. Transport of mTaar1-eGFP in Transduced, Polarized Thyroid Epithelial Cells Results in Its Targeting to and Localization at Procilia When steadily incubated at 37 • C, KTC-Z and Nthy-Z cells exhibit fluorescence in the nuclear envelope in addition to a reticular pattern of mTaar1-EGFP distribution and juxta-nuclear staining of the Golgi apparatus, besides an occasional cell surface localization in sub-confluent cultures (Figure 4). This fluorescence pattern is typical for proteins sorted into the lumen of the endoplasmic reticulum, which is continuous with the lumen of the nuclear envelope, and that are transported along the secretory pathway via the Golgi apparatus [14]. In order to further elucidate mTaar1-EGFP trafficking in the KTC-Z and Nthy-Z cell lines, these were incubated at 18 • C to inhibit anterograde trafficking of proteins along the secretory pathway from the trans-Golgi network (TGN) onwards. Cells were incubated for a minimum of 8 h, and up to 17 h, at 18 • C prior to shifting back to 37 • C to restore the microtubule polymerization-depolymerization dynamics from the perinuclearly located microtubule-organizing center, therefore re-enabling post-TGN vesicle trafficking [31]. Following incubation at 18 • C, mTaar1-EGFP was predominantly observed in the perinuclear region, i.e., in the ER, as indicated by the green mTaar1-EGFP signal outlining the nuclear envelope and surrounding the nuclei in a reticular pattern, as well as in the Golgi apparatus, as evident from co-localization of the mTaar1-EGFP signal with that of the cis-Golgi marker GM130 ( Figure 5A The ER, Golgi, and vesicular distribution of mTaar1-EGFP was prevalent in both cell lines for the duration of the experiment, i.e., up to 4 h post-shifting the cells back to 37 • C. It should be noted that the chimeric protein persisted in KTC-Z cells upon recovery from the 18 • C transport block, particularly on spherical extensions of the apical cell surface ( Figure 5B,C, arrows), consistent with such procilia being resistant to cold temperature conditions (see Discussion). A distinct basolateral cell surface localization of mTaar1-EGFP was observed through co-localization with ConA-stained cell surface constituents in some Nthy-Z cells, especially at 45 min following TGN release onwards ( Figure 5E,F, arrows). 
It is important to note that procilia localization was assessed by co-localization of mTaar1-EGFP with immuno-stained acetylated α-tubulin which proved a suitable axonemal marker of thyrocyte cell surface protrusions which become well-extended cilia with centrosomal CP110 at their base upon serum-starvation (Supplementary Figure S1). Incubation with the Putative Ligand 3-Iodothyronamine Does Not Result in Downregulation of mTaar1-EGFP from Procilia or the Cell Surface The morphological transport studies showed that mTaar1-EGFP is trafficked to the cell surface of KTC-Z and Nthy-Z cells, where it was detectable for up to several hours (see above, Figures 4 and 5). The biochemical studies indicated anti-GFP immuno-positive bands that could be representative of degradation products of heterologous mTaar1-EGFP (see above, Figure 2). To understand the fate of mTaar1-EGFP and to test for its possible turn-over, KTC-Z and Nthy-Z cells were immunolabeled with antibodies against lysosomal acidic membrane protein 2 (LAMP-2). Partial co-localization of mTaar1-EGFP with the endo-lysosomal marker LAMP-2 was occasionally observed in constant cultures of KTC-Z cells, while this was less prominent in Nthy-Z cells (Figure 6A,A',F,F'). These data indicate targeting of the chimeric mTaar1-EGFP protein for lysosomal degradation at steady state, at least in some proportion of the total expressed chimeric protein. We further reasoned that downregulation of mTaar1-EGFP for subsequent delivery to endo-lysosomes might be triggered by ligand stimulation as typically seen for GPCRs [32]. Therefore, KTC-Z and Nthy-Z cells were incubated with the potential ligand of TAAR1/Taar1, namely, 3-iodothyronamine (3-T1AM). The concentration of 5 µM 3-T1AM was chosen because it is known to be productive in inducing Taar1 signaling and downstream effects thereof in thyroid epithelial cells in vitro and in situ [33]. Co-localization with LAMP-2 revealed the delivery of mTaar1-EGFP to endo-lysosomes throughout the 2 h past ligand addition (Figure 6). There was no obvious change in mTaar1-EGFP expression, which prevailed in the nuclear envelope and in reticular structures throughout the cytoplasm, while there was also no striking alteration of its endo-lysosomal presence (Figure 6). The data indicated that mTaar1-EGFP turnover remains constant in stably expressing KTC-Z and Nthy-Z cells throughout steady state and irrespective of ligand stimulation or not. This interpretation however assumes that mTaar1-EGFP is functional in transduced human thyrocytes, which must be assessed in future studies. Figure 6. mTaar1-EGFP turnover is constant in steady state and not affected by ligand stimulation. KTC-Z (A-E') and Nthy-Z cells (F-J') were fixed and immunolabeled with the lysosomal marker LAMP-2 (red) at 0 min to 120 min after stimulation with 5 µM 3-T1AM, a potential ligand of TAAR1/Taar1 in the thyroid gland. The boxed areas in A-K are magnified in A'-J', respectively. mTaar1-EGFP (green) partially co-localized with LAMP-2 (red) in endo-lysosomal compartments (yellow) at all time intervals. Circles denote vesicles in which mTaar1-EGFP is seen to co-localize with LAMP-2. Draq5™ was used as nuclear counter-stain. Merged fluorescence (A-J,A'-J') and corresponding single channel fluorescence and phase contrast micrographs are provided in the right panels of (A-J), respectively, as indicated. Scale bars represent 20 µm. Serum-Starvation Reveals Transport of mTaar1-EGFP to Cilia of Transduced Human Thyrocytes Arrested at the G1/S-Transition The results described above suggest the notion of mTaar1-EGFP's transport to primary cilia. However, the structures detected in e.g., KTC-Z cells (see Figure 5C) were not as extended as typically observed in well-differentiated human thyrocytes in situ [2]. In order to recapitulate differentiation states in the G1/G0-phase of the cell cycle, serum-starvation was used to block cell cycle progression at the G1/S transition using a protocol recently established by us for KTC-1 and Nthy-ori 3-1 cells [14]. In addition, the protocol of co-localization of mTaar1-EGFP with marker proteins like acetylated α-tubulin and ARL13B was adapted to sequential fixation of the serum-starved cells with PFA and methanol, respectively, in order to preserve ciliary structures. Serum-starvation of transduced KTC-Z and Nthy-Z cells for 48 h resulted in the formation of long, extended structures emanating from above or close to the nuclei, which were immuno-positive for the cilia markers acetylated α-tubulin and ARL13B (Figure 7). Co-localization of the green fluorescent chimeric mTaar1-EGFP protein with the cilia markers acetylated α-tubulin and ARL13B was detected (Figure 7, yellow signals, arrows), besides the presence of the heterologously expressed chimeric protein in the compartments of the secretory pathway, namely, in structures reminiscent of the ER/nuclear envelope, Golgi apparatus and in vesicles (Figure 7, green signals, arrowheads and circles, respectively). We conclude that mTaar1-EGFP is transported to cilia of transduced human thyrocytes in their differentiated states. Transient Expression of a Related mTaar Protein Results in Trafficking of mTaar1-EGFP to Cilia of FRT Cells Cell surface transport of mTaar1-EGFP was observed in both the structurally differentiated and polarized KTC-Z, as well as the functionally differentiated but less well-polarized Nthy-Z cells. However, when cultures of transduced cells were maintained in complete medium containing FBS, procilia localization was prevalent in KTC-Z cells only.
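The co-localization with cilia and lysosome markers is reported qualitatively above. One standard way to quantify such overlap from two-channel micrographs is a Pearson coefficient computed over a region of interest, sketched below; this is not part of the original analysis, and the array names are hypothetical.

```python
import numpy as np

def pearson_colocalization(green, red, mask=None):
    """Pearson correlation between two fluorescence channels.

    green, red : 2D intensity arrays of the same shape (e.g. the mTaar1-EGFP
                 channel and an anti-acetylated-alpha-tubulin or LAMP-2 channel)
    mask       : optional boolean array restricting the analysis, e.g. to a
                 segmented cilium or cell region
    """
    if mask is not None:
        g, r = green[mask].astype(float), red[mask].astype(float)
    else:
        g, r = green.ravel().astype(float), red.ravel().astype(float)
    g -= g.mean()
    r -= r.mean()
    return float((g * r).sum() / np.sqrt((g ** 2).sum() * (r ** 2).sum()))
```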
The fact that mTaar1-EGFP is present mainly in monomeric form at the cell surface of polarized KTC-Z cells, while it reached the surface of Nthy-Z cells possibly as dimers and tetramers (see Figure 3), might argue that the chimeric protein contains both, a cilia targeting and retention signal. Therefore, a co-expression approach was chosen to study the aspect of oligomerization of mTaar1-EGFP with phylogenetically related mTaar's. The structurally differentiated, well-polarized FRT cells were used for co-expression studies, because transfection of transduced KTC-Z and Nthy-Z cells was not productive. Transduction of FRT cells, on the other hand, was not successful. Therefore, to ask whether homo-and/or heterooligomerization between related Taar proteins favors trafficking to the cilia, N-terminally HA-tagged or C-terminally EGFP-tagged mTaar1, mTaar5 or mTaar8b (see Figure 1B) were transiently co-expressed in rat FRT cells. When singly and transiently expressed in FRT cells, the mTaar1-EGFP signal appeared in a predominantly reticular and vesicular distribution at steady state, indicating it was retained in the endoplasmic reticulum and did not reach apical cilia ( Figure 8A-C). In contrast, mTaar5-EGFP was predominantly localized to cilia and lipid-raft like patches ( Figure 8D-F), and it was sorted to the lateral plasma membrane between neighboring FRT cells. Of note, mTaar8b-EGFP was not productively expressed, instead, FRT cell cultures featured cell death upon transient expression of this chimeric protein (not shown). Next, we sought to test for Taar hetero-oligomerization and its effect on mTaar1-EGFP trafficking to the cell surface. To this end, an mTaar-EGFP was paired with an HA-tagged mTaar for co-expression studies. Our results show that co-expressing mTaar1-EGFP with HA-mTaar5 results in cilia localization of both ( Figure 9B,C,E-G, arrows) in addition to the predominant presence of mTaar1-EGFP in reticular structures reminiscent of the ER (arrowheads). These results indicate HA-mTaar5 expression leading to the partial release of mTaar1-EGFP from ER retention (compare with Figure 8). The results indicated mTaar5, among the tested mTaar proteins, to be trafficked most efficiently to the apical and basolateral plasma membrane domains of FRT cells. The results suggest that hetero-oligomerization of mTaar1-EGFP with the related HA-mTaar5 promotes trafficking to cilia, while mTaar1-EGFP expression, alone, in the polarized FRT cells was not productive in this regard. Taken together with the results gained with polarized human thyroid cells, the KTC-Z cell line, mTaar1-EGFP transport to the cilia, in particular, is likely in its monomeric form. Discussion TAAR1/Taar1 has been primarily investigated for its neuromodulatory role in the central nervous system, despite being expressed in various human and mouse peripheral tissues [9,10]. As such, it has been assessed as a potential target for pharmacological intervention to treat neurological and psychiatric disorders [34][35][36]; reviewed in [37][38][39]. However, the initially promising therapeutic role of thyronamine-triggered TAAR1 signaling was challenged, among others, by the notion of thyronamines acting as multi-target ligands on several non-GPCRs and GPCRs other than those of the TAAR family [for review, see [12]. Still, in the thyroid, TAAR1/Taar1 might take over a specific role by interacting with the thyronamines that can, in principle, be generated at the lumen-apposed pole of thyrocytes [1]. 
Therefore, its localization on apical cilia of mouse and rat thyrocytes [1,4] makes it all the more important to understand Taar1 trafficking with the aim to set up human cellular models that will enable studying thyronamine-triggered TAAR1/Taar1 signaling in a thyroid-specific context in future. Studies revolving around the heterologous expression of Taar1 formerly reported by another group, demonstrated Taar1 to retain an intracellular localization pattern, which led to speculations that Taar1 signals from within intracellular compartments, rather than from the cell surface [10]. Alternatively, this may suggest Taar1 to additionally interact with another protein to facilitate trafficking to the cell surface, a phenomenon known for various other GPCRs [40][41][42]. Indeed, TAAR1 has been reported to form functional dimers with TAAR2 in human leukocytes [43], as well as with human dopamine receptor when co-expressed in HEK 293T cells [44]. Moreover, the majority of studies reporting on TAAR1/Taar1 trafficking and subcellular localization to date entailed N-terminal modifications to the TAAR1 sequence, often to promote its transport to the plasma membrane [10,45,46]. We hereby present a model in which the N-terminus of Taar1 remained intact; however, a covalently linked EGFP tag was introduced at the protein's C-terminus. Immunocytochemical analysis revealed mTaar1-EGFP to localize to spherical, procilia structures at the apical plasma membrane of polarized KTC-1 cells, stably mTaar1-EGFPexpressing. Surface localization of mTaar1-EGFP was also observed in stably mTaar1-EGFPexpressing Nthy-ori 3-1 cells. Pulse-chase experiments showed that prociliary localization of mTaar1-EGFP was maintained even at 18 • C in the polarized KTC-Z cells, as evident from the co-localization with the ciliary marker acetylated-α-tubulin [1,4,47], when cells were inspected shortly after shifting the temperature back to 37 • C to allow post-Golgi transport (see Figure 5). The latter observation suggests that the half-life t 1/2 of Taar1 at procilia of KTC-Z cells exceeds several hours. In addition, the results suggest that procilia once established in KTC-Z cells are not affected by temperature shifts, which is consistent with the understanding that microtubules of primary cilia are rendered coldinduced disassembly stable by the binding of MAP6 proteins [48]. It is of note, that cilia of rat thyroid epithelial cells are, however, highly susceptible to incubation with cysteine cathepsin inhibitors, causing cilia disappearance and Taar1 re-location to the ER [4]. These data prompted our suggestion of the involvement of ciliary Taar1, co-localized with the thyroglobulin-processing cathepsin proteases, in thyroid auto-regulation (see below) [5]. The observations of this study may at first glance suggest that rodent Taar1 contain a cell surface targeting sequence that is responsible for its transport to reach cilia at the apical thyrocyte pole. However, studies performed on FRT cells transiently transfected with either mTaar1-EGFP or mTaar5-EGFP, or co-transfected with mTaar1-EGFP and HA-Taar5, show that, while mTaar1-EGFP was intracellularly retained in transiently expressing FRT cells, mTaar5-EGFP was more readily observed on the cell surface (see Figure 8). However, upon co-expression of both mTaar1-EGFP and HA-mTaar5, partial colocalization was observed at cilia of FRT cells (see Figure 9). 
Unfortunately, this could not be demonstrated in the human thyrocytes, because transfection and co-transduction of KTC-Z cells were not successful. From this data, we could speculate that, while intracellular retention was observed when mTaar1-EGFP was transiently expressed in FRT cells, suggesting too low expression levels and homo-oligomerization of Taar1 not being conducive to ciliary targeting, the cilia were reached when mTaar1-EGFP was co-expressed with HA-mTaar5. Thus, transient co-expression of mTaar5 constructs in FRT cells in vitro suggest that Taar5 traffics to the surface of well-polarized thyroid epithelial cells more readily than Taar1. The latter may be attributed to the fact that mTaar5 contains the amino acid sequence "FRKALKLLL", in its C-terminus, which corresponds to the F(X) 6 LL C-terminal motif that was identified to promote GPCR trafficking to the cell surface [49]. This particular motif is absent in the C-terminus of mouse Taar1. Significance of Taar1 Trafficking to Cilia of Well-Polarized Thyroid Epithelial Cells The trafficking of mTaar1-EGFP to patches, most likely lipid raft-like microdomains, or ciliary extensions in transfected FRT and in the stably expressing KTC-Z cells as well as in the serum-starved human thyrocytes, which promoted ciliogenesis, supports our previously reported observations that endogenous Taar1 localizes on cilia of FRT cells and on the apical plasma membrane domain of mouse thyroid epithelial cells in situ [1]. This fact is strongly suggestive of Taar1/TAAR1 serving a role in thyroid regulation [for a recent review, see, [5], because the apical plasma membrane domain of thyrocytes faces the thyroid follicle lumen into which the cilia extend and where thyroglobulin, the precursor protein of thyroid hormones, is stored in high concentrations. Therefore, exposing the Taar1 to the extracellular environment opens up the possibility that Taar1 could potentially serve as a sensor to intraluminal molecular alterations. Such changes in the composition of the thyroid follicle lumen are readily achieved upon thyrocyte stimulation with TSH, hence, thyroglobulin degradation may result in the generation of thyronamine precursors, which eventually may be rendered into thyronamines upon cellular uptake and cytosolic conversion before re-export into the lumen, where they can, in principle, act as intrathyroidally generated Taar1 agonists [1]. This suggestion of an intra-follicular mechanism of Taar1 ligand generation and Taar1 signaling from apically located cilia may contribute to regulating thyroid function in a non-canonical form [1,5,50]. Support of this hypothesis comes from our recent investigations describing the thyroid phenotype of Taar1-deficient mice, which is mild but affects TSH receptor localization in particular [8]. Cilia on Human and Rodent Thyrocytes Of note, the spherical structures, which we termed procilia, of human thyrocytes KTC-1 and Nthy-ori 3-1 cells kept in complete culture medium are not as well extended as the cilia observed in thyroid follicles in situ [2] because extensive ciliogenesis in vitro requires serum-starvation (see Supplementary Figure S1). This was attempted only in a late phase of this study because we were concerned to not stress the KTC-Z and Nthy-Z cell lines beyond the transduction process. However, our established cell models proved suitable enough to induce ciliogenesis by cell cycle arrest, further supporting our conclusion of having established a valuable in vitro model for future studies. 
It is further important to note that acetylated alpha-tubulin is an appropriate cilia marker in these cell lines as deduced from comparable staining of peri-nuclear spherical structures in KTC-1 cells with anti-ARL13b and anti-CP110 antibodies (see Supplementary Figure S1), and for its staining of elongated cilia structures in serum-starved cells (see Figure 7). It is somewhat astonishing that especially KTC-1 cells, which are representatives of human thyroid carcinoma cells, exhibit cilia and maintain mTaar1-eGFP expression at them because in mice, papillary and follicular carcinoma are both correlated with cilia loss [3,7]. Nevertheless, in human thyroid tissue, the disappearance of cilia or their shortening has been associated with hyperactivity of the follicles [2]. It is therefore obvious that the presence of cilia is required to allow trafficking of Taar1 to these appendages of the apical surface of rodent thyrocytes [4], and that a direct connection between the presence of cilia and thyroid cancer is not reproduced by the human cell line KTC-1 (this study). Conclusions This study was conducted by expressing a mouse Taar1 chimera with a C-terminal EGFP tag fused via a short linker peptide. We conclude that KTC-1 and Nthy-ori 3-1 cells stably expressing mTaar1-EGFP provide a suitable model to study Taar1 trafficking and localization in thyrocytes. We report that chimeric mTaar1-EGFP, when expressed in rat and human thyrocytes in vitro, is transported to the cell surface and is preferentially targeted to the primary cilia of polarized thyrocytes, where it exists in monomeric form. We also report that mTaar1-EGFP forms homo-oligomers in stably expressing human KTC-Z and Nthy-Z cells. However, homo-oligomerization was found not to be supportive of ciliary localization of mTaar1-EGFP in our model. We propose these cellular models to be suited for in vivo imaging and signaling studies that are beyond the scope of the present investigation and will be conducted in future. For this to become an even better simulating model, KTC-Z and Nthy-Z cells need to be arrested in the cell cycle at the G 1 /S-transition to achieve full ciliary extensions, which has been demonstrated in this study to be a viable option. The current study mainly focused on the anterograde trafficking of mTaar1-EGFP from the trans-Golgi network to the cell surface. For future studies, we will rely on these established cellular models to measure mTaar1 turnover rates, i.e., analyze its re-entry by endocytosis and subsequent fates like receptor recycling or endo-lysosomal degradation. Moreover, we intend to perform functional assays to study mTaar1 signaling in vitro, and its implication in regulation of thyroid function. In line with this notion, we have recently discovered that Taar1 is needed to maintain the basolateral localization of the TSH receptor in vivo, suggesting that ciliary Taar1 functionally serves as a co-regulator in the hypothalamic-pituitary-thyroid feedback loop [8]. Funding: This research was funded by the DFG (Deutsche Forschungsgemeinschaft), Germany, in the framework of the priority program SPP 1629/1 and 2, in particular, grant numbers BR1308/11-1 and 11-2 to K.Br. This research was also funded by DFG, grant number SP583/7-2 to S.Sp. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: Not applicable. All data is included and referenced.
9,207.8
2021-06-01T00:00:00.000
[ "Medicine", "Biology" ]
Neutron Transport Simulations of RBMK Fuel Assembly Using Multigroup and Continuous Energy Data Libraries within the SCALE Code The neutron transport simulations of RBMK-1500 fuel assembly were performed using both multigroup and continuous energy data libraries available within the SCALE code system in order to validate its suitability for the estimation of RBMK neutronic characteristics. The resonance processing of cross sections, involved in the preparation of the multigroup data library, has a significant impact on neutron transport calculations. Standard Dancoff factors (DFs) used for the heterogeneous geometry of the RBMK fuel assembly are insufficient for the accurate estimation of resonance self-shielding. Thus, the SCALE module MCDancoff was used in this study to determine location-specific DFs. The results of RBMK-1500 fuel assembly simulations using standard and user-defined DFs were compared. In addition, the continuous energy (CE) cross-section data library was applied for the benchmark calculations. The impact of different nuclear data libraries on neutron transport simulations was tested as well. It was found out that the usage of the multigroup data libraries generates some deviation from the reference simulations obtained with CE libraries. The CE library based on the evaluated ENDF/B-VII.1 data proved to be the best alternative for neutron transport simulations of RBMK fuel. Introduction The objective of this paper is to validate the suitability of the SCALE code for estimation of RBMK reactor nuclear fuel neutronic characteristics and to propose the best suitable models and factors to be used in the calculations. The SCALE code [1] provides a suite of computational tools for criticality safety analysis that is primarily based on the KENO Monte Carlo module for the neutron transport simulations to calculate the neutron multiplication factor in both multigroup (MG) and continuous energy (CE) modes. The deterministic T-NEWT module, available within the SCALE, is applicable with an MG data library only; thus, the KENO Monte Carlo module was chosen for the analysis. All cross-section libraries available within the SCALE code were processed from ENDF/B evaluated data libraries. The CE and several MG cross-section data libraries with several group structures are available in the SCALE for neutron transport calculations, so that a user may select the nuclear data library based on a specific application, required accuracy, or preferable execution time. The pointwise CE v7.0 (based on ENDF/B-VII.0 data) and CE v7.1 (based on ENDF/B-VII.1 data) libraries were employed. The CE data are further processed into the MG data. The 238-group v7-238 (ENDF/B-VII.0) and the 252-group v7-252 (ENDF/B-VII.1) libraries were employed in this study's simulations as well. A generation of problem-specific MG cross sections is needed for performing the neutronic analysis if the MG mode is chosen. These cross sections are significantly affected by the resonance self-shielding, the effect of which can be classified into two types regarding the cause: energy self-shielding and spatial self-shielding. The spatial self-shielding is primarily induced by heterogeneous effects of the reactor core or fuel assembly. Different materials and their locations can induce significant changes in neutron fluxes over short distances. Thus, the right estimation of spatial self-shielding is important for every specific problem related to heterogeneous geometry.
In reactor cores operated on thermal neutrons, the fuel assemblies are composed of fuel rods surrounded by the moderator and control rods. Thus, the primary basic element of the core is a fuel pin cell. The neutrons, which are born in the fuel pin, are slowed down in the moderator, while some of them are absorbed in the fuel, and some neutrons can enter another fuel pin. The Dancoff factor (DF) [2] is used in neutron transport calculations in order to estimate resonance self-shielding effects between neighboring fuel pins. The DF indicates the probability that a neutron from the surface of one fuel pin will pass through the external media and will enter a nearby located fuel pin. Because the SCALE lattice cell treatment assumes that a fuel pin lies in an infinite lattice composed of identical fuel pins with the same pitch, an analytical expression for the DF is employed and applied uniformly throughout the lattice [1]. However, reactor cores usually are composed of finite fuel assemblies with some distance between them. Additional moderation usually exists in these gaps between fuel assemblies, and it increases self-shielding effects between fuel pins. Thus, the application of the standard DF underestimates the actual self-shielding effects. Even more, because of resonance self-shielding, the effect of this additional moderation decreases from outer to inner fuel pins. More precise, location-specific DFs can be calculated using the SCALE module MCDancoff and introduced into neutron transport calculations [3]. This possibility was examined quantitatively in this paper. The best-known example of heterogeneous fuel assemblies is the BWR, where non-boiling water between fuel assemblies and in special water channels within assemblies plays the part of the additional moderator to the otherwise low-moderated assembly. Previous studies demonstrated that the application of location-specific DFs estimated using the MCDancoff module has a significant impact on BWR neutron transport calculations [3][4][5]. For example, it was assessed that the reactivity difference for GE14 assembly irradiation simulations using both standard and location-specific DFs was in the range of 800-1100 pcm [3]. Therefore, the location-specific DF is usually applied in the case of BWR fuel assemblies' simulations [6][7][8][9]. Due to high geometry heterogeneity, RBMK assemblies can be treated similar to BWR. Fuel pins inside the assembly are cooled by water, which can be considered as a moderator as well. However, the primary moderator in RBMK reactors is graphite, which is positioned outside the pressure tube. Only the coolant in a fuel pin cell is considered as a moderator in the estimation of standard DFs for RBMK assemblies, while the influence of graphite as the primary moderator is not considered. Thus, the resonance processing with standard DFs is not sufficient for the accurate estimation of MG cross sections. The coupling between neutron transport and fuel burn-up simulations is undeniable and strong, as accurate neutronics data are required to simulate the radionuclide inventory. The results of neutron transport simulations are essential for correct prediction of the uranium depletion, the plutonium production, and the build-up of fission products. The previous study [10] attempted to estimate the influence of additional graphite moderation on the isotopic fuel composition by increasing the lattice pitch (the amount of coolant/moderator as well) from 1.605 cm to 2.5 cm.
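The idea behind a Monte Carlo estimate of a Dancoff-type factor can be illustrated with a deliberately simplified two-dimensional sketch: start neutrons on the surface of one pin, trace straight flight paths, and score the probability of reaching a neighbouring pin without a collision in the moderator. This toy is only an illustration of the concept, not the MCDancoff algorithm; the geometry, emission treatment, and cross-section values below are made up.

```python
import numpy as np

def toy_dancoff_2d(pin_centers, radius, sigma_mod, n=20_000, rng=None):
    """Toy 2D Monte Carlo estimate of a Dancoff-like factor for pin 0.

    Starts points on the surface of pin 0, samples outward flight
    directions, finds the nearest other pin hit by the ray, and scores
    exp(-sigma_mod * d) for the moderator path length d.  Purely
    illustrative: straight infinite paths, isotropic (not cosine-weighted)
    emission, and a purely absorbing moderator.
    """
    rng = np.random.default_rng() if rng is None else rng
    score = 0.0
    for _ in range(n):
        phi = rng.uniform(0, 2 * np.pi)                     # emission point on pin 0
        p = pin_centers[0] + radius * np.array([np.cos(phi), np.sin(phi)])
        theta = rng.uniform(0, 2 * np.pi)                   # flight direction
        u = np.array([np.cos(theta), np.sin(theta)])
        if np.dot(u, p - pin_centers[0]) < 0:
            u = -u                                          # keep outward directions only
        best = None
        for c in pin_centers[1:]:                           # nearest ray-circle intersection
            oc = p - c
            b = np.dot(oc, u)
            disc = b * b - (np.dot(oc, oc) - radius * radius)
            if disc <= 0.0:
                continue
            d = -b - np.sqrt(disc)                          # distance to first surface crossing
            if d > 0.0 and (best is None or d < best):
                best = d
        if best is not None:
            score += np.exp(-sigma_mod * best)
    return score / n

# hypothetical 3-pin arrangement: pitch 1.6 cm, pin radius 0.68 cm,
# moderator macroscopic cross section 0.5 1/cm (illustrative numbers only)
pins = np.array([[0.0, 0.0], [1.6, 0.0], [0.8, 1.386]])
print(toy_dancoff_2d(pins, 0.68, 0.5))
```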
It was found that the differences in the actinide inventory could reach 10-15%. This paper presents the results of RBMK-1500 fuel assembly neutron transport simulations employing two sets of DFs: standard and user-defined. No resonance processing is needed and the cross sections are used directly if the CE calculation mode is selected. Thus, the simulation with the applied CE cross-section data library was performed for the benchmarking of both resonance processing models. In addition, the impact of the updated cross sections from the ENDF/B-VII.1 evaluated data on the neutron transport simulation results was analyzed. The discrepancies of reactivity between simulations are demonstrated and explained. Computational Methodology Based on the knowledge and experience gained by the authors on RBMK, the fuel assembly of the RBMK-1500 reactor was chosen as the object of investigation. Till June 1995, the Ignalina NPP operated on fuel assemblies with 2% U-235 enrichment uranium only. Since the majority of spent nuclear fuel at Ignalina NPP consists of these types of fuel assemblies, the simulations were executed for this fuel type. The impurities of the initial fuel composition for U-234 and U-236 are 0.021% and 0.0018%, respectively. The RBMK-1500 fuel assembly consists of 18 fuel rods arranged in two concentric rings (Figure 1) with the central carrier rod [11]. In the inner ring, there are 6 equally spaced fuel rods, while in the outer ring, there are 12 equally spaced fuel rods. Fuel rods are packed with cylindrical uranium dioxide pellets with an outside diameter of 1.152 cm. The fuel pellets are placed into the clad tube of zirconium alloy, the outside diameter and the wall thickness of which are 1.36 cm and 0.9 mm, respectively. These tubes are pressurized with helium and sealed. The fuel assemblies are located within a rectangular-form graphite block of 25 × 25 cm. The zirconium alloy content used for the cladding and the pressure tube is different. Zr and Nb as cladding materials account for 98.9% and 1%, respectively. Meanwhile, in the pressure tube, they account for 97.4% and 2.5% of the alloy. Other impurities included in the zirconium alloys are Hf-178 (0.04%), Fe (0.018%), Ni (0.018%), and Al (0.018%). SS3304 stainless steel is used as a grid element. In order to maintain the same amount of fissile nuclear material as is prescribed in the design, the theoretical fuel density was reduced. The fuel temperature was set at 1000 K, while the temperatures of coolant, structural, and graphite materials were set at 557 K, 575 K, and 750 K, respectively, in the simulations to represent the average operation conditions. The RBMK-1500 reactor core has a heterogeneous axial void distribution across the fuel assembly, and the average coolant density of 0.43 g/cm³ was considered for the analysis. The SCALE 6.2.3 code package [1] was used to perform the neutron transport calculations. The verification and validation of the code package are based on the experimental investigations of RBMK-1000 fuel [12,13]. The SCALE package provides a framework with 89 computational modules that can be selected according to the desired solution strategy. The KENO-VI sequence was employed for Monte Carlo neutron transport simulations in this study. 36 million neutron histories were used for the reliable estimation of the neutron multiplication factor. A 3D model representing the RBMK assembly placed in the pressure tube channel and surrounded by the graphite block was used in the simulations; its cross section is shown in Figure 1.
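For reference, the assembly parameters stated above can be collected into a single structure of the kind used when preparing model input. The values below are taken directly from the description in the text; this is only a convenience summary, not a SCALE input deck.

```python
# RBMK-1500 fuel assembly parameters as given in the text
# (a plain summary for input preparation; not a SCALE input deck).
RBMK1500_ASSEMBLY = {
    "fuel": {
        "material": "UO2",
        "enrichment_U235_pct": 2.0,
        "impurities_pct": {"U234": 0.021, "U236": 0.0018},
        "pellet_outer_diameter_cm": 1.152,
        "temperature_K": 1000.0,
    },
    "cladding": {
        "material": "Zr(98.9%)-Nb(1%) alloy",
        "outer_diameter_cm": 1.36,
        "wall_thickness_mm": 0.9,
        "fill_gas": "He",
    },
    "layout": {
        "rods_total": 18,
        "inner_ring_rods": 6,
        "outer_ring_rods": 12,
        "central_carrier_rod": True,
        "graphite_block_cm": (25.0, 25.0),
    },
    "pressure_tube": {"material": "Zr(97.4%)-Nb(2.5%) alloy"},
    "spacer_grid": "SS3304 stainless steel",
    "operating_conditions": {
        "coolant_temperature_K": 557.0,
        "structure_temperature_K": 575.0,
        "graphite_temperature_K": 750.0,
        "average_coolant_density_g_cm3": 0.43,
    },
}
```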
The first set of simulations was performed with the application of the standard DF. DFs were calculated automatically using the SCALE lattice treatment within the TRITON sequence. The automatic lattice treatment assumes an infinite lattice that consists of identical fuel pin cells segregated into regions for fuel, gap, cladding, and moderator. The region of the graphite block outside the fuel assembly and the fuel pin cells was not included in the estimation of spatial resonance self-shielding effects. A triangular pitch with a size of 1.605 cm was used for the resonance self-shielding calculations and the preparation of problem-specific multigroup cross sections. It was estimated that the standard DFs, which are identical for all fuel pins in the assembly, are equal to 0.5685. The second set of simulations was performed with location-specific DFs, which were manually inserted into the input file of TRITON. The MCDancoff module within the SCALE code was used to calculate DFs for specific locations in the fuel assembly. The calculation within the MCDancoff module involves following the paths of neutrons through all regions and materials of the system until they are absorbed or exit the system [1]. Thus, all details of the geometry and materials of the heterogeneous system were considered for the estimation of spatial resonance self-shielding effects. The same geometry setup as in the KENO-VI sequence was used in the MCDancoff simulation. The MCDancoff module uses a one-group xn01 library to calculate DFs. 100 generations with 300 neutrons per generation were used to produce the DFs. As shown in Figure 2, the determined DFs could be split into two groups, since similar values were calculated for fuel pins in the inner and outer rings. The differences in DF values between fuel pins of the same ring are related to the statistical error of the Monte Carlo method. Hence, two averaged DF values were determined and used for the following neutron transport simulations: 0.452 for the inner ring and 0.289 for the outer ring. DFs for fuel pins in the inner ring are larger, since the effect of additional moderation from the graphite block on the inner ring fuel pins is shielded by the fuel pins of the outer ring. The user-defined DFs are smaller than the standard DFs for the inner and outer rings by 21% and 49%, respectively. This means that the actual effect of spatial resonance self-shielding is larger than the one estimated using the automatic lattice cell treatment. The SCALE lattice treatment increased the triangular pitch of the infinite lattice from 1.605 cm to 1.775 cm (inner ring) and 2.11 cm (outer ring) to match the values of the user-defined DFs. The third simulation set was performed using the CE data libraries. Since resonance processing is not needed if a CE cross-section library is specified, the simulations with a specified CE mode can be used as a benchmark for the evaluation of the resonance processing models in the cases where the MG mode was specified.
Discussion of Results
The neutron multiplication factor is one of the essential neutronic characteristics derived from neutron transport simulations. The discrepancies in the values of the multiplication factor estimated using the SCALE code for the transport simulation of the RBMK-1500 fuel assembly, considering the different options for cross-section processing and the available cross-section data libraries, are presented in this section. The reasons behind these discrepancies are disclosed as well.
The Influence of User-Defined DFs.
As was mentioned before, the use of multigroup cross-section libraries requires the resonance processing of cross-section data. An infinite lattice, which consists of identical fuel pin cells, is considered for the estimation of standard DF (sDF) values within the SCALE code during the automatic lattice treatment, without any user interference. However, the user also has the possibility to manually define DF (uDF) values, which are estimated considering the real geometry of the fuel cell. For both cases, the simulations were performed and the neutron multiplication factors were compared. The neutron transport simulations were made using both the v7-238 and v7-252 multigroup cross-section data libraries available in the SCALE code; the results are depicted in Figure 3. It is seen that the multiplication factor determined using the sDF has 0.5% and 0.36% (455 pcm and 317 pcm) higher values for the v7-238 and v7-252 data libraries, respectively, in comparison with the simulations using the uDF. It is evident that the change of the multiplication factor is reflected by the redistribution of neutron absorption in the separate regions. Representative results of the fractional absorption data for all simulation cases, including the ones with CE libraries, are shown in Figure 4. It is seen that the majority of neutrons are absorbed by the fuel pins (23% in the inner ring, 53% in the outer ring). Other neutrons (20%) are absorbed in the coolant and the graphite block, and further neutrons are lost due to parasitic absorption in other structural materials (4%). Slight differences between the fractional absorption data in some geometry regions (Figure 4) should explain the discrepancies in the calculated multiplication factors. The decomposition analysis of the neutron multiplication factor, an extremely useful instrument and quantitative testing measure [14], was employed in this study with the objective of explaining the discrepancies in the simulation results. The reactivity differences between the simulation cases were decomposed into separate components, which represent the contributions of the separate regions to the total reactivity difference. The total reactivity change Δρ is expressed as a sum of separate components Δρ_i (one common form of this decomposition is sketched below). Here, indexes x and y stand for the different simulation cases, while k_inf and a_i are the infinite neutron multiplication factor and the fractional absorption for component i, respectively. For more details, see [15]. The results of the decomposition analysis examining the reactivity difference (Δρ_sDF-uDF) between the sDF and uDF cases and employing the v7-238 and v7-252 MG libraries are shown in Figure 5. It is seen that Δρ_sDF-uDF is mainly influenced by the fuel regions, clearly dominated by the fuel pins located in the outer ring. The outer fuel pins are affected more significantly when the uDF is used, since the additional moderation (graphite block) induces greater neutronic changes there in comparison to the effect in the inner fuel pins. The positive Δρ_sDF-uDF in the fuel regions is related to the slight increase in fractional absorption. Neutron absorption increases from 76.74% for the sDF case to 76.84% for the uDF case if the v7-238 data library is applied, and from 76.19% to 76.26% with the v7-252 data library (see Figure 4). The fractional absorption spectra (Figure 6(a)) were used to estimate Δρ_sDF-uDF in each energy group (Figure 6(b)) in order to explain the total Δρ_sDF-uDF in the outer fuel pins.
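One common way to write this decomposition, assuming the reactivity is defined as ρ = 1 − 1/k_inf and that the fractional absorptions a_i sum to one over all components, is the following sketch; it is a reconstruction for illustration and not necessarily the exact expression used in [15]:

\[
\Delta\rho_{x-y} \;=\; \rho_x - \rho_y \;=\; \frac{1}{k_{\mathrm{inf}}^{\,y}} - \frac{1}{k_{\mathrm{inf}}^{\,x}} \;=\; \sum_i \Delta\rho_i ,
\qquad
\Delta\rho_i \;=\; \frac{a_i^{\,y}}{k_{\mathrm{inf}}^{\,y}} - \frac{a_i^{\,x}}{k_{\mathrm{inf}}^{\,x}} .
\]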
It is seen that although most neutrons are absorbed in the thermal and epithermal energy range (0.01-0.5 eV), the differences in reactivity are mainly influenced by the change in fractional absorption in the resonance and fast energy range (6 eV-13 MeV). The only difference between the sDF and uDF cases was the DF values used for the resonance processing of the cross sections. The lower uDF values define a lower resonance escape probability, as more neutrons are absorbed in the fuel pins before they slow down to thermal energies (reflected in Figures 5 and 6(b)). Thus, the consideration of the fuel pin location in the RBMK-1500 assembly and of both geometry and material details for the uDF estimation, as well as the subsequent application of a more detailed resonance processing model with uDF values, leads to higher resonance absorption in the fuel and a lower neutron multiplication factor.
Transport Simulations: CE versus MG.
If the CE cross-section data library is specified for the neutron transport simulations, then the resonance processing of cross-section data is not needed, and such a CE simulation can be used to evaluate the accuracy of the resonance processing models used in MG cross-section library applications. The comparison of the neutron multiplication factor for the CE and uDF cases using both ENDF/B-VII.0 and ENDF/B-VII.1 data was performed, and the results are depicted in Figure 7. An almost ideal match was obtained between the CE and uDF cases. Though the total difference is relatively low (60 pcm) in the case of the applied ENDF/B-VII.0 data library, it increases 4 times (to 236 pcm) when the simulations were run with the ENDF/B-VII.1 data library. The distribution of Δρ_CE-uDF over the whole energy range in the main contributing regions, that is, the outer fuel pins and the pressure tube, is depicted in Figure 9. Only the simulation results obtained using the ENDF/B-VII.1 data library are presented, as the fractional absorption data were available only for the CE cases with a structure of 252 energy groups; therefore, MG libraries of the same structure were used for the simulation of the uDF cases. The subsequently performed decomposition analysis of Δρ_CE-uDF into separate energy groups revealed various changes in the fractional absorptions. The main changes in Δρ_CE-uDF in the pressure tube region (Figure 9(a)) occur at resonance peaks. The Hf-178 resonance peak at ∼8 eV (−49 pcm) and the Zr-90 resonance peaks at ∼188 eV (−16 pcm) and ∼305 eV (−24 pcm) are very clear and contribute most to the negative Δρ_CE-uDF. A negative Δρ_CE-uDF means that the fractional absorption for the uDF case is higher compared to the CE case. Thus, the post-processed resonance cross sections of the pressure tube in the MG cases are clearly overpredicted, and, for example, at the resonance peaks of Hf-178 and Zr-90 the level of such overprediction exceeds 25-40% (Figure 10). Obviously, the same overprediction of the resonance cross sections occurs for the cladding material too, because the cladding composition is similar to the pressure tube composition (Zr alloy with a different Nb concentration). However, the estimated Δρ_CE-uDF for the cladding regions is smaller (Figure 8), since the total mass and the effect of the cladding on the neutron transport processes are relatively small compared to the pressure tube. The results of the analysis showed that most changes of Δρ_CE-uDF in the fuel pin region occur at the same resonance peaks and in the same energy ranges as for the pressure tube region (Figure 9(b)).
Thus, the different neutron absorption rate in the pressure tube and cladding regions influences the perturbations of the neutron flux in the fuel pin regions. Although the change of the neutron flux in the thermal neutron range (<1 eV) causes a significant change of Δρ_CE-uDF in the fuel regions (−34 pcm at the peak at ∼0.2 eV for the outer fuel pins, Figure 9(b)) due to larger absorption cross sections, Δρ_CE-uDF in the resonance and fast energy range (>1 eV) is positive in the fuel regions, which compensates for and exceeds the negative Δρ_CE-uDF obtained in the pressure tube region. There are Δρ_CE-uDF contributions in the neutron energy range of 20-120 eV which cannot be explained by changes in the neutron flux (Figure 9(b)). These differences should be related only to inaccuracies in the resonance processing of the cross sections. As can be seen in Figure 5, the more detailed resonance processing model with the uDF gives a larger Δρ_CE-uDF in the case of the 238-group data library in comparison to the case with the 252-group data library. The larger Δρ_CE-uDF can be explained by the remaining discrepancies in Δρ_CE-uDF in the mentioned 20-120 eV energy range (see Figure 6(b)), while the 252-group data library gives a significantly lower Δρ_CE-uDF in this energy range. Thus, it can be argued that the resonance processing of the cross sections is more accurate with the application of the 238-group data library for transport simulations, as such an approach yields reactivity values closer to the CE reference values.
3.3. The Impact of the Updated Cross-Section Data Library.
All cross-section libraries available within the SCALE code were processed from the ENDF/B-VII.0 and ENDF/B-VII.1 evaluated data files. The comparison of the neutron multiplication factor calculated using the v7-238, CE v7.0, v7-252, and CE v7.1 libraries is depicted in Figure 11. It is evident that the values of the neutron multiplication factor for the cases with libraries compiled from ENDF/B-VII.1 data (CE and MG) are notably lower (by around 0.5-0.9%) compared to the ENDF/B-VII.0 data. The performed decomposition analysis allowed us to explain the differences between the ENDF/B data library versions. It should be noted that the differences can, of course, only be related to the updated cross-section data, as the only difference between the simulations is the version of the nuclear data. Therefore, only the difference for the cases with 252 energy groups (ENDF/B-VII.1 data) is demonstrated and analyzed, focusing on the CE case. The results of the decomposition analysis indicated that the differences in reactivity are mainly influenced by the change of the fractional absorption in the graphite region (Figure 12(a)). All other differences are associated with variations of the neutron fluxes due to the changed neutron absorption rate in the graphite, where the updated cross-section data were employed. Although only an insignificant part of the neutrons is absorbed in the graphite, the relative change in the fractional absorption value is the most considerable: it increases from 3.24% for the v7.0 version to 3.67% for the v7.1 version (see Figure 4). The decomposition analysis of Δρ_CE7.0-CE7.1 into separate energy groups revealed a significant increase of neutron absorption in the thermal and epithermal energy range (0.025-0.25 eV) for the updated version of the library (Figure 12(b)).
Thus, the lower values of the multiplication factor for the simulations with the v7.1 library are related to the higher absorption cross-section data in this energy range in comparison with the v7.0 data. The graphite neutron absorption cross sections used in the simulations and generated from the ENDF/B-VII.0 and ENDF/B-VII.1 CE libraries are presented in Figure 13. The data for temperatures of 700 K and 800 K are presented, as a temperature of 750 K was assumed for the graphite in the simulations. In previous studies [15,16], systematic discrepancies in the reactivity estimations were determined in cases where the ENDF/B-VII.0 data library was applied to analyze the graphite region considered as a moderator. Such a deviation in the energy range relevant to RBMK, that is, the thermal-epithermal range, was determined in this study as well (Figure 13). Improved compliance with the measured data when using the latest ENDF cross-section library release, namely, the ENDF/B-VII.1 library, has also been described in [16].
Conclusions
The neutron transport simulations were performed for the RBMK-1500 fuel assembly in order to investigate the available options for the resonance processing models and to evaluate the applicability of different cross-section data libraries with the SCALE 6.2.3 code. The neutron transport simulations included the application of both multigroup (MG) and pointwise CE libraries from the ENDF/B-VII.0 and the updated ENDF/B-VII.1 evaluated data files. The results of the investigation of a more detailed resonance processing model showed that the application of user-defined DFs (uDFs) decreases the reactivity of the fuel assembly by 455 pcm and 317 pcm for the v7-238 and v7-252 data libraries, respectively. As expected, the results of the performed decomposition analysis indicated that the reactivity changes are influenced mainly by the increase of neutron absorption in the resonance energy range, due to the reduction of the DF, which directly decreases the resonance escape probability. The comparison of the uDF results with the reference simulations using CE data libraries showed that the determined reactivity was underpredicted by 60 and 226 pcm for the v7-238 and v7-252 data libraries, respectively. In the case of MG data application, the subsequent decomposition analysis disclosed that the discrepancies originated from the overprediction of the pressure tube cross sections. Some additional discrepancies of neutron absorption in the fuel region were found for the v7-252 data library in the energy range of 20-120 eV. The performed comparison of the results showed that the fuel resonance cross sections are not processed accurately enough in this energy range. The resonance processing with the application of the v7-238 data library delivered more accurate results, considering the smaller deviation from the reference simulation. The discrepancies between the libraries based on the ENDF/B-VII.0 and ENDF/B-VII.1 evaluated data were indicated by analyzing the change of the neutron multiplication factor. It was shown that the discrepancies are related to the updated graphite cross sections. The neutron capture rate in the graphite block increases and the neutron multiplication factor decreases with the application of libraries based on the ENDF/B-VII.1 evaluated data. Thus, the neutron multiplication factor is overpredicted when applying the v7-238 data library.
On the other hand, the use of the v7-252 data library generates a higher deviation from the reference CE simulation due to the resonance processing. Finally, the MG libraries for the neutron transport simulations of the RBMK-1500 fuel assembly must be selected and used carefully. The CE library based on the ENDF/B-VII.1 evaluated data proves to be the best option, as it gives the most accurate results for the analyzed system reactivity. The neutron transport simulations, being an integral part of irradiation calculations, remain an issue to be considered during the estimation of spent fuel characteristics. Further studies are focused on investigating the effect of a more
Data Availability
The data are available upon request to the authors.
6,204.8
2021-03-10T00:00:00.000
[ "Computer Science" ]
Application of a self-enhancing classification method to electromyography pattern recognition for multifunctional prosthesis control
Background: The nonstationary property of electromyography (EMG) signals usually makes pattern recognition (PR) based methods ineffective after some time in the practical application of multifunctional prostheses. The conventional EMG PR, which is accomplished in two separate steps, training and testing, ignores the mismatch between training and testing conditions and often discards the useful information in the testing dataset. Method: This paper presents a novel self-enhancing approach to improve the classification performance of electromyography (EMG) pattern recognition (PR). The proposed self-enhancing method incorporates knowledge beyond the training condition into the classifiers from the testing data. The widely used linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA) are extended to self-enhancing LDA (SELDA) and self-enhancing QDA (SEQDA) by continuously updating their model parameters such as the class mean vectors, the class covariances, and the pooled covariance. Autoregressive (AR) and Fourier-derived cepstral (FC) features are adopted. Experimental data from two different protocols are used to evaluate the performance of the proposed methods in short-term and long-term applications, respectively. Results: In the short-term EMG protocol, based on AR and FC, the recognition accuracy of SEQDA and SELDA is 2.2% and 1.6% higher than that of the conventional QDA and LDA, respectively. The mean results of SEQDA(C) and SEQDA(M) are improved by 2.2% and 0.75% for AR, and 1.99% and 1.1% for FC, respectively, when compared to QDA. The mean results of SELDA(C) and SELDA(M) are improved by 0.48% and 1.55% for AR, and 0.67% and 1.22% for FC, when compared to LDA. In the long-term EMG protocol, the mean result of SEQDA is 3.15% better than that of QDA. Conclusion: The experimental results show that the self-enhancing classifiers significantly outperform the original versions using both AR and FC coefficient feature sets. The performance of SEQDA is superior to that of SELDA. In addition, a preliminary study on long-term EMG data is conducted to verify the performance of SEQDA.
Introduction
The surface electromyogram (EMG) signal is a noninvasive measurement and contains rich information associated with muscle electrical activities. It is considered to be an important input for the control of electrically powered prostheses, referred to as myoelectric control [1]. Conventional myoelectric control systems enable amputees to operate a single device such as a hand or a wrist [2], simply based on amplitude decoding of the EMG signal recorded from separable forearm muscles. Early myoelectric controllers could only operate in an on-off mode to control electrically powered hands with open-close functions [3]. Controlling a multi-degree prosthetic hand requires more sophisticated techniques for decoding different muscle states from the recorded EMG [4]. To increase the number of motion classes, much attention has been drawn to a pattern-recognition (PR)-based approach to the myoelectric control of multifunctional prostheses in the last two decades.
Unlike the conventional EMG decoding method that assigns each function to a specific control muscle, the PR-based approach extracts useful information from several EMG channels to form a feature vector and maps it to a motion class, maximizing the separability between motions. Several types of EMG PR systems have been introduced to fulfill multifunctional prosthesis control [2,5-10]. Feature extraction and classifier design are the major components of the PR-based control strategy. The performance of EMG PR is mainly evaluated by the classification accuracy. Various EMG feature sets have been employed to extract the most discriminant information for improving the classification accuracy. The feature extraction methods include the autoregressive (AR) model [11], the multivariate AR model [12], time-domain statistics [2,13], root mean square (RMS) [14], higher-order statistics [15], cepstral coefficients [16], time-frequency representations [17,18], and EMG preprocessing methods, e.g., individual principal component analysis (iPCA) [8]. To achieve a high classification accuracy, researchers have extensively explored different types of classifiers, such as the MLP [2], LDA [7,19], the Gaussian mixture model [9], the hidden Markov model [6], the support vector machine [10], fuzzy logic [20], the K-nearest neighbor classifier [21], and unsupervised clustering [22]. In addition, due to the large number of EMG channels [23] and the high dimensionality of the feature set, feature selection and feature projection methods such as sequential forward selection [8,23], PCA [17], and uncorrelated linear discriminant analysis [24] are used to transform the EMG features into a lower-dimensional subspace. Usually, a successful EMG PR classifier is built in two separate parts: (1) a training step that learns the classification model from the training data and (2) a testing step that simulates the situation in real-world application and evaluates the classification performance using the testing data. However, the training EMG data are normally acquired at one time during a short period, and the contained information is limited, so they cannot be representative of the data over the whole temporal span of the application period, including the testing step. In real-world application, if an EMG classifier is trained well for a specific amputee, the amputee can control the prosthetic hand well at the early stage, but the performance degrades as time moves on. This phenomenon is very common, and it is mainly caused by the nonstationary property of EMG signals. The possible EMG variation is attributed to factors such as electrode condition, muscle fatigue, sweating, and so on [25-27]. This is a major problem hindering the commercialization of advanced myoelectrically controlled prosthetic hands developed in laboratory environments. Therefore, we plan to explore the testing stage further, since it simulates the real application situation, and expect to develop a robust or adaptive classifier. In previous research, the training and testing steps are two independent processes. When there exists a mismatch between training and testing conditions, the performance of the EMG PR might deteriorate, i.e., the classification accuracy decreases. Enlarging the EMG recordings in the training step to contain more information may be a possible solution, but it is a time-consuming task and places an additional burden on the users.
So we are inspired to retrain the classifier with the testing data in addition to the training data, which perhaps can alleviate the mismatch problem. In previous research, the parameters of the original classifiers, e.g., the mean vectors and the pooled covariance in LDA, are estimated from the training set only. We believe that using more of the available data to train classifiers can lead to more accurate and stable parameter estimation that is closer to the true sampling distribution. Exploiting the information in the testing dataset is a possible way to enlarge the data pool for training and further increase the recognition accuracy of the classifiers. In order to guarantee stable performance of continuous EMG PR in view of the above remarks, the idea of self-enhancing classifiers is presented in this paper. As far as we know, few previous works in myoelectric pattern recognition focus on classifier adaptation, especially on designing an adaptation procedure for continuous classification. In this paper, we extend the LDA and quadratic discriminant analysis (QDA) classifiers to self-enhancing versions, since LDA is a popular classifier used widely in many previous studies. It is easy to use, and its classification performance is not inferior to that of other, more complicated classifiers [28]. The remainder of the paper is organized as follows. Section 'Method' explains the methods applied in the EMG signal classification process, including data acquisition, feature extraction, and the proposed self-enhancing LDA and QDA (SELDA and SEQDA) classifiers. Section 'Experiment results' provides the experimental results. Section 'Discussion' is the discussion. Finally, conclusions are presented in Section 'Conclusion'.
Method
The traditional EMG PR process generally contains segmentation, feature extraction, and classification. The decision streams are finally generated for the motion controller. A self-enhancing mechanism is added to the traditional process in this work, and Figure 1 illustrates the flowchart. The key components are expounded in the following parts.
Segmentation
The N-sample analysis window, which is used to estimate the features, segments the raw EMG signal and slides with an m-sample window increment. The feature extraction and classification procedures are completed within the window-increment intervals. The continuous classifier sequentially produces a stream of prediction decisions, one for each analysis window. The self-enhancing classifier is initialized by the training set and then updates its model using the classified continuous EMG data. The self-enhancing step works as a feedback process to the classifier when assessing it in the testing step or applying it in a real-world application. To fit the manner of continuous EMG PR, the self-enhancing algorithm adopts an incremental mode (updating window by window). The parameters of the classifiers are continuously adjusted to each newly arriving testing datum, and the datum is then discarded after the classifier update is completed. The incremental self-enhancing method therefore has the advantage of small storage requirements. In order to evaluate the proposed methods thoroughly, two protocols are designed for EMG data recording. One is the conventional case, in which the testing data are collected immediately after the training data measurement. In the other case, the testing data are collected about 7 hours after the training data measurement.
Generally, the EMG data used in previous research are collected during a short period (2∼3 hours), i.e., the data are short-term. Towards practical application in the future, long-term EMG data are more meaningful.
EMG feature extraction
The surface EMG signal detected during voluntary contraction resembles stochastic noise due to the variability of the firing rate and recruiting rate of the motor units (MUs). Although EMG signal recording from different motions is a non-stationary process, it has been demonstrated that the signal can be assumed to be wide-sense stationary within 0.5 s analysis windows if the contractions are isotonic and isometric [29]. For continuous EMG PR, there is no advantage in using time-scale methods, such as the wavelet and wavelet packet transforms [7], to extract EMG features from steady-state signals. Time and frequency analyses are selected to extract the useful features of the EMG signal in terms of classification accuracy. Previous studies have shown that the feature set AR + RMS, which respectively describes the spectral and amplitude information of the EMG, presents better classification performance than other features [10,28]. The cepstral coefficient is an efficient feature in speech recognition. The AR-derived cepstral coefficients have been applied to the EMG PR task and present good classification performance [16]. Another way of deriving cepstrum coefficients is based on the Fourier spectrum [30]. The discrete cosine transform (DCT) [31] is used for converting the Fourier spectrum into the meaningful cepstral feature, since it can decorrelate the feature and compress the spectral information. The Fourier-derived cepstral (FC) feature is well studied in [32], and it shows better performance compared with other EMG features. The FC coefficients are obtained in two steps: 1) calculate the energy spectrum using the discrete Fourier transform (FT); 2) calculate the FC coefficients directly from the nonlinear magnitude of the Fourier spectrum using the DCT, where the magnitude of the Fourier coefficients is used and N is the number of FC coefficients. In addition, it should be noted that the computation of the FC feature extraction mainly depends on the fast FT (FFT) and DCT algorithms, and it is computationally efficient. Since AR and FC have shown superior performance in previous studies, they are selected as the EMG feature sets to evaluate the performance of the self-enhancing classifiers proposed in this paper. More details about AR and FC can be found in [11,32], respectively.
Classifier design
Our improvement is based on two conventional, linear and nonlinear, classification methods: LDA and QDA. The LDA and QDA classifiers are Gaussian maximum-likelihood classification methods based on Bayes' rule. LDA has been demonstrated to be suitable for EMG PR. In addition, the LDA and QDA classifiers have no manually specified hyperparameters that significantly affect the generalization performance, thus eliminating trial-and-error approaches such as cross-validation, and the whole classifiers are determined by the training set. Given an input feature vector x for the classifiers, the Bayes decision rule shows that the minimum error decision is based on the posterior probability of class membership p(ω_i | x) = p(x | ω_i) p(ω_i) / p(x) [33], where p(x | ω_i) is the class-conditional probability density function (PDF), p(ω_i) is the prior probability, p(x) is the unconditional PDF, and ω_i denotes the ith class. The common assumption is that all class-conditional PDFs are normal distributions with means μ_i and covariance matrices Σ_i.
The final decision rule can make use of a discriminant function derived from the class-conditional Gaussian densities, where the unbiased estimates of μ_i and Σ_i are computed from the training patterns of the ith class. It is shown that the discriminant function constructs pairwise linear decision surfaces if all covariances Σ_i are replaced by the same pooled within-class scatter matrix Σ_W, where n is the total number of EMG patterns; this is called the LDA classifier. If the Σ_i are assumed to be different, the decision boundaries are hyperquadric surfaces, and this is the QDA classifier. Under a sufficient-data condition, QDA is superior to LDA, since the class-specific covariance estimates accurately characterize the second-order information in the classification model and provide nonlinear separability between classes. Otherwise, LDA, using the averaged pooled covariance, controls fewer parameters and has better performance under a small-data condition.
Self-enhancing method for classifiers
We extend the LDA and QDA classifiers to self-enhancing versions (SELDA and SEQDA) using additional knowledge from the classified data in the testing set. The parameters of the original classifiers are adjusted by updating the mean vector and covariance matrix. Suppose that there are N patterns used for training the classifier, and the newly arriving testing EMG feature patterns are acquired as x_N+1, x_N+2, x_N+3, . . .. To illustrate the proposed self-enhancing procedure, we take the updating with the first testing pattern x_N+1 as an example. Let the pattern x_N+1 be denoted z and be labeled as the kth class by the original classifier; there are nc_j patterns for each class before updating, where j = 1, 2, . . . , C. After updating with the z pattern, the number of patterns in the kth class becomes nc'_k = nc_k + 1. The updated mean vector for the kth class is μ'_k = (nc_k μ_k + z) / (nc_k + 1), and the scatter matrix S'_k for the kth class is related to S_k accordingly. The parameters of the other classes are unchanged by the z pattern updating. For the SEQDA classifier, the class covariance matrix Σ_k is then updated, and for the SELDA classifier, the pooled covariance matrix Σ_W is updated. The entire procedure of the self-enhancing classifier works in two steps. First, the parameters of the original classifier are initialized by the training set. Second, the trained classifier is evaluated by the testing set. The continuous classifier receives the EMG feature data and predicts the class labels for them. The proposed incremental self-enhancing method updates the parameters of the discriminant classifier immediately, by the above update equations (9), (11), and (12), when the current EMG feature is classified to one of the possible motions. Therefore, the information of the testing data is continuously incorporated into the classification model. This sequential parameter updating is suitable for continuous EMG PR in real-world application. In addition, the self-enhancing proceeds automatically through the testing stage without manual operations.
EMG data acquisition
The experiment included ten classes of hand and wrist motions: pronation, supination, hand closing, hand opening, radial flexion, ulnar flexion, flexion, extension, palmar grasp, and cylinder grasp. We collected the EMG data using a portable EMG system (ME6000, Mega Electronics Ltd, Kuopio, Finland) with a band-pass filter of bandwidth 8-500 Hz and a 14-bit A/D converter; the CMRR is typically 110 dB. The 1000 Hz sampling frequency was satisfactory for obtaining sufficient information from the surface EMG signal, as the most relevant information is contained in the range of 20-500 Hz.
Two surface Ag/AgCl disc electrodes of one bipolar-electrode pair were placed 2 cm apart, after first rubbing the skin with alcohol. Four channels of surface EMG signals were used for the data acquisition, placed on the palmaris longus, flexor carpi ulnaris, flexor digitorum superficialis, and extensor digitorum (shown in Figure 2). All recruited subjects signed informed consent forms. The procedures conformed to the Declaration of Helsinki. Ethical approval was obtained from the Bioethics Committee, School of Biomedical Engineering, Shanghai Jiao Tong University. The EMG measurement was designed in two protocols, as shown in Figure 3. In the first protocol, the testing data and training data are collected at one time, i.e., there is no break between the testing data measurement and the training data measurement. This is the general case, as in most previous research. In the second protocol, the time span is about 9∼11 hours. It is closer to the real-world application situation, and it is the first attempt of this kind in this area. In the first protocol, ten able-bodied subjects (seven males and three females) participated, with ages ranging from 22 to 28. Before the data collection, instruction photographs of the hand and wrist motions were shown to the participants. They could practice the desired movements for a moment in order to become familiar with the experimental procedure. During the experiment, all participants naturally extended their arms toward the ground and performed each motion with natural force, as in their daily life (with no need to use large force on purpose). In each cycle, the participants were instructed to sequentially perform the ten motion classes. Each contraction was held for 5 s and separated by a 5 s resting interval. The participants could relax every two cycles, and no fatigue was reported. The experiment collected twenty cycles of the ten motions for each participant. The whole data acquisition experiment lasted about 2 to 3 hours for each participant. In the EMG PR evaluation, the first 6 cycles were assigned as the training set and the next 14 cycles as the testing set. In the second protocol, four able-bodied healthy subjects (all males) participated, with ages ranging from 22 to 25. The EMG data were acquired at two separate times for each subject in one day. One time was in the morning, and the other was in the evening. The time interval was about 6∼7 hours. During the interval, the EMG electrodes were not removed, and the subjects could still do their daily activities as usual. The rest of the EMG measurement procedure is the same as in the first protocol. For each subject, 35 cycles of measurement were conducted (15 cycles in the morning and 20 cycles in the evening), and each trial contained five cycles. The data of the first 5 cycles are used for training, while the data of the remaining 30 cycles are used for testing.
Experiment results
Novel classifiers with self-enhancing are proposed, while the available feature extraction methods are not improved in this work. To evaluate the performance of the self-enhancing classifiers, AR and FC feature sets are prepared, where the 6th-order AR coefficients with the RMS value of each channel form the AR feature, and the first seven FC coefficients of each channel construct the FC feature. The two feature sets are 24 dimensional vectors. For EMG feature extraction, the data from a 200 ms analysis window are used to estimate the feature, with the analysis window incremented by 25 ms.
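As an illustration of the processing chain just described (sliding analysis windows, per-channel FC feature extraction, and the incremental self-enhancing update), a minimal sketch in Python is given below. It is not the authors' original implementation: all variable and function names are hypothetical, a log compression is assumed for the "nonlinear magnitude" of the spectrum, and a standard rank-one scatter update stands in for equations (9), (11), and (12).

import numpy as np
from scipy.fft import dct

def sliding_windows(emg, n_window=200, m_increment=25):
    """Yield successive N-sample analysis windows (samples x channels),
    advanced by an m-sample increment, as in the continuous PR scheme."""
    start = 0
    while start + n_window <= emg.shape[0]:
        yield emg[start:start + n_window, :]
        start += m_increment

def fc_features(window, n_coeffs=7):
    """Fourier-derived cepstral (FC) coefficients per channel:
    FFT magnitude spectrum -> log compression (assumed) -> DCT,
    keeping the first n_coeffs coefficients of each channel."""
    feats = []
    for ch in range(window.shape[1]):
        spectrum = np.abs(np.fft.rfft(window[:, ch]))
        log_spectrum = np.log(spectrum + 1e-12)          # avoid log(0)
        feats.append(dct(log_spectrum, type=2, norm='ortho')[:n_coeffs])
    return np.concatenate(feats)

class SelfEnhancingGaussian:
    """Sketch of SELDA/SEQDA: Gaussian discriminant scoring plus an
    incremental update of the class mean and scatter after each decision."""
    def __init__(self, means, scatters, counts, pooled=True):
        self.mu = [m.astype(float).copy() for m in means]     # class mean vectors
        self.S = [s.astype(float).copy() for s in scatters]   # class scatter matrices
        self.nc = list(counts)                                 # patterns per class
        self.pooled = pooled                                   # True: SELDA, False: SEQDA

    def _cov(self, k):
        if self.pooled:                                        # pooled within-class covariance
            return sum(self.S) / (sum(self.nc) - len(self.nc))
        return self.S[k] / (self.nc[k] - 1)                    # class-specific covariance

    def predict(self, x):
        scores = []
        for k in range(len(self.mu)):
            cov = self._cov(k)
            d = x - self.mu[k]
            maha = d @ np.linalg.solve(cov, d)
            logdet = 0.0 if self.pooled else np.linalg.slogdet(cov)[1]
            scores.append(-0.5 * (maha + logdet))
        return int(np.argmax(scores))

    def update(self, z, k):
        """Rank-one update of class k with the newly classified pattern z."""
        n = self.nc[k]
        d = z - self.mu[k]
        self.mu[k] = (n * self.mu[k] + z) / (n + 1)
        self.S[k] = self.S[k] + (n / (n + 1)) * np.outer(d, d)
        self.nc[k] = n + 1

# Hypothetical continuous use on a 4-channel, 1 kHz recording:
# clf = SelfEnhancingGaussian(means, scatters, counts, pooled=False)  # SEQDA
# for w in sliding_windows(emg_testing):
#     x = fc_features(w)
#     label = clf.predict(x)
#     clf.update(x, label)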
The traditional classifiers (LDA and QDA) and the proposed classifiers (SELDA and SEQDA) are applied respectively, and their performance is compared. Please note that the results in the subsections 'Comparison of self-enhancing methods with the traditional classifiers', 'Effect of mean vector and covariance updating on the classification performance', and 'Changes of recognition accuracy and classifier parameters across different testing trials' are obtained using the EMG data from the first protocol, while the subsection 'Evaluation on long-term EMG data' shows the results using the data from the second protocol.
Comparison of self-enhancing methods with the traditional classifiers
We compare the SELDA and SEQDA classifiers with their original versions using both AR and FC feature sets. The parameters of SELDA and SEQDA, such as the class mean vectors, class covariances, and pooled covariances, are updated using the testing data. The LDA and QDA classifiers keep the original model learned from the training set. Table 1 lists the participant-specific and mean classification accuracies of the different classifiers. We have also studied the mean RA results of the individual motions. For prosthesis control, the reliability of the system requires high accuracy not only in the mean RA rate but also in the RA of each motion. Poor recognition of certain specific motions would be a hazard to the safe operation of the prostheses. It is found that the self-enhancing method raises the RA results for most motions. For the SEQDA + FC method, the RAs of all motions are above 93%.
Effect of mean vector and covariance updating on the classification performance
The self-enhancing mechanism is realized by two types of updating, of the class mean vectors and of the class (or pooled) covariances, which respectively characterize the first-order and second-order information in the LDA and QDA classifiers. This experiment aims to evaluate how these parameters impact the RA results. SELDA(M) or SEQDA(M) and SELDA(C) or SEQDA(C) denote the mean vector updating and the covariance updating, respectively. Table 2 lists the participant-specific and mean classification accuracies of the different classifiers. It shows that each parameter updating has a positive effect on the classification performance. The mean results of SEQDA(C) and SEQDA(M) are improved by 2.2% and 0.75% for AR, and by 1.99% and 1.1% for FC, respectively, when compared to QDA.
Changes of recognition accuracy and classifier parameters across different testing trials
To compare the recognition performance of the self-enhancing and original classifiers across the testing stage, we plot Figure 4, displaying the mean RA results for each testing cycle, where the ith mean RA averages the classification results over the past i testing cycles, and the final result is the overall mean RA. These plots show that the RA rates of the classifiers change over time (testing cycles), and the final RA rates of the original classifiers are lower than their preceding rates. Figure 5 presents the RA performance based on SEQDA and SELDA for the ten motion classes across the testing cycles. We have investigated the changes of some classifier parameters across different testing cycles. Under the assumption of data with a Gaussian distribution, the class mean vectors μ and the covariances Σ of the discriminant classifier describe the distribution of each class by a hyperellipsoid. The class mean vectors indicate the difference between classes, and the covariances depict the shape of the distributions, referring to equations (5) and (6).
The principal axes of these hyperellipsoids are given by the eigenvectors of the covariances, and the eigenvalues determine the lengths of these axes [34]. To describe the direction changes of the principal axes and the mean vectors, the cosine of the angle between the original vector (the training one) and the current vector (that of the ith testing cycle) is given by cos θ = (v_0 · v_i) / (|v_0| |v_i|), where v_0 and v_i denote the original and current vectors, respectively, · denotes the inner product, and |v| denotes the norm of the vector. Based on the FC feature, we study the changes of SEQDA and SELDA for a specific subject (P6). Four kinds of parameters are further considered: the length of the class mean vectors, the length of the first two principal axes of the class covariances, the cosine of the angle of the class mean vectors, and the cosine of the angle of the first two principal axes of the class covariances. All the parameters show some changes, more or less, in different testing cycles along the time, but there is no very significant and useful information. We can only find that the changes in the pooled covariance of SELDA and in the class mean vectors of both SELDA and SEQDA are relatively small. So perhaps they make a minor contribution to the adaptivity of the proposed classifier.
Evaluation on long-term EMG data
In this part, based on the EMG data collected in the second protocol mentioned in the EMG data acquisition section, we tested the performance of the proposed classifier. As QDA (SEQDA) generally performs better than LDA (SELDA), we only present the results for QDA (SEQDA). Only FC is used as the EMG feature here. The results on the RA of QDA and SEQDA for the four subjects are shown in Table 3 (average recognition accuracy of the 10 types of motions on four subjects, S1-S4, using long-term EMG data). It is obvious that the general performance of SEQDA (97.58%) is 3.15% better than that of QDA (94.43%). Without loss of generality, we select the result of one subject (S1) to observe the change of classification accuracy at different time points. In comparison, the results on the average RA of the 30 testing cycles and 6 trials (each trial contains 5 cycles) using QDA and SEQDA are illustrated in Figure 6. We can see the details from the results represented in cycles, and find the general trend from the results represented in trials. For a clear view, the difference in average RA between QDA and SEQDA (RA of SEQDA minus RA of QDA) is shown in Figure 7. The trend that SEQDA outperforms QDA can be observed. At the early stage, the difference is very small, while the difference becomes significant after a certain time.
Discussion
For the feature sets in Table 1, FC shows better performance than AR when using QDA. The possible reason is that the covariances of FC vary between different classes and FC has a nonlinear feature distribution. In the experiment, the FC feature presents better performance than the AR feature. A paired t-test [35] is employed to examine the statistical significance of the improvement obtained by the use of the self-enhancing method. SEQDA significantly outperforms QDA in the statistical test using both AR and FC features (p < 0.01). SELDA is also significantly better than LDA using both AR and FC features (p < 0.01). In addition, from Table 1, we find that FC+SEQDA is the best combination of feature and classifier for nine out of ten participants. AR+LDA is widely considered as a benchmark EMG classification method due to its good performance [8,24].
The proposed FC+SEQDA has an RA rate roughly 4% higher than AR+LDA. In Table 2, for SEQDA, the class covariance updating presents a greater improvement than the class mean vector updating. On the contrary, the pooled covariance updating yields less improvement than the class mean vector updating for SELDA. The class mean vector updating has a different classification strength for SELDA and SEQDA. This might be caused by the different effects of the two (class or pooled) covariance estimates on the classification performance. The combination of mean vector and covariance updating can further increase the RA results, except for the SELDA classifier using the AR feature. The paired t-test shows that the RA results are significantly improved when using SEQDA(C), SEQDA(M), and SELDA(M) for both AR and FC features. The improvement of SELDA(C) is not significant. In a word, the class covariance updating and the class mean updating play major roles in both the SEQDA and SELDA classifiers. In Figure 4, the RA rates of the traditional classifiers (LDA and QDA) decrease obviously. The reason for this RA decrease can be attributed to unobserved changes of the experimental conditions over 2-3 hours, including perspiration, humidity, cognitive intent variations or contraction intensity changes, soft tissue fluid fluctuations (slight spatial change), and so on. Perhaps the experimental participants already had slight fatigue but could not exactly feel it, so they did not report it. It can be found that the RA differences between the two types of classifiers are enlarged with the increase of testing cycles. This may be attributed to the fact that the self-enhancing method can incorporate more information from the testing set into the initial models and can more accurately estimate the parameters of the classifiers as the testing cycles change. It is observed that the performance declines in Figure 4 for all combinations but FC+SEQDA. This means that FC+SEQDA might be more robust than the other combinations. In the experiment, the number of testing cycles might not have been long enough for FC+SEQDA to decline. Figure 5 shows the class-based performance for the ten motion classes, where the classification performance of most motion classes declined with the increase of testing cycles. However, a few classes (e.g., extension and cylinder grasp) show increasing performance. The possible reasons for this phenomenon may include: 1) the adaptation enlarges the data size for training and therefore leads to more accurate estimation of the classifier parameters, particularly the covariance; 2) the training data differ considerably from the first two or three testing cycles, and the adaptation mechanism allows the classifier to learn the information in the testing data and enhance the performance. Regarding the evaluation on long-term EMG data, the results are shown in Figure 6. The RA of QDA degrades obviously, which indicates that the traditional QDA without adaptation cannot guarantee stable performance over a long duration. For SEQDA, the performance does not degrade much in general, although the performance is not good at several points. This is reasonable, because one cannot obtain absolutely perfect information from the testing data, and there must be some unexpected disturbing data. However, even the worst case of SEQDA (RA = 0.885) is still better than any result of QDA.
The experimental evaluation confirms the efficiency of the proposed self-enhancing approaches. The SELDA and SEQDA classifiers outperform the original versions using both AR and FC features. The adaptation of the classifier parameters is meaningful at two levels. First, it incorporates the information of the testing data into the classifier. Second, it effectively enlarges the data used for training the classifier. We think that these two adaptation factors mutually improve the classification performance. The results also show that SEQDA is superior to SELDA and suggest that the individual class covariance updating can give a more accurate estimation of the second-order information than the pooled covariance. The possible reason is that the class covariance updating takes the individual class information into account and is thus a type of semisupervised method (using the classified labels), while the pooled covariance updating is an unsupervised method. The self-enhancing method provides feedback from each testing EMG datum to update the classification algorithm. Using online testing feedback of the current state of the prosthesis will help the users to recognize misclassifications and to adjust themselves to proper conditions. It is expected that the two types of feedback, one to the algorithm and another to the users, will mutually improve the classification performance further. Moreover, similar to their original classifiers, SELDA and SEQDA have no hyperparameters and require no time-consuming trial-and-error procedures, facilitating their application to prosthesis control. Computational efficiency is an important implementation issue of the classification method. In our EMG PR algorithm, the AR model is estimated by the Burg algorithm, and the FC coefficients are computed by fast algorithms such as the FFT and DCT. The experimental hardware platform is a personal computer, consisting of a Core2 Duo 2.0 GHz CPU and 2 GB DDR2 memory. The software platform is Matlab version 7.1 under the Windows XP operating system. To process 200 samples of EMG data, the time cost of the AR feature set is about 4 ms, the time cost of the FC feature set is about 2 ms, and the classifier requires 1∼2 ms. The FC feature extraction has a relatively faster computing speed than AR due to the use of fast algorithms. In addition to the original classification procedure, our self-enhancing method needs an additional step to update the parameters of the classifier, and it costs about 2 ms. More sophisticated digital signal processing hardware will expedite the online processing. Moreover, the self-enhancing method stores the class mean vectors, class covariances, and pooled covariance to save the model information after each update and has no need to store the large amount of EMG data. The most promising highlight of the self-enhancing method is for the long-term EMG PR task, since it provides a basis for prosthetic control in real-world application. Our method continuously adds the immediate information of the EMG pattern to the classifier by updating the model parameters. The measurements involve EMG data over about 8∼10 hours, which may include the possible variation factors. The testing data have a larger size than the training data, with a ratio of 7:3. The results have verified the adaptive ability of the proposed algorithm. It can be seen that the RA results of SEQDA outperform those of QDA in the testing stage, especially in the late stage. The good RA results of the self-enhancing classifiers exhibit their robust characteristics for long-term application.
Actually, there are many factors contributing to the nonstationary changes of long-term EMG signals, such as electrode position, muscle fatigue, or other physiological/psychological conditions [25-27]. The underlying physiological mechanism needs more investigation, and this work does not focus on this issue. Evaluating on even longer-term EMG data, such as over days and months, may shed more light on bringing the self-enhancing approach into practice.
Conclusion
In summary, this paper proposes a self-enhancing method for EMG classification based on the traditional LDA and QDA classifiers, which can incorporate the useful information of the EMG signal in the testing data into the classification model. The improved classifiers, named SELDA and SEQDA, continuously update their parameters, such as the class mean vectors, the class covariances, and the pooled covariances, using the labelled EMG feature data. We have shown that the self-enhancing classifiers significantly improve the recognition performance of the EMG PR system, including in the preliminary application on long-term EMG data.
7,749
2013-05-01T00:00:00.000
[ "Computer Science" ]
Prediction of the Shear Tension Strength of Resistance Spot Welded Thin Steel Sheets from High- to Ultrahigh Strength Range
The tensile strength of newly developed ultra-high strength steel grades is now above 1800 MPa, and even newer steel grades are currently in development. One typical welding process to join thin steel sheets is resistance spot welding (RSW). Some standardized and non-standardized formulas predict the minimal shear tension strength (STS) of RSWed joints, but those formulas become less and less accurate with higher base material strength. Therefore, in our current research, we investigated a significant amount of STS data from the professional literature and from our own experiments and recommended a new formula to predict the STS of RSWed high strength steel joints. The proposed correlation gives a better prediction than the other formulas, not only in the ultra-high strength steel range but also in the lower steel strength domain.
Introduction
High strength steels (HSS) are gaining more and more attention and application in mechanical engineering, especially in the automotive industry [1-3]. Among high strength steels, advanced high strength steels (AHSS) and ultra-high strength steels (UHSS) are the most actively developing research areas due to their excellent mechanical strength (tensile strength R_m > 1500 MPa) and adequate ductility, which are achieved during carefully selected thermo-mechanical heat treatment processes [4]. These mechanical properties make these types of structural steels attractive for application in the automotive industry, e.g., crash boxes, car bodies, etc. [5]. Moreover, the increasing strength leads to a reduction in wall thicknesses. The smaller wall thicknesses have allowed engineers to manufacture lighter vehicles, which is very important in terms of fuel consumption and, thus, environmental considerations [6,7]. The most important joining process for high strength thin sheets is welding. The arc welding of AHSS and UHSS can be challenging due to unwanted phase transformations and the possible coatings [8-11]. For these reasons, the most widely applied joining process for AHSS and UHSS thin sheets is resistance spot welding (RSW) [12,13]. The RSW process can be easily automated [14,15] and robotized; therefore, RSW is also an optimal process for mass production. RSW is also one of the most used welding processes in car body manufacturing. To improve weld quality and welding process efficiency, new types of power sources with advanced electrical controls have recently been developed. They focus on the electronic control of the welding current and, thus, the heat input. Recently, extensive research has been done in the field of the application of different pulsed welding technologies in the case of HSS welding. Kim et al. [15] investigated different pulse profiles to improve the weld quality of CP1180 steel. They found that the volume of the weld nugget can be increased by pulse welding, and the weldable current range can be extended compared to single-pulse welding. Pulse welding can also have beneficial effects in terms of metallurgical weldability. Wintjes et al. [16] found that pulse welding schedules can reduce liquid metal embrittlement sensitivity in the case of zinc-coated TRIP1100 steels. Liu et al. [17] found that double pulse welding with a higher secondary current can lead to an enhancement in shear tension strength in the case of Q&P 980 steel, due to the reduction of the partially melted zone.
Multiple welding current pulses also act as a post weld heat treatment (PWHT). Stadler et al. [18] have found that the second welding current pulse remelted the center of the weld nugget of a 0.1 C, 6.4 Mn, 0.6 Si (wt%) medium Mn-steel, leading to a recrystallization and homogenization of the initial weld microstructure, thus improving the mechanical properties. For the optimization of welding process parameters, design of experiments (DoE) methods have been widely used in RSW. Soomro et al. [19] have used Taguchi DoE to optimize PWHT parameters in order to obtain the maximum peak load and failure energy in RSW of DP590 steel. Tutar et al. [20] have used the Taguchi method to optimize welding parameters for the RSW of TWIP sheets. They have found that the weld current has the highest statistical effect on the tensile-shear load, followed by the welding time and the electrode force. Artificial neural networks are also useful tools for optimization. Rao et al. [21] have used a neural network algorithm to obtain the optimized welding parameters. With the evaluation of shear tension strength, coach-peel strength and weld nugget size, the proper parameters were selected for the RSW of DP590 steel. Besides these highly developing welding technologies and design and evaluation methods, the conventional weld parameter design is still based on the shear tension strength and the failure mode of the RSW joint. The shear tension strength (STS) values found in the literature are presented in Fig. 1 (according to the data of [12, 17, …]) for similar and dissimilar joints. For the designer to plan joint configurations, a formula is needed to predict the joint strength of RSWed high strength steels. Therefore, our research aims to refine such a correlation for predicting the STS value. Our current research is a follow-up of a previously published work, "About the shear tension strength of ultra high strength steels" [22], in which a new correlation was proposed to predict the STS values with better accuracy in the UHSS steel range (R_m > 1340 MPa) for thin sheets (≤ 3 mm thickness). Now, with more experimental and more literature data, an even better correlation is proposed, which is applicable for the whole high strength range of steels. There are several equations to predict the STS values for resistance spot welded steel sheets. One approach is according to the AWS D8.1M standard [167], which gives a guide for the minimum acceptable shear tension strength values (STS_AWS) for automotive applications in Eq. (1): STS_AWS = (-6.36 × 10^-7 · R_m^2 + 6.58 × 10^-4 · R_m + 1.674) · R_m · 4 · t^1.5 / 1000 [kN]. (1) In this formula R_m is the tensile strength of the steel in MPa, and t is the sheet thickness in mm. Due to the nature of this correlation (it has a maximum at R_m = 1340 MPa), the required STS_AWS values start to decrease in the ultra-high strength steel range. This can be explained by the conservative nature of the standard: at some places of the joints even cracks are allowed. Presumably, the welding of such high-strength steel grades is challenging, and joint flaws are inevitable. Several studies showed, however, that UHSSs can be welded defect-free [28, 35, 44, 47, 72, 86-88, 98, 101, 166]. Nevertheless, this equation is not suitable for the design of RSWed steel structures with R_m > 1340 MPa. To achieve the same structural strength, more spot welds are required than in the case of lower strength base material.
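Since the shape of Eq. (1) is what drives this behaviour, a short numerical check is sketched below. It is a minimal illustration, assuming the coefficients of the reconstruction above; the grid resolution and the 1 mm sheet thickness are arbitrary choices made only for the example.

```python
import numpy as np

# Sketch: evaluate the reconstructed AWS D8.1M curve (Eq. (1)) for a 1 mm
# sheet and locate its maximum, which the text places near R_m = 1340 MPa.
# The strength grid and the thickness are illustrative choices.

def sts_aws(rm_mpa, t_mm):
    """Eq. (1), as reconstructed: STS_AWS in kN (R_m in MPa, t in mm)."""
    poly = -6.36e-7 * rm_mpa**2 + 6.58e-4 * rm_mpa + 1.674
    return poly * rm_mpa * 4.0 * t_mm**1.5 / 1000.0

rm = np.linspace(400.0, 2000.0, 1601)   # base material strength grid [MPa]
sts = sts_aws(rm, t_mm=1.0)

i_max = int(np.argmax(sts))
print(f"maximum of STS_AWS at R_m ~ {rm[i_max]:.0f} MPa "
      f"({sts[i_max]:.2f} kN for t = 1 mm)")
```

Running the sketch places the maximum of the curve at roughly R_m = 1343 MPa, consistent with the 1340 MPa turning point quoted in the text.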
Investigating the professional literature and the previous experiments of the authors on the RSW of UHSS steels, it seems that the STS does not decrease in the R_m > 1340 MPa range (Fig. 1). Therefore, in our previous work [22], we modified the formula of the AWS D8.1:2003 standard (Eq. (1)) to increase the minimum required STS value above the R_m > 1340 MPa range (Eq. (2); the full expression is given in [22]). In this formula, too, R_m is the tensile strength of the steel in MPa and t is the sheet thickness in mm. This correlation gives a better prediction of the STS values in the UHSS range R_m > 1340 MPa. Nevertheless, both equations have a shortcoming, namely that the required STS to actual STS ratio decreases with increasing R_m of the base material. This ratio can be interpreted as a kind of safety factor, but its change over the R_m range is not beneficial for the joint design. The authors have also investigated other standards and correlations. The ISO 14373:2015(en) standard [168] gives a minimal requirement for low carbon (C < 0.15%, Mn < 0.6%) steels (uncoated and zinc coated, up to 3 mm thickness). Most of the UHSS steels also have a low carbon content; therefore, we investigated this correlation too (Eq. (3)): STS_ISO = 2.6 · d_w · t · R_m / 1000 [kN]. (3) In this formula d_w is the weld nugget diameter in mm, t is the sheet thickness in mm, and R_m is the tensile strength of the steel in MPa. For this correlation, a required weld nugget diameter is needed. Similarly, Radakovic and Tumuluru [59] defined formulas for interstitial free (IF), transformation induced plasticity (TRIP), and dual phase (DP) steels. One correlation predicts the STS for pullout (PO) fracture and one for interfacial (IF) fracture. Generally, the preferred fracture mode of RSWed joints is PO; therefore, the correlation for PO fracture (Eq. (4)) has been considered: STS_R&T = k_PO · d_w · t · R_m / 1000 [kN]. (4) In this formula k_PO is a constant with a value of ~2.2, R_m is the tensile strength of the steel in MPa, d_w is the weld nugget diameter in mm, and t is the sheet thickness in mm. This equation is similar to the STS_ISO function; only k_PO is lower than the constant (2.6) in Eq. (3). In both equations (Eqs. (3) and (4)), the weld nugget size is an important parameter. The minimal weld nugget diameter is commonly considered to be at least 3.5 · t (under this value there is a risk of lack-of-fusion defects), and the maximum nugget size 5 · t or 6 · t (above that size there is a great risk of splash) [168]. Therefore, these correlations have been investigated in the d_w = 3.5…5 · t range. Comparison of the different STS prediction models The graphical representation of the previous models (Eqs. (1)-(3)) in the thin sheet range is shown in Fig. 2. The decreasing trend of STS_AWS in the UHSS range (R_m > 1340 MPa) is apparent. Also, there is a significant difference in the STS values of the different models with increasing R_m and sheet thickness. For example, in Fig. 3 the minimal STS values are plotted for the commonly available 1 mm sheet thickness in correlation with the tensile strength. Therefore, RSW experiments were performed and evaluated together with the literature data to better correlate the STS values. RSW experiments To complement the STS data from the literature, weld optimizations were made in the HSS and UHSS range in similar and dissimilar combinations. With the exception of the TRIP steel, which was produced as a test production by ISD Dunaferr Ltd, the other grades were produced by the company SSAB.
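As a quick numerical illustration of the two nugget-size-dependent predictions discussed above (Eqs. (3) and (4), as reconstructed here), the following sketch evaluates them across the practical d_w = 3.5·t…5·t window. The sheet thickness and tensile strength inputs are illustrative placeholders, not values taken from the paper's dataset.

```python
# Minimal sketch: evaluate the reconstructed ISO 14373 (Eq. (3)) and
# Radakovic-Tumuluru pullout (Eq. (4)) predictions over the practical
# weld nugget range d_w = 3.5*t ... 5*t. Input values are illustrative.

def sts_iso(d_w_mm, t_mm, rm_mpa):
    """Eq. (3), reconstructed: STS_ISO = 2.6 * d_w * t * R_m / 1000 [kN]."""
    return 2.6 * d_w_mm * t_mm * rm_mpa / 1000.0

def sts_rt(d_w_mm, t_mm, rm_mpa, k_po=2.2):
    """Eq. (4), reconstructed: k_PO * d_w * t * R_m / 1000 [kN], k_PO ~ 2.2."""
    return k_po * d_w_mm * t_mm * rm_mpa / 1000.0

t = 1.0      # sheet thickness [mm], illustrative
rm = 1500.0  # base material tensile strength [MPa], illustrative

for factor in (3.5, 4.0, 4.5, 5.0):   # d_w expressed as multiples of t
    d_w = factor * t
    print(f"d_w = {d_w:.1f} mm: "
          f"STS_ISO = {sts_iso(d_w, t, rm):.2f} kN, "
          f"STS_R&T = {sts_rt(d_w, t, rm):.2f} kN")
```

For a 1 mm, 1500 MPa sheet this spans roughly 13.7-19.5 kN for the ISO function, which shows how strongly the assumed nugget diameter drives both predictions.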
The main properties of the base materials are listed in Table 1. The RSW joints were optimized to achieve the highest STS value. The experiments were arranged with a central composite design (with Box-Wilson optimization) method [169]. The tensile-shear tests were performed with an MTS 810 universal material testing machine according to the AWS D8.1M standard [167]. Results and discussion 4.1 Experimental STS data The objective of the RSW experiments was to achieve the highest STS value with an acceptable weld quality. The STS values of the optimized joints and their welding parameters are listed in Table 2. It is evident from the table that the higher strength steels need to be welded with a shorter work schedule and higher current values. Moreover, the achievable STS values increase with the base material thickness. All joints were defect-free and showed the favorable pullout type fracture during tensile-shear testing. It must be emphasized that the optimization for the highest STS was made within the boundaries of the RSW machine used in the experimental tests. Higher STS values could be achieved by: (a) using flat tip electrodes (other machine arm assembly required), (b) higher electrode force, (c) an MFDC machine, (d) a complex work schedule. This means the measured STS values are a little lower than the maximal achievable values for a given steel sheet, which is not a big problem, because a slight underestimation of the highest achievable STS means staying on the safe side in the joint design for shear loading. Evaluation of the STS data according to the literature models The experimental STS data and those obtained from the literature are investigated here. Altogether, those STS values are examined based on the different STS prediction models. In Figs. 4 and 5, the actual measured STS values are divided by the corresponding values obtained from the various models. Note: for dissimilar welds the calculations were done for the weaker side (according to the investigated formula) of the joint. Dividing the actual values by the ISO (Eq. (3)) and the Radakovic and Tumuluru (Eq. (4)) equations, a clear decreasing trend can be observed (Fig. 4(a) and (b), respectively) for similar and dissimilar welds in the whole tensile strength region. The ratio of the measured and predicted STS values can be handled as a kind of safety factor (if greater than 1); therefore, it would be better if the ratio of the measured and predicted values did not change over the base material R_m range. In Fig. 4(a), this ratio for d_w = 3.5 · t decreases from ~2.5 to 1 at R_m = 1600 MPa; at higher R_m this model overestimates the actual STS of the welds. For a larger weld nugget this ratio decreases from ~2 to 1 by R_m = 1200 MPa. In Fig. 4(b), the measured values are divided by the Radakovic and Tumuluru equation, which has the same characteristic as the ISO equation, showing very similar plots; only the transition of this quotient from > 1 to < 1 occurs at a different base material R_m. This ratio for d_w = 3.5 · t decreases from ~3 to 1 at a base material R_m = 1600 MPa, and for d_w = 5 · t from ~2 to 1 at R_m = 1400 MPa.
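The ratio evaluation described above lends itself to a mechanical check; the sketch below computes the STS_measured / STS_predicted "safety factor" against the reconstructed ISO function for a few data points. The sample tuples are hypothetical placeholders, not the measured data of Table 2 or the literature set.

```python
# Sketch of the Fig. 4-style evaluation: divide measured STS values by the
# model prediction and inspect how the ratio drifts with R_m. The data
# tuples below are hypothetical placeholders, not the paper's dataset.

def sts_iso(d_w, t, rm):
    """Eq. (3), reconstructed: 2.6 * d_w * t * R_m / 1000 [kN]."""
    return 2.6 * d_w * t * rm / 1000.0

# (R_m [MPa], t [mm], d_w [mm], measured STS [kN]) -- illustrative only
samples = [
    (600.0, 1.0, 5.0, 14.0),
    (1000.0, 1.0, 5.0, 18.0),
    (1500.0, 1.0, 5.0, 21.0),
]

for rm, t, d_w, sts_meas in samples:
    ratio = sts_meas / sts_iso(d_w, t, rm)
    print(f"R_m = {rm:6.0f} MPa -> STS_meas/STS_ISO = {ratio:.2f}")
```

Even with these invented inputs, the printed ratios fall from about 1.8 toward 1 as R_m grows, which is the qualitative drift in the safety factor the text reports for the ISO model.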
In Fig. 5, the values are divided by the AWS function (Eq. (1)) and, at base material R_m > 1340 MPa, by the previous correlation proposed by the authors (Eq. (2)). This plot can be divided into two characteristic parts. Until R_m ~ 1200 MPa, the STS_measured / STS_AWS values continuously decrease from ~4 to ~2; this ratio then increases at higher base material R_m (indicated in Fig. 5 by a white arrow at R_m > 1340 MPa). This means the measured values are a lot higher than the predicted ones. This is not very beneficial for the designers, because designing according to the STS_AWS function means that they have to plan with more weld nuggets than necessary. For that reason, Eq. (2) was proposed previously [22]. In Fig. 5 it is evident that the increasing part of the ratio ceased with the application of Eq. (2) and stabilized around the ratio of 1-3. Determination of a new STS prediction model To obtain a more constant STS_measured / STS_predicted ratio, a new function has been determined based on the experimental data and the STS data of about 150 papers [12, 17, …]. All the STS data are represented in Fig. 6 as a 3D plot. Several types of linear and nonlinear surfaces have been fitted to the STS values. There is no significant difference between them for the currently available information; therefore, also for easier handling, a 3D plane function has been determined (STS_New2, Eq. (8)), in which R_m is the tensile strength of the steel in MPa and t is the sheet thickness in mm. For comparison, the different STS models are plotted for 1 mm sheet thickness in Fig. 7. In this case, the STS_New2 function predicts a higher STS value than the other equations, while above R_m ~ 1345 MPa it predicts a lower strength than the STS_ISO function for d_w = 5 · t. For larger sheet thicknesses, this transition shifts to smaller base material strengths, e.g., for 2 mm sheet thickness the transition occurs at R_m ~ 750 MPa, while at R_m ~ 1600 MPa the predicted STS_New2 value is smaller than the STS_R&T value for d_w = 5 · t. So it seems that Eq. (8) approximates the measured STS values better for the whole 400 MPa < R_m < ~2000 MPa high strength base material range for thin sheets.
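The plane-fitting step behind Eq. (8) can be sketched as an ordinary linear least-squares problem; the code below shows one minimal way to do it. The placeholder data points are invented for illustration, so the printed coefficients are not those of the authors' Eq. (8), which was fitted on the pooled literature and experimental dataset.

```python
import numpy as np

# Sketch of the surface-fitting step behind Eq. (8): fit a plane
# STS = a*R_m + b*t + c by linear least squares. The data points are
# hypothetical placeholders, not the pooled ~150-paper dataset.

# columns: R_m [MPa], t [mm], measured STS [kN] -- illustrative only
data = np.array([
    [600.0, 1.0, 8.0],
    [1000.0, 1.0, 13.0],
    [1500.0, 1.0, 19.0],
    [800.0, 2.0, 24.0],
    [1200.0, 2.0, 33.0],
    [1600.0, 1.5, 28.0],
])

rm, t, sts = data[:, 0], data[:, 1], data[:, 2]
A = np.column_stack([rm, t, np.ones_like(rm)])   # design matrix [R_m, t, 1]
(a, b, c), residuals, *_ = np.linalg.lstsq(A, sts, rcond=None)
print(f"STS ~ {a:.4f}*R_m + {b:.2f}*t + {c:.2f}  [kN]")
```

A plane is the simplest surface one can fit here, which matches the paper's rationale: with the available scatter, more elaborate nonlinear surfaces brought no significant gain, while the plane stays easy to handle in design calculations.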
Evaluation of the STS data according to the different models for selected HSS and UHSS types The different STS prediction models were also investigated specifically for HSS and UHSS steel grades. The most literature data were available for the DP, TRIP and martensitic steel grades in these strength regions. The measured STS values of these three grades (in similar joints), divided by the different STS prediction functions, are shown in Fig. 8. DP steel grades are in the 400 MPa < R_m < 1300 MPa range (Fig. 8(a)). The STS_measured / STS_function values for the AWS model continuously decrease from the approx. 2.5-3.5 range to the 1-1.5 range with higher R_m. In the case of the ISO function (for d_w = 5 · t) this decreasing trend can still be observed, but to a smaller extent, from the approx. 2-1.5 range to the 0.5-1.5 range. The values computed with the New2 function (Eq. (8)) scatter homogeneously in the 0.5-1.5 range over the whole R_m range. TRIP steel grades are in the 500 MPa < R_m < 1300 MPa range (Fig. 8(b)). The STS_measured / STS_function values for the AWS model are approx. 1.5-2.5 in the whole tensile strength range. In the case of the ISO function (for d_w = 5 · t) this range is significantly narrower, approx. 0.5-1.5. With the New2 function (Eq. (8)), this range is slightly smaller, approx. 0.4-1.4, and even smaller for R_m < 700 MPa. Martensitic steel grades are in the 700 MPa < R_m < 1900 MPa range (Fig. 8(c)). The STS_measured / STS_function values for the AWS model continuously decrease from the approx. 2.1-2.8 range to the 1.8-2.5 range at higher R_m. Above R_m = 1340 MPa, this ratio starts to increase again, up to the ~2.6-3 range. These values for the New1 function (Eq. (2)) do not increase, but remain in the approx. 1.3-2.5 range. In the case of the ISO function (for d_w = 5 · t), this range is significantly narrower, but the STS_measured / STS_function values continuously decrease from the approx. 1-1.5 range to the 0.5-0.9 range at higher R_m. This means that this formula first underestimates the real STS values, then from approx. R_m > 1200 MPa it overestimates them. In the case of the New2 function (Eq. (8)), the STS_measured / STS_function range is smaller, approx. 0.4-0.6, and the values are more homogeneously distributed over the whole R_m range. In Fig. 9, the measured STS values from the literature and from the experimental work are divided by the developed STS_New2 function (Eq. (8)) for similar and dissimilar joints. As can be seen, the values scatter around 1; a fitted line with fixed intercept at 1 had a minimal slope of 3 × 10^-5 MPa^-1, with an R² of 0.9. Therefore it can be concluded that the new function approximates the measured STS values better over the whole R_m range, for similar and dissimilar joints, than the existing literature and standard equations. Moreover, the equation has the advantage of depending only on the base material tensile strength and the sheet thickness, and not on the weld nugget size. Conclusions In this current research, a large amount of literature and experimental data has been investigated to better predict the shear tension strength (STS) of resistance spot welded high and ultra-high strength thin steel sheets. From the available data, the following conclusions can be drawn: • The standardized AWS D8.1M function underestimates the measured STS values, and the ratio of STS_measured / STS_AWS decreases with the material tensile strength; due to the nature of this function, it starts to increase again above 1340 MPa. • The standardized ISO 14373:2015(en) [168] and the Radakovic and Tumuluru functions underestimate the measured STS values at lower base material tensile strengths, and the ratio of STS_measured / STS_function decreases with the material tensile strength. At a certain tensile strength (depending on the function and the required weld nugget size), this ratio falls below 1, meaning that the functions start to overestimate the STS values. • A new formula has been proposed (Eq. (8)), which gives a homogeneous STS_measured / STS_function ratio (0.5-1.5 range) over the 400-1900 MPa base material tensile strength range, for similar and dissimilar joints. It also gives a narrower range of STS_measured / STS_function values at any selected base material strength, and it works better for DP, TRIP and martensitic steels. This formula depends on the base material tensile strength and the sheet thickness, and not on the weld nugget size. • The proposed new function can be a lucrative tool for designers in the planning stage of resistance spot welded components made of thin sheets (approx. < 3 mm) under tensile-shear load.
4,942
2021-12-15T00:00:00.000
[ "Materials Science" ]
COVID-19 Drug Discovery Using Intensive Approaches. Since the infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was reported in China during December 2019, the coronavirus disease 2019 (COVID-19) has spread on a global scale, causing the World Health Organization (WHO) to issue a warning. While novel vaccines and drugs that target SARS-CoV-2 are under development, this review provides information on therapeutics which are under clinical trials or are proposed to antagonize SARS-CoV-2. Based on the information gained from the responses to other RNA coronaviruses, including the strains that cause severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS), drug repurposing might be a viable strategy. Since several antiviral therapies can inhibit viral replication cycles or relieve symptoms, mechanisms unique to RNA viruses will be important for the clinical development of antivirals against SARS-CoV-2. Given that several currently marketed drugs may be efficient therapeutic agents for severe COVID-19 cases, they may be beneficial for future viral pandemics and other infections caused by RNA viruses when standard treatments are unavailable. Introduction Since an unusual type of pneumonia, which was distinct from common pneumonia in symptoms and lethality, was reported in Wuhan, China, in December 2019, nations across the globe have paid attention to this new infectious disease. On 12 January 2020, the World Health Organization (WHO; https://www.who.int) temporarily designated the virus causing this disease as the 2019 novel coronavirus (2019-nCoV). On 11 February 2020, the WHO officially renamed this infectious disease coronavirus disease 2019 (COVID-19). The coronavirus study group within the International Committee on Taxonomy of Viruses also renamed 2019-nCoV as severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). At present, the COVID-19 pandemic is spreading all over the world, with cases reported in China [1] and 168 other countries, areas, and territories. As of 20 March 2020, COVID-19 had caused 8778 deaths, as noted by the WHO (https://www.who.int). To fight against this pandemic, scientists and healthcare workers have started to share their knowledge. Given the rapid spread of COVID-19 and the smaller timeframe available for developing new therapies, drug repurposing may be an ideal strategy that allows healthcare workers to treat COVID-19 using previously approved or investigational drugs [2]. Here, we gathered information that may be pertinent to drug discovery for COVID-19 via a systematic review of the PubMed database (https://www.ncbi.nlm.nih.gov/pubmed) from 2000 to 2020. We searched for papers with "corona", "COVID", "MERS" and "SARS" as keywords. Publications from 2000 to 2020 describing the biological characteristics of these viruses, their interactions with humans (Homo sapiens), therapeutic targets, and therapeutic medications are included in this review. Since some information is protected by patents, this article surveyed published and shared information to establish a therapeutic strategy against COVID-19. Currently Undergoing Clinical Studies for COVID-19 Several drugs, such as chloroquine, favipiravir, remdesivir and umifenovir, are currently undergoing clinical trials to test their efficacy and safety in the treatment of COVID-19. Most of these studies are currently taking place in China [3,4].
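As an aside, the keyword search strategy described in the Introduction can be reproduced against NCBI's public E-utilities interface; the sketch below shows one minimal way to do so. The query term and date window mirror the review's stated strategy, while the retmax value and the result handling are illustrative choices rather than part of the original methodology.

```python
import json
import urllib.parse
import urllib.request

# Sketch of the PubMed keyword search described above, using the public
# NCBI E-utilities "esearch" endpoint. retmax and the printing of results
# are illustrative choices, not part of the original review methodology.

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    "term": "corona OR COVID OR MERS OR SARS",
    "datetype": "pdat",          # filter by publication date
    "mindate": "2000",
    "maxdate": "2020",
    "retmode": "json",
    "retmax": "20",              # number of PMIDs to return (illustrative)
}

with urllib.request.urlopen(f"{BASE}?{urllib.parse.urlencode(params)}") as resp:
    result = json.load(resp)["esearchresult"]

print("total hits:", result["count"])
print("first PMIDs:", result["idlist"])
```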
Favipiravir (Avigan, T-705) Favipiravir has been developed as an anti-influenza drug and is licensed as such in Japan [5]. One of the unique features of favipiravir is its broad-spectrum activity against RNA viruses, including influenza virus, rhinovirus and respiratory syncytial virus. Previous studies demonstrated that favipiravir is effective at treating infections with Ebola virus, Lassa virus and rabies, and against severe fever with thrombocytopenia syndrome [5]. However, favipiravir is not effective against DNA viruses. With regard to its mechanism, it is reported that favipiravir antagonizes viral RNA synthesis by acting as a chain terminator at the site of its incorporation into the viral RNA [5]. By contrast, oseltamivir (Tamiflu), a neuraminidase inhibitor, blocks the cleavage of sialic acid and the subsequent entry of the virus into the cell [5]. Importantly, favipiravir, unlike oseltamivir, does not seem to generate resistant viruses [5]. This property of favipiravir suggests a potential benefit in the treatment of critical infectious diseases such as COVID-19 (Figure 1). Figure 1. Proposed acting points of anti-SARS-CoV-2 drugs in the replication cycle of the virus. When SARS-CoV-2 particles bind to their receptors, such as angiotensin-converting enzyme 2 (ACE2), aminopeptidase N (APN; CD13) and dipeptidyl peptidase 4 (DPP4; CD26), viral RNA is passed to the host cell, and RNA-dependent RNA polymerase (RdRp) produces viral RNAs. During RNA methylation, the RNA cap is formed, which protects against the host innate immune response, which involves the secretion of interferons (IFNs) and cytokines (CKs). The viral (guanine-N7)-methyltransferase (N7-MTase) plays a critical role in RNA capping, using the methyl donor S-adenosyl-methionine (SAM). The process of viral RNA synthesis and the translation of proteins is associated with pH-dependent membrane stress, which can elicit adverse effects against immune and non-immune cells. If the viral replication cycle is not inhibited and infected cells are not eradicated, packed viruses will be disseminated to other cells in the host. Proposed drugs and their possible acting points against COVID-19 are shown by bold lines. Remdesivir (GS-5734) Remdesivir is a nucleotide analog that is used for the treatment of infections caused by the Ebola virus and the Marburg virus [6]. However, it has also shown activity against respiratory syncytial virus, Junin virus, Lassa fever virus, Nipah virus, Hendra virus, and the MERS and SARS coronaviruses [7][8][9]. Remdesivir inhibits RNA-dependent RNA polymerases, most likely through delayed RNA chain termination in the cell [10]. It is therefore one of the most promising compounds for treating COVID-19 [4]. Umifenovir and Lopinavir/Ritonavir Umifenovir (trade name Arbidol) is a potent Russian-made broad-spectrum antiviral agent that is used to treat influenza A and B viruses and hepatitis C virus (HCV) [11]. Although the mechanism slightly differs depending on the virus, it is reported that umifenovir inhibits viral fusion with the host cell membrane and subsequent entry into the host cell [11]. Recently, a trial involving the use of lopinavir/ritonavir (LPV/r), which are protease inhibitors used to treat HIV, in adults hospitalized with severe COVID-19, showed no observable benefit of LPV/r treatment beyond the standard of care [12].
Another retrospective cohort study tested umifenovir combined with LPV/r, versus LPV/r alone, against COVID-19 [13]. The results showed a favorable clinical response with umifenovir and LPV/r compared to LPV/r alone [13]; nevertheless, further studies will be necessary to determine efficacy and the occurrence of resistance. Since SARS-CoV-2 needs to undergo activation on the cell surface, umifenovir combined with LPV/r will help prevent the entry of the virus. The identification of more specific mechanisms may be beneficial for future clinical applications. Chloroquine Phosphate It was reported that chloroquine phosphate, a well-established drug used to treat malaria, was shown to have apparent efficacy, and was acceptably safe, when used against COVID-19 in multicenter clinical trials conducted in China [14]. In China, it was recommended that chloroquine phosphate be included in the next version of the Guidelines for the Prevention, Diagnosis, and Treatment of Pneumonia Caused by COVID-19 issued by the National Health Commission of the People's Republic of China [14]. Chloroquine, which has been used since 1934, has several anti-inflammatory and antiviral effects that have been reported by previous studies [15].
For instance, chloroquine exerts direct antiviral effects by inhibiting pH-dependent steps of the replication of several viruses, including flaviviruses, coronaviruses, and retroviruses such as HIV [15]. Moreover, it is reported that chloroquine has immunomodulatory effects that involve decreasing the production and release of tumor necrosis factor-α (TNFα) and interleukin (IL)-6 [15]. During a viral infection, the immune response is activated and the production and release of the pro-inflammatory cytokines TNFα, IL-1, IL-6 and interferon-gamma (IFNγ) is increased. Chloroquine, however, blocks these events [15]. Accordingly, chloroquine also prevents further deleterious mechanisms that may lead to acute respiratory syndrome, such as the alteration of tight junctions, the further release of pro-inflammatory cytokines, and increases in microvascular permeability [15]. Previous studies showed that these inhibitory effects involve the inhibition of autophagy [16]. Autophagy is a response mechanism to cellular membrane stress, induced by nutrient deprivation, hypoxia, and exposure to radiation and chemotherapeutic agents [17]. In animal experiments, chloroquine is highly effective in treating avian influenza A H5N1 virus infection by inhibiting autophagy [16]. Since chloroquine and its analog hydroxychloroquine are clinically relevant inhibitors of autophagy [17], the application of chloroquine may be reasonable and facilitated. A recent study using cancer stem cells demonstrated that mefloquine hydrochloride, an antimalarial drug used to treat patients with resistance against chloroquine, efficiently eliminated colorectal cancer stem cells by disrupting the endolysosomal proteins RAB5/7 [18]. Given that this lysosomal-dependent mechanism is a common platform for viral infection [19], other inhibitors of autophagy may be worth examining for the treatment of emerging infectious diseases, such as COVID-19. In the context of drug repurposing for COVID-19, it is also suggested that resistance against inhibitors of autophagy may be worth further examination. Angiotensin-Converting Enzyme 2 (ACE2) A recent study demonstrated that the receptor-binding domain (RBD) of the viral spike (S) protein of SARS-CoV-2 shows a strong interaction with human ACE2 molecules, despite its sequence diversity [21]. The authors also suggested that SARS-CoV-2 poses a significant public health risk for human transmission via the S-protein-ACE2 binding pathway [21]. Interestingly, the study showed that ACE2 was preferentially expressed by a small population of type II alveolar cells, and that males have higher ACE2 expression than females [1,21]. The study also suggests that the binding of SARS-CoV-2 to ACE2 will increase the expression of ACE2 [1]. In many human and rodent studies, ACE2 expression is induced by treatment with ACE inhibitors (ACEIs) or angiotensin II receptor blockers (ARBs), which are commonly used as antihypertensive drugs [23]. The expression of the sodium-dependent neutral amino acid transporter B(0)AT1 depends on the presence of ACE2 in the respiratory tract [24]. Given that COVID-19 includes symptoms such as fever (98%), cough (76%), dyspnea (55%) and fatigue/muscle pain (44%) [1], its symptoms may be relevant to the respiratory expression of ACE2. A recent retrospective study indicated that COVID-19 patients with cardiovascular disease (CVD) have a higher risk of mortality [25]. Lower lymphocyte counts and higher body mass indices (BMI) are more often seen in patients with serious conditions [25].
A recent study showed that the use of ACEIs or ARBs for treating CVD does not affect the morbidity and mortality of COVID-19 [25]. In addition, it has been reported that the small intestine is the organ expressing ACE2 most highly [23]. Given that SARS-CoV-2 can be detected in the excrement of COVID-19 patients [26,27], these observed cases might involve infection of cells in the small intestine that express the SARS-CoV-2 binding receptor. The crystal structures of the S-protein binding to ACE2 have been revealed as an important interaction between the host and SARS-CoV-2 [28,29]. In addition, it is known that ACE2 binds to angiotensin II receptor type 1 (ATR1) and the sodium-dependent neutral amino acid transporter B(0)AT1, also known as SLC6A19, and that these bindings affect the binding between ACE2 and the S-protein [30,31]. Moreover, phosphatidylinositol 3-phosphate 5-kinase (PIKfyve), two-pore channel subtype 2 (TPC2) and cathepsin L are important for entry into cells [32]. Among them, it was reported that SARS-CoV S murine polyclonal antibodies, targeting conserved S epitopes, inhibited SARS-CoV-2 entry [33]. Many therapeutic targets in the entry pathway via ACE2 have been reported; ACE2 would therefore be a promising target for therapy against SARS-CoV-2. Dipeptidyl Peptidase 4 (DPP4; CD26) It was reported that dipeptidyl peptidase 4 (DPP4), like ACE2, is a functional receptor for emerging human coronaviruses via the S-protein [34]. The interaction between the virus and the host cell membrane allows for viral S-protein-directed cell-cell fusion, and the resultant spread of viral infections [35]. As another example relevant to drug repurposing as an ideal strategy for confronting COVID-19, the specific role of DPP4 in COVID-19 remains to be investigated. Further research is necessary to utilize DPP4 as a therapeutic target for COVID-19. Aminopeptidase N (APN; CD13) It was previously reported that aminopeptidase N (APN) is involved in broad receptor engagement, which promotes the cross-species transmission of coronaviruses [36]. Interestingly, previous studies identified APN as a surface marker for cancer stem cells in the human liver [37]. Repurposing previous studies also allowed for the development of a poly(ethylene glycol)-poly(lysine) block copolymer conjugate of Ubenimex that targets APN specifically [38]. As drugs that can be repurposed, low doses of APN inhibitors, including Ubenimex or its derivatives, may be beneficial for inhibiting the spread of the virus. Control of Virus-Specific RNA Modification in COVID-19 Although the modification of RNA by methylation is critical in biology, methylation is also important for the process of RNA capping in coronaviruses [39]. As in the coronaviruses that cause SARS and MERS, the mechanism of RNA capping may also be a druggable target in SARS-CoV-2. RNA capping plays a role in the transcription of viral RNA, as well as in its stability, replication, and evasion of the host's immune response. Many RNA viruses, including the coronaviruses, have evolved mechanisms for generating their cap structures with methylation at the N7 position of the capped guanine and the ribose 2'-O position of the first nucleotide. This mechanism plays a critical role in pre-mRNA splicing, mRNA export [40], RNA stability (via the blocking of degradation by 5'-3' exoribonuclease) [41], translation initiation (by promoting host eukaryotic translation initiation factor 4E (eIF4E) binding) [42], and escaping the host's innate immune system [43].
In general, 5'-end-capped mRNAs are produced through several steps [39]. Although there is no evidence to demonstrate the existence of an RNA guanylyltransferase (GTase) that is unique to coronaviruses, the coronaviral (guanine-N7)-methyltransferase (N7-MTase) plays a role in processing RNA to produce the cap-0 structure (m7GpppN) [42], which is then further processed by 2'-O-MTase to form the cap-1 (m7GpppNm) and cap-2 (m7GpppNmpNm) structures [44]. Both N7-MTase and 2'-O-MTase catalyze the transfer of a methyl group from S-adenosyl-methionine (SAM) to the RNA substrate through the DxGxPxG/A SAM-binding motif. During the methylation process, S-adenosyl-homocysteine (SAH) is generated as a byproduct. Conclusions Although specific treatments, including vaccines, have not yet been developed for COVID-19, effective prevention methods are now recommended on a global scale. Accordingly, to overcome this pandemic, developing specific inhibitors of viral entry and replication, as well as drug repositioning, will be necessary. As noted above, several clinical trials and drug repositioning studies are currently ongoing. Eventually, new studies will allow us to better control this pandemic and identify new treatments. Computational calculation and artificial intelligence could help the rapid development of a therapeutic method. On the other hand, accurate crystal structure determination and abundant drug-response data are necessary for its success. The efficient sharing of information will be important for overcoming this pandemic in the era of globalization. Funding: This work was supported in part by a Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology (15H05791; 17H04282; 17K19698; 18K16356; 18K16355); AMED, Japan (16cm0106414h0001; 17cm0106414h0002). Partial support was received from the Princess Takamatsu Cancer Research Fund.
3,901.8
2020-04-01T00:00:00.000
[ "Medicine", "Chemistry" ]
The Philosophical Significance of Secondary Uses of Language in Wittgenstein's Later Philosophy This paper aims to provide an account of Wittgenstein's employment of the distinction between primary and secondary use of words. Against views that circumscribe its relevance to aesthetics and ethics, the paper demonstrates that there are many instances of secondary uses in Wittgenstein's work that are not reducible to those limited applications. Additionally, as secondary uses are often interpreted as having an expressive function, the paper argues that we cannot reduce secondariness to a single unifying principle, because the distinction is philosophical, as it works as a powerful device to tackle different, often unrelated, philosophical issues. It is unclear where the philosophical significance of such a distinction lies, as well as its general function within Wittgenstein's conception of language. In the literature, at least three different interpretative approaches to secondariness can be pointed out. The first sees secondary uses as mostly irrelevant, almost uninteresting by-products of our life with language. As Oswald Hanfling points out, Wittgenstein's examples are "idiosyncratic" and "abnormal" (1990: 122); they belong to the "margins of language" (2002: 152). If secondary uses are marginal and exceptional -not "essential" to the concept of meaning and language, as Hans-Johann Glock argues (1996: 40) -we should not be much bothered to understand their nature. On the other hand, many scholars have employed secondariness to understand our aesthetic discourse (Tilghman 1984, Hanfling 1990, 2002, Budd 2006). Ben Tilghman, for instance, complains that the examples 'for me the vowel e is yellow' and 'Thursday is lean' are idiosyncratic and infelicitous, as they risk discrediting the distinction between secondary and primary uses and its broader -and fruitful -applications to art and aesthetics (1984: 160). In a nutshell, these authors argue that many aesthetic descriptions in art -for instance when we say that a painting is 'dynamic,' a musical theme 'triumphant' or 'plaintive,' a dance step 'woody,' and so forth -are based on secondary uses of ordinary words. The appeal to secondary use to explain the logic of our attribution of aesthetic qualities has the undeniable advantage of dismissing or bypassing any theory that explains said attribution through the problematic notion of resemblance (see Hanfling 1990: 117-119 and Budd 2006: 135-141). Along the same lines, Cora Diamond employs secondary uses to account for our ethical discourse. In particular, the distinction between absolute and relative good that Wittgenstein makes in the Lecture on Ethics is interpreted as an application of the later distinction between primary and secondary uses of the word 'good' (Diamond 1991). Unlike the first, this second approach conceives secondary uses as a crucial feature of our life with language, which can be fruitfully employed to clarify certain issues in aesthetics or ethics. Their philosophical significance, even though circumscribed only to a limited area of language, be it art or ethics, is fully recognized. Finally, Michel ter Hark's work on Wittgenstein's philosophy of psychology outlines a third theoretical option for understanding the logic of secondary use.
According to ter Hark, secondary uses of words such as 'meaning' and 'experience,' as well as 'Thursday,' 'yellow,' and 'lean,' have an expressive function (2011: 515): we employ words secondarily to express ourselves, to convey something about us and our experience. Accordingly, secondariness should be bound to our psychological discourse about ourselves and our inner experiences. In this paper, I aim to offer an overview of Wittgenstein's employment of the distinction between primary and secondary uses of words, and to develop a fourth interpretative option, which refuses to deny any significance to secondariness (as in the case of the first approach), without circumscribing it only to a limited area of language (as in the case of the second and the third). I will show that ter Hark's interpretation, even though substantially correct, is theoretically and philologically partial, insofar as it risks excluding those employments of words that Wittgenstein himself often mentions in the Nachlass, which are hardly expressive and yet can be said to be secondary. This is the case of telling absurdist tales, attributing emotions to inanimate objects, giving instructions about how to play music, and describing aesthetic qualities. It follows that secondariness is far from being at "the margins of language." Second, in the light of this discussion, I will argue that the distinction is not reducible to an overarching principle, or function, able to capture every instance of secondariness, as its function is primarily philosophical; that is, it is meant to be a logical tool for distinguishing different uses of those words that can lead to confusions if conflated. Accordingly, the distinction is strictly problem-relative, as it is usually mentioned or introduced to tackle those puzzles that are caused by such confusions. The paper is structured as follows. First, I will give a quick description of those criteria Wittgenstein points out to define secondariness in the second part of the Investigations. I will focus on the difference between secondary uses and metaphors, meant to highlight an important feature of secondariness: its unavoidable and yet spontaneous character. Second, I will illustrate ter Hark's interpretation and show that, even though it covers many cases of secondariness in the Nachlass, it is still unsatisfactory to confine secondariness to expressive language. Finally, I will proceed to show how this proliferation of examples is due to the philosophical nature of the distinction. 1. Unavoidable and yet Spontaneous: Secondariness and Metaphors In the second part of the Investigations, Wittgenstein elaborates on the distinction between primary and secondary use by laying out three main criteria to identify secondariness. The first involves explanation: we cannot point to words used secondarily as an explanation of their use. If we must teach the meaning of 'yellow,' we certainly point to a sample of the colour; drawings might be involved. Surely, the vowel e is not mentioned as an example of a yellow object (PPF: § 275). The fact that words used secondarily do not change the way we explain them is also the reason we are not inclined to talk about different meanings here: the words 'fat,' 'lean,' and 'yellow' are all explained in their usual way. The second involves use (PPF: § 276): one cannot use a word secondarily without knowing its primary meaning. We are here dealing with, to say it with ter Hark, a form of "logical dependency of a use upon another one" (2011: 515).[1]
[1] Sometimes, in order to emphasise this kind of logical dependency, secondary uses are said to be parasitic (Hanfling 1990: 131; Baker & Hacker 1990). However, I will avoid this qualification to define secondariness. Even though correct, it hides an implicit evaluation of secondary uses as a philosophically irrelevant phenomenon, something I explicitly challenge in this paper. These criteria are perfectly extendible to metaphors and figurative language too. The third criterion, then, is meant to rule these out of the picture. The main difference between metaphors and secondary uses is that metaphors are mostly optional figures of speech that can be explained by paraphrase and thus avoided if they bring about misunderstandings. To use Cora Diamond's example, if I say that 'man is the cancer of our planet,' I can rephrase my thought far less emphatically by stating that humankind is an invasive species that is destroying natural ecosystems (1991: 227). This kind of explanation is precisely what is excluded in the case of secondary uses: we cannot appeal to another piece of language to paraphrase what we mean. Secondary uses are in this sense not optional; they are unavoidable, insofar as we are bound to use those specific forms of expression to convey the meaning we want to convey. "I could not express what I want to say in any other way than by means of the concept of yellow," Wittgenstein points out (PPF: § 278). At the same time, the unavoidable character of secondariness is not to be read as a psychological compulsion. That would mean that we could contemplate alternative forms of expression and not choose them because we feel urged otherwise. On the contrary, there is no alternative we can envisage when we use words secondarily. There is no other way to convey what I mean by saying that e is yellow than saying that e is yellow, and the words I apply here feel like a natural and spontaneous extension of their use, which flows naturally from the very meanings of the words involved.[2] Incidentally, it is important to stress that Wittgenstein's point is entirely negative. He is not sketching in this regard a theory of metaphors; that is, he is not committed to the claim that every metaphor must be paraphrasable and must be optional. If so, it would be hard to defend such a theory. For instance, dead metaphors -the cannon is said to have a mouth, the table legs -are metaphors which became part of the common heritage of our language, and they are far from being optional; we just use them as the most appropriate forms of expression in certain circumstances. The very notion of paraphrase is not that uniform either. In the case of Diamond's example, it is easy to appeal to non-figurative applications of words to convey the same thought. Not so for the controversial Shakespearean-themed example of 'Juliet is the Sun,' where the paraphrase too is clearly figurative. We can explain that Juliet's role in Romeo's life is a centre of gravity that illuminates and makes his life bright, but these would be, again, figurative uses of the words 'centre of gravity' or 'illuminate.' Thus, there are metaphors that are paraphrasable through other figurative words, others that are not, whereas others are hardly optional, as in the case of dead metaphors.
This indeterminacy, however, is not an argument against Wittgenstein's point, which remains narrow: the comparison with metaphors aims only to stress the unavoidable character of secondary uses, and together with it the fact that they are bound to the primary meanings of their words. [2] I am here using the word 'spontaneous' with the sole intention to stress the difference with psychological compulsion. It is not a Wittgensteinian way to define secondariness. Spontaneity is mentioned in a (rather obscure) remark in the second part of the Investigations, in a way that is, however, compatible with my use: "What is new (spontaneous, 'specific') is always a language game" (PPF: § 335). Notably, spontaneity is connected here with novelty, a distinctive trait of secondary uses (LW I: § 61). As such, to convey what I want to convey by saying that e is yellow I surely can use synonyms if needed, but I would nevertheless be bound to the meanings of the words I choose. Not so if I want to describe the role of Juliet in my life as a lover, or the physical prowess of my friend Jason when I say that he is a bull. To sum up, secondary uses are in a way figurative, without being optional or explainable by paraphrase. They have an unavoidable character, as the meaning we want to convey is bound to the forms of expression we use, without however being compulsory. They feel like a natural extension of the use of words; in this sense they are an immediate or spontaneous use of language, without however constituting the primary meanings of words (by saying that Thursday is lean I am not changing the meaning of weekdays to be found in a dictionary). As in the case of metaphors, by using words secondarily the speaker means something through what their words conventionally mean. Differently from metaphors, however, what we mean when words are used secondarily is directly and immediately inscribed in the words and concepts we use, so that any explanation that is not a mere repetition of the same words (or synonyms) is in principle opted out. 2. Secondariness and Expression Even though the second part of the Investigations lays out the criteria for secondariness, it is dramatically short of examples. In order to get a clearer picture of what Wittgenstein had in mind, beyond the puzzling cases of coloured vowels and lean Thursdays, we should look in the Nachlass. There, as already noticed by Michel ter Hark, Wittgenstein looks committed to the idea that we use words secondarily to express ourselves, to convey a certain experience, as the following remark implicitly suggests: "But why do you use just this expression for your experience? -such a poor fit! -That […]" To give some context to the remark, Wittgenstein is here investigating the use of those requests, or orders, asking someone to utter an ambiguous word and together mean only one of its meanings. Wittgenstein's example involves the German word Bank, which can mean either 'bench' or 'bank' (RPP II: § 571). These orders make sense only if we experience the meaning of our words, something a meaning-blind person is frequently said to be unable to do (Z: § 184, LW II: § 711). Hence, secondary uses are mentioned in a context where Wittgenstein is exploring the role of the experience of meaning in our use of language. The sentence 'the vowel e is yellow' is connected to a certain experience, an experience we express precisely by using the words secondarily. Primitive expressions of pain, such as cries and yells, are here introduced as a simple term of comparison for language used expressively.
Importantly, Wittgenstein suggests that, if we overlook the function our sentence is supposed to have, we might take it "the wrong way," that is, as if those words had a different function (a descriptive one, for instance), and confusions might arise. We might wonder what kind of experience we really express by talking about coloured vowels. It is hard to see how we would give vent to our own feelings by simply talking about the colours of sounds. It has been suggested that the experience in question is that of psychological synaesthesia, a particular psychological condition that makes us experience involuntary cross-modal sensations, such as seeing a sound as colourful, or hearing a colour with a certain tone (Kindi 2009). It is likely that synaesthetes would use such expressions to compare their experiences, or to exemplify their own experience to somebody for whom letters do not have colours. However, the twin example of the lean Thursdays puts cross-modal sensations out of the picture. Even in this case an experience is expressed, but no synaesthesia is involved. Accordingly, we can admit that psychological synaesthesia is expressible through secondary use, but the concept of experience Wittgenstein had in mind was probably broader. As ter Hark already suggested (2011: 516), the experience in question is more likely to be what Wittgenstein calls, in the context of his discussion of the experience of meaning, the atmosphere of our words. A murky notion in Wittgenstein's Nachlass, the atmosphere can be defined as "the corona of faintly indicated uses" that familiar words carry in use (PPF: § 35). In the Last Writings, it is analogously defined as "a picture of a word's use" (LW II: 39). To sum up Wittgenstein's scattered employment of the term, we can say that a word acquires an atmosphere -a sort of psychological trace the word carries in use -once we get accustomed to applying the word in the multifaceted contexts of our life and culture. The atmosphere can also be defined as a felt experience of the meaning of our words, which is rooted in the broader context of our ordinary life and language use. What is more, such an experience can only be referred to "by repeating the original expression" through which we convey it in the first place (ter Hark 2011: 516). By saying that e is yellow, then, I am giving expression to the peculiar symbolism the vowel has incorporated in my own life, by associating it with a certain colour (the same goes for the weekdays example). Such a symbolism might be naturally caused by the vowel's sound, the role of the colour yellow in our life, certain associations with our cultural habits, or whatever. As the following remark points out, the experience can be expressed only through these words, just as we can express pain through natural pain reactions: "Would it be more correct to say that yellow 'corresponds' to e than that 'e is yellow'? Isn't the point of the game precisely that we express ourselves by saying e is yellow? Indeed, if someone were inclined to say that e 'corresponds' to yellow and not that it is yellow, wouldn't he be almost as different from the other as someone for whom vowels and colours are not connected?" (LW I: § 59) We get struck by the strangeness of the expression 'e is yellow.' We "take it the wrong way," and think that it should be reformulated, as it cannot be that we are really attributing a colour to a sound (sounds cannot be colourful, after all!).
So, we might suggest that what we really mean is only that the vowel 'corresponds' to the colour. However, here lies the double mistake. First, the verb 'to correspond' is likewise used secondarily (the quotation marks are there to stress this aspect). We would not really be getting further, as we would only swap one controversial word for another one, equally controversial. Second, it is the point of the game that we use the verb 'to be' here. If we replace the words that we use to express what we mean with 'correspond,' or with the expression 'is like' because we think that we deal with a simile here, we lose it, we do not play that game, we simply do not convey what we want to express. We would not be expressing the atmosphere of the words. If somebody is really inclined to talk about correspondence and not about being here, their case would be similar to that of a person who simply does not get the kind of connection we envisage between vowels and sounds at all, because they have a completely different experience of those very words. The expressive function of secondary expressions is less controversial if we think of another, less idiosyncratic, example:[4] "The feeling of the unreality of one's surroundings […] Everything seems somehow not real; but not as if one saw things unclear or blurred; everything looks quite as usual. And how do I know that another has felt what I have? Because he uses the same words as I find appropriate. But why do I choose precisely the word "unreality" to express it? Surely not because of its sound. (A word of very like sound but different meaning would not do.) I choose it because of its meaning. But I surely did not learn to use the word to mean: a feeling. No; but I learned to use it with a particular meaning and now I use it spontaneously like this. One might say -though it may mislead-: When I have learnt the word in its ordinary meaning, then I choose that meaning as a simile for my feeling. But of course what is in question here is not a simile, not a comparison of the feeling with something else." (RPP I: § 125) We can talk about 'a sense of unreality' when everything feels unreal. This expression is usually employed to report a particular feeling of detachment and alienation, and it characterizes certain psychological disorders. Notably, this expression is used secondarily, as 'unreality' is not learnt to mean a feeling, and we would not mention the sense of unreality to explain its meaning. The use is in a sense figurative, but not metaphorical; it does not constitute a simile either, as no comparison is really being drawn. We learn the word in its primary meaning, and we use it in a new way; we apply it in a new context in which the point is to express a particular feeling. [4] There are other examples from the Nachlass that stress the connection between secondariness and expressiveness. LW I: § 69-73, for instance, examine the forms of expression that we employ to describe the character of proper names as secondary uses. For instance, in the sentence 'the name "Schubert" fits Schubert's work,' the verb 'to fit' is used secondarily. Again, Wittgenstein points out that we do not really describe Schubert or his music by using these expressions. We rather formulate "a pathological statement about the speaker" (LW I: § 73), as we reveal something about our own cultural world (our musical taste, beliefs, and opinions concerning Schubert's music).
The word can be seen as "the bearer of another technique" (RPP I: § 126), which is borrowed or co-opted from another language game. This extension of use is spontaneous -no one teaches it to us -and is based on the primary meaning of the words in question. Wittgenstein says that this feeling can be conveyed to others, as whoever talks about a feeling of unreality uses the same words secondarily as other members of the same community would. There is no other criterion for understanding apart from our tendency to use the same forms of expression in the same circumstances. As ter Hark suggestively frames it, the person will know what we are talking about because we are "in tune with the same expression" (2011: 516). Strikingly, what is in common between people using the same words secondarily is not only a cluster of rules (the conventions through which we learn the primary meanings of the words involved). What is common is that we use the same forms of expression to convey the same experience. We use the same expressions because we are accidentally attuned with each other in the same community of experience. The same can be said about the coloured vowels and the fat Thursdays. Finally, it is worth mentioning that Wittgenstein talks about experience and secondariness also in the case of telling our dreams. We have already encountered dreams in RPP II: § 574, where the expression 'in my dream I knew that…' was paired up with the sentence 'e is yellow'. The most explicit passage about dreams is the following, where, once again, dreams are mentioned in connection to experience (in this case, the feeling of unreality): "Now I am not using the word (meaning) for something else; rather, I am using it in a different situation [just as I am not using 'know' to refer to two different things when I say 'In my dream I knew.' Cf. also: feeling of unreality]." (LW I: § 57) When we report the content of a dream, it can happen that we say that in the dream we knew something. This 'knowledge' is different from the knowable information we share in everyday life, insofar as it entertains a different relationship with the dreamer: when we say that, in a dream, we know something, we are not really committed to it, we do not really believe that we knew anything. Furthermore, and more importantly, what we know in a dream usually does not require the same level of epistemic warranty: we simply know things, even though we have no reasons or grounds to believe so. We do not learn to use the verb 'to know' this way, yet we use it: we co-opt the word for other means. The same thing can be said of other analogous forms of self-attribution, such as the reports of hallucinations. They are compared to dreams in the following way: "Is a dream a hallucination? -The memory of a dream is like the memory of a hallucination, or rather: like the memory of a real experience. This means that sometimes you would like to say: "I just saw this and that", as if you really had just seen it." (LW I: § 965) The memory of a hallucination is similar to the memory of a real experience and to that of dreams. We can say, for instance, that we saw something we later found out not to be real, yet we use the verb 'to see' even if we did not really see anything. The verb is here used secondarily, as much as the verb 'to know' when reporting a dream: a natural extension of a word used in a new context, to play a different game. The cases of dreams and hallucinations are more complex and ambivalent than the other cases of expressive uses of language.
For sure, we are neither expressing ourselves when talking about our dreams, nor are we talking about how we feel, as in the case of the feeling of unreality. It is true, however, that while telling a dream we give substance to an experience we had, rather than describing an actual thing in the world, or a certain piece of knowledge. Whoever does not dream would not understand our practice of recalling a dream, very much like the case of a speaker who does not understand what we are getting at by saying that e is yellow. In a community where no one has ever hallucinated, it is likely that we could not be understood when describing a mirage either.

3. Beyond Expression: Description and Absurdism

Now, if we focus only on the examples of vowels and weekdays, or the one about the sense of unreality, as ter Hark does in his paper, we might be tempted to conclude that secondary uses of words have only an expressive function. 6 This, however, would be a mistake. Let us think about the case of teaching music and art or giving instructions on how to play a song. The famous jazz musician Wayne Shorter was once reported to have told one of his band members, Danilo Perez, to 'put more water in the chords'. 7 This expression is secondary, as it is logically dependent on the primary meaning of its words, we cannot use it as an example to elucidate the meaning of 'water', and it is not a metaphor either, because it is not paraphrasable. The sentence presupposes a certain understanding of how to play music and, if we want, even a certain experience of music and sound. 8 However, it is neither a sentence through which Shorter expressed anything about himself, as in the case of the feeling of unreality, nor is it a sentence expressing the atmosphere of our words; he was rather giving an instruction about how to play a song through an implicit description of the sound he wanted to achieve. Descriptions, Wittgenstein points out, are "instruments for particular uses" (PI: § 291). There is no univocal model of what a description is; it varies from case to case and depends on what we want to do with it. As such, I can describe a sound as watery if I want to convey certain information, even though I feel as if I cannot use any other form of expression to do it. In this case, I would use language secondarily to provide a description. Other valuable examples of descriptive secondariness come from poetic language. It is the case, for instance, of poetic synaesthesia.

6 This is because ter Hark's main goal is to clarify Wittgenstein's employment of the concept of experience: "Other concepts […] e.g., 'secondary use', 'illusion', 'inclination', and 'primitive reaction', turn out to be part of one and the same conceptual survey of meaning-experience" (2011: 502). We find a similar, even stronger, claim in Gilead Bar-Elli, as he claims that "the phenomenon of using words in their secondary sense depends on the experience of meaning" (2006: 2043). Differently from ter Hark, however, Bar-Elli envisages in Wittgenstein a theory of meaning as experience and interprets secondary senses as an epiphenomenon of the experience that every word supposedly bears in use. As I will show in the following paragraph, this general approach to secondary uses is partial, as it implicitly denies that words can be employed secondarily without an experience being expressed or involved.

7 As attested in Shorter's biography (Mercer 2007: 302).
Expressions like 'soft silence,' 'black scream,' and 'silver voice' are conjured because of their evocative power and poetical effectiveness, yet they can be used to describe actual features in the world. It might be suitable to call a scream 'black,' for instance, when it is loaded with dread, bereavement, and despair. 9 In this case, using the synaesthesia can be considered to be the most natural, appropriate, even accurate way to provide a description. Aesthetic qualities, in general, usually require secondary employments of words to be described: paintings that are dynamic, melodies that are plaintive, wines that are round if tasted, and so forth. 10

Besides, in the Investigations, there is another important instance of secondariness to consider, one that can hardly be explained as expressive or descriptive. Amidst the private language argument, while addressing the privacy of pain and its relationship with pain behaviour, Wittgenstein addresses the objection that we can talk about pots and their feelings in a fairy tale, even though there is no pain behaviour imaginable accompanying pain in this case. This is how Wittgenstein addresses this issue:

We do indeed say of an inanimate thing that it is in pain: when playing with dolls, for example. But this use of the concept of pain is a secondary one. Imagine a case in which people said only of inanimate things that they are in pain; pitied only dolls! (When children play trains, their game is connected with their acquaintance with trains. It would nevertheless be possible for the children of a tribe unacquainted with trains to learn this game from others, and to play it without knowing that it was imitating anything. One could say that the game did not make the same kind of sense to them as to us). (PI: § 282)

Here, the notion of secondary use is employed to neutralize the objection that, insofar as we can attribute pain to inanimate objects, pain is somehow logically independent from pain behaviour. This conclusion can be easily blocked if we carefully distinguish the different uses of the word 'pain' in this case. It is true that we can attribute pain to dolls - this is what children frequently do while playing with their toys - but this use of the word is secondary, and nothing about the nature of pain follows from it, just as nothing about the nature of weekdays can be inferred if we say that Thursday is fat or thin. It is easy to see why we deal with a case of secondariness here. First, we do not point at dolls to explain what pain is (at most, we can point at a certain expression on a doll's face as an example of a pain expression). Second, this use is logically dependent on another one, as we need to begin by learning the word 'pain' in its connections with pain behaviour. Wittgenstein underlines this kind of logical dependency, so typical of secondariness, when he asks us to imagine a group of people who pity only dolls. We can certainly imagine something like that; only, the word would have a different sense for them, as it would not share the same connections with our life as our word does (analogously, if the word 'yellow' were used only to talk about vowels, its meaning would be quite different).
The case is comparable to that of children playing with toy trains: in the case of a tribe of children playing with trains but ignoring their connection with real trains, the game would not have the same sense - the same role - in their life: it would not present the same connections with its context, the things children would say about it would be different, the way the game is played would diverge in significant ways, and so forth. 11 Third, this kind of use is not metaphorical, as there is no other way to say that a doll is in pain in the context of a children's game. Notably, it is clear that children neither describe anything while playing, nor express their own feelings. They just play a game of pretence, where dolls cry and suffer because they are taken to be real people and represent living bodies. Even though it consists of a spontaneous and natural extension of a word's use in a new context, this case of secondariness is not forced on us in the same way the coloured vowels and the other expressive uses are. The attribution of pain is in fact markedly stipulative: children expand the use of words by establishing a new instance of use in the context of a certain game. Thus, here secondariness acquires a further dimension that is not reducible to the other examples already described, a feature that is however fully captured by the three criteria for secondary uses. The general picture of secondariness becomes even more complex.

Now, as the case of the attribution of pain to dolls is introduced as a term of comparison to understand why we can say that pots talk and feel, we can make a step beyond Wittgenstein's text and suggest that secondariness is also involved when we tell fantastic and absurdist tales. Pots do not talk and feel; we cannot even imagine what the behaviour of a pot in pain would be like. Where does a pot have a mouth, how could it express its pain? Yet when we tell a story - within the specific language game of telling a fiction - a sentence like 'the pot shrieked in pain' is perfectly fine: it does what it is required to do in the context of telling a tale. Words are spontaneously projected into a new context of use and combined to represent absurd and unimaginable situations.

11 Oswald Hanfling argues that the case of toy trains is an example of a weaker form of secondariness, as we can play toy trains without being acquainted with real trains and thus without knowing the primary use of the word (1990: 127). However, this is misleading. The fact that the whole sense of the game would change means nothing but the fact that we would play a different game.

The example that Wittgenstein gives about talking and moaning pots might be silly, yet it should not be overlooked, as it gives us the chance to understand the role of secondariness when it comes to understanding certain employments of words in literature. Let us think about Franz Kafka's Metamorphosis, an eerie and absurdist novella about a man, Gregor Samsa, inexplicably transformed into a monstrous cockroach. This example is not accidental, as it caught the attention of Oxonian philosophers in their disputes about attributions of identity, as reported in Isaiah Berlin's memories: 12

The principal example of the latter [the problem of identity] that we chose was the hero of Kafka's story Metamorphosis, a commercial traveler called Gregor Samsa, who wakes one morning to find that he has been transformed into a monstrous cockroach, although he retains clear memories of his life as an ordinary human being.
Are we to speak of him as a man with the body of a cockroach, or as a cockroach with the memories and consciousness of a man? 'Neither,' Austin declared. 'In such cases we should not know what to say. This is when we say "words fail us" and mean this. We should need new words. The old ones just would not fit.' (1973: 11)

Kafka depicts an impossible scenario - a man wakes up and finds himself transformed into a cockroach - and in order to do that he employs old words in a new context of use: the word 'cockroach' is employed to describe a human being. This description is neither figurative nor derogatory; it is secondary. In the story, Samsa is a cockroach. We cannot make sense of this strong identity. We may in fact wonder: what does it mean for a human being to be an insect? How can a cockroach be angry, suffering, thoughtful, as Samsa as a human being can be? Yet, in the context of storytelling, all these questions are pointless, just as it is pointless to question whether dolls are in pain or not when a child plays with them. Most importantly, we cannot weaken the identity - by saying that Samsa is rather a man in the body of a cockroach, for instance (a formula that is also committed to a certain dualistic preconception of the mind-body problem: this is what Berlin and Austin were interested in investigating) - without losing the sense of Kafka's tale, as much as we lose what we want to express with 'e is yellow' if we rather say that e is like yellow or corresponds to yellow. The strong identity is just a more appropriate form of expression. Kafka's Metamorphosis can then be seen as an exercise in secondary language use: within a certain narrative, words can be employed in a new context, their use is extended to convey a certain meaning, a meaning that is bound to the specific combination of words we employ to express it. It is not, as Austin suggested, that we lack words to explain Samsa's condition, that words fail us, and thus we should invent a new vocabulary to make sense of it. The point is exactly the opposite: to express Samsa's condition we can use only those words as Kafka uses them.

4. A Philosophical Distinction

Considering the wide application of the notion of secondary uses in Wittgenstein's work, and its further fruitful applications beyond his texts, it is hard to see how we can reduce the plurality of examples we have discussed so far to a single unifying principle, such as a univocal function that secondariness is supposed to fulfil. We can use words secondarily to express ourselves, to convey a certain experience or feeling, and to give instructions on how to play an instrument. Sometimes secondary uses have a descriptive function; sometimes they are stipulative, as in the case of pain attribution to dolls, or in fiction. The reason why we can provide so many different instances of secondary uses in Wittgenstein's work is that the distinction is meant to be primarily philosophical, that is, it is designed to be a helpful tool to distinguish different shades of word use, and thus clarify those areas of language that confuse us and lead us astray. As the primary aim of Wittgenstein's philosophy is clarification through the dissolution of philosophical problems, the distinction is markedly problem-relative, that is, it acquires a certain meaning in relation to the problem it is designed to solve. Accordingly, a single unified account of secondariness is in principle impossible.
Let us examine how Wittgenstein appeals to secondary uses to clarify language and tackle a variety of different problems. First, it should be noticed that Wittgenstein mentions the coloured vowels for the first time in the context of an elaborate discussion on commonality (BB: 138-139). The example is introduced to tackle a certain preconception, according to which every application of a word needs to have something in common with all the others in order to be used legitimately. 13 We can define 'blue' to be the colour that all its specific shades have in common, but this does not necessarily imply that blue is a thing that can be pointed at and recognized before learning and applying the word. One conclusion, reported in the Brown Book, is that, when we talk about blue as the thing common to all its shades, we are merely saying that we use the word in all those cases, and nothing more: we are certainly not committed to assuming a common thing that we can point at which all the shades share (BB: 135). In other words, there is no commonality that can be first acknowledged and then appealed to as a reason for our use of the word. There is nothing beyond use that works as a justification for it: we simply apply the same word to all these shades of colour. Notably, ordinary descriptive sentences like 'the shades A and B are both blue' and expressions like 'e is yellow' are akin, as in both cases we employ a colour word without having a reason to do so, that is, without having a clear perception of a commonality that could justify the new use in the new context. 14 In this case, then, the sentence 'the vowel e is yellow' is useful for proving a general point about the reasons we provide for using words as we do, and for dissolving a certain picture of commonality as a required condition for use.

In the case of PPF: § 274-278 - those remarks where the distinction is clearly laid out - a careful analysis of their context helps us understand that Wittgenstein's aim was to clarify our ordinary employment of the word 'meaning' to talk about the way we experience the meaning of words, as when we say that a word is 'loaded' with meaning. Especially while reading a poem aloud, Wittgenstein points out, words acquire a special meaning, a different ring (PPF: § 264) that we feel. Now, this use of the word 'meaning' is apparently problematic, as it does not refer to the use of the words in question but rather to an experience or a feeling, and we know that use for Wittgenstein defines the meaning of meaning (PI: § 43). Is, then, the word 'meaning' polysemous, or ambiguous? Should we dismiss the definition of meaning as use because sometimes the word seems to be attributed to a feeling of a sort?

13 The gist of this discussion is also present in the Investigations, where Wittgenstein lists different examples of our use of the expression "to see what is in common" (PI: § 72), in the same sections addressing family resemblance.

14 Those interpretations that aim to account for the attribution of aesthetic qualities to works of art through secondary uses emphasise this aspect of secondariness, as it allows us to avoid any appeal to commonality or resemblance to justify why we use words as we do when describing works of art (see, in particular, Tilghman 1984 and Budd 2006). The employment of secondariness to tackle certain specific aesthetic problems is thus consistent with the general idea that the distinction between primary and secondary uses is a useful philosophical tool to solve specific confusions.
Wittgenstein implicitly rejects all these questions, in the following way:

But the question then remains why, in connection with this game of experiencing a word, we also speak of 'the meaning' and of 'meaning it'. - It is a characteristic feature of this language-game that in this situation we use the expression 'We pronounced the word with this meaning' and take this expression over from that other language-game. (PPF: § 273)

Wittgenstein is here rephrasing the criteria for secondariness through the vocabulary of language-games. Much as in the case of secondariness, we use the word 'meaning' in virtue of its meaning to give expression to an experience; this new use is logically dependent on its primary meaning, as we must borrow the expression from one language-game to another, an expressive one. The word 'meaning' is neither polysemous nor ambiguous, as secondary uses do not constitute a new meaning, and we can maintain the definition of meaning as use, since it is because of it that we can use the word 'meaning' secondarily. As further proof, the remarks that immediately follow PPF: § 273 are those that introduce the distinction between primary and secondary uses: further examples are provided to strengthen the idea that in language these cases of use are more frequent than we might expect.

We might wonder, however, what problem Wittgenstein aims to tackle by appealing to secondariness here. The answer is the following: through secondariness, we can give a perspicuous description of a certain employment of the word 'meaning' - the one through which we refer to the experience of words - that could lead us to relapse into some form of mentalism, that is, the theory of meaning according to which meaning is an inner or private experience of sorts that accompanies the words in use. 15 If we do not carefully distinguish the primary use of the word 'meaning' from its secondary use, we might be tempted to take the secondary use of the word as actual proof that meaning is something mental accompanying our words. It is not so: we talk about meaning as an experience in those cases where we are giving expression to the feelings a certain word evokes in us, for instance in a poem, and we can do so by employing words secondarily.

Given that Wittgenstein's remarks are scattered and sketchy, we cannot easily point out what kind of philosophical problem the case of dreams is related to. Arguably, however, it can be seen as an important case study to tackle the premise of Cartesianism, that is, the assumption that what we know in a dream and what we know in real life are indistinguishable, so that philosophy acquires the task of grounding our knowledge and proving that we do not live in a dream of sorts. On the contrary, if we stress that our talk of knowledge, while reporting a dream, is secondary, then we are less tempted to assume that Cartesian doubt is legitimate. It is legitimate only if we conflate two distinguishable uses of the same verb, 'to know.'

15 Mentalism is also the philosophical target of two other instances of secondary uses of words we did not mention, the case where we say that we 'calculate in the head' (PPF: § 279, LW I: § 801, 802, 804) and read silently (LW I: § 803). In both cases, stressing that words are used secondarily helps to avoid the temptation to claim that calculating in the head or reading silently actually refer to an inner, hidden process in our mind.
Analogously, we can see the philosophical importance of distinguishing between primary and secondary uses when it comes to understanding the logic of our discourse about hallucinations: if we stress that in this case words are used secondarily, we are less tempted to treat the report of a hallucination as equivalent to the description of a state of affairs or a perceptual report. If so, then we can target scepticism regarding perception at its core.

The case of telling fantastic stories was originally introduced in the private language argument to tackle the idea that we can detach pain from pain behaviour. Indeed, we can attribute pain to dolls and pots while telling stories. Only, this is a secondary use that does not reveal anything significant about the primary uses of our pain vocabulary (just as Kafka's cockroach does not reshape our zoological taxonomy). This vocabulary is learnt in the broader context of our life, in close contact with the pain expressions and behaviour of the other members of our linguistic community. Without it, there would be no pain vocabulary as we know it. Thus, we can point out that pain attribution to dolls is secondary, and so we neutralize an observation that is meant to back up the idea that our concept of pain refers to an inner thing that is logically independent from our behaviour.

Beyond Wittgenstein's examples and actual remarks, the case of giving instructions on how to play music might be helpful to clarify our concepts of musical understanding and musical explanation. To ask somebody 'to put more water in the chords' is a form of expression that Shorter felt to be the only appropriate one to convey a certain idea of how to play. The instruction works as a tool to lead somebody to understand how the piece should be played, much as we do when we lead others to grasp the meaning of a poem (PI: § 533). As in the case of the experience of meaning, distinguishing different cases of what we call 'explanation' and 'understanding' in language might be useful in order to avoid any temptation to conflate different cases of understanding, and thus relapse into a mentalist preconception, according to which understanding always requires a mental act, or an experience. Explanation can do its work in the case of understanding music, but only through a secondary usage of language.
Design and Analysis of Multimedia Mobile Learning Based on Augmented Reality to Improve Achievement in Physics Learning

Abstract— The incorporation of mobile technology is necessary for physics teachers to improve students' learning experience. Therefore, this research aimed to develop and analyze mobile learning based on Augmented Reality for physics education in Senior High School. The research employed a mixed-methods approach consisting of two stages. The first stage used research and development (R&D) following the Analyze, Design, Development, Implementation and Evaluation Instructional Design (ADDIE ID) Model, which comprises a series of steps for analysis, design, development, implementation, and evaluation. The second stage used empirical analysis with limited classes. The validity of the learning device was assessed using an instrument that covered aspects of planning, pedagogy, content, and technique. The validation results indicated high scores, with averages of 0.91 for planning, 0.94 for pedagogy, 0.96 for content, and 0.90 for technique, confirming the validity and reliability of the mobile learning approach for physics education. The empirical analysis revealed a high level of reliability, with an alpha value of 0.82, leading to the determination that the mobile learning approach was valid and reliable for physics education. The second stage of the research was the experimental method. Two classes were randomly selected among six grade XI classes of SMA Pekanbaru; one class was designated as the experimental group, while the other served as the control group. Both groups consisted of 34 students and were selected based on homogeneity and normality test results. The results of the experiment indicated that multimedia mobile learning based on Augmented Reality can have a positive impact on students' achievement in physics.

I. INTRODUCTION

Physics is a branch of science that investigates natural phenomena [1], and most of its material is abstract and difficult to describe with certainty [2]. According to Chiappetta [3], science encompasses a way of thinking, a method of investigation, and a collection of knowledge. Furthermore, the sciences can be classified into two categories, micro and macro, based on the size of the objects they study [4]. The formation of a positive attitude towards the study of physics involves the development of belief, curiosity, imagination, reasoning, and self-awareness [5]. The results of the 2019 national high school physics examination, administered to both public and private school students, revealed a low level of achievement and indicate the need for an investigation into the difficulties faced by students in their study of physics. The low percentage of correct answers suggests a lack of understanding of the material taught, as many students are only able to solve problems with the aid of examples provided by their teachers. This research aims to address the difficulties that students encounter in comprehending the concepts of physics, particularly in the area of mechanical waves. This is in line with the finding that physics material can be abstract and challenging to grasp, as evidenced by Depdiknas [5]. A comprehensive literature review found that AR can provide three-dimensional images and integration with objects, as highlighted in [6,7].
The objective of this research is to answer the following research questions: 1) How can multimedia mobile learning based on AR be developed for the material on mechanical waves? 2) What is the level of validity of multimedia mobile learning based on AR for the material on mechanical waves? 3) What is the level of reliability of multimedia mobile learning based on AR for the material on mechanical waves? 4) Can multimedia mobile learning based on AR enhance students' learning and cognitive skills?

Augmented Reality (AR) media enhances the learning experience by combining two-dimensional and three-dimensional animated images in a more realistic manner [15]. This media has the potential to offer a new perspective and mode of learning, making it a promising educational tool [16]. Previous studies have investigated the use of three-dimensional techniques in learning. For example, Virvou and Katsionis's research [17] explored the effectiveness of games in the learning process and found that virtual reality games can be highly motivating and enhance educational outcomes. Similar developments have also been conducted by Gosalia [18], who developed three-dimensional animations for e-learning games. According to Ismayani [19], the term AR was first coined by Thomas Caudell and David Mizell in 1990 while working at Boeing. AR was defined as the integration of virtual images into the real world. It is a technology that combines computer-generated objects, two-dimensional or three-dimensional, with the natural environment around the user in real time. The experience displayed helps users to come up with new ideas to adapt to the real world [19]. AR is a technology that combines the real world with the virtual [15]. Azuma [20] defined the term as a combination of real and virtual objects, which can run interactively in more realistic situations; three-dimensional objects are integrated into the virtual or real world. According to Nasir, AR in education is one of the emerging technologies [8] with great pedagogical potential, increasingly recognized in line with [1,7,9,39-44]. AR mobile-learning-based systems focus mainly on games or simulations [8,19,26,45]. Mobile device features and properties, such as portability and social interactivity, simplify reality by bringing in material things; the information therefore does not directly affect a user who does not interact directly with real-world communication, such as streaming video [46]. The Indonesian government, through the Ministry of Research, Technology, and Higher Education, reported in 2017 that the number of smartphone users in Indonesia had reached 25% of the total population, or 65 million people [47]. Meanwhile, on its website, the Ministry of Communication and Information of Indonesia cited a report by the digital marketing research company eMarketer, which in 2018 [48,49] stated that the number of active smartphone users was more than 100 million people. Smartphones with sophisticated operating systems provide users with extensive access to data and information as well as various multimedia contents and interesting applications, including the potential to be used in developing AR.

III. RESEARCH METHOD

The research is a combination of development and experimentation with the ADDIE ID model. It consists of five stages: Analyze, Design, Development, Implementation, and Evaluation.
This study focuses on two crucial stages: the development of the learning media (augmented reality) and the experimental stage. The research procedure and method for developing augmented reality multimedia mobile learning is shown in Fig. 1, which depicts the two-phase research. The first phase involved the development of AR-based multimedia learning using the ADDIE instructional model. The second phase was an experimental study to evaluate the impact of the multimedia learning on physics achievement. AR is an interactive technology operating in more real-time space, in the form of three-dimensional animation that combines the real world with the virtual. In its use [21,22], AR requires the aid of electronic devices such as smartphones or tablets with the Android operating system to function. Its accessibility and ease of use through mobile devices make it a valuable asset not only for teachers but also for students in the field of education [23].

A. Multimedia Mobile Learning Development Procedure

The development of multimedia mobile learning by the instructional design ADDIE Model (ADDIE ID Model) is given in Fig. 2, and its basis in Augmented Reality is shown in Fig. 3.

1) Analysis phase (analyze): The analysis stage focuses on identifying the problem and developing the AR learning media. It includes several sub-studies, such as needs and task analysis, which can be described as follows:

a) Needs analysis: The purpose of the needs analysis is to determine the problems or difficulties and characteristics of high school students in learning about mechanical waves in physics, as outlined in the previous background. This stage includes a review of previous research results to identify the problems and their underlying causes.

b) Task analysis: Task analysis is carried out to define the topic and content of AR as a learning media that suits the needs. This analysis consists of several steps, including the following: 1) Material structure analysis: it analyzes the core and basic competencies underlying the development of the fundamental problems. 2) Analysis of learning objectives: learning objectives are based on the main problems developed under the core and basic competencies in the 2013 Curriculum. 3) Concept analysis: it includes identifying the main concepts that should be in the AR learning media, so that the development of the AR learning media is more coherent and systematic.

2) Design phase: At this stage, the researchers designed the learning media according to the needs.

3) Development: This stage is the activity of making the learning media. All the steps and components designed are carried out at this development stage to form a complete product per the plan.

4) Implementation: The learning media has been completed at this stage, and its use will be tested. The trial is conducted to determine the consistency of the learning media with the previous plan.

5) Evaluation: The evaluation stage focuses on identifying any deficiencies and errors in the ADDIE learning media development stages. Based on the evaluation results, the product can be revised to create the desired learning media.

The next step after the five stages is to test the validity of the learning media product. This validation is conducted by experienced physics education experts who act as lecturers. The aim of this validation is to obtain recognition of the feasibility of the learning media. If the learning media is found to be valid, it will be revised and finalized to produce the final product.

B.
Mobile Learning Experimental Procedure

Experimental research was conducted to assess the impact of multimedia mobile learning on students' learning and cognitive skills. Following Creswell [51], a quasi-experiment with a nonequivalent control group design was carried out, as shown in Table I. The instrument used for data collection is a validation sheet for educational game learning media, adapted from the instrument made by Retnawati [52]; the validation assessment items can be seen in Table IV.

Aspect 1: Design
2. The letters used are appropriate or easy to read
3. Images in the media accord with the content
4. The images used help students' understanding
5. The images used help with learning
6. The colours used are suitable for reading
7. The sound used is appropriate and not disturbing
8. The buttons or signs used are easily recognizable
9. The positioning of text, graphics, video and markers is consistent
10. Software instructions and the user guide are complete

Aspect 2: Pedagogy
11. Teaching competencies are clearly written
12. Teaching competencies can be achieved
13. The competency formulation becomes a guideline for media users
14. Topics accord with the competencies
15. The presentation of topics attracts students' attention
16. The information conveyed is easy to understand
17. The media encourages students to think creatively
18. The presentation of the material is organized and easy to follow
19. The examples and exercises given are in accordance with the material
20. The learning methods are suitable for multimedia learning

Aspect 3: Content
21. Learning materials are in accordance with Curriculum K-13
22. Learning materials are in accordance with the competencies
23. Learning materials are appropriate to the level of students' abilities
24. Learning materials are appropriate for students' basic knowledge
25. Lesson materials contain an educative value
26. Lesson materials are accompanied by exercises
27. Exercises accord with the topic of the lesson
28. Lesson materials are accompanied by formative tests
29. Lesson materials are accompanied by summative tests
30. Formative and summative tests accord with the lesson materials

Aspect 4: Technical
31. Users can control the learning process
32. The media has many branches to other parts
33. Users do not get stuck while browsing the media
34. The flow of the media content presentation is easy to follow
35. There is more than one way of acquiring information
36. Users can easily find the information they need
37. Users can exit the media whenever they want
38. The software is easy to use (operate)

The questionnaire assessment uses a Likert scale [51], which is presented in Table V. The validity value is calculated using Aiken's V formula [54],

V = Σs / [n(c − 1)], with s = r − l₀,

where r is the score assigned by a rater, l₀ is the lowest rating category, c is the number of rating categories, and n is the number of raters; the resulting index categories are given in Table VI. From the calculated results, an item or device can be categorized based on its index: when the index is at most 0.4, between 0.4 and 0.8, or greater than 0.8, it is stated to have low, moderate, or high validity [52,54,55], respectively, in line with Anggraini et al.'s research [56]. Therefore, the learning media is declared valid and feasible to use when the assessment indicators on the validity instrument have an Aiken's V validity coefficient > 0.4 [56].
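As a computational illustration of the Aiken's V index just described, the following minimal Python sketch (our own illustration, not part of the paper; the function names and the three rater scores are hypothetical) computes the index for a single item rated on the 5-point Likert scale and maps it to the categories of Table VI.

    import numpy as np

    def aikens_v(ratings, lo=1, c=5):
        # Aiken's V for one item: V = sum(s) / (n * (c - 1)), with s = r - lo,
        # where r is a rater's score, lo the lowest category, c the number of
        # categories, and n the number of raters.
        ratings = np.asarray(ratings, dtype=float)
        s = ratings - lo
        n = ratings.size
        return s.sum() / (n * (c - 1))

    def validity_category(v):
        # Thresholds used in the text: <= 0.4 low, 0.4-0.8 moderate, > 0.8 high
        if v <= 0.4:
            return "low validity"
        if v <= 0.8:
            return "moderate validity"
        return "high validity"

    item_scores = [5, 4, 4]    # hypothetical scores from three validators
    v = aikens_v(item_scores)
    print(f"Aiken's V = {v:.2f} ({validity_category(v)})")   # 0.83, high

An item passes the feasibility criterion above whenever the printed coefficient exceeds 0.4.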
A. Results of the Development Research Stage

These results are based on the Instructional Design ADDIE Model, which includes the Analysis, Design, Development, Implementation, and Evaluation stages. The analysis phase found that the need for this research arose from students' poor understanding of physics concepts, as evidenced by their low scores on the National Exam. In 2019, only 45.23% of students at state and private SMA levels answered the physics questions correctly; this is lower than in 2017 (48.67%) and higher than in 2018 (44.00%). The lowest scores were in wave material, with 44.67% correct answers in 2017, declining to 40.61% in 2018, then rising to 44.42% in 2019. This suggests that using multimedia and mobile learning methods could improve students' performance in physics.

The results of the design stage are shown in Figs. 4-11. Meanwhile, to display the mechanical waves in the form of Augmented Reality, a system was designed as shown in Fig. 8. The system described in Fig. 8 is compiled to form an application (APK) and installed on a cellphone, as shown in Fig. 9. The application described in Fig. 9 is called the multimedia mobile learning based on Augmented Reality (AR) application.

The validation results on the design aspect were calculated using Aiken's V formula and are presented in Tables VII-X. A Cronbach's alpha value indicates reliability when it is greater than 0.7 [58,59]; the value based on Table IV, with a total of 20 items, is α = 0.908, which is greater than 0.7. Therefore, the media is stated to be reliable according to the analysis of the media assessment questionnaire items [56,60].

B. Results of the Experimental Stage

Data on learning outcomes in the experimental and control classes were collected from the 34 experimental class students by administering a pre- and post-test consisting of 25 questions. The results of the experimental class are shown in Table XII. The paired-sample t-test analysis reveals that the improvement in learning outcomes in the experimental class is significantly different from that of the control class (sig. < 0.05). It can therefore be concluded that the use of AR-based learning media can enhance students' understanding of mechanical wave content, as supported by previous research [3,21-32].

V. CONCLUSION

Based on the results of the research and discussion, it was found that interactive multimedia mobile learning based on augmented reality (AR) was developed using the ADDIE Instructional Design Model (ADDIE ID Model) and covers four aspects, namely design, pedagogy, content, and technique. The results of the expert validity analysis show that the interactive multimedia mobile learning based on AR is valid in terms of design, pedagogy, content, and technique, while the results of the empirical analysis show that it is also reliable in these respects. The classroom experiments show that interactive multimedia mobile learning based on AR can improve students' physics learning outcomes; it is effective in improving student physics achievement.
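To make the reliability and significance computations reported above concrete, here is a minimal Python sketch using synthetic data in place of the study's questionnaire and test scores (the sample sizes mirror the paper, but all numbers are simulated assumptions); Cronbach's alpha is computed from its standard definition, and the paired-sample t-test uses scipy.stats.ttest_rel.

    import numpy as np
    from scipy import stats

    def cronbach_alpha(scores):
        # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(0)

    # Simulated questionnaire: 10 respondents x 20 items on a 1-5 scale
    base = rng.integers(2, 5, size=(10, 1))
    items = np.clip(base + rng.integers(-1, 2, size=(10, 20)), 1, 5)
    print(f"Cronbach's alpha = {cronbach_alpha(items):.3f}")  # reliable if > 0.7

    # Simulated pre-/post-test scores for 34 students (25-question test)
    pre = rng.normal(55, 10, size=34)
    post = pre + rng.normal(12, 5, size=34)   # assumed learning gain
    t, p = stats.ttest_rel(post, pre)
    print(f"paired t-test: t = {t:.2f}, p = {p:.4f}")  # significant if p < 0.05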
RECEDING HORIZON CONTROL FOR THE STABILIZATION OF THE WAVE EQUATION

Stabilization of the wave equation within the receding horizon framework is investigated. Distributed control, Dirichlet boundary control, and Neumann boundary control are considered. Moreover, for each of these control actions, the well-posedness of the control system and the exponential stability of Receding Horizon Control (RHC) with respect to a proper functional analytic setting are investigated. Observability conditions are necessary to show the suboptimality and exponential stability of RHC. Numerical experiments are given to illustrate the theoretical results.

1. Introduction. In this work we deal with the stabilization of the wave equation

ÿ − Δy = 0

within the scope of Receding Horizon Control (RHC), where y = y(t, x) is a real-valued function of the real variables t and x, and ÿ stands for the second derivative with respect to time. Our RHC acts either on a part of the domain or within Dirichlet or Neumann boundary conditions. The stabilization problem for the wave equation has been studied extensively by many authors, see for instance [2,23,26,34,38,47,50] and the references cited therein. In these contributions the stabilization is achieved by means of a proper choice of a feedback control law, and only few of them provide numerical results. In this work, we use a control law which rests on the solutions of a sequence of open-loop optimal control problems governed by the wave equation on finite intervals. For the numerical and analytical study of open-loop optimal control problems for the wave equation, we refer to [14,24,25,32,33,45,46].

2. Dirichlet control: Here, similar to the above case, Ω ⊂ ℝⁿ is a bounded domain with smooth boundary ∂Ω. Moreover, the two disjoint components Γ_c, Γ_0 are relatively open in ∂Ω and int(Γ_c) ≠ ∅.

3. Neumann control: In this case, we are dealing with the following one-dimensional wave equation with a Neumann control action at one end of the boundary:

ÿ − y_xx = 0 in (0, ∞) × (0, L),
y(·, 0) = 0 in (0, ∞),
y_x(·, L) = u in (0, ∞),
(y(0, ·), ẏ(0, ·)) = (y₀¹, y₀²) in (0, L),

where L > 0. By denoting Y(t) := (y(t), ẏ(t)) and choosing an appropriate control space U, each controlled system in the above cases can be rewritten as a first-order controlled system in an abstract Hilbert space H:

Ẏ(t) = AY(t) + Bu(t), Y(0) = Y₀, (AP)

where for each case the state space H, the unbounded operator A, and the control operator B will be specified appropriately below; compare also, e.g., [36,49,51]. In particular, it will be guaranteed that for every T > 0 and u ∈ L²(0, T; U) there exists a unique solution Y ∈ C⁰([0, T]; H) to (AP) which satisfies the estimate

‖Y‖_{C⁰([0,T];H)} ≤ c_est (‖Y₀‖_H + ‖u‖_{L²(0,T;U)}),

where the constant c_est is independent of Y₀ and u. Now we can reformulate our infinite horizon problem as the following problem

inf { J_∞(u; Y₀) : (Y, u) satisfies (AP), u ∈ L²(0, ∞; U) }, (OP∞)

where J_∞(u; Y₀) := ∫₀^∞ ℓ(Y(t), u(t)) dt, the incremental function ℓ : H × U → ℝ₊ is given by ℓ(Y, u) := ½ ‖Y‖²_H + (β/2) ‖u‖²_U, and β is a positive constant. To deal with the infinite horizon problem (OP∞), one can employ the algebraic Riccati equation, see, e.g., [27,37,39]. But for infinite-dimensional controlled systems, discretization leads to finite-dimensional Riccati equations of very large order, and ultimately one is confronted with the curse of dimensionality. Model reduction techniques do not offer an efficient alternative either.
In fact, the transfer function corresponding to the controlled system (2)-(4) has infinitely many unstable poles, and thus model reduction based on balanced truncation will not produce finite H∞-error bounds, see, e.g., [16]. An alternative approach to (OP∞) is the receding horizon framework. In this framework, the stabilizing control, namely the RHC, is obtained by concatenation of a sequence of open-loop optimal controls on a sequence of overlapping temporal intervals. Further, the process of generating the sequence of intervals and the concatenation are carried out in such a way that the resulting control has a feedback mechanism and is defined on the whole of the interval [0, ∞). Indeed, the receding horizon framework bridges to a certain degree the gap between open- and closed-loop control. In the past three decades, numerous results have been published on RHC for finite-dimensional systems, among them [13,20,22,29,44,48] and the references therein. More recently, some authors have addressed the case of infinite-dimensional systems as well [3,21,28]. Here we employ the receding horizon framework which was proposed in [48] for finite-dimensional controlled systems and in [3] for infinite-dimensional controlled systems. In this framework, neither terminal costs nor terminal constraints are imposed on the subproblems in order to guarantee the stability of RHC. Rather, by defining an appropriate sequence of overlapping temporal intervals and applying a suitable concatenation scheme, one can ensure the stability and also the suboptimality of RHC. In the previous work [3], this RHC was applied to the stabilization of the Burgers equation with different boundary conditions. In addition, based on a stabilizability condition, the asymptotic stability and suboptimality of RHC were investigated. In the present work, we investigate the suboptimality and exponential stability of RHC for all the cases 1-3 of the wave equation with respect to an appropriate functional analytic setting. The key properties are the observability conditions, which were not available for the Burgers equation in [3]. With the help of these conditions, we obtain not just asymptotic stability but exponential stability of RHC.

Turning to the receding horizon approach, we choose a sampling time δ > 0 and an appropriate prediction horizon T > δ. Then we define sampling instances t_k := kδ for k = 0, 1, 2, …. At every sampling instance t_k, an open-loop optimal control problem is solved over a finite prediction horizon [t_k, t_k + T]. Then the optimal control is applied to steer the system from time t_k with the initial state Y_rh(t_k) until time t_{k+1} := t_k + δ, at which point a new measurement of the state is assumed to be available. The process is repeated starting from the new measured state: we obtain a new optimal control and a new predicted state trajectory by shifting the prediction horizon forward in time. The sampling time δ is the time period between two sampling instances. Throughout, we denote the receding horizon state and control variables by Y_rh(·) and u_rh(·), respectively. Also, (Y*_T(·; Y₀, t₀), u*_T(·; Y₀, t₀)) stands for the optimal state and control of the optimal control problem with finite time horizon T and initial function Y₀ at initial time t₀. These steps are summarized in Algorithm 1.

Algorithm 1 (Receding Horizon Algorithm)
Require: the prediction horizon T, the sampling time δ < T, and the initial point (y₀¹, y₀²) ∈ H.
1: k := 0, t₀ := 0, and Y_rh(t₀) := (y₀¹, y₀²).
2: Find the optimal pair (Y*_T(·; Y_rh(t_k), t_k), u*_T(·; Y_rh(t_k), t_k)) on [t_k, t_k + T].
3: Set u_rh(t) := u*_T(t) and Y_rh(t) := Y*_T(t) for t ∈ [t_k, t_k + δ].
4: Set t_{k+1} := t_k + δ, k := k + 1, and go to step 2.
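To make the structure of Algorithm 1 concrete, the following Python sketch applies the same strategy to a generic discrete-time linear system. This is a toy illustration under stated assumptions, not the paper's PDE setting: the matrices A, B, Q, R and the two-state example are invented, and each open-loop subproblem is solved by a standard backward Riccati recursion rather than by the optimization method of Section 5.

    import numpy as np

    def finite_horizon_lq(A, B, Q, R, N):
        # Backward Riccati recursion for the N-step open-loop LQ problem;
        # returns the time-varying feedback gains K_0, ..., K_{N-1}.
        P = Q.copy()
        gains = []
        for _ in range(N):
            S = R + B.T @ P @ B
            K = np.linalg.solve(S, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)
            gains.append(K)
        return gains[::-1]                     # K_0 first

    def receding_horizon(A, B, Q, R, x0, T_steps, delta_steps, n_samples):
        # Algorithm 1 in discrete time: solve on [t_k, t_k + T], apply the
        # control only on [t_k, t_k + delta], then shift the horizon.
        x, traj = x0.copy(), [x0.copy()]
        for _ in range(n_samples):
            gains = finite_horizon_lq(A, B, Q, R, T_steps)
            for K in gains[:delta_steps]:
                x = A @ x - B @ (K @ x)        # u = -K x on the sampling window
                traj.append(x.copy())
        return np.array(traj)

    # Toy unstable two-state system standing in for the discretized dynamics
    A = np.array([[1.01, 0.10], [0.00, 1.01]])
    B = np.array([[0.0], [0.1]])
    Q, R = np.eye(2), 0.1 * np.eye(1)
    traj = receding_horizon(A, B, Q, R, np.array([1.0, 0.0]),
                            T_steps=20, delta_steps=5, n_samples=40)
    print("final state norm:", np.linalg.norm(traj[-1]))

Only the first delta_steps controls of each open-loop solution are applied before the horizon is shifted, mirroring the concatenation scheme described above; a longer horizon T_steps typically improves the stabilization at the price of more expensive subproblems.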
1.1. Stability and Suboptimality of RHC. Throughout this paper, we use the following definitions:

Definition 1.1 (Value function). For every pair (y₀¹, y₀²) =: Y₀ ∈ H, the infinite horizon value function V∞ : H → ℝ₊ is defined as

V∞(Y₀) := inf { J∞(u; Y₀) : (Y, u) satisfies (AP), u ∈ L²(0, ∞; U) }.

Similarly, the finite horizon value function V_T : H → ℝ₊ is defined by

V_T(Y₀) := inf { J_T(u; Y₀) : (Y, u) satisfies (AP) on [0, T], u ∈ L²(0, T; U) }.

In order to show the exponential stability and suboptimality of the receding horizon control obtained by Algorithm 1, we need to verify the following properties. Since, in Algorithm 1, the solution of (OP∞) is approximated by solving a sequence of finite horizon open-loop optimal control problems, one needs, a priori, to be sure that every one of these optimal control problems in Step 2 of Algorithm 1 is well defined:

P1: For every Y₀ ∈ H and T > 0, every finite horizon optimal control problem of the form min{ J_T(u; Y₀) : (Y, u) satisfies (AP), u ∈ L²(0, T; U) } admits a solution.

Moreover, we require the following properties for the finite horizon value function V_T:

P2: For every T > 0, V_T has a quadratic growth rate with respect to the H-norm. That is, there exists a continuous, non-decreasing, and bounded function γ₂ : ℝ₊ → ℝ₊ such that V_T(Y₀) ≤ γ₂(T) ‖Y₀‖²_H for all Y₀ ∈ H.

P3: For every T > 0, V_T is uniformly positive with respect to the H-norm. In other words, for every T > 0 there exists a constant γ₁(T) > 0 such that V_T(Y₀) ≥ γ₁(T) ‖Y₀‖²_H for all Y₀ ∈ H.

Remark 1.6. It is of interest to derive the exponential decay inequality (13) in an alternative way. In particular, the constants ζ and c can be estimated in a different manner. Namely, due to [3, Theorem 7], there exists a T* > 0 such that the decay estimate holds for every T ≥ T*, with the constants c and ζ expressed in terms of θ₁(T, δ) and θ₂(T, δ), defined as in Lemma 1.3, and with T* chosen such that α(T*) > 0 holds. Using properties P2 and P3, and since γ₂(T) is a bounded function and δ is fixed, it follows that asymptotically the RHC strategy is optimal.

The rest of the paper is organized as follows: Sections 2, 3, and 4 deal, respectively, with the cases in which RHC enters as a distributed control, a Dirichlet boundary condition, and a Neumann boundary condition. In each of these sections, first the well-posedness of the finite horizon optimal control problems (i.e., property P1) and the corresponding optimality conditions are investigated. Then, relying on observability conditions, properties P2 and P3 are analysed. Finally, in Section 5, we present numerical experiments in which Algorithm 1 is implemented for each type of control action. In addition, for each example the performance of RHC is evaluated and compared for different choices of the prediction horizon T and a fixed sampling time δ.

2.1. On the finite horizon optimal control problems. For our subsequent work we need to gather some facts on the finite horizon optimal control problems of the form (OP_T), given by minimizing

J_T(u; (y₀¹, y₀²)) := ∫₀ᵀ ℓ(Y(t), u(t)) dt

over all u ∈ L²(0, T; L²(ω)), subject to the controlled state equation, where (y₀¹, y₀²) ∈ H₁ and the incremental function ℓ is defined by (21). Property P1 is verified by means of Proposition 2.5, stating that (P_dis) admits a solution. Proof. For the proof we refer to [40].

In the following we derive the first-order optimality conditions for (P_dis). Due to the presence of the tracking term for the velocity ẏ in the performance index of (P_dis), we will see that the solution of the adjoint equation exists only in the very weak sense.

Proposition 2.6. Let (ȳ, ū) be the optimal solution to (P_dis).
It satisfies the following optimality conditions. Proof. The proof is given in Appendix A.1.

2.2. Verification of P2 and P3. In this subsection we deal with the verification of properties P2 and P3. Concerning this matter, we recall some aspects of the stabilizability of the wave equation with a distributed feedback law. Specifically, we consider the controlled system (27) with the feedback control u given by u(y) := −a(x)ẏ, where the function a ∈ L∞(Ω) satisfies a₁ ≥ a(x) ≥ a₀ > 0 for almost every x ∈ ω and a(x) = 0 in Ω\ω. The following observability conditions will be used later. To specify them, for any (φ₀¹, φ₀²) ∈ H₁ we denote by φ the weak solution of the homogeneous system

φ̈ − Δφ = 0 in (0, T) × Ω, φ = 0 on (0, T) × ∂Ω, (φ(0, ·), φ̇(0, ·)) = (φ₀¹, φ₀²). (29)

Then we can formulate the following observability conditions:

OB1: There exists T_ob1 > 0 such that for every T ≥ T_ob1 the weak solution φ to (29) satisfies

∫₀ᵀ ∫_ω |φ̇|² dx dt ≥ c_ob1 ‖(φ₀¹, φ₀²)‖²_{H₁},

where the positive constant c_ob1 depends only on T and ω ⊆ Ω.

OB2: There exists T_ob2 > 0 such that for every T ≥ T_ob2 the weak solution φ to (29) satisfies

∫₀ᵀ ∫_{Γc} |∂φ/∂ν|² ds dt ≥ c_ob2 ‖(φ₀¹, φ₀²)‖²_{H₁},

where the positive constant c_ob2 depends only on T and Γ_c ⊆ ∂Ω.

The observability conditions OB1-OB2 are satisfied if and only if the Geometric Control Condition (GCC) holds (see, e.g., [8,11]). Roughly speaking, GCC for (Ω, ω, T_ob1) (resp. (Ω, Γ_c, T_ob2)) states that all rays of geometric optics should enter the domain ω (resp. meet the boundary Γ_c) in a time smaller than T_ob1 (resp. T_ob2). For a comprehensive study, we refer to [8]. The following equivalence is frequently mentioned in the literature. Since it is not straightforward to find a proof, we provide the arguments here.

Proof. First we show that OB1 implies exponential stabilizability. We set u(y) := −aẏ in (27). In this case the resulting closed-loop system is well-posed (see, e.g., [12]), and its unique weak solution satisfies the decay estimate (30). Now, for an arbitrary T > 0 consider the controlled system (31). By taking the L²-inner product of (31) with ẏ and integrating over [0, T], we obtain the energy estimate (32). By a density argument and passing to the limit, it can be shown that the inequality (32) also holds for the weak solution of (31) with initial data (y₀¹, y₀²) ∈ H₁. Moreover, the solution y of (31) can be decomposed with the help of (33). By the observability condition OB1 and the estimate (24) applied to (33), we obtain the bound (34) for a constant c₁ > 0 which is independent of (y₀¹, y₀²). Combining (32) and (34), we obtain a contraction estimate for every k = 1, 2, … and, as a consequence, a decay relation for every t ∈ [kT_ob1, (k + 1)T_ob1]; thus we conclude (30).

Next we show that the stabilizability property (30) implies the observability condition OB1 for (29) with an arbitrary initial pair (y₀¹, y₀²) ∈ H₁. Setting u(y) := −aẏ in (27) with a ∈ L∞(Ω) satisfying (28), taking the L²-inner product of (27) with ẏ, and integrating over [0, t] for t > 0, we obtain (35), where a₁ is specified in (28). Further, by (30), for large enough T > 0 we obtain (36). Moreover, the solution φ to (29) with the initial pair (y₀¹, y₀²) can be rewritten as φ := y − ψ, where y is the weak solution to (31) and ψ is the weak solution to (33), now considered on [0, T]. Assume first that the solution of (33) is smooth enough. Taking the L²-inner product of (33) with ψ̇ and integrating over [0, T], we obtain (37). By a density argument and passing to the limit, it can be shown that the inequality (37) also holds for the weak solution of (33) with −aẏ as a forcing function.
Moreover, (37) implies (38); note also (39). Combining (36), (38), and (39), we complete the proof. Now we are in a position to investigate properties P2 and P3.

Proposition 2.8. Suppose that the observability condition OB1 holds. Then for every T > 0 there exists a control û ∈ L²(0, T; L²(ω)) for (26) such that the bound (40) holds for every initial pair (y₀¹, y₀²) ∈ H₁, where γ₂ is a nondecreasing, continuous, and bounded function. Moreover, there exists a constant γ₁(T) > 0 for which (41) holds.

Proof. Choosing the feedback control û in (26) and using Proposition 2.7, we obtain an exponential decay estimate, where the constants M and α were defined in Proposition 2.7. Integrating from 0 to T and using the definition of the value function V_T, we conclude that (40) holds. To verify (41), we use a superposition argument for equation (26) with an arbitrary control u ∈ L²(0, T; L²(ω)). We rewrite the solution of (26) as y = φ + ϕ, where φ is the solution to (29) with the initial pair (y₀¹, y₀²) in place of (φ₀¹, φ₀²), and ϕ is the solution to the auxiliary equation (42). By OB1 for (29) with the initial pair (y₀¹, y₀²) and ω replaced by Ω, and by (24) applied to (42), we obtain the required lower bound. Since u ∈ L²(0, T; L²(ω)) is arbitrary, we obtain (41) for a constant c₁(T) independent of u and (y₀¹, y₀²).

Remark 2.9. Thus, from Propositions 2.5 and 2.8, we conclude that Theorem 1.5 is applicable to the wave equation with distributed control and guarantees the exponential stability of the RHC obtained by Algorithm 1.

Proof. The proof is similar to that of Theorem 2.1 in [46].

Next we specify the first-order optimality conditions for (OP_dir). Since the objective function in (OP_dir) involves the tracking term for the velocity ẏ in the space L²(0, T; H⁻¹(Ω)), the solution to the adjoint equation gains more regularity than that of (48), and this solution exists in the weak sense.

Proposition 3.5. Let (ȳ, ū) be the optimal solution to (OP_dir). It satisfies first-order optimality conditions in which p is the solution of the adjoint equation. Proof. The proof is given in Appendix A.2.

Verification of P2 and P3. Similarly to the previous section, we first show that there exists a feedback law u(y) that stabilizes the controlled system (3) with respect to the energy defined along a trajectory y.

Lemma 3.6. The observability condition OB1 is equivalent to the following observability inequality:

OB3: For every T ≥ T_ob1, the very weak solution φ to (29) with (φ, φ̇) ∈ C⁰([0, T]; H₂) satisfies the analogous observability inequality in the weaker norm, where the constants c_ob1, T_ob1 have been defined in the observability condition OB1.

Similarly, the observability condition OB2 is equivalent to the following observability condition:

OB4: For every T ≥ T_ob2, the very weak solution φ to (29) with (φ, φ̇) ∈ C⁰([0, T]; H₂) satisfies the analogous observability inequality, where the constants c_ob2, T_ob2 have been defined in the observability condition OB2.

Proposition 3.8 states that the exponential decay estimate (52) holds, with positive constants M, α independent of (y₀¹, y₀²), if and only if the observability condition OB2 holds.

Proof. The proof of the first direction of the equivalence can be found in, e.g., [1]. Nevertheless, we provide a proof here for completeness. First assume that condition OB2 holds. We show the exponential decay inequality (52), that is, ‖(y(t), ẏ(t))‖_{H₂} ≤ M e^{−αt} ‖(y₀¹, y₀²)‖_{H₂} for all t ∈ [0, T], where the constants M and α were defined in Proposition 3.8. Integrating from 0 to T bounds ∫₀ᵀ ‖(y(t), ẏ(t))‖²_{H₂} dt. Moreover, by (52) and (54) we obtain (61). Using (43), (61), and the definition of the value function V_T, we arrive at the bound which gives (59). We will later use the following auxiliary problem.
It remains to show that y ∗ is the weak solution to (64) corresponding to the control u ∗ . To see this, we only need to pass to the limit in the weak formulation (65) for the pair of sequences (y n , u n ). Moreover, due to (66), for every t ∈ [0, T ] the sequence {ẏ n (t)} n is bounded in L 2 (0, L). Hence, it has a weakly convergent subsequence ẏ n (t) ⇀ ȳ t with limit ȳ t ∈ L 2 (0, L). For any t ∈ [0, T ], we define the operator I t . This operator is continuous; moreover, for every q ∈ V we have a representation in terms of I ∗ t : V → (H 1 (0, T ; V ∗ )) ∗ , the adjoint operator to I t . Therefore, we can pass to the limit in (65) with y replaced by y ∗ , and y ∗ is the weak solution to (64) corresponding to the control u ∗ . Now since the solution operator S : L 2 (0, T ) → L ∞ (0, T ; H 3 ) defined by u → (y, ẏ) is affine and continuous, the objective function J T (·; y 1 0 , y 2 0 ) is weakly lower semi-continuous and we have 0 ≤ J T (u ∗ ; (y 1 0 , y 2 0 )) ≤ lim inf n→∞ J T (u n ; (y 1 0 , y 2 0 )) = σ.

We turn to the first-order optimality conditions for (OP neu ). Due to the presence of the tracking term for the velocity ẏ ∈ L 2 (0, T ; L 2 (0, L)) in the objective function of (OP neu ), the solution to the adjoint equation has less regularity than the one to (64) and exists in the very weak sense only.

Proof. The proof is given in Appendix A.3. Then we formulate observability inequalities analogous to (72).

Numerical Experiments. This section is devoted to numerical simulations. In order to justify our theoretical results for the receding horizon Algorithm 1, we give numerical results for all the cases: distributed control, Dirichlet boundary control, and Neumann boundary control. We also give a short description of the discretization of the control and the state, the optimization algorithm, and the implementation of Algorithm 1.

5.1. Discretization. Among the many discretization approaches to the wave equation based on finite elements, we can mention the works [4,5,6,7,30,31]. Here we follow the framework which was investigated in [7] and applied to optimal control problems in [32]. In this framework, the open-loop problems are discretized, temporally and spatially, by appropriate finite elements, for which the approaches optimize-then-discretize and discretize-then-optimize commute; see, e.g., [10]. In all cases, for the discretization of the state we write the equation as a system of first-order equations in time. The spatial discretization was done by a conforming linear finite element scheme using continuous piecewise linear basis functions over a uniform mesh. This uniform mesh was generated by triangulation. For the temporal discretization of the state equation, a Petrov-Galerkin scheme based on continuous piecewise linear basis functions for the trial space and piecewise constant test functions was employed. By doing so, the resulting discretized system is equivalent to the system first discretized in space followed by the Crank-Nicolson time-stepping method. Since the temporal test functions have been chosen to be piecewise constant, it is natural to also discretize the adjoint equation and the control by these functions. This implies that the approximated gradient is consistent with both the continuous and the discrete functional.
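As a rough illustration (not the authors' code), the following minimal NumPy sketch advances the 1-D wave equation, written as a first-order system, with the Crank-Nicolson scheme to which the Petrov-Galerkin discretization above reduces. Finite differences stand in for the finite element mass/stiffness matrices; grid sizes and initial data are invented for the example.

```python
import numpy as np

nx, nt = 200, 400
length, T = 1.0, 2.0
dx, dt = length / (nx + 1), T / nt
x = np.linspace(dx, length - dx, nx)

# Discrete Dirichlet Laplacian (FD stand-in for the FEM matrices).
lap = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
       + np.diag(np.ones(nx - 1), -1)) / dx**2

# First-order system z' = A z with z = (y, y_t).
A = np.block([[np.zeros((nx, nx)), np.eye(nx)], [lap, np.zeros((nx, nx))]])
I = np.eye(2 * nx)
propagator = np.linalg.solve(I - 0.5 * dt * A, I + 0.5 * dt * A)  # CN step

z = np.concatenate([np.exp(-200.0 * (x - 0.5)**2), np.zeros(nx)])  # wave at center
for _ in range(nt):
    z = propagator @ z

# Without damping, Crank-Nicolson conserves the discrete energy.
energy = 0.5 * dx * (z[nx:] @ z[nx:]) - 0.5 * dx * (z[:nx] @ (lap @ z[:nx]))
print("discrete energy after", nt, "steps:", energy)
```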
In the case of the Dirichlet boundary control, the inhomogeneous Dirichlet condition y| Γc = u was treated by interpreting u as the trace of a sufficiently smooth function ŷ and solving the equation for v = y − ŷ instead of y with homogeneous Dirichlet boundary conditions; see, e.g., [18, page 376] for more detail.

Optimization. Every discretized open-loop problem was first formulated as a reduced problem. The resulting unconstrained optimization problem consists of minimizing a reduced objective function which depends only on the control variable u. These reduced problems were then solved by applying the Barzilai-Borwein (BB) method [9] equipped with a nonmonotone line search [17]. The optimization algorithm was terminated as soon as the L 2 (0, T ; U)-norm of the gradient of the reduced objective function was less than the tolerance 10 −6 . For a constant T ∞ defined as the final computation time, we ran Algorithm 2 for all the above-mentioned cases. For every example, the receding horizon control u rh was computed for the fixed sampling time δ = 0.25 and different values of the prediction horizon T . In each example, the performance of the computed receding horizon controls for different prediction horizons is compared. Moreover, in order to get more intuition about the stabilization problem, the results related to the uncontrolled problem are also reported. As performance criteria for our comparison, we considered quantities based on the norm of H 1 = H 1 0 (Ω) × L 2 (Ω). As depicted in Figure 1, a single wave propagates and moves from the center of the domain to the boundaries. While moving to the boundaries, it decomposes into several small waves. After hitting the boundaries, the resulting small waves propagate and join together to form a single wave at the center of the domain. This process repeats as time progresses. We employed RHC computed by Algorithm 2 for different choices of the prediction horizon T and the fixed sampling time δ = 0.25. The corresponding results are gathered in Table 1. Figure 4(a) demonstrates the evolution of the H 1 -energy of the receding horizon states for the different choices of T and fixed δ = 0.25. The evolution of the L 2 (ω)-norm of the corresponding RHCs is plotted in Figure 3. Figure 5 shows the receding horizon state at different time points for the choice of T = 1.5. As expected, a longer T provides better stabilization performance but requires more iterations. The results for the Dirichlet boundary control case are gathered in Table 2 and Figure 4(b). Figure 6 shows the receding horizon state at different time points for the choice of T = 1.5 and δ = 0.25. The results for the Neumann boundary control case are gathered in Table 3 and Figure 4(c). Figures 7(b) and 7(c) show, respectively, the receding horizon state and control for the choice of T = 1.5. From Tables 1-3 and Figures 4(a), 4(b), and 4(c), we can assert that the results corresponding to the performance criteria are reasonable, with one exception.

Conclusion. Receding horizon control for the stabilization of the linear wave equation with different boundary conditions was analysed and its numerical efficiency was investigated. The results encourage further investigations, which may include the convergence analysis of the controls obtained by the receding horizon framework as T → ∞, as well as nonlinear problems and cost functionals different from quadratic ones, such as, for instance, sparsity-promoting functionals.

A. Appendix. A.1. Proof of Proposition 2.6. Before establishing the first-order optimality conditions, we prove the following useful lemma.

A.2. Proof of Proposition 3.5.
In order to show the optimality conditions, we first need to prove the following useful lemma.
Effects of Triple-$\alpha$ and $^{12}\rm C(\alpha,\gamma)^{16}O$ Reaction Rates on the Supernova Nucleosynthesis in a Massive Star of 25 $M_{\odot}$ We investigate effects of triple-$\alpha$ and $^{12}\rm C(\alpha,\gamma) ^{16}O$ reaction rates on the production of supernova yields for a massive star of 25 $M_{\odot}$. We combine the reaction rates to examine the rate dependence, where the adopted rates are chosen to cover the possible variations indicated by terrestrial experiments and theory. We adopt four combinations of the reaction rates from two triple-$\alpha$ reaction rates and two $^{12}\rm C(\alpha,\gamma)^{16}O$ ones. First, we examine the evolution of massive stars of 20 and 25 $M_{\odot}$ whose helium cores correspond to helium stars of 6 and 8 $M_{\odot}$, respectively. While the 25 $M_{\odot}$ stars evolve to the presupernova stages for all combinations of the reaction rates, the evolutionary paths of the 20 $M_{\odot}$ stars proceed in a significantly different way for some combinations, which are unacceptable for progenitors of supernovae. Second, we perform calculations of supernova explosions within the limitation of spherical symmetry and compare the calculated abundance ratios with the solar system abundances. We can deduce some constraints on the reaction rates. As a result, a conventional rate is adequate for the triple-$\alpha$ reaction rate, and a rather high value of the reaction rate, within the upper limit of the experimental uncertainties, is favorable for the $^{12}\rm C(\alpha,\gamma)^{16}O$ rate. The astrophysical S-factor at this energy (E = 300 keV), S(300), is estimated to be 100 keV·b and 230 keV·b from Refs. [11] and [12], respectively. In particular, the latter rate, referred to as CF85, has been adopted to explore supernova nucleosynthesis, and the final results seem to give a good agreement with the observation of SN 1987A [3]. Another value has been presented using a different method to determine the cross section, which utilizes the reaction 16 N(β −) 16 O∗ → 12 C + α and gives S(300) = 146 keV·b (hereafter referred to as Bu96) [13]. Although the uncertainty is not so large compared to the triple-α rate, as shown in Fig. 2, where the representative rates obtained so far are compared, the effects on the nucleosynthesis are significant [14-17]. It has already been shown that the OKK rate crucially affects the evolutionary tracks of low-mass stars, where the evolution from the zero-age main sequence to the core He flash/burning for low-, intermediate-, and high-mass stars has been investigated [18,19]. The HR diagram obtained using the new 3α reaction rate disagrees considerably with the observations of low-mass stars; the OKK rate results in the shortening or disappearance of the red giant phase, because helium ignites at a much lower temperature and density compared to the case of the NACRE rate [8]. Furthermore, stellar models in the mass range of 0.8 < M/M < 25 were computed, and it was confirmed that the OKK rate has significant effects on the evolution of low- and intermediate-mass stars, while its influence on the evolution of massive stars (M > 10 M ) is minimal [20]; the OKK rate is incompatible with observations except for massive stars. If the OKK rate is correct, we must invoke some new physical processes such as rotational mixing [21,22], turbulence [23], dynamical instabilities [24], or other unknown physical effects.
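To give a feeling for the scale of the quoted S(300) values, here is a small Python sketch (not from the paper) that converts them to cross sections with the textbook non-resonant approximation sigma(E) = S(E)/E · exp(−2πη), using 2πη = 31.29 Z1 Z2 √(μ/E) with E in keV and μ in amu. Only the S-factor numbers are taken from the text; everything else is a standard assumption.

```python
import math

def sigma_barn(S_keV_b, E_keV, Z1, Z2, mu_amu):
    """Cross section in barn from an astrophysical S-factor in keV*barn."""
    two_pi_eta = 31.29 * Z1 * Z2 * math.sqrt(mu_amu / E_keV)  # Sommerfeld factor
    return S_keV_b / E_keV * math.exp(-two_pi_eta)

mu = 4.0 * 12.0 / (4.0 + 12.0)  # reduced mass of alpha + 12C in amu
for label, S in [("CF88", 100.0), ("Bu96", 146.0), ("CF85", 230.0)]:
    print(label, f"{sigma_barn(S, 300.0, 2, 6, mu):.2e} barn")
```

The exponential Coulomb suppression (about e^−37 here) is why the rate must be extrapolated from S-factors rather than measured directly at 300 keV.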
On the other hand, the abundances of helium and heavier elements in globular clusters are open to dispute [25], which may change the scenario of the stellar evolution of low-mass stars.

[Fig. 2 caption] Reaction rates of 12 C(α, γ ) 16 O as a function of temperature. The abbreviations "CF85," "CF88," and "Bu96" are taken from Caughlan et al. (1985) [12], Caughlan and Fowler (1988) [11], and Buchmann et al. (1996a) [13], respectively. The upper panel shows thermonuclear reaction rates per particle and the lower panel shows the ratios with respect to "CF85."

Apart from comparisons with observations, we can see the effects of the OKK rate on stellar evolution from the ignition properties. A helium core flash is triggered if the nuclear energy generation rate (ε n ) becomes significantly larger than the neutrino energy loss rate (ε ν ). We can understand clearly that helium ignition under the degenerate condition (ε n = ε ν ) occurs at considerably lower temperature and density points compared with the previous case [26]. The effects of the OKK rate on the evolution of accreting compact stars have been studied: the ignition property for accreting white dwarfs [26] and X-ray bursts on accreting neutron stars [27]. It was also found that the s-process with the OKK rate during core He-burning is very inefficient compared to the case with the previous 3α rates. However, the difference in overproduction is found to be almost compensated by the subsequent C-burning, and the overproduction level is not different as a whole for the two distinctly different 3α rates. Therefore, the weak s-process in massive stars does not testify to the validity of the new rate. Tur et al. [15] investigated the dependence of the s-process and post-explosive nucleosynthesis on the ±2σ experimental uncertainties of the 3α and 12 C(α, γ ) 16 O reaction rates. However, the impact of the large theoretical uncertainties of the 3α rate invoked by the OKK rate [7], combined with those of 12 C(α, γ ) 16 O, has not been explored for the supernova yields of a massive star. In the present paper, we investigate the effects of both the 3α and 12 C(α, γ ) 16 O rates on the production of the possible isotopes during the evolution of a massive star of 25 M and its supernova explosion. In Sect. 2, the evolution of massive stars of 20 M and 25 M is presented, where the effects of the rates on the evolution and the nucleosynthesis are discussed. The method of calculation for the supernova explosion of the 25 M stars is presented in Sect. 3. The results of the nucleosynthesis and some discussion are also given by comparing with the solar system abundances. In Sect. 4, the most suitable combination of the reaction rates is deduced and the remaining problems are presented.

3 Nucleosynthesis at the presupernova stages

The triple-α and 12 C(α, γ ) 16 O rates are the key nuclear reaction rates concerning He-burning in massive star evolution. As a consequence, explosive nucleosynthesis and the resulting supernova yields of a massive star would be influenced seriously by the two rates. We select four combinations from the available nuclear data for the two rates, that is, Fynbo-CF85, Fynbo-Bu96, OKK-CF85, and OKK-Bu96, which must cover the possible uncertainties inherent to the experiments and/or theories. The stellar evolutionary code is almost the same as in Refs. [2,3] except for the revised reaction rates [11]. To study detailed abundance distributions including s-nuclei, we perform a post-process nucleosynthesis calculation with a large nuclear reaction network using the same methods as described in Ono et al.
[30] and Kikuchi et al. [31]. Let us explain our nuclear reaction network for completeness. Our network contains 1714 nuclei from neutron and proton to uranium isotopes up to 241 U, linked through particle reactions and weak interactions [30,32]. The reaction rates are taken from the JINA REACLIB compilation [33], where updated nuclear data have been included for charged-particle reactions and (n, γ ) cross sections after those of Bao et al. [34]. The finite temperature and density dependences of beta decay and electron capture rates for nuclei above 59 Fe are included based on Ref. [35]. After helium core formation, gravitational contraction leads to ignition by the 3α reaction. Near the end of core He-burning, the 12 C(α, γ ) 16 O reaction begins to operate significantly. As a consequence, the production of 12 C and 16 O proceeds appreciably, and these elements become the main products after core He-burning in all massive stars. From core C-burning to the end of core oxygen burning, carbon continues to decrease little by little due to shell burnings. This fact is almost universal from zero to solar metallicity stars, because massive stars form helium cores after hydrogen burning, except for extremely massive stars which could induce pair instability supernovae [36]. Figure 3 shows the evolutionary tracks of the density and temperature at the center for the 20 M and 25 M stars, where the Fynbo-CF85 and OKK-CF85 rates are adopted. The evolution of the 25 M stars leads to presupernova stages for both rates. The situation becomes rather complex if we examine the evolution of the 20 M stars. For the case of Fynbo-CF85, the presupernova stage is attained as before [2,3]. For the case of OKK-CF85, Si-ignition barely occurs and the evolution would lead to the formation of an Fe-core. In the present study, we stopped the calculation during Si-burning; since the shell burnings of O, Ne, and C are often very active, the computation becomes difficult to continue. The details of the tracks depend on both the strength of the shell burnings and the extension of the convective mixing. Finally, the presupernova stages are attained, as seen in Fig. 4 for the 25 M stars. On the other hand, we find that the presupernova stage cannot be obtained for a 20 M star with the Fynbo-Bu96 model, but instead the star begins to cool, as seen in Fig. 4; the central region cannot reach the ignition curve of Si on the density-temperature plane. In general, whether nuclear ignition occurs or not can be judged from the temperature to which the burning region heats up. If the temperature does not reach the ignition temperature, the region begins to cool. The central temperature depends significantly on the strength of shell burnings. In particular, the production of carbon at the end of core helium burning is closely related to the subsequent evolution of the stars, because the carbon shell determines the boundary between the inner part of the carbon-oxygen core and the helium envelope. Furthermore, active carbon burning hinders the increase of the central temperature and leads to a delay in gravitational contraction. Therefore, the formation of the Fe-core is doubtful if we adopt the combination of the reaction rates of OKK-CF85 or Fynbo-Bu96, because the central temperature is just around the ignition line for Si. We note that once the ignition of silicon begins, an Fe-core forms at the center and gradually grows towards the Chandrasekhar mass [2].
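For readers unfamiliar with the JINA REACLIB compilation mentioned earlier in this section, a minimal sketch of how one parameterized rate "set" is evaluated may be helpful. The seven-coefficient functional form is the standard REACLIB one; the coefficient values below are placeholders, not the entry of any real reaction.

```python
import numpy as np

def reaclib_rate(a, T9):
    """Rate from a single REACLIB coefficient set; T9 in units of 10^9 K."""
    return np.exp(a[0] + a[1] / T9 + a[2] * T9 ** (-1.0 / 3.0)
                  + a[3] * T9 ** (1.0 / 3.0) + a[4] * T9
                  + a[5] * T9 ** (5.0 / 3.0) + a[6] * np.log(T9))

# A full rate is the sum over its sets (e.g. resonant plus non-resonant terms).
a_demo = [10.0, 0.0, -32.12, 0.0, 0.0, 0.0, -2.0 / 3.0]  # hypothetical set
T9 = np.array([0.2, 0.5, 1.0, 2.0])
print(reaclib_rate(a_demo, T9))
```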
Although the model of OKK-CF85 may not be excluded as a progenitor, the evolutionary scenario will become complex compared to the previous scenario [2,3]. It should be noted that the evolution becomes very complex for massive stars whose masses are less than 20 M ; this can be inferred from the evolutionary tracks at the center [37]. As a consequence, the evolutionary scenario of less massive stars (M ≤ 20 M ) would change significantly. Figures 5 and 6 illustrate the overproduction factors X(i)/X ⊙ (i), where X(i) denotes the mass fraction of element i, for the main products (12 ≤ A ≤ 27), normalized by that of oxygen. The two dotted lines show the values whose ratios to the normalized overproduction factor of 16 O are two or one-half. Figure 5 shows the case of the 20 M stars at the beginning of Si-burning for three combinations of reaction rates; it is noted that the abundances referred to in the panel do not change appreciably after Si-burning. Figure 6 shows the case of the 25 M stars at the presupernova stages for the four combinations. As a whole, the Fynbo-CF85 model gives a reasonable range of abundance ratios compared to solar abundances, except for 12 C, which is supplied through ejection from AGB stars [38,39]. For the other models, however, the overproduction factors of some elements are beyond the values whose ratios to those of oxygen are two (upper dotted lines). In Figs. 5 and 6, the models other than Fynbo-CF85 produce a significantly large amount of 20 Ne, 23 Na, and 24 Mg because of strong C-burning. Since these elements remain in the burning layers of O and Ne, significant amounts will survive even after the shock wave propagation during the explosion. The amounts can be obtained by simulating the explosion. The fate of a star towards Fe-core formation is still uncertain due to convective mixing, even under the assumption of spherical symmetry [2,3]. However, we have succeeded in the stellar evolutionary calculations until the beginning of Fe-core collapse only for the 25 M star for the four reaction sets, as seen in Figs. 3 and 4. It is desirable to adopt a complete set of presupernova models to examine the effects of the four sets of reaction rates on nucleosynthesis. Therefore, we show the nucleosynthesis after the supernova explosions in the next section by concentrating on the 25 M stars. In Kikuchi et al. [31], we have already investigated the effects of the OKK rate on the production of the s-process nucleosynthesis for a 25 M star. However, the calculation was stopped at the end of central C-burning. Therefore, we can discuss the s-process nucleosynthesis after central C-burning in the present paper. Overall, at the end of C-burning, the overproduction factors of s-nuclei are roughly consistent with those of Ref. [31]. After C-burning, neutron irradiation in C-burning shells enhances the overproduction factors of some s-nuclei by a factor of 3-4 (for 80 Kr, even greater than 10) compared to those at the end of C-burning. The significant enhancement of the s-process elements during the later evolutionary stages has already been claimed by Tur et al. [15]. We have confirmed that the level of the enhancement of the overproduction factors due to the later burning stages is roughly consistent with that in Ref. [15].
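The normalization used in Figs. 5 and 6 is easy to make concrete. In the sketch below, all mass fractions are invented placeholders; only the arithmetic of the oxygen-normalized overproduction factor and the factor-of-two acceptance band is illustrated.

```python
import numpy as np

X     = {"12C": 2.0e-3, "16O": 8.0e-2, "20Ne": 1.5e-2, "23Na": 4.0e-4, "24Mg": 9.0e-3}
X_sun = {"12C": 2.8e-3, "16O": 6.6e-3, "20Ne": 1.8e-3, "23Na": 4.0e-5, "24Mg": 6.5e-4}

over = {i: X[i] / X_sun[i] for i in X}            # overproduction factors
norm = {i: over[i] / over["16O"] for i in over}   # normalized by 16O

for iso, f in norm.items():
    band = "within" if 0.5 <= f <= 2.0 else "outside"
    print(f"{iso:5s} {f:6.2f}  {band} the factor-of-two band")
```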
Since s-elements become seeds of p-process nucleosynthesis in massive stars, and the p-process has been believed to occur during a supernova explosion at the bottom of the oxygen-rich layer, the amount of s-elements that can survive is crucial for discussing the nucleosynthesis of p-elements in supernovae. Let us examine the nucleosynthesis for nuclei above A > 40, other than s-nuclei, concerning the 25 M stars. In Fig. 7, the Fynbo-CF85 model overproduces several nuclei, up to 180 Ta (410), where the numerals inside the brackets are the overproduction factors with respect to their initial values. In the present case, we summed the mass of each nucleus above approximately 1.5 M , which corresponds to the "mass cut" described in Sect. 3. On the other hand, the other models do not produce much of those nuclei except for 40 K, where the overproduction factors are 1092, 1347, and 1281 for Fynbo-Bu96, OKK-Bu96, and OKK-CF85, respectively. For the other nuclei, the three models result in overproduction factors of less than 100. Exceptionally, the Fynbo-Bu96 and OKK-Bu96 models give overproduction factors of 142 for 76 Se and 141 for 50 Cr, respectively. It is noted that the first three nuclei ( 40 K, 50 V, 50 Cr) are products of the oxygen burning. As seen in Fig. 6, oxygen production overwhelmed carbon production after He-burning for the Fynbo-CF85 model. As a consequence, the model tends to considerably overproduce these nuclei, as can be seen in Fig. 7. 180 Ta is mainly produced by the (γ , n) reaction of 181 Ta. The overproduction of 180 Ta for Fynbo-CF85 is attributed to the enhanced (γ , n) reaction owing to higher temperatures during the later evolutionary stages, as seen in Fig. 3. After the supernova explosion, these nuclei around the bottom of the oxygen-rich layers are destroyed and/or transformed to other nuclei. Therefore, except for some nuclei, the overproductions are decreased, as is discussed in the next section. In the following section, we focus on the nucleosynthesis of the 25 M stars, because we succeeded in getting the presupernova models for the four models of Fynbo-CF85, Fynbo-Bu96, OKK-CF85, and OKK-Bu96. More massive stars will result in straightforward evolution toward Fe-core collapse. Less massive stars experience rather complex evolution, as inferred from Fig. 4.

Supernova nucleosynthesis and overproduction factors

We investigate the production of the elements for massive stars of 25 M . To estimate the amount of material ejected into the interstellar medium from the exploding star, we perform a simulation of the supernova explosion. The procedure of this calculation has been described in a preceding study [40]; therefore, we explain the calculation method only briefly. The equations of hydrodynamics are as follows, using the Lagrange mass coordinate m (e.g., Ref. [41]); a standard form of these equations is sketched after this paragraph. Equation (1) is the equation of continuity with the specific volume V = ρ −1 , where r and v are the radius and velocity, respectively. Equation (2) is the conservation of momentum, where p is the pressure, q is the scalar specific momentum described as q = ν · r, and G is the gravitational constant. Equation (3) gives the equation of energy conservation; e is the specific energy expressed as e = v 2 /2 + U − Gm/r, where U is the specific internal energy, and H is the heating term for the nuclear energy generation (energy per unit mass per unit time). Concerning the initial models, we adopt the presupernova models obtained in Sect. 2. The input physical values are the temperature, density, pressure, and chemical compositions.
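Since the explicit equations (1)-(3) did not survive extraction above, the following is a hedged reconstruction of the standard spherically symmetric Lagrangian hydrodynamics they describe; the precise definition of the scalar term q in the source is unclear, so it is carried along abstractly next to the pressure.

```latex
% Hedged reconstruction of the spherically symmetric Lagrangian hydrodynamics
% described in the text; q is the scalar term of the source, kept abstract.
\begin{align}
  \frac{\partial r}{\partial m} &= \frac{V}{4\pi r^{2}},
      \qquad V = \rho^{-1}, \tag{1}\\
  \frac{\partial v}{\partial t} &= -4\pi r^{2}\,\frac{\partial (p+q)}{\partial m}
      - \frac{Gm}{r^{2}}, \tag{2}\\
  \frac{\partial e}{\partial t} &= -\frac{\partial}{\partial m}
      \bigl[4\pi r^{2} v\,(p+q)\bigr] + H,
      \qquad e = \frac{v^{2}}{2} + U - \frac{Gm}{r}. \tag{3}
\end{align}
```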
Our hydrodynamical code includes an α-network which contains 13 species: 4 He, 12 C, 16 O, 20 Ne, 24 Mg, 28 Si, 32 S, 36 Ar, 40 Ca, 44 Ti, 48 Cr, 52 Fe, and 56 Ni. The hydrodynamical calculation is followed until the shock wave reaches the surface of the helium core; at the same time the explosion energy is calculated. The explosion is initiated by injecting thermal energy around the surface of the Fe-core. To see the effects of the different combinations of the reaction rates, we fixed the explosion energy and the ejected 56 Ni mass to be 1.0 × 10 51 erg and 0.07 M , respectively. We adopt these values from SN 1987A [42] as a core-collapse supernova explosion model. The injected energy is adjusted to obtain an explosion energy of 1.0 × 10 51 erg. We note that the locations in the Lagrange mass coordinate at which the thermal energies are injected, i.e. the surfaces of the Fe-cores, are different for each model, because different combinations of the reaction rates result in different Fe-core masses. After the nucleosynthesis calculation described later, we redefine the boundary between the ejecta and the compact object, the so-called "mass cut" (M cut ), to obtain 0.07 M of 56 Ni in the ejecta. It is assumed that the material between M Fe and M cut (M Fe < M cut ) falls onto the compact object. Table 1 shows the physical quantities concerning the explosion, that is, the mass of the Fe-core M Fe , the injected energy E in , and the mass cut M cut . Using the results of the density and temperature evolution during the shock propagation, we calculate the nucleosynthesis with a large nuclear reaction network. The calculations are performed until 10 17 s after the explosion, which leads to stable nuclei (we extrapolate the density and temperature after 300 s to continue the nucleosynthesis calculation assuming an adiabatic expansion). The reaction network is almost the same as that of the evolution calculation with 1714 species, but we add proton-rich elements around the Fe group nuclei for explosive nucleosynthesis, so that the network includes 1852 nuclear species. To compare the results with observations, overproduction factors X(i)/X ⊙ (i) are considered. We show the results for stable elements lighter than A = 210 in Fig. 8 [43-45]. Type Ia supernovae synthesize the nuclei between Cl and the Fe group nuclei. R-nuclei could be produced by neutron star mergers [46-49], magnetorotationally driven supernovae [32,50], and/or neutrino-driven supernovae [51]. In the following sections (Sects. 3.1 and 3.2), we discuss the overproduction factors by focusing on the different results due to the four sets of reaction rates.

Overproduction factors for A ≤ 110

Here, we consider overproduction factors averaged in the ejecta at the time of 10 17 s after the explosion. The weak s-process produced the elements up to A = 90 in all four models. Some s-nuclei are destroyed by the explosion, but almost all the other s-nuclei survived. Therefore, the yields after the explosion are nearly the same as those at the presupernova stage. The overproduction factors for the OKK-CF85 model are the least enhanced among the four models, and those of the OKK-Bu96 model are the most enhanced, especially around A = 90. These differences are related to He- and C-burning, which are crucially important for the weak s-process [31]. For the OKK-Bu96 model, 12 C is produced appreciably compared to the other three models, which leads to the increase in neutron production during C-burning.
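The mass-cut adjustment described above amounts to a one-dimensional root search. A minimal sketch follows, with an invented 56Ni profile standing in for the output of the explosion calculation.

```python
import numpy as np

m = np.linspace(1.4, 10.0, 2000)                 # Lagrange mass coordinate [Msun]
x_ni56 = 0.7 * np.exp(-((m - 1.6) / 0.35) ** 2)  # fake 56Ni mass-fraction profile

def ejected_ni(m_cut):
    mask = m >= m_cut
    return np.trapz(x_ni56[mask], m[mask])       # integral of X(56Ni) dm

lo, hi, target = m[0], m[-1], 0.07
for _ in range(60):                              # bisection on M_cut
    mid = 0.5 * (lo + hi)
    if ejected_ni(mid) > target:
        lo = mid      # too much 56Ni ejected -> move the cut outwards
    else:
        hi = mid
print("M_cut ~", round(0.5 * (lo + hi), 3), "Msun")
```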
Overproduction factors of p-nuclei. In general, p-nuclei are produced by way of photodisintegration of seed s-nuclei during a supernova explosion. The condition for an adequate p-process to occur has been found: the relationship between the synthesized p-nuclei and the peak temperature T p , the maximum temperature at each Lagrange mass coordinate during the passage of the supernova shock wave, has been given in terms of the mass number A and neutron number N [52,53]. In Fig. 12, the overproduction factors of all p-nuclei after the explosion are plotted. Most p-nuclei are produced in descending order of the Fynbo-CF85, OKK-Bu96, OKK-CF85, and Fynbo-Bu96 models. To consider the reason for the differences among the models seen in Fig. 12, we take into account the relationship between the p-process and the peak temperature. We define the so-called p-process layers (hereafter PPLs) [52] as the regions with peak temperatures of (2-3.5) × 10 9 K. Peak temperatures against the Lagrange mass coordinate are shown in Fig. 13. The size of the PPL for the Fynbo-CF85 model is equal to 0.65 M , the largest among the four models; the smallest belongs to the Fynbo-Bu96 model. Therefore, the overproductions of p-nuclei are in descending order of the sizes of the PPLs. Since the peak temperatures attained by the shock wave propagation depend on the density distribution, or the stellar radius, at the presupernova stage, the amount of p-nuclei is affected by the gravitational contraction and/or shell burnings. It is noted that the isotopes of 92,94 Mo and 96,98 Ru are still underproduced. Furthermore, both 113 In and 115 Sn are produced to some extent. These nuclei have been known to be significantly underproduced [52]. Variations in the production of p-nuclei depend on the surviving seed s-nuclei. [Fig. 13 caption: The PPLs [52] are defined as the regions with the peak temperature of (2-3.5) × 10 9 K.] The difference in the overproduction of the above p-nuclei compared to the previous study [52] is attributed to the detailed calculations of the nucleosynthesis during the stellar evolution.

Summary of nucleosynthesis

We have shown the supernova nucleosynthesis for 25 M stars using presupernova models which are the results of the stellar evolution calculations with four sets of 3α and 12 C(α, γ ) 16 O reaction rates, and the post-processing nucleosynthesis with the large nuclear reaction network. We emphasize that the final results of supernova nucleosynthesis depend not only on the explosion episode but also on the history of stellar evolution towards the Fe-core collapse. Generally speaking, the models with the OKK rate overproduce the isotopes of Ne, Mg, and Na beyond an acceptable level; these originate from the burning of 12 C and subsequent shell burnings. For all models, the amount of s-nuclei does not change appreciably compared to that of the presupernova stage. As a consequence, He- and C-burning are significantly important for the weak s-process. On the other hand, the distribution and amount of p-nuclei depend on the peak temperatures and the size of the PPLs, which are affected by the stellar evolution path, i.e., the 3α and 12 C(α, γ ) 16 O reaction rates. Although each overproduction factor is influenced to some extent, the heavy element nucleosynthesis is not affected appreciably by the triple-α and 12 C(α, γ ) 16 O rates as a whole. Therefore, it is difficult to testify to the validity of the two reaction rates by the s- and p-process elements.
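The PPL bookkeeping is equally simple to express in code; the peak-temperature profile below is a made-up monotone curve, so only the masking logic mirrors the definition in the text.

```python
import numpy as np

m = np.linspace(1.5, 8.0, 5000)            # Lagrange mass coordinate [Msun]
T_peak = 5.0e9 * (1.5 / m) ** 1.2          # fake peak temperatures [K]

in_ppl = (T_peak > 2.0e9) & (T_peak < 3.5e9)   # (2-3.5)e9 K window
ppl_mass = np.trapz(in_ppl.astype(float), m)   # mass contained in the PPL
print("PPL size ~", round(ppl_mass, 2), "Msun")
```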
Discussion

We have investigated the effects of the 3α and 12 C(α, γ ) 16 O reaction rates on the production of the supernova yields in a massive star of 25 M , where four combinations of the representative reaction rates are selected and incorporated in the nuclear reaction network. Since the evolutionary code used in the present study is almost the same as in Refs. [2,3] except for the reaction rates, the differences in the evolutionary path should come from the extent of the convective mixing originating from the nuclear burnings due to the different reaction rates. For example, the stellar evolutionary path of a 20 M star is seriously affected if we adopt the combination of the OKK-Bu96 reaction rates, because the carbon produced induces strong carbon-shell burnings. Concerning the 25 M star, we can perform the evolutionary calculations till the presupernova stages and obtain the Fe-cores just before the collapse for all the combinations of the reaction rates. As a consequence, we can recognize significant effects on the supernova yields. 1) The distribution of the abundances before the core collapse becomes very different for each model. 2) The supernova explosion results in distinctive yields if we compare them with the solar system abundances. The Fynbo-CF85 model can reproduce the solar values well for A < 40, and it becomes more difficult to reproduce the solar ones in ascending order of the Fynbo-Bu96, OKK-CF85, and OKK-Bu96 models, as seen in Figs. 8-11. It should be noted that 23 Na is much overproduced except for the Fynbo-CF85 model. Therefore, Fynbo-CF85 is the most suitable combination of the 3α and 12 C(α, γ ) 16 O reaction rates to be compatible with the solar system abundances. It is noted that the CF85 rate for the 12 C(α, γ ) 16 O reaction is considered to be the upper limit within the experimental uncertainties. As for the heavy nuclei beyond the iron group elements, it is unclear how to judge the compatibility with the observations. The problem of the underproduction of p-nuclei remains when compared to the solar values [54]. There exist crucial problems concerning the stellar models, for which we do not have a satisfactory theory of convective mixing. Since we have adopted the Schwarzschild criterion for convection, convection tends to occur rather easily compared to the Ledoux criterion. Furthermore, the extent of convective mixing is not well known, and convection itself is closely related to nuclear burnings [24]. We should note that these problems arise from the assumption that the stars are spherically symmetric, and at present no satisfactory calculation of non-spherical stellar evolution exists [56,57]. Although the helium core is assumed to be 8 M , the actual star begins its evolution from the main-sequence stage with a hydrogen-rich envelope. If we consider the hydrogen-rich envelope, we have to worry about the mass loss rate, which introduces uncertain parameters [58]. As far as the approach of the helium star is concerned, our results would be legitimate because, after the end of hydrogen burning, a star forms a helium core with a clear boundary between the core and the envelope, which is equivalent to the helium star [1]. Observationally, our approach has been supported by observations of light curves [42,59] and supernova nucleosynthesis [3,40] as far as SN 1987A is concerned. Therefore, at present our conclusion could be accepted even if an unsatisfactory theory of convection lies under the calculation of stellar evolution.
A MULTIVARIATE VERSION OF HOEFFDING’S INEQUALITY In this paper a multivariate version of Hoeffding's inequality is proved about the tail distribution of homogeneous polynomials of Rademacher functions, with an optimal constant in the exponent of the upper bound. The proof is based on an estimate of the moments of homogeneous polynomials of Rademacher functions which can be considered as an improvement of Borell's inequality in a most important special case. The following result will be proved.

Theorem 1. (The multivariate version of Hoeffding's inequality). The random variable Z defined in formula (2) satisfies the inequality (4), with the constant V defined in (3) and some constant A > 0 depending only on the parameter k in the expression Z.

I make some comments about this result. The condition that the coefficients a(j 1 , . . . , j k ) are symmetric functions of their variables does not mean a real restriction, since by replacing all coefficients a(j 1 , . . . , j k ) by a Sym (j 1 , . . . , j k ) = (1/k!) Σ π∈Π k a(j π(1) , . . . , j π(k) ) in formula (2), where Π k denotes the set of all permutations of the set {1, . . . , k}, we do not change the random variable Z. Besides this, the above symmetrization of the coefficients in formula (2) decreases the number V introduced in formula (3). The identities EZ = 0, EZ 2 = k!V 2 hold. Thus Theorem 1 yields an estimate on the tail behaviour of a homogeneous polynomial of order k of independent random variables ε 1 , . . . , ε n , P (ε j = 1) = P (ε j = −1) = 1/2, 1 ≤ j ≤ n, with the help of the variance of this polynomial. Such an estimate may be useful in the study of degenerate U-statistics. Thus, for instance, in paper [10] a weaker form of Theorem 1 played an important role. In Lemma 2 of that paper such a weaker version of the estimate (4) was proved, where the constant 1/2 in the exponent at its right-hand side was replaced by the number k/(2e(k!) 1/k ). This estimate, which is a fairly simple consequence of Borell's inequality, was satisfactory in that paper. (Borell's inequality together with its relation to the problem of this paper will be discussed in Section 3.) However, the question arose whether it can be improved. In particular, I was interested in the question whether such an estimate holds which a comparison with the Gaussian case suggests. In the case k = 1 it is natural to compare the tail behaviour of Z with that of V η, where η is a random variable with standard normal distribution. Theorem A gives an estimate suggested by such a comparison. If Z is a homogeneous random polynomial of order k defined in (2), then it is natural to compare its tail distribution with that of V H k (η), where η has standard normal distribution, and H k (·) is the k-th Hermite polynomial with leading coefficient 1. Theorem 1 yields an estimate suggested by such a comparison. The next example shows that this estimate is sharp. It also explains why it is natural to compare the random variable Z with V H k (η). For the sake of simplicity let us assume that the random variables ε j , j = 1, . . . , n, in formula (2) are given in the form ε j = h(ζ j ), 1 ≤ j ≤ n, where ζ 1 , . . . , ζ n are independent random variables, uniformly distributed in the interval [0, 1], and h(·) is a function for which the random variables h(ζ j ) take the values ±1 with probability 1/2 each. (Such a representation of the random variables ε j is useful for us, because it enables us to apply the subsequent limit theorem about degenerate U-statistics of iid. random variables with non-atomic distribution.) In this example the random variables
(√(n(n − 1) · · · (n − k + 1))/k!) Z n are degenerate U-statistics with an appropriate kernel function and a sequence ζ 1 , . . . , ζ n of iid. random variables with uniform distribution on the interval [0, 1]. EZ n 2 = k!V 2 , and a limit theorem about degenerate U-statistics (see e.g. [4]) implies that the random variables Z n converge in distribution to a k-fold Wiener-Itô integral Z (0) as n → ∞, where W (·) is a Wiener process on the interval [0, 1]. Moreover, the random variable Z (0) has a simpler representation. Namely, by Itô's formula for multiple Wiener-Itô integrals (see e.g. [6]) it can be written in the form Z (0) = V H k (η), where H k (·) is the k-th Hermite polynomial with leading coefficient 1, and η = ∫ h(x)W (dx) is a random variable with standard normal distribution. Simple calculation shows that there are some constants C > 0 and D > 0 such that P (H k (η) > u) ≥ Cu −1/k e −u 2/k /2 if u > D. (Actually, this estimate is proved in [11].) Hence an analogous lower bound holds for the tail of Z (0) with some appropriate constants C > 0 and D > 0. This inequality implies that the estimate (4) is essentially sharp. It does not hold with a smaller constant in the exponent at its right-hand side; the upper bound can be improved at most by a pre-exponential factor.

Theorem 1 will be proved in Section 2. It is a fairly simple consequence of a good estimate on the moments of the random variable Z formulated in Theorem 2. These moments will be estimated by means of two lemmas. The first of them, Lemma 1, enables us to bound the moments of Z by those of an appropriate polynomial of independent standard Gaussian random variables. There is a diagram formula to calculate the moments of polynomials of Gaussian random variables. This makes the estimation of the moments of polynomials of Gaussian random variables relatively simple. This is done in Lemma 2. Actually it turned out that it is simpler to rewrite these polynomials in the form of a multiple Wiener-Itô integral and to apply the diagram formula for multiple Wiener-Itô integrals. To make the explanation complete, I give a more detailed description of the diagram formula at the end of Section 2. In the final part of this work, in Section 3, I try to explain the background of the proof of Theorem 1 in more detail. In particular, I make some comments about the role of the Gaussian bounding of moments in Lemma 1 and compare the moment estimates obtained by means of the method of this paper with the estimates supplied by Borell's inequality.

2 The proof of Theorem 1.

Theorem 1 will be obtained as a consequence of the following Theorem 2.

Theorem 2. The random variable Z defined in formula (2) satisfies the inequality EZ 2M ≤ 1 · 3 · 5 · · · (2kM − 1) V 2M for all M = 1, 2, . . ., with the constant V defined in formula (3).

Theorem 2 will be proved with the help of two lemmas. To formulate them, first the random variable Z̄ = Σ a(j 1 , . . . , j k ) η j1 · · · η jk will be introduced in formula (6), where η 1 , . . . , η n are iid. random variables with standard normal distribution, and the numbers a(j 1 , . . . , j k ) agree with those in formula (2). Now we state

Lemma 1. The random variables Z and Z̄ defined in formulas (2) and (6) satisfy the inequality EZ 2M ≤ E Z̄ 2M for all M = 1, 2, . . .

Lemma 2. The random variable Z̄ defined in formula (6) satisfies the inequality E Z̄ 2M ≤ 1 · 3 · 5 · · · (2kM − 1) V 2M for all M = 1, 2, . . ., with the constant V defined in formula (3).

Theorem 2 is a straightforward consequence of Lemmas 1 and 2. So to get this result it is enough to prove Lemmas 1 and 2.

Proof of Lemma 1.
We can write, by carrying out the multiplications in the expressions EZ 2M and E Z̄ 2M , by exploiting the additivity and multiplicativity of the expectation for sums and products of independent random variables, together with the identities Eε j 2p+1 = 0 and Eη j 2p+1 = 0 for all p = 0, 1, . . ., the expansions (9) and (10) for EZ 2M and E Z̄ 2M with coefficients A(·, ·, ·) and B(·, ·, ·). The coefficients A(·, ·, ·) and B(·, ·, ·) could have been expressed in an explicit form, but we do not need such a formula. What is important for us is that A(·, ·, ·) can be expressed as the sum of certain terms, and B(·, ·, ·) as the sum of the absolute values of the same terms, hence relation (11) holds. (There may be indices (j 1 , . . . , j l , m 1 , . . . , m l ) for which the sum defining A(·, ·, ·) and B(·, ·, ·) with these indices is empty. The value of an empty sum will be defined as zero. As empty sums appear for some index in (9) and (10) simultaneously, their appearance causes no problem.) Since Eε j 2m ≤ Eη j 2m for all parameters j and m, formulas (9), (10) and (11) imply Lemma 1.

Proof of Lemma 2. I found it simpler to construct an appropriate multiple Wiener-Itô integral Ẑ whose distribution agrees with that of the random variable Z̄ defined in (6), and to estimate its moments. To do this, let us consider a white noise W (·) on the unit interval [0, 1], define with the help of the coefficients a(j 1 , . . . , j k ) an appropriate elementary function f of k variables which vanishes whenever j s = j s′ for some s ≠ s′, 1 ≤ j s ≤ n, 1 ≤ s ≤ k, and take the k-fold Wiener-Itô integral Ẑ of this (elementary) function f . (For the definition of Wiener-Itô integrals see e.g. [6] or [8].) Observe that the random variables η 1 , . . . , η n defined in this way are independent with standard normal distribution. Hence the definition of the Wiener-Itô integral of elementary functions and the definition of the function f imply that the distributions of the random integral Ẑ and of the random variable Z̄ introduced in (6) agree. Besides this, identity (13) holds with the number V defined in formula (3). Since the distributions of the random variables Ẑ and Z̄ agree, formulas (12), (13) together with the following estimate about the moments of Wiener-Itô integrals complete the proof of Lemma 2. In this estimate a function f of k variables and a σ-finite measure µ on some measurable space (X, X ) are considered which satisfy the inequality ∫ f 2 dµ k ≤ σ 2 with some σ 2 < ∞. The moments of the k-fold Wiener-Itô integral of the function f with respect to a white noise µ W with reference measure µ satisfy inequality (14) for all M = 1, 2, . . .. This result can be got relatively simply from the diagram formula for the product of Wiener-Itô integrals, and it is actually proven in Proposition A of paper [11]. It can also be obtained as a straightforward consequence of the results in Lemma 7.31 and Theorem 7.33 of the book [7]. For the sake of completeness I explain this result at the end of this section. After the proof of Theorem 2 with the help of the diagram formula it remained to derive Theorem 1 from it.

Proof of Theorem 1. By the Stirling formula we get from the estimate of Theorem 2 that relation (15) holds for any K > √2 if M ≥ M 0 (K). Hence the Markov inequality yields estimate (17). Formula (17) means that relation (4) holds for u ≥ u 0 with the constant A = Ke k . Hence relation (4) holds with a sufficiently large constant A > 0 for all u ≥ 0.

Estimation of the moments of a Wiener-Itô integral by means of the diagram formula. Let us have m real-valued functions
f j (x 1 , . . . , x kj ), 1 ≤ j ≤ m, on a measurable space (X, X , µ) with some σ-finite non-atomic measure µ, such that they satisfy relation (18). A white noise µ W with reference measure µ can be introduced on (X, X ). It is an ensemble of jointly Gaussian random variables µ W (A) indexed by the measurable sets A ∈ X with µ(A) < ∞ such that Eµ W (A) = 0 and Eµ W (A)µ W (B) = µ(A ∩ B). Also the Wiener-Itô integrals of these functions with respect to the white noise µ W can be defined if they satisfy relation (18). The definition of these integrals is rather standard (see e.g. [6] or [8]): first they are defined for elementary functions, and then the definition is extended by means of an L 2 -limit procedure. In the present paper only the above-mentioned consequence (14) of the diagram formula will be needed, hence only this result will be described. It will be formulated by means of the notion of (closed) diagrams. The class of closed diagrams will be denoted by Γ = Γ(k 1 , . . . , k m ). A diagram γ ∈ Γ(k 1 , . . . , k m ) consists of vertices of the form (j, l), 1 ≤ j ≤ m, 1 ≤ l ≤ k j , and edges ((j, l), (j′, l′)), 1 ≤ j, j′ ≤ m, 1 ≤ l ≤ k j , 1 ≤ l′ ≤ k j′ . The set of vertices of the form (j, l) with a fixed number j is called the j-th row of the diagram. All edges ((j, l), (j′, l′)) of a diagram γ ∈ Γ connect vertices from different rows, i.e. j ≠ j′. It is also demanded that from each vertex of a diagram γ there starts exactly one edge. The class Γ(k 1 , . . . , k m ) of (closed) diagrams contains the diagrams γ with the above properties. If j < j′ for an edge ((j, l), (j′, l′)) ∈ γ, then (j, l) is called the upper and (j′, l′) the lower end point of this edge. Let U (γ) denote the upper and L(γ) the lower end points of a diagram γ ∈ Γ(k 1 , . . . , k m ). Define the function α γ by α γ (j, l) = (j, l) if (j, l) is the upper end point of an edge, and α γ (j, l) = (j′, l′) if (j, l) is the lower end point of the edge ((j′, l′), (j, l)) of the diagram γ ∈ Γ(k 1 , . . . , k m ). For the sake of simpler notations let us rewrite the functions f j with reindexed variables in the form f j (x j,1 , . . . , x j,kj ), 1 ≤ j ≤ m, and define the function F as their product. Define with the help of the functions F and α γ the constants F γ in (19) for all γ ∈ Γ(k 1 , . . . , k m ). The expected value of the product of the Wiener-Itô integrals k j !J µ,kj (f j ), 1 ≤ j ≤ m, can be expressed with the help of the above quantities F γ . The following result holds: this expected value equals the sum of the numbers F γ defined in (19) over all γ ∈ Γ(k 1 , . . . , k m ). These numbers satisfy the inequality |F γ | ≤ ∏ j=1 m ‖f j ‖, where ‖f j ‖ denotes the norm in L 2 (µ kj ).

Let us consider the above result in the special case m = 2M and f j = f for all 1 ≤ j ≤ m with a square integrable function f of k variables. Let Γ(k, M ) denote the class of diagrams Γ(k 1 , . . . , k m ) in this case, and |Γ(k, M )| the number of diagrams it contains. The above result yields the estimate (20). It is not difficult to see that |Γ(k, M )| ≤ 1 · 3 · 5 · · · (2kM − 1). Indeed, if we omit the restriction that the edges of a diagram can connect only vertices from different rows, then the number of diagrams with 2M rows and k vertices in each row equals 1 · 3 · 5 · · · (2kM − 1). Relation (20) together with this observation imply (14). It is also worth mentioning that the estimate (20) is sharp in the following sense. If f (x 1 , . . . , x k ) = g(x 1 ) · · · g(x k ) with some square integrable function g, then relation (20) holds with identity. In this case k!J µ,k (f ) equals const. H k (η) with some standard normal random variable η and the k-th Hermite polynomial H k (·), because of Itô's formula for multiple Wiener-Itô integrals.

3 Some remarks about the results.
The proof of Theorem 1 was based on an estimate of the (high) moments of the homogeneous random polynomial Z of Rademacher functions defined in (2). Although bounds on the tail distribution of sums of independent random variables are generally proved by means of a good estimate on the moment generating function, in the present problem it was more natural to estimate the moments, because of the following reason. As the example discussed in Section 1 shows, if Z is a random polynomial of order k, then the tail distribution P (Z > u) should behave for large numbers u like e −const. u α(k) with α(k) = 2/k. In the case k ≥ 3 a random variable with such a tail distribution has no finite moment generating function. Hence the estimation of the moment generating function does not work in such cases. On the other hand, a good estimate of the (high) moments of the random variable Z is sufficient to prove Theorem 1. It has to be shown that the high moments of Z are not greater than constant times the appropriate moments of a random variable with tail distribution e −const. u α(k) . Here the same constant is in the exponent as in the exponent of the upper bound in Theorem 1. Theorem 2 contains a good estimate on all even moments of a homogeneous polynomial of Rademacher functions of order k, and it can be considered a Gaussian type estimate. (It has the same order as the moments of a k-th order Hermite polynomial of a standard normal random variable multiplied by a constant.) The moments of degenerate U-statistics were also studied. Proposition B of paper [11] contains a result in this direction. It turned out that the high moments of degenerate U-statistics show a worse behaviour. Only their not too high moments satisfy a good 'Gaussian type' estimate. This difference has a deeper cause. There are degenerate U-statistics which have a relatively bad tail behaviour at high levels. Such examples can be found in Example 2.4 for sums of independent random variables and in Example 4.5 for degenerate U-statistics of order 2 in paper [9]. In such cases much worse moment estimates hold than in Theorem 2. Lemma 1 made it possible to reduce the estimation of the moments (and as a consequence of the tail distribution) of a homogeneous polynomial of Rademacher functions to the estimation of the moments of a homogeneous polynomial of Gaussian random variables. This result provided a good tail distribution estimate at all high levels. It can be generalized to other polynomials of independent random variables with good moment behaviour. On the other hand, general U-statistics may have a much worse tail behaviour at high levels than the behaviour suggested by a Gaussian comparison. It would be interesting to get a better understanding of the question of when a U-statistic has such a good tail behaviour at all levels as a Gaussian comparison suggests, and when it has a relatively bad tail behaviour at very high levels. At any rate, the fact that homogeneous polynomials of Rademacher functions satisfy a good 'Gaussian type' estimate at all levels u > 0 has an important consequence. This property was needed for the application of an important symmetrization argument in paper [10]. This symmetrization argument made it possible to get a good estimate on the supremum of degenerate U-statistics also in such cases when other methods do not work.
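The Gaussian-type tail can be probed numerically in a small example. The sketch below (an illustration, not part of the proof) simulates a symmetric second-order Rademacher chaos (k = 2) with arbitrary coefficients and monitors P(Z > uV) · exp((u)^{2/k}/2), which should stay bounded in u if an estimate of the form (4) holds.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, trials = 30, 2, 200_000
a = rng.standard_normal((n, n))
a = (a + a.T) / 2.0
np.fill_diagonal(a, 0.0)                      # symmetric coefficients, j1 != j2

eps = rng.choice([-1.0, 1.0], size=(trials, n))
Z = np.einsum("ti,ij,tj->t", eps, a, eps)     # sum over j1 != j2 of a*eps*eps
V = np.sqrt((a ** 2).sum())                   # V^2 as in (3); here EZ^2 = 2 V^2

for u in [2.0, 3.0, 4.0, 5.0, 6.0]:
    tail = (Z > u * V).mean()
    print(u, tail, tail * np.exp(0.5 * u ** (2.0 / k)))
```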
There is another result, called Borell's inequality, which makes it possible to bound the high moments, and as a consequence the tail distribution, of a homogeneous polynomial of Rademacher functions. Actually, this estimate is a simple consequence of the hypercontractive inequality for Rademacher functions proved by A. Bonami [1] and L. Gross [5] independently of each other. It may be interesting to compare the estimates provided by Borell's inequality with those of the present paper. Borell's inequality (see e.g. [2]) states the following estimate.

Theorem B. (Borell's inequality). The moments of the random variable Z defined in formula (2) satisfy the inequality (E|Z| p ) 1/p ≤ ((p − 1)/(q − 1)) k/2 (E|Z| q ) 1/q for all p > q > 1.

Let us apply Borell's inequality with the choice p = 2M and q = 2 for the random variable Z defined in (2). It gives the bound EZ 2M ≤ (2M − 1) kM (EZ 2 ) M ≤ A(k)(2M ) kM (k!) M V 2M with the constant A(k) = e −k/2 . (The expression in the last part of this inequality is slightly larger than the middle term, but this has no importance in the subsequent consideration.) On the other hand, Theorem 2, more precisely its consequence relation (15), yields the bound EZ 2M ≤ K(2M ) kM (k/e) kM V 2M with some appropriate constant K = K(k) > 0 not depending on M . It can be seen that the inequality (k/e) k < k! holds for all integers k ≥ 1. This means that the estimate of the present paper yields a const. · α M -times smaller bound for the moment EZ 2M than the estimate given by Borell's inequality, where α = (1/k!)(k/e) k < 1. As a consequence, Borell's inequality can give the right type of estimate for the tail distribution of the random variable Z, but it cannot give the optimal constant in the exponent. In such large deviation type estimates the moment estimates based on the diagram formula seem to work better.
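The size of the gain factor α is easy to tabulate; the following is a purely numerical illustration of the comparison above.

```python
import math

def alpha(k):
    """Ratio alpha = (k/e)^k / k!, which is < 1 for every integer k >= 1."""
    return (k / math.e) ** k / math.factorial(k)

for k in [1, 2, 3, 5]:
    a = alpha(k)
    print(f"k={k}: alpha={a:.4f}, alpha^10={a ** 10:.3e}")
```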
Digital confocal microscopy through a multimode fiber Acquiring high-contrast optical images deep inside biological tissues is still a challenging problem. Confocal microscopy is an important tool for biomedical imaging since it improves image quality by rejecting background signals. However, it suffers from low sensitivity in deep tissues due to light scattering. Recently, multimode fibers have provided a new paradigm for minimally invasive endoscopic imaging by controlling light propagation through them. Here we introduce a combined imaging technique where confocal images are acquired through a multimode fiber. We achieve this by digitally engineering the excitation wavefront and then applying a virtual digital pinhole on the collected signal. In this way, we are able to acquire images through the fiber with significantly increased contrast. With a fiber of numerical aperture 0.22, we achieve a lateral resolution of 1.5 µm and an axial resolution of 12.7 µm. The point-scanning rate is currently limited by our spatial light modulator (20 Hz).

Fiber-based confocal endoscopes

Confocal microscopy is an important tool in biological imaging, because it substantially improves the contrast of images compared to wide-field microscopy, and it allows depth sectioning [1,2]. In essence, the confocal microscope is based on a double filtering operation: a certain volume inside the sample is selectively illuminated by a focused beam, and light originating from this focal volume is selectively observed using a pinhole in the detection pathway. The pinhole is located in a plane conjugated with the focal plane, and suppresses light originating from any location other than the focal volume. With this method, a point of a sample can be probed with higher contrast with respect to its surroundings. Images are built by scanning the probed focal volume inside the sample. In typical biological media, confocal microscopy allows us to obtain clear, background-free images only up to a certain depth. Indeed, when focusing at a depth larger than the scattering mean free path, photons on the illumination path are scattered away before they can reach the focal volume. On the detection side, they are diverted from the detection path and blocked by the pinhole. The resulting loss in sensitivity ultimately limits the confocal imaging depth to the superficial layers of the tissue. To image biological structures that are located deep in tissue, fiber-based endoscopes can provide a minimally invasive solution. The existing confocal fiber endoscopes can be divided into two categories: fiber bundle systems and distal scanning systems [3,4]. In fiber bundle systems, a coherent fiber bundle relays the spots created by a conventional confocal microscope to the distal facet of the bundle. The plane of imaging is either the distal facet of the bundle itself (the sample must then be placed in contact with this surface), or an extra lens (e.g. a GRIN rod lens) can be attached to the distal tip of the bundle in order to move the focal plane some distance away from the tip [5-7]. This arrangement allows for thin endoscopes (300 µm - 1 mm), but the resolution is limited because of the required inter-core spacing of the bundle, which is in general 3 µm or more. A magnifying element can be used at the tip to improve the effective resolution, but in that case diffraction-limited spots may overfill the individual cores of the bundle, decreasing the system's collection efficiency [3].
In addition, magnification reduces the field of view below the probe's size. Another approach is to add a miniature scanning mechanism at the tip of a single-mode fiber. For example, a MEMS scanner can be used to scan the light beam [8,9] or the fiber tip itself can be scanned [10-12]. Such devices can reach diffraction-limited resolution, but have large probes of several millimeters.

Multimode fiber imaging

Recently, multimode fibers have been shown to be an interesting alternative for endoscopic applications thanks to their ability to guide many independent spatial modes of light within a very small cross-sectional diameter, down to 100µm. The multiple modes allow these fibers to transmit images composed of multiple pixels with diffraction-limited resolution, whereas single-mode fibers can only transmit light with a Bessel intensity profile and bundles of single-mode fibers are limited in resolution by the required inter-core spacing between the fibers. The main difficulty in exploiting fiber modes for imaging is that different modes travel with different propagation constants and modes can also couple to each other. Concretely, this means that an image fed into one side of a multimode optical fiber will not retain its shape as it propagates to the other side of the fiber. While the information about the image becomes scrambled in this process, it is however not destroyed. The image can be reconstructed given the knowledge of the propagation characteristics of light inside the fiber. Several techniques have been developed to undo the effects of modal scrambling in multimode fibers. These techniques record the association between the images at the input of the fiber and the scrambled patterns at the output during a calibration phase. Optimization techniques [13-15] iteratively find the output pattern associated with each image from a predetermined set of inputs. In digital optical phase conjugation [16-18] such output patterns are recorded with a holographic acquisition. The transmission matrix method [19-23] captures the propagation characteristics of the fiber in a matrix linking the input field with the output field. Once the fiber is calibrated, a light modulator is used to shape the input wavefront sent to the fiber, so that it creates spots or other known patterns at the opposite end; these spots or patterns are used to probe the sample. The scanning rate is currently in the kilohertz range in the fastest implementations [19,21,24], yielding an effective frame rate of about 1Hz for high-resolution scans. Most implementations to date were based on narrowband lasers, but extension to broadband pulsed lasers is underway [15,18]. Ideally, the same multimode fiber should provide illumination to the location of interest, as well as guide the resulting signal back to a detector. Currently, two such imaging mechanisms have been successfully implemented: non-confocal fluorescence imaging [17,19,20,24] and wide-field reflection imaging [21,22]. In these demonstrations, modal scrambling was compensated either on the way in or on the way out of the fiber, but not both at the same time. For the fluorescence imaging experiments, the input light was modulated to focus spots on a sample, and the returning fluorescence signal was isolated by means of a dichroic mirror. In the wide-field reflection experiments, the sample was illuminated by a random set of speckle fields, and only the returning light was decoded to retrieve spatial information.
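The key point that scrambling preserves information can be illustrated with a toy model (our own sketch, not taken from the paper; the mode count, the lossless-unitary assumption, and all variable names are illustrative): propagation is modeled as multiplication by a fixed matrix T, and knowledge of T makes the scrambling exactly invertible.

```python
import numpy as np

rng = np.random.default_rng(0)
n_modes = 256  # assumed number of guided modes (illustrative)

# Model an ideal lossless fiber as a random unitary "transmission matrix" T:
# the QR decomposition of a complex Gaussian matrix yields such a unitary.
A = rng.normal(size=(n_modes, n_modes)) + 1j * rng.normal(size=(n_modes, n_modes))
T, _ = np.linalg.qr(A)

x = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)  # input field (mode coefficients)
y = T @ x                                                     # scrambled output field

# The output looks uncorrelated with the input ...
print(abs(np.vdot(y, x)) / (np.linalg.norm(y) * np.linalg.norm(x)))  # close to 0
# ... but knowing T makes the scrambling invertible (T^{-1} = T^H for a unitary T).
x_rec = T.conj().T @ y
print(np.allclose(x_rec, x))  # True
```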
Confocal microscopy through multimode fibers

Here we propose a digital implementation of confocal microscopy combined with multimode fiber imaging. For this, the modal distortions need to be compensated both ways in order to select a particular focal volume during both excitation and detection. In the digital variant of confocal microscopy [25], the light returning from the sample is recorded with digital holography in an intermediate plane, instead of being filtered by a physical pinhole in a conjugate plane. The field is then digitally propagated up to a virtual conjugate plane, where it forms a focus. The digitally focused field can finally be filtered with a virtual pinhole mask, making the detection spatially selective as in classical confocal microscopy. The digital detection of the optical fields provides a large flexibility in the signal processing, allowing for example the dynamic adjustment of the pinhole size as well as the measurement of new contrast metrics such as the focal phase or the focal width [26]. In our case, it also allows us to correct for the distortions due to the fiber before filtering with a pinhole. Practically, we use a multimode fiber to guide light to and from the location of interest of a sample, and we implement reflection-mode (non-fluorescent) digital confocal detection at the multimode fiber's tip (see Fig. 1). Prior to the experiment, we measure the transmission matrix (TM) of a multimode fiber and use it to project arbitrary illumination patterns, as well as to decode the fields propagating in the reverse direction through the same multimode fiber. Then, we implement the digital filtering required for confocal microscopy. The purpose is to increase the imaging contrast in spot-scanned images, which is important for applications such as imaging inside scattering tissues. A correlation-based filtering technique is also introduced, which offers similar performance for a significantly reduced computational cost.

Fig. 1. Overview of the multimode fiber confocal system. A reference laser beam is reflected off a spatial light modulator (SLM). The phase-modulated light beam is injected into the fiber in order to produce the desired excitation beam at the sample plane. The field collected back through the fiber is digitally processed in order to render a confocal image of the sample.

Imaging setup

The output of a diode-pumped solid-state laser at 532nm (CNI MSL-FN-532-100mW) is spatially filtered and collimated to form a plane wave reference beam. After being split by a beamsplitter, the plane waves travel to each side of the multimode fiber (Thorlabs M43L01, Ø105µm core, 0.22 NA, FC-APC). Off-axis holography is used to measure the fields coming out of the fiber. On each side, the fiber facet is first magnified with a microscope objective (Newport MV-40x) and imaged via a lens (Thorlabs AC254-250-A-ML, f = 250mm) onto a camera sensor (PhotonFocus MV1-D1312(IE)-G2-100), where the light field is interfered with the reference plane wave. This is detailed in Fig. 2. The angle between the reference beam and the object beam for off-axis holography is approximately 1.5°. We wish to transmit images in both directions through the multimode fiber, and to avoid confusion we will now designate the side of the fiber with the spatial light modulator as the "proximal side". This is where we control the illumination and perform the confocal detection. The other side is called the "distal side". On the distal side, the holographic acquisition system is used only for calibration.
It is the side where the sample is located, and during imaging it is devoid of any hardware besides the fiber itself. On the proximal side, a spatial light modulator (HoloEye Pluto) is used to illuminate the fiber with controlled patterns at a maximal rate of 20Hz.

Transmission matrix

The first step in controlling the modes of a multimode fiber is to determine how they are transformed between the input and the output of the fiber, i.e. measuring its transmission matrix (TM). This can be done experimentally by applying the modes one by one to the input of the fiber with a spatial light modulator (SLM), and recording the corresponding output fields holographically. Each of these measurements yields one column of the transmission matrix. Assuming a complete set of modes is sampled, the transmission matrix can be used to predict any future output by linear combination of the known input-output measurements. It is possible to use the theoretical modes of the fiber as the input basis for this procedure [22,23], but any other set of linearly independent input patterns is equally suitable as long as it can properly describe fields entering and exiting the fiber. We chose a basis of plane waves with varying spatial frequencies (i.e. varying angles with respect to the optical axis), because plane waves can be accurately displayed on a phase-only SLM and no light is lost at angles outside the numerical aperture of the fiber (unless those angles are explicitly probed). The relationship between the plane wave basis and the physical pixel basis of the SLM is simply a Fourier transform. This is shown in Fig. 3(a).

Fig. 3. (a) The transmission matrix is measured using a basis of plane waves, which are equivalent to pixels in the Fourier domain. Each 'Fourier pixel' is successively turned on and off, and the corresponding output pattern is recorded. Each output pattern forms one column of the transmission matrix. (b) Pattern projection: based on a digital image of part of a USAF1951 pattern, a wavefront is calculated using the inverse transmission matrix. This wavefront is then generated by the SLM and sent through the fiber from the proximal end. At the distal end of the fiber, the pattern emerges. (c) Reverse image transmission: a part of a USAF1951 target is illuminated from behind by a collimated beam and imaged onto the distal facet of the fiber. The wavefront generated in the proximal end is recorded, and decoded using the transmission matrix. In this way, the pattern can be reconstructed digitally. (d) Experimental results for image transmission. The last row is a snapshot from an animated cartoon ('La Linea', 1971, Osvaldo Cavandoli); see also the associated Visualization 1.
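The column-by-column calibration just described is short to express in code. The following is a schematic sketch (the simulated fiber, the mode counts, and the function names are our assumptions, not the instrument software): each basis vector is displayed in turn, and the holographically recorded output field becomes one column of T.

```python
import numpy as np

def measure_tm(display_and_record, n_inputs: int, n_outputs: int) -> np.ndarray:
    """Measure a transmission matrix column by column.

    display_and_record(v) is assumed to display the input-basis vector v on the
    SLM and return the complex output field recorded holographically.
    """
    T = np.zeros((n_outputs, n_inputs), dtype=complex)
    for k in range(n_inputs):
        basis_vec = np.zeros(n_inputs, dtype=complex)
        basis_vec[k] = 1.0  # turn on one 'Fourier pixel' (one plane wave)
        T[:, k] = display_and_record(basis_vec)  # its output = k-th column of T
    return T

# Toy usage: stand in for the physical fiber with an unknown random matrix.
rng = np.random.default_rng(1)
T_true = rng.normal(size=(128, 64)) + 1j * rng.normal(size=(128, 64))
T_meas = measure_tm(lambda v: T_true @ v, n_inputs=64, n_outputs=128)
print(np.allclose(T_meas, T_true))  # True: linearity makes noise-free calibration exact
```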
Camera-SLM alignment

To transmit images in the reverse direction (i.e. from distal to proximal), as needed for confocal filtering, the field should be recorded holographically at the proximal end and then reconstructed using the transmission matrix. Here lies a significant practical challenge: the transmission matrix is measured using the SLM, but the field must be recorded separately with a camera. For an accurate reconstruction, the camera must record the field exactly as it exists at the position of the SLM. This is possible by placing the SLM and the camera in equivalent planes behind a beamsplitter, as shown in Fig. 1. The two devices must be aligned precisely in position and in angle, and should ideally have the same pixel pitch. In our experiments, the tolerances for the SLM position leading to a 5% change in reconstructed spot intensity were 15µm of lateral translation and 4 arcsec of rotation perpendicular to the optical axis, and 1.2cm for a translation and 0.9° for a rotation along the optical axis. To reach the required precision quickly and easily, we used a digital registration approach. The fields captured with the camera were interpolated, displaced, and tilted so as to match the coordinate system of the SLM. The alignment parameters need to be tuned only once and stay stable until either component is moved. Similar strategies can be found in the literature for the alignment of digital phase conjugation mirrors [27].

Bidirectional image transmission

In order to calculate which proximal field will create a given pattern at the distal side of the fiber, the transmission matrix needs to be inverted. Because of measurement noise, a regularized inversion scheme is necessary. We used Tikhonov inversion, which has been successfully applied in the context of scattering media before [28,29]:
$$T^{+} = V S^{+} U^{H},$$
where $U S V^{H}$ is the singular value decomposition (SVD) of the transmission matrix $T$. $U^{H}$ and $V^{H}$ denote the Hermitian transposes of the matrices $U$ and $V$. The singular values $\sigma_i$ of the matrix $T$ are located on the diagonal of $S$. In $S^{+}$ these values are spectrally filtered as $\sigma_i / (\sigma_i^2 + \lambda^2)$, as required for Tikhonov inversion. The regularization parameter $\lambda$ was chosen as 10% of the greatest singular value $\sigma_1$. Once the inverse is calculated, any illumination pattern can be displayed dynamically at the distal end of the multimode fiber, as shown in Figs. 3(b) and 3(d), and Visualization 1. Thanks to the use of phase tracking during the measurement of the transmission matrix and Gerchberg-Saxton encoding of the modulated wavefronts (further explained in the Appendix), the patterns do not suffer from interference artefacts [30,31]. We obtain a linear correlation of over 95% between the experimental intensity patterns and the desired intensity patterns. To transmit images in the reverse direction through the system (i.e. from the distal to the proximal side), we record the field in the proximal end with a single holographic acquisition and decode it using the transmission matrix. This yields the field at the distal end of the fiber (Fig. 3(c)). The alignment of the SLM and the holographic acquisition in the proximal end is critical for the successful reconstruction of the distal field, as explained before.
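In code, this regularized inversion is an SVD with filtered singular values. The sketch below follows the formula and the 10% choice of λ quoted above; the function name and the toy usage are our own.

```python
import numpy as np

def tikhonov_pinv(T: np.ndarray, rel_lambda: float = 0.10) -> np.ndarray:
    """Tikhonov-regularized pseudo-inverse T+ = V S+ U^H.

    Singular values are filtered as s / (s**2 + lambda**2), with lambda chosen
    as a fraction of the largest singular value (10% in the setting above).
    """
    U, s, Vh = np.linalg.svd(T, full_matrices=False)
    lam = rel_lambda * s[0]            # s[0] is the greatest singular value
    s_filtered = s / (s**2 + lam**2)   # spectral (Tikhonov) filtering of S+
    return Vh.conj().T @ np.diag(s_filtered) @ U.conj().T

# Usage: compute the SLM input expected to produce a desired distal pattern.
rng = np.random.default_rng(2)
T = rng.normal(size=(128, 64)) + 1j * rng.normal(size=(128, 64))
target = rng.normal(size=128) + 1j * rng.normal(size=128)
slm_input = tikhonov_pinv(T) @ target  # regularized solution of T x ≈ target
print(slm_input.shape)                 # (64,)
```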
Digital confocal microscopy

For the confocal scanning, we first put the appropriate pattern on the proximal SLM in order to generate an excitation spot at a distance of approximately 100µm in front of the distal fiber facet. This spot interacts with the sample at that location, and the reflected and backscattered light is collected back through the fiber. The field is then recorded holographically at the proximal side. One such measurement is performed for each position of the sample. Three ways of processing the acquired data were tested. In the first method we simply integrate the total intensity of the proximally recorded field. This serves as a reference image, showing the contrast that would be obtained if the returning light were measured with a bucket photodetector without any further processing. The second method is the digital confocal method. Here, we use the transmission matrix to virtually propagate the backscattered field back through the fiber, and reconstruct it as it existed at the position of the sample. There, we apply a digital pinhole mask that suppresses all light contributions except those found within a radius of 1µm of the position of the excitation spot. Note that the Rayleigh radius for this wavelength (532nm) and fiber NA (0.22) is 1.5µm. The light energy that remains after filtering with the digital pinhole is integrated, and this value forms one pixel of the final image. This filtering scheme is illustrated in Fig. 4(a). We refer to the last method as the correlation method; it is based on a different filtering principle. Consider the field that is sent from the proximal end in order to create a focus spot at the distal end of the fiber. The light that originates from that same spot at the distal end, carrying the sample information, propagates back through the fiber towards the proximal side, where it should lead to a field similar to the one used for excitation (neglecting losses), simply because of the reversibility of light propagation. The phase conjugation literature [16,32] provides formal and experimental proof of this principle. Any contribution of light not originating from the focal point should, on the contrary, lead to a proximal field that is uncorrelated with the excitation field, due to the randomizing nature of modal scrambling. Therefore, the distal spot intensity can be estimated simply by calculating the linear projection (or correlation) of the returning field with respect to the excitation field, as shown in Fig. 4(b). This operation is done for each scanning spot and the image is constructed pixel by pixel. In the correlation method, the returning field is correlated with the illumination field.
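Per scan position, both detection metrics reduce to a few array operations. The sketch below is our illustration only (field shapes, the pinhole handling, and names such as excitation_field are assumptions, not the authors' code); it contrasts the virtual-pinhole integration with the correlation projection.

```python
import numpy as np

def pinhole_signal(distal_field: np.ndarray, spot_xy: tuple, radius_px: int) -> float:
    """Digital confocal metric: integrate |field|^2 inside a virtual pinhole.

    distal_field is the 2-D complex field reconstructed at the sample plane
    (i.e. after propagating the measured proximal field back through T+).
    """
    ny, nx = distal_field.shape
    y, x = np.ogrid[:ny, :nx]
    mask = (x - spot_xy[0])**2 + (y - spot_xy[1])**2 <= radius_px**2
    return float(np.sum(np.abs(distal_field[mask])**2))

def correlation_signal(returned_field: np.ndarray, excitation_field: np.ndarray) -> float:
    """Correlation metric: project the returning proximal field onto the
    excitation field; light not from the focus decorrelates and is rejected."""
    overlap = np.vdot(excitation_field, returned_field)  # <excitation | returned>
    return float(np.abs(overlap)**2 / np.vdot(excitation_field, excitation_field).real)

# One image pixel per scan position, e.g.:
# image[i, j] = pinhole_signal(backpropagated_field, (cx, cy), r)
# or          = correlation_signal(proximal_field, slm_field_for_spot(i, j))
```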
Results

Our first set of experiments consisted of imaging of a human epithelial cell dried on a microscope cover glass. The results are shown in Figs. 5(a)-5(c). The image area is 81µm by 76µm, and the step size is 1.1µm. A control image made in white light transmission is shown in Fig. 5(d). A similar experiment was made for polystyrene beads spread on the surface of a cover glass. These results are shown in Figs. 5(e)-5(g), with a control image in Fig. 5(h). Here the area is 22.5µm by 22.5µm, and the step size is 0.55µm. To obtain an estimate for the resolution of our system, we calculated the full width at half maximum of one of the reconstructed spots in the digital confocal image (Fig. 5(f)), which is 1.5µm. In a second experiment, we made a transversal scan (z-scan) of a cover glass, as sketched in Fig. 6(e). This illustrates the sectioning capability that we can obtain using the proposed filtering techniques. The results are shown in Figs. 6(a)-6(c). A control image is shown in Fig. 6(d); it was taken on a commercial laser-scanning confocal microscope (Zeiss LSM 710) with an NA 0.3 objective. The average full width at half maximum of the interface is 12.7µm in the digital confocal image, 15.8µm in the correlation image, and 10µm in the control image. The ratio of the coverslip signal to the average background intensity between the interfaces is 22.5:1 in the digital confocal image (Fig. 6(b)), 8.4:1 in the correlation image (Fig. 6(c)) and 270:1 in the control image (Fig. 6(d)).

Fig. 6. Transversal scans of a coverslip with the (a) total intensity method, (b) digital confocal method, (c) correlation method and (d) control image taken with a commercial confocal microscope. The scale bars represent 20µm of distance in air. Note that the thickness of the coverslip is approximately 150µm, but due to refraction it appears thinner in these images. The vertical axis is perpendicular to the coverslip, and the horizontal axis represents a lateral scan. (e) Schematic description of the experiment.

Contrast enhancement, sectioning and image quality

The comparison of the various methods in Fig. 5 reveals that a significant increase in image contrast is achieved when filtering the backscattered light, versus the case where the whole field is integrated. By digitally implementing spatial selectivity in the detection, we were able to clearly distinguish the walls and the nucleus of an epithelial cell in Figs. 5(b) and 5(c). The filtering scheme was also useful in the case of polystyrene beads. With this sample, we recorded an intensity image with very little contrast in Fig. 5(e), but the beads appear clearly in the confocal and correlation images, Figs. 5(f) and 5(g). Similarly, the depth scans of Fig. 6 show that reflective interfaces could not be resolved by simply recording the total backscattered intensity (Fig. 6(a)), but they were made visible by the proposed filtering schemes (Figs. 6(b) and 6(c)). Due to the limited numerical aperture of the fiber (NA 0.22), the axial resolution is relatively low in Figs. 6(b) and 6(c). The numerical aperture explains part of the difference between these images and the control image from a traditional microscope (NA 0.3). Note that the transmission matrix method is general and can be used with any type of fiber. Therefore, the steps outlined in this manuscript can be extended to fibers with a higher numerical aperture or a larger core; however, this implies that a greater number of modes needs to be sampled during calibration, and with a slow modulator it is preferable to keep this number low (the calibration currently takes 10min in our implementation). Other factors play a role as well in determining the image quality obtained with our approach. In the experiments presented here, we illuminated and recorded only one polarization of the light going through the fiber, for experimental simplicity. Since the fiber acts as a depolarizing medium for linear polarization, half of the light is lost each way. Polarization multiplexing techniques [22,23] may improve the sensitivity by allowing all of the light travelling through the fiber to be processed. An added benefit of polarization multiplexing would be the capability to do confocal polarization microscopy. Finally, most phase-only spatial light modulators are known to cause aberrations due to the fact that their surfaces are not perfectly flat. This induces a systematic error in the measurement of the transmission matrix. Because the same aberration is not present on the camera used for recording the backscattered field, it is not possible to perfectly reconstruct the distal field from the proximally measured data. One possible solution is to use a modulator that is flat or corrected for such errors, or to measure the deformation experimentally and correct for it [33].

Speed

The experiments presented here are currently limited in speed by our modulator. With a point-scanning rate of 20Hz, the measurement shown in Figs. 5(a)-5(c) took 4min 15s to acquire, Figs. 5(e)-5(f) took 1min 24s, and Figs. 6(a)-6(c) took 3min. Faster modulators can be used, such as digital micromirror devices or a combination of an acousto-optic deflector with a spatial light modulator. These have been shown to work for similar applications [14,19,24,34], and reach speeds over 20kHz.
The next limiting factors would be the speed of the acquisition (i.e. the frame rate of the camera), and ultimately the computational load of reconstructing holograms. We use digital off-axis holography here, and with this method the speed of reconstruction is mainly determined by the speed of the necessary Fourier transform. On a computer with an Intel Xeon E5-2620, using the FFTW library, we were able to process holograms of 800 by 800 pixels at a speed of 400 frames per second. Note that in the digital confocal method, two Fourier transforms are required: one to reconstruct the off-axis hologram captured on the proximal side, and one to reconstruct the distal field from the unscrambled Fourier coefficients calculated with the transmission matrix. With the correlation method, only the first transform is needed (for the holographic reconstruction). The processing speed can be increased by making lower-resolution holograms. A lower resolution means that the field of view has to be reduced, and/or the magnification of the optical detection system (OBJ2, L4, OBJ3 and L5 in Fig. 2) should be reduced, leading to a smaller spatial frequency bandwidth [35]. The resolution of 800 by 800 pixels that we used is enough for fibers with a V number up to 350, e.g. a fiber with a core of 105µm and NA 0.56, or a fiber with NA 0.22 and a core of 270µm, at 532nm.

Comparison of the digital confocal and the correlation method

In effect, the pinhole method performs the same operation as a classical confocal microscope, while the correlation method acts more like a matched filter [36] measuring the amount of backscattered light bearing the same signature as the excitation light. The correlation method has a lower computational cost, because we do not need to transform the proximal field and reconstruct the distal field. However, there is also less flexibility in the signal processing, since the pinhole size cannot be adjusted and the reconstructed spots are not available for further analysis. As opposed to the digital confocal method, the correlation method can be completely hardware-implemented by letting the backscattered field reflect on the SLM. This field will then be demodulated by the phase pattern currently being displayed. In other words, the backscattered field (the field to be filtered) will be multiplied by the illumination pattern (the field we wish to correlate with). After this operation, the light can simply be focused through a lens to obtain the Fourier transform, and a pinhole can be used to extract the DC term of the resulting field. In this case, the acquisition speed would only be limited by the modulator.

Bending and stability

The proposed methods depend on the characterization of the fiber by the transmission matrix, and this transmission matrix changes depending on the bending state of the fiber. While there is a certain limited tolerance to bending [21,37,38], for practical applications it may be preferable to use a fiber immobilized inside a needle [17], as a rigid endoscope. The small outer diameter (125–300µm) of multimode fibers is compatible with some of the thinnest needle gauges, so this constitutes a minimally invasive method for deep-tissue microscopy. Other proposals in the literature that address the problem of bending include using a semi-rigid probe, with a calibration stored for a discrete set of bending states [37], or compensating bending in real time with a fast feedback system [14].
By using two-photon fluorescence as a feedback signal and exploiting the structure of light patterns in graded-index fibers, it is possible to obtain the calibration of the fiber without access to the distal end [15]. Recently, it was demonstrated that the transmission matrix of a fiber can be calculated instead of being measured [22]. It is also possible to calculate the matrix for different bending states of the fiber. This study suggests that it may be possible to compensate for the bending of the fiber by recalculating the matrix in real time. The images acquired through the fiber endoscope could be used as a feedback signal in order to estimate the bending state. Another important point with regard to the proposed applications is the temperature stability of the transmission matrix. According to previous results in the literature [39], the temperature variation that is necessary to decorrelate a speckle pattern through a 1m long fiber is 8°C, and this scales inversely with fiber length. Therefore, it may be necessary to calibrate the fiber at the temperature of the body, but there is otherwise enough temperature margin for most endoscopic applications.

Fluorescence

Here, we showed results for reflection-mode confocal operation. For in-vivo operation this has the advantage that no fluorescent probes need to be injected into the area of interest before it can be imaged, i.e. the technique works label-free [40,41]. If a label is desired, for example to target specific parts of a tissue, one should use scattering probes such as nanoparticles. Since confocal microscopy is often used in biology to image fluorescent specimens, we briefly discuss whether the proposed methods can be extended to this case. For imaging fluorescence, the transmission matrix must in principle be known for both the excitation and the emission wavelength. With this information, the correlation method could be implemented as follows: fluorescence emission could be spectrally filtered to yield a speckle pattern, and this speckle pattern could be correlated with the pattern expected for fluorescence emission from the excited spot. Multispectral transmission matrices have been studied before in the context of scattering media [42]. The fluorescence bandwidth that could be obtained with such a technique depends on the spectral decorrelation width, which is inversely proportional to the length of the fiber [39]. The digital confocal method relies on holographic detection. Since there is no coherent reference available in the case of fluorescence, a reference-free method of recovering the phase information would be needed to apply this method [43].

Conclusion

Our experiments show that the principle of confocal filtering is broadly applicable, even in cases where the light paths towards the focal volume are severely distorted. The schemes presented here can be generalized to any system where the distortion is described by a transmission matrix, e.g. also in scattering media [28]. In the context of biomedical imaging, the multimode fiber can be calibrated outside the tissue of interest, and then inserted at another location (i.e. inside the tissue) for imaging. The proposed system does not have any distal scanning optics, and the probe diameter can therefore be as thin as the fiber itself. The focal plane can be chosen dynamically by appropriate modulation from the proximal side. We proposed two conceptually different ways of obtaining a confocal filtering effect via multimode fibers.
This has potential applications in the endoscopic high-contrast imaging of cells, either label-free or with scattering probes such as nanoparticles.
The Complementary q-Lidstone Interpolating Polynomials and Applications

In this paper, we introduce the complementary q-Lidstone interpolating polynomial of degree $2n$, which involves interpolating data at the odd-order q-derivatives. For this polynomial, we will provide a q-Peano representation of the error function. Next, we use these results to prove the existence of solutions of the complementary q-Lidstone boundary value problems. Some examples are included.

Introduction

In 1929, Lidstone [1] introduced a generalization of Taylor's series that approximates a given function in a neighborhood of two points instead of one. Recently, Ismail and Mansour [2] introduced a q-analog of the Lidstone expansion theorem. They proved that, under certain conditions, an entire function $f(z)$ can be expanded with respect to the points 0 and 1 in terms of the q-Lidstone polynomials $A_n(z)$ and $B_n(z)$, where $A_n(z) = \eta_1^{q^{-1}} B_n(z)$ and $\eta_y^{q^{-1}}$ denotes the q-translation operator defined by
$$\eta_y^{q^{-1}} z^n = q^{\frac{n(n-1)}{2}}\, z^n \left(-y/z;\, q^{-1}\right)_n = y^n \left(-z/y;\, q\right)_n,$$
and $B_n(z; q)$ is the q-analogue of the Bernoulli polynomials, defined by a generating function built from the q-exponential functions introduced by Jackson, cf., e.g., [3,4]:
$$E_q(z) := \sum_{j=0}^{\infty} q^{j(j-1)/2}\, \frac{z^j}{[j]!}, \quad z \in \mathbb{C}, \qquad e_q(z) := \sum_{j=0}^{\infty} \frac{z^j}{[j]!}, \quad |z| < 1.$$
The q-Lidstone polynomials $A_n(z)$ and $B_n(z)$ are of degree $2n+1$ and satisfy
$$A_0(z) = z, \quad B_0(z) = z - 1, \qquad A_n(0) = A_n(1) = B_n(0) = B_n(1) = 0 \quad \text{for } n \in \mathbb{N},$$
$$D_{q^{-1}}^{2} A_n(z) = A_{n-1}(z) \quad \text{and} \quad D_{q^{-1}}^{2} B_n(z) = B_{n-1}(z). \tag{2}$$
Throughout this paper, unless otherwise stated, $q$ is a positive number less than one. The sets $A_q$ and $A_q^*$ are defined by
$$A_q := \{q^n : n \in \mathbb{N}_0\}, \qquad A_q^* := A_q \cup \{0\},$$
where $\mathbb{N}_0 := \{0, 1, 2, \dots\}$. If $X$ is the set $A_q$ or $A_q^*$, then for $n > 1$ we use $C_q^n(X)$ to denote the space of all continuous functions with continuous q-derivatives up to order $n-1$ on $X$. We shall follow the notations and terminology in [3,5]. In [6], we studied the boundary value problems which consist of an even-order q-differential equation and the q-Lidstone boundary conditions. This paper extends this technique to solve the problem (3), subject to the boundary conditions (4), where $\beta, \beta_j, \gamma_j \in \mathbb{C}$, $\varphi$ is a continuous real function defined on the set $A_q^* \times \mathbb{R}^{j+1}$, $0 \le j \le 2n$, and
$$\mathbf{g} := (g_0, g_1, \dots, g_j) = (g,\, D_{q^{-1}} g,\, \dots,\, D_{q^{-1}}^{j} g), \qquad g \in C_{q^{-1}}^{2n+1}(A_q^*).$$
We will give a q-analog of the complementary Lidstone interpolation, which was introduced in [7] and drawn on by Agarwal, Pinelas, and Wong in [8]. More precisely, we introduce and construct explicitly the complementary q-Lidstone interpolating polynomial of degree $2n$, which involves interpolating data at the odd-order derivatives. Furthermore, we will provide a q-Peano representation of the error function. These results are of fundamental importance in every aspect of numerical mathematics and in the theory of q-differential equations, including maximum principles, q-boundary value problems, oscillation theory, disconjugacy, and disfocality. This article is organized as follows. In the next section, we give the formula of the q-Lidstone interpolating polynomial $Q_n(z; q)$ of degree $2n-1$ and provide a q-Peano representation of the error function. In Section 3, we introduce and construct explicitly the complementary q-Lidstone interpolating polynomial $P_n(z; q)$ of degree $2n$, which involves interpolating data at the odd-order derivatives.
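As a quick consistency check of the two closed forms of the q-translation operator recalled above (a verification we add for illustration; it is not part of the source), expanding both sides for $n = 2$ gives the same polynomial in $y$ and $z$:

```latex
% Check of  q^{n(n-1)/2} z^n (-y/z; q^{-1})_n = y^n (-z/y; q)_n  at  n = 2:
\begin{aligned}
q\,z^{2}\Bigl(1+\frac{y}{z}\Bigr)\Bigl(1+q^{-1}\frac{y}{z}\Bigr)
  &= q\,(z+y)\bigl(z+q^{-1}y\bigr) = q z^{2} + (1+q)\,yz + y^{2},\\
y^{2}\Bigl(1+\frac{z}{y}\Bigr)\Bigl(1+q\frac{z}{y}\Bigr)
  &= (y+z)(y+qz) = y^{2} + (1+q)\,yz + q z^{2}.
\end{aligned}
```

Both sides agree, and the $n = 1$ case reduces to $z + y$ on both sides.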
In Section 4, we are interested in the existence of solutions of the complementary q-Lidstone boundary value problems (3) and (4), and we will give some illustrative examples. General conclusions of this work are summarized in Section 5.

Some Basic Results on the Interpolating Polynomial

We begin with some results from [6]: where $A_m$ and $B_m$ are q-Lidstone polynomials of degree $2m+1$, and
$$G_n(x, qt) = \int_0^1 G(x, qy)\, G_{n-1}(qy, qt)\, d_q y \qquad (n = 2, 3, \dots).$$

Remark 1. For $n \in \mathbb{N}$, the function $G_n(z, qs)$ satisfies: As in the classical field of approximation theory [9], we consider the q-Lidstone interpolating polynomial $Q_n(z; q)$, $z \in A_q^*$, of degree $2n-1$ satisfying the q-Lidstone conditions: A representation of the q-Lidstone interpolating polynomial $Q_n(z; q)$ is given by the following: the q-Lidstone interpolating polynomial $Q_n(z; q)$ can be expressed as:

Proof. It is clear that $Q_n(z; q)$ is a polynomial of degree at most $2n-1$. From (2), we have: It follows that: and: In such a case, $Q_n(z; q)$ is called the q-Lidstone interpolating polynomial of the function $f(z)$. For the associated error: we provide a q-Peano representation. Therefore, in the following, we recall a q-Peano kernel theorem from [10], which plays an important role in our results. We use the notation $\mathcal{P}_n$ to denote the space of polynomials of degree $n$, and we consider functions of class $C_q^{n+1}(A_q^*)$. Define the two-variable polynomials $\varphi_n(z, t)$, $z, t \in \mathbb{C}$, to be
$$\varphi_n(z, t) = \begin{cases} z^n \,(t/z;\, q)_n, & z \neq 0, \\ (-1)^n q^{n(n-1)/2}\, t^n, & z = 0, \end{cases}$$
where: here, $L_z$ means the linear functional $L$ applied to $\varphi_n^{+}(z, qt)$ as a function of $z$, and: Let $z_0, z_1, \dots, z_n$ be distinct points in $A_q^*$. We denote by $I_k(z)$, $k = 0, 1, \dots, n$, the polynomials that are defined on $A_q^*$ and satisfy the following condition:

Lemma 3 (see [10]). Suppose $z_0, z_1, \dots, z_n$ are distinct points in $A_q^*$. Define the corresponding error functional by: Then:

Now, we prove the main result.

Theorem 2. Let $f \in C_q^{2n}(A_q^*)$. Then: here, $G_n$ has a q-Peano representation:

Proof. According to Lemma 2, the q-Lidstone interpolating polynomial of the function $f$ can be expressed as: where the associated error: Therefore, from Lemma 1, we obtain (11). Now, we apply Theorem 1. Note that the remainder $L(f)$ is defined by: where: By Equation (12), we obtain: We can verify that: Therefore, by Lemma 3, we conclude that $G_n$ has a q-Peano representation:

The Complementary q-Lidstone Interpolating Polynomials

In this section, we consider the complementary q-Lidstone interpolating polynomial $P_n(z; q)$ in $A_q^*$, which is of degree $2n$ and satisfies the conditions: In the next result, we denote by $\nu_m(z)$ and $\tau_m(z)$ $(m \ge 0)$ the first $q^{-1}$-derivatives of $A_m(z)$ and $B_m(z)$, respectively. That is, Then, it immediately follows that: Let $P_n(z; q)$ be the complementary q-Lidstone interpolating polynomial of degree $2n$ of the function $g(z)$. Then: where: and $R(z; q)$ is the residue term: Furthermore, the kernel $H_n(z, qs)$ has the q-Peano representation: and for $z \ge s$,

Proof. Let $f = D_{q^{-1}} g$. Integrate both sides of (7) from zero to $qz$ to obtain: From (2), we have: Similarly, we can verify that: It follows that
$$g(z) = \int_0^{qz} \int_0^1 G_n(t, qs)\, (D_{q^{-1}}^{2n+1} g)(q^2 s)\, d_q s\, d_q t + g(0),$$
and then we get Equation (13), where: By using Theorem 2, for $z < s$, we obtain: Similarly, for $z \ge s$, we have: Finally, we take $g(z)$ to be $q^{2n^2+n-1}$ times the polynomial function of degree $2n$ defined in (8).
Then, after some calculations, we verify that: Hence, we obtain: By using (14), we get: Therefore, for $z = s$, we have: Combining (17) and (18), for $z \ge s$, we get: This completes the proof.

Proof. From (13) and (15), we get: Note that: and from Lemma 4, we conclude that the double q-integral on the right-hand side of (22) is absolutely convergent. Therefore, we can interchange the order of the q-integrations to obtain: $C_n z$.

Applications

In this section, we present the necessary and sufficient conditions for the existence of solutions of the complementary q-Lidstone boundary value problem (3) and (4). The proof depends on the results obtained in Section 3 and the Arzelà-Ascoli theorem [11].

Theorem 4. Suppose that $Q_k > 0$, $0 \le k \le j$, are given real numbers, and define the nonzero constant $M$ to be the maximum of $|\varphi(z, g_0, g_1, \dots, g_j)|$ on the set $A_q^* \times E$, where: Furthermore, suppose that: Then, the boundary value problem (3) and (4) has a solution in $E$.

Proof. First, we define the set: Note that we can verify that $J(A_q^*)$ is a closed convex subset of the space $C_{q^{-1}}^{j}(A_q^*)$. Consider an operator $T: C_{q^{-1}}^{j}(A_q^*) \to C_{q^{-1}}^{2n}(A_q^*)$ as follows:
$$(Tg)(z) = P_n(z; q) + \int_0^1 |H_n(z, qs)|\, \varphi(s, \mathbf{g}(s))\, d_q s. \tag{25}$$
In view of Theorem 3, any fixed point of (25) is a solution of the complementary boundary value problem (3) and (4). Next, we prove that $T$ maps $J(A_q^*)$ into itself. Let $g(z) \in J(A_q^*)$. Then, from (24), (25), and Lemma 5, we get (26). Since $A_q^* \times E$ is a compact set, Inequality (26) implies that the sets: are bounded and hence uniformly equi-continuous on $J(A_q^*)$. Therefore, from the Arzelà-Ascoli theorem, the closure of $T(J(A_q^*))$ is compact. Thus, by the Schauder fixed point theorem, we can find a fixed point of $T$ in $E$ that satisfies the boundary value problem (3) and (4).

Proof. By using (27), for $g(x) \in J(A_q^*)$, we get: where $N := L + \sum_{k=0}^{j} L_k (2 Q_k)^{\alpha_k}$. Hence, the result follows by observing that the hypothesis of Theorem 4 is satisfied with $M$ replaced by $N$, provided the $Q_k$ $(0 \le k \le j)$ are sufficiently large.

Theorem 5. Suppose that the function $\varphi(z, g_0, g_1, \dots, g_j)$ on the compact set $A_q^* \times E_1$ satisfies the following conditions: where: Then, the boundary value problem (3) and (4) has a solution in $E_1$.

Proof. Let $y(z) = g(z) - P_n(z; q)$. Then, the boundary value problem (3) and (4) is equivalent to the following problem:
$$(-1)^n D_{q^{-1}}^{2n+1} y(z) = \varphi\left(z,\, (y + P_n)(z),\, D_{q^{-1}}(y + P_n),\, \dots,\, D_{q^{-1}}^{j}(y + P_n)\right),$$
For $y \in C_{q^{-1}}^{j}(A_q^*)$, we define: and we consider the operator $T_1$: We will use the same technique as in the proof of Theorem 4. Therefore, it is sufficient to prove that $T_1$ maps the set: For this, let $y(z) \in J(A_q^*)$. It immediately follows that: and then: Thus, from (28), (30), (31), and Lemma 5, we get:

Theorem 6. Suppose that the function $\varphi(z, g_0, g_1, \dots, g_j)$ on the compact set $A_q^* \times E_2$ satisfies the Lipschitz condition: where $E_2$ is the same as $E_1$ in (29), with: Then, the boundary value problem (3) and (4) has a unique solution in $E_2$.

Proof. Since the Lipschitz condition (32) implies (28), the existence of a solution follows from Theorem 5. To prove the uniqueness, let $g(z)$ and $y(z)$ be two solutions of the boundary value problem (3) and (4) in $E_2$. Then, as in Theorem 5, it follows that $\|g - y\| \le \theta \|g - y\|$, and since $\theta < 1$, we get $g(z) = y(z)$.

Concluding Remarks

The q-Lidstone polynomials are defined in analogy with the well-known Lidstone polynomials through the q-translation operator and the q-analogue of the Bernoulli polynomials.
These polynomials, of degree $2n+1$, satisfy conditions analogous to those of the Lidstone polynomials with respect to the q-difference operator $D_{q^{-1}}$. It was recently proven that, under certain conditions, an entire function $f$ can be expanded with respect to the points 0 and 1 in terms of the q-Lidstone polynomials. In [6], we studied the boundary value problems which consist of an even-order q-differential equation and the q-Lidstone boundary conditions. This paper extended this technique to the complementary case. We introduced the complementary q-Lidstone interpolating polynomial of degree $2n$, which involves interpolating data at the odd-order q-derivatives at zero and one, and provided a q-Peano representation of the error function. This work provides the basis for several applications that we can explore in the future. Firstly, we are interested in studying the possibility of extending q-Lidstone and complementary q-Lidstone interpolation polynomials to triangular domains. The analogous problem for the classical case was posed by Agarwal and Wong [12] and studied in [13,14]. Secondly, we are interested in applying such expansions to the construction of boundary-type quadrature formulas on triangles (see [15]) or to the solution of Hermite-Birkhoff interpolation problems on scattered data (see [16,17]).
Towards a Guideline Affording Overarching Knowledge Building in Data Analysis Projects

Tight and competitive market situations pose a serious challenge to enterprises in the manufacturing industry domain. Competing in the use of data analytics to enhance products and processes requires additional resources to deal with the complexity. On the other hand, the possibilities afforded by digitization and data analysis-based approaches make for a valuable asset. In this paper we suggest a guideline for a systematic course of action for the data-based creation of holistic insight. Building an overlaying corpus of knowledge accelerates the learning curve within specific projects as well as across projects by exceeding the project-specific view towards an integrated approach.

Introduction

Demand and supply for insights derived from all kinds of accessible data sources in enterprises are higher than ever before, as the pressure to keep up with global competitors meets the ever-growing possibilities of data acquisition and exploitation. A plethora of methods and tools is available to deal with and make use of these resources: from sensors to algorithms, from Industrial Internet of Things (IIoT) solutions to programming libraries and software. [1] While all business sectors face this situation equally and therefore must deal with similar challenges, the complexity of the task is particularly high in the manufacturing industry domain. [2] [3] This holds true especially for tasks within data-driven enhancement projects (EP) in the manufacturing industry domain which require a high level of innovation and are conducted in a project-based manner, like one-of-a-kind production, research and development (R&D), customer-specific machinery and plant engineering, or the design of cyber-physical production systems. [4] First and foremost, conducting successful data analysis projects does not only include the activities directly associated with analyzing data but also involves the execution of several elaborate steps as well as strategic measures. Systematically aligning all relevant aspects affecting the analysis outcome in a wider sense will result in a distinct quality improvement. [3] In our research we aim at providing the means to support achieving strategic goals by conducting data analysis projects which systematically connect relevant information fragments on all levels of aggregation from all relevant sources. Therefore, our research is driven by the following research question (RQ):

RQ: How can a reference model be provided for complex tasks in the industrial domain which provides methodological support for the data-driven construction and utilization of an overlaying corpus of knowledge?

To answer this question, we developed an artifact in the form of a reference model to equip the user with a wide range of methodological support for conducting informed data analyses. The goal of the suggested framework is not only to derive insight about the examined topic of an active data mining project but also to preserve and build on the findings beyond project boundaries. The reference model aims to inspire rigorous and holistic investigation, to provide the means for communication, project management and documentation, and to build the foundation for future software applications to support this holistic project-exceeding data mining approach, thus also paving the way for an analysis and optimization of the activities undertaken within data mining projects themselves.
Following this approach, this paper is structured as follows: In Section 2, we describe our motivation; we then sum up foundations and basic concepts in Section 3. Derived from the key activities of the sensemaking approach as described by [5], and more specifically by [6], a set of design principles is suggested, as will be described in Section 4. In fulfillment of the defined design principles, a framework is presented in Section 5 to structure the necessary methodological measures and to allocate useful activities within five layers of information aggregation. By presenting the reference model we advocate for a systematic course of action aiming at the creation of holistic insight. Finally, we draw a conclusion and give an outlook on further research in Section 6.

Motivation

The major purpose of the presented long-term design science research project is to elaborate methodological support for data-driven knowledge extraction projects in the manufacturing industry domain. Therefore, our main objective is to help artifact users gain a sophisticated understanding of the principles by which to conduct data-driven knowledge extraction projects, to reduce the associated hurdles for manufacturing companies, and to create a basis to address and solve them in the future in a repeatable manner. The application of the presented reference model enables domain experts to derive cumulative knowledge, rather than re-inventing technical concepts and methodological procedures under new labels in every new project setting. [7] Specialists dealing with data analysis projects in the industrial domain must cover the methodological skillset required in data science as well as a deep understanding of the domain fundamentals in order to consider relevant causalities and interactions and to purposefully derive and interpret results according to their context. Hence, throughout all industrial sectors, domain experts successfully gain and apply data analytics knowledge on the one hand, while data analysts engage in various domain contexts on the other, and oftentimes both have to team up with each other and with additional professionals like computer scientists and mathematicians to derive the desired outcome. While tremendous progress is underway in the domain-specific training of, and proficient cooperation with, data scientists and in the successful realization of data analytics projects, the potential for an even better outcome is huge. [8] [9] The main hurdles are the intricate communication between domain experts and data scientists, the scarcity of human resources for data analytics projects, and the lack of domain-specific standardized procedures, which lead to an inconsistent quality of the execution and the use of results of data-driven analyses. These shortfalls especially hold true where a limited number of experts must realize data analytics projects alongside rivaling work tasks, as is the case in small and medium-sized enterprises (SME), startups, and R&D or planning departments. [3] A pre-study in the form of an exploratory study with six qualitative expert interviews, which aimed to identify the challenges that occur while setting up a data-driven knowledge extraction project, confirmed these hurdles. The interviews were designed as partially standardized interviews using open to semi-open questions as initial starting points for the conversation and took between 70 and 180 minutes. The complete listing of the formulated questions and results will be provided by the authors upon request.
The answers showed that practitioners tend to rely on traditional procedures and experience-based knowledge. Their understanding of Data Mining (DM) mainly focused on the core analysis activities like the application of algorithms and often underestimated the effort and importance of peripheral aspects like the determination of target-oriented questions, data preparation to produce structured evaluable data sets, conclusive feature engineering, and context-sensitive model building. The interviewees expressed their wish for more structure and guidance in data analytics projects, while they found existing standard processes too generic to apply to their domain as well as insufficiently considerate of real-life problems like data acquisition, data quality, and operational data processing.

Foundation

Pursuing a long-term research project in the field of information systems (IS) aiming at the design of an artifact in the form of a reference model, we comply with the design science paradigm stated by [10]. We furthermore adopt the three-cycle view of design science research (DSR) presented in [11] to address the relevance, design, and rigor of the developed artifact. Additionally, we rely on the steps for DSR recommended by [12] to apply the paradigm to our research as follows: The problem identification and motivation for our research is constituted by the experience from numerous research projects and a pre-study in the form of expert interviews as described in Section 2. We then derived theory-based research goals and objectives by the definition of design principles as described in Section 4, followed by the design and development of the artifact, the outcome of which is presented in Section 5. While applying the findings in practice, the derivation of a context-specific model should then be demonstrated and evaluated within future research. In an iterative manner, the insights from an initial implementation within an example scenario should be used to further enhance the artifact and undergo subsequent evaluation phases, to then be transferred to the community. When attempting to represent and reduce reality to fulfill a subjective purpose, like the understandable formulation of complex facts [13], a reference model is provided for a class of similar problems by introducing a model which is of recommendatory and universal character and allows for the derivation of application-specific models. [14] Consequently, reference models are a generic type of model representing the essence of a common-practice or best-practice view on a class of similar problems, intended for re-use and acting as a blueprint for the derivation of specific models. [15] The addressed application field of the presented reference model comprises tasks in the industrial domain which require a high level of innovation and are conducted in a project-based manner. When attempting to support such tasks, there are various user roles and artifacts to take account of, notwithstanding that more than one user role can be fulfilled by one individual. These roles and artifacts are depicted in figure 1.

Figure 1. Addressed users and artifacts

As drawing conclusions by the statistical or algorithm-based study of large amounts of data is widely established throughout all disciplines today, numerous attempts have been made to standardize the data mining process, especially in the fields of computer science and economic analysis.
Such procedure models generally consist of generic steps to structure and guide the planning and execution of DM projects. [20] Prominent standard operating models are named below. Knowledge discovery in databases (KDD) is a description of the central building blocks of the overall multi-step procedure for complex real-world analysis tasks aiming at the discovery of knowledge in large amounts of data. [17] [18] Subsequent approaches like SEMMA and CRISP-DM emerged from the basic concept of KDD. The cross-industry standard process for DM (CRISP-DM) comprises the steps business understanding, data understanding, data preparation, modeling, evaluation, and deployment, thus adding a more strategic perspective to the KDD core concept. [19] [20] The sample, explore, modify, model, and assess (SEMMA) methodology was developed by the SAS Institute to methodically organize the functions of its statistical and business intelligence software SAS Enterprise Miner, its constituent phases giving the concept its name in the form of an acronym. The analytics solutions unified method (ASUM) draws on a combination of agile and traditional implementation principles to achieve set solution goals and therefore complements the defined analysis phases with an additional project management stream to support the organizational realization. [21]

Design Principles

The concept of sensemaking originated in social psychology and was set in an organizational context by [5]. The approach describes how human beings in a social setting derive understanding of their surroundings by combining various pieces of information, creating connections, and finally adding their own reasoning to it. The concept is described extensively in [22]. [6] sums up relevant literature and derives five key activities found in previous work, as listed in table 1, which constitute the making of sense and thereby act as design goals for the developed reference model. As the developed framework is supposed to not only support the understanding of facts and the creation of insight but also their utilization for the in-project and project-exceeding enhancement of the target-system, one more key activity is needed to complement the sensemaking key activities. By including the creation and utilization of a knowledge base, we want to create a linkage to the field of knowledge management and thereby create the concept of knowledge making. By coining the term, we want to emphasize the creative, intuitive, and iterative character of the approach, orienting on human behavior and the cognitive and social processes it originates in. In DSR, the concept of design principles (DP) provides the means to specify prescriptive design knowledge in a way that allows for a precise formulation to describe how the mechanisms of a technology or approach help to achieve particular aims. [23] According to [24], design principles should describe which actions are made possible through the use of an artifact and explain the material properties which make those actions possible, while naming the boundary conditions under which this description holds true. More precisely, [24] suggests the formulation of a DP in the following form: "Provide the system with [material property-in terms of form and function] in order for users to [activity of user/group of users-in terms of action], given that [boundary conditions-user group's characteristics or implementation settings]."
Following this suggestion, we formulated design principles for the presented reference model based on the derived knowledge making key activities, as shown in table 1.

Reference Model

We want to motivate a highly strategic and integrated practice in data-driven enhancement projects (EP) in the manufacturing industry domain and to support this mindset by suggesting a framework to guide the efforts. The development of this reference model is driven by the needs identified in industrial practice and numerous research projects and realized by employing well-researched approaches grounded in established theory. We set up a grid-like structure to assign relevant methodologies to the respective analysis project phases and thereby fulfill the design principles formulated in Section 4. We based our approach on three widely established concepts: standard procedure models, the concept of data aggregation, and the field of knowledge management. We attempt to provide the means for the effective combination and domain-specific adaptation of these concepts while additionally overcoming their shortcomings as described in Section 1 and further elaborated in [25] and [3]. We especially want to emphasize the importance of considering the various aggregation levels described in table 2 in which information fragments can occur, calling attention in particular to the intense interaction of all five levels of aggregation, which implies the necessity to expand awareness to each of them and their interrelations within each step of action. More specifically, an integrated consideration and operationalization is needed throughout all project phases, as the strong focus on DM core analysis activities was one of the main hurdles found in the pre-study described in Section 2. The reference model supports practitioners in the inclusion of all aspects, from aggregation level 1, being the least connected state of raw data and the physical system realization and data acquisition, up to level 5, comprising the overarching management of highly connected complex information constructs. Data aggregation is often depicted in a form similar to the traditional knowledge pyramid, although revised and refined approaches can be found superseding this strictly hierarchical view. [26] Within the scope of our research we adopt the view that information fragments can exist in various states of aggregation, starting from incrementally small pieces of data like a single binary number, but also forming states of light aggregation, as in protocols or logfiles, or of higher aggregation, as in the form of data sets, tables, charts, or reports, where data is set into context and provides declarations exceeding its alpha-numerical value. We therefore deem it valid to speak of information when referring to aggregated data. Data aggregation states then stretch to strongly aggregated forms where aggregated chunks of information further connect to complex constructs representing relations comprising formal logic, thus resembling the processing of insight and thought in the human mind. We therefore argue that the term information is suitable to describe aggregated forms of data and that highly aggregated information equals knowledge in the daily use of language. In table 2 we convey this understanding to the manufacturing industry domain, introducing an additional level of analog real-life objects to which the relevant data relates and in which it originates.
Relevant objects within AL 1 can be controllers, motors, GPS trackers, sensors or transport systems, accompanied by the respective digital counterparts in AL 2, such as output data of controllers, performance data of motors, GPS data and other sensor data. Furthermore, AL 2 addresses additional descriptions of the target-system, e.g. conceptual models. Within AL 3 a suitable concept must be chosen to gather, process and contain any relevant information fragments, to transfer them to higher levels of aggregation and to derive and utilize insight. A suitable concept can be an enterprise-specific analysis framework, an individual adoption of the DM standard processes described in section 3, or domain-specific adoptions like the "DMME: Data mining methodology for engineering applications" as presented in [3]. Within AL 3 and the central analysis project phase of the chosen concept resides the core activity constituting the success of the EP: by proceeding in an intensely iterative manner and closely observing the relation to every other grid point, highly context-sensitive feature engineering is made possible. Within AL 4 the found facts and interrelations are implemented by integrating the derived insight within physical instantiations, instantiations of digital shadows or digital twins, simulation models or visualizations. The knowledge base constituting AL 5 can take many forms, from incorporation by an individual, over classical SQL databases or ontologies, to intelligent agents. Lastly, the successful utilization of the concept will depend on what the respective knowledge base affords. As AL 5 constitutes the bottleneck of the implementation, the more suitable its chosen way of instantiation is for the occasion, the more intense the usage in practice will be. Highly formalized approaches and machine-readable implementations allow for complex and potent operations but require high effort to set up and maintain. Depending on the application situation, the manageable effort of a lightweight solution can advance implementation success. We suggest orienting on existing solutions, as for example extensively elaborated for the application of ontologies in the manufacturing domain in [27]. Two more aspects are vital to exploit the full potential of data analytics in the industrial domain: taking into account the dimorphic character of the target-system, consisting of analog and digital components, and focusing on the context-sensitive engineering of conclusive features, as this step constitutes the heart of the project and is complemented by the choice and application of fitting tools and methods, only rendered possible by the utilization of the aforementioned concepts providing the necessary context. [29] As pointed out by [29] and further elaborated by [30], the concepts described in Section 3 share the common essence of a stepwise description of the data mining project phases along with similar core principles for the activities performed during the respective steps. Attempting to capture the essence of the various data mining procedure models, we derived a generalized version of data mining project phases, as can be seen in figure 2. Based on the specification of the analysis project goal in phase 1 (P1), a conceptualization phase follows in phase 2 (P2). The data analysis core activities are performed in phases 3 (P3) and 4 (P4).
First, data is collected by setting up the necessary physical infrastructure and accumulating all accessible and presumably relevant information fragments, growing and extending the data pool. Then feature engineering, model building and the extraction of relations follow, reducing the data build-up to a set of connected information which can then be deployed. Phase 5 (P5) draws on the preceding phases and can and should be conducted in parallel from the start, as it preserves and makes available the methodological and meta-information of the data analysis project and comprises the supervision of its execution during and after the project. The phases described above provide the reference model with a basic sequence of actions to perform in a data analysis project and can be replaced by any adequate alternative during instantiation, e.g. a standard process or an enterprise-specific procedure. Concurrently, the necessity to consider the various aggregation levels of available and derived information fragments pertains to all project steps. The aggregation level view in combination with the project phases forms a grid, as presented in figure 3, addressing the methodological repertory of each combination of layer and phase and allowing for the mapping of relevant methods accompanied by respective meta-information. At each grid point a template is to be provided to document the used methods and their domain-specific application as well as to give an initial information impulse comprising a narrow set of well-established methods along with a continuable list of methods and sufficient search terms. If available, sub-methodologies and detailed sub-selection options are included by grouping them hierarchically beneath the respective method, providing a template for each hierarchical dimension. The basic or initial selection can be realized by pre-defining a default method for each methodological category as well as by giving a minimum viable implementation strategy. Within the iterative solution process a token of current knowledge cycles through the defined project phases, undergoing permanent revision and thus updating the knowledge base. The active token represents an assumption about the current state of the targeted artifact, permanently considering the dimorphic character of the target-system. It is the state of the art for nearly any real-life system to be accompanied by a digital counterpart. From our point of view these two sides of reality, the analog components and the digital descriptions and traces mirroring them, form the targeted system and have to be considered continuously to investigate, analyze and enhance this system. For further details we suggest [30] and [31] on the concepts of digital shadow and digital twin. To realize the iterative procedure based on an assumption token it is advisable to orient on existing approaches like the "Conceptual Model of the Learning-Oriented Knowledge Management System" given in [32]. When applying the reference model, a specific model is derived, tailored to support the targeted EP. Project phases, the components included within the aggregation levels and the respective methodological suggestions populating the reference model grid are adapted to their relevance within the given context. To ensure intuitive applicability for practitioners, the reference model and templates should be provided in the form of visual content accompanied by textual explanations, preferably by means of a software application.
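The grid of project phases and aggregation levels, together with the per-grid-point templates, can be captured in a simple data structure. The following minimal Python sketch is our own illustration of one possible software representation; the class, field and example method names are assumptions, not a normative part of the reference model.

    from dataclasses import dataclass, field

    @dataclass
    class CellTemplate:
        """Template at one grid point: default method, alternatives, search terms
        and a record of what was actually applied in the project."""
        default_method: str
        alternatives: list = field(default_factory=list)
        search_terms: list = field(default_factory=list)
        applied_methods: list = field(default_factory=list)

    # Grid indexed by (aggregation level 1..5, project phase P1..P5);
    # the two populated cells are invented examples.
    grid = {
        (2, "P3"): CellTemplate(
            default_method="z-score normalisation",
            alternatives=["min-max scaling", "robust scaling"],
            search_terms=["sensor data preprocessing", "feature scaling"]),
        (3, "P4"): CellTemplate(
            default_method="correlation-based feature selection",
            alternatives=["mutual information ranking"],
            search_terms=["feature engineering manufacturing"]),
    }

    # During the project, the methods actually used are documented per cell:
    grid[(3, "P4")].applied_methods.append("mutual information ranking")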
Discussion and Outlook

In the presented paper we gave an outline of a framework supporting the systematic data-based creation of insight. The suggested reference model aims at providing the means to accelerate the learning curve within an active data analysis project as well as to build and utilize an overlaying corpus of knowledge exceeding project boundaries. This aim can be addressed by orienting on the sensemaking approach as described by [6] to derive knowledge-making key activities. To afford the realization of these activities, design principles were formulated. Following these principles, we set up a grid-like structure to assign relevant methodologies to the respective analysis project phases while considering the possible aggregation levels information fragments can occur in. The presented reference model offers a guideline for the communication, handling and documentation of technological and methodological information, thus providing the means for the construction and utilization of an overarching knowledge base. First application experience in the support of research projects showed the value of the reference model in promoting a more integrated method of operation, but also made obvious how crucial providing the means for intuitive applicability is for the successful implementation of the approach. [4] [36] Future work will be devoted to the demonstration, evaluation and revision of the concept in practice. Additionally, a thorough analysis of existing and common methodological elements will be conducted by analyzing research publications within leading journals and by assessing accessible information on their application in practice, in order to develop an appropriate classification and identify any additional elements that should be included. Moreover, having provided the means to document the usage of methods and their specification, as well as having examined their classification, allows for the construction of a formalized body of knowledge addressing the creation of knowledge itself. Future work will comprise the development of a taxonomy of the methodological principles at hand, to then be conveyed into an ontology defining logical relations, rules and principles, allowing for decision support by typecasting similar EPs and deriving suitable solution approaches. While this paper focused on the motivation and the theoretical grounding of the concept, some consideration should also be given to its compliance with existing standards and tools to accelerate interoperability. The integration with standardized approaches like the Reference Architecture Model Industrie 4.0 (RAMI4.0) or with data management aspects like the data lifecycle approach can create synergies and add a helpful dimension to support the organizational implementation of the suggested method within enterprises. [37]
Augmented Skepticism: The Epistemological Design of Augmented Reality

In order to solve the problem of the traditional account of knowledge, according to which justification is the ability to provide reflectively accessible positive reasons in support of one's beliefs, a number of epistemologists have suggested that knowledge is true belief that is the product of cognitive ability. According to this alternative, a belief-forming process may count as a knowledge-conducive cognitive ability if and only if it has been cognitively integrated on the basis of processes of mutual interactions with other aspects of the agents' cognitive system. One of the advantages of this approach is that it allows knowledge and justification to be extended to such artifacts as telescopes, microscopes, smartphones and augmented reality (AR) systems. AR systems, however, rely on deceptive reality augmentations that could significantly deteriorate the epistemic efficiency of users' cognitively integrated natures. This could lead to a form of 'augmented skepticism', whereby it will be impossible to tell augmented from physical reality apart. In order to solve this problem, epistemology should play an active role in the design of future AR systems and practices. To this end, this chapter puts forward some initial suggestions, concerning the training of AR users and the design of certain reality augmentation features, in order to ensure that everyday epistemic practices won't be disrupted by the introduction of emerging AR technologies.

Introduction

Weeks after the release of Pokémon Go, police are offering safety advice to users of the popular online game, reminding players to concentrate on the real world when catching Pokémon. Car accidents, property trespassing, carelessly crossing the road, walking through landmines, and wandering in dangerous areas at inappropriate times of the day have raised a number of concerns, all related to the attention deficiency of overexcited users. Yet failing to concentrate on the real world is not the only, and certainly not the most worrying, aspect of augmented reality (AR).1 'Seeing is believing' could so far be hardly doubted in most ordinary contexts. Yet this fundamental aspect of our everyday epistemic life is likely to soon be under serious threat by the advent of AR. As AR becomes ubiquitous, it will likely take over most aspects of our daily interactions with surrounding objects and human beings, making it practically impossible to distance ourselves from this added dimension of future society, much in the same way that most people can no longer leave their house without making sure they have their mobile phones on them. There is, no doubt, great potential in this emerging technology, which promises to enrich our lives beyond imagination. But its users may also be exposed to the serious danger of being unable to tell reality and augmented reality apart. This form of future 'augmented scepticism' cannot be neglected, and important steps need to be taken with regards to the design of future AR systems as well as teaching users how to employ the emerging technology in order to avoid this looming epistemic threat. By focusing on recent advances within contemporary epistemology and philosophy of mind and cognitive science, and especially the notion of cognitive integration, this chapter attempts to address this concern and provide advice that could secure our knowledge of the external world while also allowing our knowledge to be extended beyond our biological capacities, by
taking advantage of the opportunities offered by AR.

Knowledge and Cognitive Integration

The received epistemological view holds that knowledge is justified true belief. 'Justification', however, is a term of art that can be given a number of different interpretations. According to the traditional account of knowledge, justification is a form of ability to provide explicit positive reasons in support of one's beliefs by reflection alone.2 This is a familiar demand. We are many times asked to provide explicit reasons in support of our epistemic statements, as well as in support of our reasons for claiming that we know such statements, and so on. Nevertheless, however common this practice may be, it cannot really represent a universal theory of knowledge and justification, as it generates serious problems, both from a theoretical and a practical point of view. From a theoretical perspective, demanding to always be in a position to offer reasons in support of one's beliefs by reflection alone has the paralyzing epistemic effect of disallowing all perceptual and empirical knowledge. Technically, asking one to justify one's perceptual [...] "cooperative interaction with other aspects of the cognitive system" (Greco 2010, 152). Accordingly, the answer to the first question is that a process may count as a cognitive ability (and thereby as knowledge-conducive) so long as it has been cognitively integrated on the basis of processes of mutual interactions with other aspects of the cognitive system.

One of the virtues of this approach to knowledge and justification is that it is fairly straightforward: in order for a reliable belief-forming process to count as knowledge-conducive, it must also count as a cognitive ability, and, in order for that to be the case, the relevant belief-forming process must mutually interact with other aspects of the cognitive system. Yet an additional advantage of this approach is that it can also provide a satisfactory response to the second question we posed above, i.e., what is the specific sense in which one can be justified/epistemically responsible on the basis of one's cognitive abilities, even in the absence of any explicit reasons in support of their reliability? The key, again, is to focus on the cooperative and interconnected nature of cognitive abilities: if one's belief-forming process interacts cooperatively with other aspects of one's cognitive system, then it can be continuously monitored in the background, such that if there is something wrong with it, the agent will be able to notice this and respond appropriately. Otherwise, if the agent has no negative beliefs about his/her belief-forming process, he/she can be subjectively justified/epistemically responsible in employing the relevant process by default, even if he/she has absolutely no positive beliefs as to whether or why it might be reliable.
For example, in order for agent S to responsibly hold the belief that there is a man standing in front of her, S does not need to offer explicit, positive reasons in support of the reliability of her visual system. Instead, provided that S's visual system is interconnected with the rest of her cognitive system, then, in the mere lack of defeaters against the reliability of her visual perception, S can take herself to be epistemically responsible in holding the relevant belief by default. Had her working memory alerted her to the fact that the lighting conditions were not good, had she felt extremely tired, had her long-term memory reminded her that she is watching a magic show, or had she tried to touch the person without receiving the expected tactile feedback, she would refrain from accepting the visually formed belief, no matter how truth-like it would appear to her. Nevertheless, in the absence of any such negative reasons against her belief, she can take herself to be epistemically responsible in holding the automatically delivered visual belief, by default (Palermos, 2014b). This way, we can make sense of the commonly held idiom that 'seeing is believing', or at least, 'seeing is believing, unless there are reasons to believe it is not.'

Extended Knowledge and Cognitive Integration

But is this always the case, or just when we perceive the world through our biological equipment? Recent studies at the intersection of epistemology and philosophy of mind and cognitive science indicate that knowledge and justification can be technologically extended (Pritchard 2010c; Palermos 2011, 2014b, 2015, 2016; Palermos and Pritchard 2013; Carter, Kallestrup, Palermos and Pritchard 2014). Over the last two decades, philosophy of mind and cognitive science has become increasingly receptive to the idea that cognition is not head-bound but instead potentially extended to the artifacts we mutually interact with. Broadly known as the current of active externalism, this idea has been expressed under a number of headings by several philosophers and cognitive scientists (Clark and Chalmers 1998; Rowlands 1999; Wilson 2000; Wilson 2004; Menary 2007). One of the most influential formulations, perhaps the most influential, is known as the hypothesis of extended cognition, and it holds that "the actual local operations that realize certain forms of human cognizing include inextricable tangles of feedback, feedforward and feed-around loops: loops that promiscuously crisscross the boundaries of brain, body and world" (Clark 2007, sec. 2). A list of examples of interactive, cognition-extending equipment would include telescopes, microscopes, GPS systems, even pen and paper when trying to solve complex scientific problems (Palermos 2015) or while performing simple multiplication tasks.

Think about a three-digit multiplication problem such as 987 times 789. It is true that few if any of us can solve this problem by looking at or contemplating on it. We may only perform the multiplication process by using pen and paper to externalize the problem in symbols. Then we can serially proceed to its solution by performing simpler multiplications, starting with 9 times 7, and externally storing the results of the process for use in later stages.
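The serial, externally stored procedure just described can be mimicked in a few lines of code. The sketch below (our own illustration) reproduces the pen-and-paper scheme: one partial product per digit, stored externally on a 'scratchpad' and only summed at the end.

    def long_multiply(a: int, b: int) -> int:
        """Multiply the way the written scheme does: digit by digit,
        externally storing each partial result for the later stages."""
        scratchpad = []                                  # plays the role of the paper
        for position, digit in enumerate(reversed(str(b))):
            partial = a * int(digit) * (10 ** position)  # one row of the written scheme
            scratchpad.append(partial)                   # stored, not kept 'in the head'
        return sum(scratchpad)                           # final column-wise addition

    assert long_multiply(987, 789) == 987 * 789          # 778743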
The process involves eye-hand motor coordination and is not simply performed within the head of the person reciting the times tables. It involves intricate, continuous interactions between brain, hand, pen and paper, all the while being transparently regulated by the normative aspects of the notational/representational system involved: for instance, that we cannot multiply by infinity, that we must write the next digit under the second-to-last digit of the number above, what operation we must perform next, and so on.

Proponents of the hypothesis of extended cognition note that in such cases we can talk of an extended cognitive system that consists of both biological and technological resources, because the completion of the relevant cognitive task (e.g., performing the multiplication task) involves non-linear, cooperative interactions between the two components. According to dynamical systems theory (DST), i.e., the most promising mathematical framework for modeling such dynamically interacting systems, when this is the case we have to postulate an overall coupled system that consists of all the mutually interdependent components at the same time.8 According to a dynamical interpretation of the hypothesis of extended cognition, when two (or more) components mutually interact with each other in order to complete a cognitive task, they give rise to an extended cognitive system that consists of all of them at the same time.9

This brings to the fore the possibility that knowledge-conducive cognitive abilities can be extended to the artifacts we employ. This is because epistemology and philosophy of mind and cognitive science put forward the same condition in order for a process to count as cognitively integrated, and thereby knowledge-conducive: just as philosophers of mind claim that a cognitive system is integrated when its contributing parts engage in ongoing reciprocal interactions (independently of where these parts may be located), so epistemologists claim that cognitive integration of a belief-forming process (be it internal or external to the agent's organism) is a matter of cooperative interactions with other parts of the cognitive system.10 The theoretical wedding of the two disciplines suggests there is no reason to disallow the belief-forming processes of extended or even distributed cognitive systems from counting as knowledge-conducive.

Provided that the relevant system is cognitively integrated on the basis of the mutual interactions of its component parts, it can generate epistemically responsible/justified beliefs, independently of whether it is organism-bound or extended. The ongoing interactivity of its component parts, i.e., its cognitively integrated nature, allows the system to be in a position such that if there is anything wrong with the overall process of forming beliefs, the system will be alerted to it and respond appropriately. Otherwise, if there is nothing wrong, the system can accept the deliverances of its belief-forming processes by default, without the further requirement to provide explicit positive reasons in their support. This is a form of justification/epistemic responsibility that does not belong to any of the component parts but to the relevant system as a whole. The reason is that it does not arise on the basis of any component parts operating in isolation but instead on their ongoing interactivity, which, according to DST, belongs to the system as a whole.
For example, it is possible to use the above approach in order to explain how a subject might come to perceive the world on the basis of a Tactile Visual Substitution System (TVSS), while also holding fast to the idea that knowledge is belief that is true in virtue of cognitive ability (i.e. the ability intuition on knowledge). A TVSS comprises a mini video camera attached to a pair of glasses, which converts the visual input into tactile stimulation on the agent's tongue or forehead. By moving around and on the basis of the associated sensorimotor contingencies,11 blind patients quickly start perceiving shapes and objects and orienting themselves in space. Occasionally, they also offer reports of feeling as if they are seeing objects, indicating that they are enjoying phenomenal qualities very close to those of the original sense modality that is being substituted. In light of DST, seeing through a TVSS qualifies as a case of cognitive extension, because it is a dynamical process that involves ongoing reciprocal interactions between the agent and the artifact. By moving around, the agent affects the input of the mini video camera, which continuously affects the tactile stimulation she will receive on her tongue or forehead from the TVSS, which then continuously affects how she will move around, and so on. Eventually, as the process unfolds, the coupled system of the agent and her TVSS is able to identify, that is, see, shapes and objects in space.

11 For a recent review on TVSS, see Bach-y-Rita and Kercel (2003). For a full account of how sensorimotor knowledge is constitutive of perception see (Noë 2004). "The basic claim of the enactive approach is that the perceiver's ability to perceive is constituted (in part) by sensorimotor knowledge (i.e. by practical grasp of the way sensory stimulation varies as the perceiver moves)." (Noë 2004, 12) "What the perception is, however, is not a process in the brain, but a kind of skillful activity on the part of the animal as a whole." (Noë 2004, 2) "Perception is not something that happens to us or in us, it is something we do." (Noë 2004, 1) Sensorimotor dependencies are relations between movements or change and sensory stimulation. It is the practical knowledge of loops relating external objects and their properties with recurring patterns of change in sensory stimulation. These patterns of change may be caused by the moving subject, the moving object, the ambient environment (changes in illumination) and so on.

Augmented Skepticism

Given the way augmented reality systems work, they have the potential to qualify as cognitively integrated and thereby knowledge-conducive extensions of biological cognition. Most modern augmented reality systems combine the input from hardware components such as digital cameras, accelerometers, global positioning systems (GPS), gyroscopes, solid state compasses and wireless sensors with simultaneous localization and mapping (SLAM) software, in order to track the position and orientation of the user's head and overlay computer data and graphics onto her visual field in real time. By moving around with the AR system, the user affects the input received by the hardware components, which continuously feeds into the SLAM software. In turn, the SLAM software keeps constructing and updating a map of the user's unknown environment while simultaneously keeping track of the user's position in the physical world, the direction she is pointing the device in and the axis the device is operating in. This constant interplay between the user, the AR hardware and the AR software allows the system to display computer-generated images on the user's field of perception and allows the user to visually interact with these virtual images while she moves in space as if they were real, physical objects.

In light of epistemology and philosophy of mind and cognitive science, this advanced degree of ongoing mutual interactivity between the user and the AR system indicates that AR can become a powerful technology for extending our knowledge beyond the epistemic abilities provided by our organismic cognitive capacities. A number of emerging applications across a multitude of disciplines indicate this clearly.
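The coupled loop described above can be caricatured in code. The following Python sketch is our own highly simplified illustration of the mutual user/hardware/software interactivity; every class and update rule is a stand-in, not a real SLAM or AR API.

    import random

    class Sensors:
        """Stand-in for camera/IMU/GPS hardware: readings depend on the user's pose."""
        def read(self, user_pose: float) -> float:
            return user_pose + random.gauss(0, 0.01)   # noisy pose observation

    class Slam:
        """Stand-in for SLAM software: grows a map and filters the tracked pose."""
        def __init__(self):
            self.map_points = []
            self.pose = 0.0
        def update(self, observation: float) -> float:
            self.map_points.append(observation)              # extend the 'map'
            self.pose = 0.5 * self.pose + 0.5 * observation  # crude pose filter
            return self.pose

    def ar_loop(steps: int = 5) -> None:
        user_pose, sensors, slam = 0.0, Sensors(), Slam()
        for _ in range(steps):
            observation = sensors.read(user_pose)    # movement changes the sensor input
            tracked_pose = slam.update(observation)  # SLAM updates map and pose estimate
            print(f"overlay augmentation at pose {tracked_pose:.3f}")
            user_pose += 0.1                         # the overlay in turn shapes movement

    ar_loop()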
Users can perceive electromagnetic radio waves overlaid in exact alignment with their actual position in space. AR can also be used to assist archaeological research, by superimposing archaeological features onto modern landscapes, allowing archaeologists to draw inferences about site placement and configuration. AR archaeology applications can assist users in reconstructing ruins, buildings and landscapes as they formerly existed. Architects and civil engineers can employ the technology to visualize future building projects. Computer-generated images of buildings can be overlaid onto a real-life local view of a property before the construction process begins. Architecture sight-seeing can be enhanced with AR applications allowing users to virtually see through the walls of buildings and gain access to visual information about interior objects and layout. With recent improvements to GPS accuracy, construction companies are able to use augmented reality to visualize geo-referenced models of construction sites, underground structures, cables and pipes.

Similarly, there are a number of potential commercial uses. AR can enhance product previews, for example by allowing consumers to view what is inside a product's packaging without opening it. It can also be used to facilitate the selection of products from a catalogue or a kiosk. AR users could gain access to additional content such as customization options and images or videos of the product in use. Such technologies are already in use. It is possible, for example, to design printed marketing material so that it bears certain "trigger" images that, when scanned by an AR device, activate a video version of the promotional material.

AR can also make significant contributions to health and safety. Imagine a rescue pilot who is looking for a lost hiker in a forest. Augmented reality systems can provide geographic awareness of forest road names and locations. As a result, the rescuer can more easily detect the hiker, knowing the geographic context provided by the AR system. Similarly, AR can be used to let a surgeon look inside a patient by combining one source of images, such as an fMRI scan, with another, such as video.
AR can also augment the effectiveness of navigation devices. Directions can be displayed on a car's windshield, while also indicating weather, terrain, road conditions and traffic information, as well as alerts to potential hazards. Augmented reality applications can enhance a user's travel experience by providing real-time informational displays of her location and its features, as well as access to comments of previous visitors of the site. AR applications can allow archaeological site visitors to experience simulations of historical events, places and objects by overlaying them onto their view of a landscape. They can also offer location information by audio, calling attention to features of interest as they become visible to the user.

The above examples make it obvious that AR has the potential to permeate and enrich our everyday lives in a variety of ways. As AR technologies become less intrusive and more transparent, moving from hand-held devices to AR glasses and finally to contact lenses, AR will possibly not only penetrate every aspect of our lives but will become a constant, additional layer to physical reality that users will be practically unable to disengage from. The short films Sight (https://vimeo.com/46304267) and Hyper-Reality (https://vimeo.com/166807261) provide good tasters of what the augmented future might soon look like.

AR therefore promises to provide a great opportunity for extending our knowledge in a variety of new and exciting ways. At the same time, however, it also poses the serious threat of obstructing our knowledge of the external world. Contrary to other forms of extended cognitive systems, AR is specifically designed to generate and operate on the basis of unreal yet deceivingly truth-like mimicries of the external world, in a way that users won't be able to distinguish augmented images from actual images of the world.

Of course, the integrated nature of our cognitive systems may still be in a good position to single out reality augmentations that cannot be easily confused as parts of physical reality. For example, floating price tags above products or fluorescent navigation arrows in our visual field won't be of particular concern. On the basis of cognitive integration, our previous experience and knowledge of the external world will allow us to perceive such items as reality augmentations. Other aspects of augmented experience, however, are going to be troubling.
Consider, for example, S's mundane experience of visually perceiving that a person is standing opposite her. S will be considerably worse off holding such a belief in an epistemically responsible manner while having her AR system turned on than when she has it turned off. The possibility of having real-like yet virtual representations superimposed on one's perception of the physical world will require a much more thorough background check by S's integrated cognitive system before she can believe what she perceives. Normally, the presence of good lighting and a relatively stable experience, along with the absence of any beliefs regarding the possibility of being tricked by a magician or undergoing drug-induced hallucinations, would be more than enough for S to know that there is a person standing opposite her. An AR experience, however, would essentially amount to participating in a magic show. As such, believing what one sees would additionally require making haptic checks or being sensitive to additional cues that could potentially alert S's cognitively integrated nature to the fact that she is in a context where the presence of AR avatars is to be expected.

In the absence of such additional background checks, 'augmented skepticism' would ensue, making it impossible to distinguish between virtually any aspect of augmented and physical reality. Perceiving and interacting with the external world would no longer be the same, bringing about a dramatic change to our everyday epistemic practices.12

Future Use and Design

AR therefore has the potential to both extend and distract our organismic epistemic capacities. Of course, technology optimists may disregard the above worries as exaggerated. One could turn one's AR system off anytime one liked, thereby eliminating the threat of 'augmented skepticism' at the push of a button. But how realistic is such optimism? Considering the present-day analogue of owning a smartphone, how often do we turn them off? Mobile phones are significantly less intrusive and attention-grabbing than future augmented reality technologies such as AR glasses and AR lenses are going to be.

Smartphones require their users to actively look at the screen instead of having information automatically pushed into their visual field. Yet mobile phone addiction has already started posing real-life threats:

In the case of cell-phones, such an addiction may begin when an initially benign behavior with little or no harmful consequences, such as owning a cell-phone for safety purposes, begins to evoke negative consequences and the user becomes increasingly dependent upon its use. Owning a cell-phone for purposes of safety, for instance, eventually becomes secondary to sending and receiving text messages or visiting online social networking sites; eventually, the cell-phone user may engage in increasingly dangerous behaviors such as texting while driving. Ultimately, the cell-phone user reaches a "tipping point" where he/she can no longer control their cell-phone use or the negative consequences from its over-use. (Roberts, Yaya and Manolis 2014, 255)

Responsible theorizing and future planning and design cannot therefore rest on unsubstantiated optimism, especially when relevant evidence points in the opposite direction.
Future AR technologies are more likely than not to storm users' visual fields with push notifications, advertisements, personalized suggestions and reminders. Such reality augmentations could, in the best-case scenario, obstruct the user's perception of the external world and, in the worst-case scenario, cause severe disorientation with regards to what may be part of actual reality.

Careful planning and design, however, can reduce or even eliminate such risks. The preceding epistemological remarks on the role of cognitive integration can offer significant guidance to this end. Previously we noted that epistemic responsibility and justification rely on the mutual interactivity of the agent's belief-forming processes. If there is something wrong with the way the agent is currently forming her beliefs, then it will clash with at least one of the agent's belief-forming processes running in the background, such that the agent will take notice and respond appropriately. Otherwise, if there is nothing wrong, the agent can accept the deliverances of her belief-forming process by default.

Given that AR overlays augmentations on one's visual field, many of which might be deceptively real, one initial suggestion is to attempt to teach users how to employ the technology in a way that can diminish the ensuing 'augmented skepticism'. While it is difficult to imagine what future AR will actually look like, a generic solution to this problem may include the progressive training of AR users to recognize and automatically be aware of settings and social contexts in which deceptive reality augmentations are likely to be present. In such cases, users will have to be aware that relying on what they perceive won't be safe. Instead, they will need to employ their cognitively integrated nature more than is normally required, by performing additional background checks that will involve supplementary interactions with the perceived item (e.g., reaching out for the item in order to test whether it will provide the corresponding haptic feedback).

Key to the above solution is that users will be able to tell deceptive reality augmentations apart from non-deceptive ones. It assumes that even though users may be tricked by reality augmentations that look like deceptive representations of physical reality, they can easily spot augmentations that are unlikely to be found in physical reality (e.g., floating price tags above products, or navigational arrows pointing users in the right direction). This ability of our cognitively integrated natures relies on extensive previous experience of interacting with the physical world.

But what happens if the user has never had the opportunity to become thoroughly acquainted with the physical world outside AR? Given how attractive digital technologies are to children, this is a developmental danger that future educational systems and upbringing must take into consideration. It may well sound like yet another exaggerated threat, but given the potential prevalence of AR in future societies, it may not be easily disregarded as far-fetched. Should that ever become the case, children and students should be encouraged to spend as much of their day as possible interacting with the actual physical world alone, or they may fail to endow their cognitively integrated nature with the expectations that will be required to tell most instances of augmented and physical reality apart, even if reality augmentations are specifically designed to stand out from physical reality.
Future AR users should therefore prime their cognitively integrated nature to identify non-deceptive augmentations as well as the contexts and settings in which deceptive augmentations are likely to appear. Yet despite such measures, users' epistemic standing may still be severely compromised. Not at all unlikely, the contexts and settings in which deceptive augmentations may appear could be widespread or even ubiquitous. If that turns out to be the case, users' ability to perceive the external world would be severely limited and slowed down, due to having to perform a number of additional, presently unnecessary, background checks with every step they'd take. Eventually, their experience would amount to walking through a mirror room.

A solution to this problem would require turning our attention away from the users' practices and towards the design of AR. AR developers would have to make sure that all augmentations bear features that allow them to clearly and immediately stand out from the physical elements in the world, without the need for unrealistically burdensome checks on the part of the users. The design of future AR systems should not pose unrealistic demands on the users' cognitively integrated nature. Reality augmentations should automatically stand out as such, leaving minimal room for confusion or misinterpretation. For example, they could be delineated with fluorescent borders, have a see-through effect, or both. In fact, to ensure users' epistemic ease and safety, such AR design specifications could even be enforced via public policies and the law.13 Instead, a completely immersive experience, where virtual images could be entirely indistinguishable from physical reality, could be reserved for virtual reality, where the user's awareness of her physical disengagement will allow her to fully and safely enjoy the experience of mediated reality.
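The design suggestion above, that every augmentation must carry an immediately recognizable marker of virtuality, can be phrased as a simple render-time policy. The Python sketch below is our own illustration; the class and policy names are hypothetical and no real AR framework is implied.

    from dataclasses import dataclass

    @dataclass
    class Augmentation:
        label: str
        fluorescent_border: bool = False
        see_through: bool = False

    def passes_standout_policy(aug: Augmentation) -> bool:
        """Render an augmentation only if it is visibly marked as virtual."""
        return aug.fluorescent_border or aug.see_through

    queue = [
        Augmentation("navigation arrow", fluorescent_border=True),
        Augmentation("virtual person"),   # deceptively real: blocked by the policy
    ]
    renderable = [a for a in queue if passes_standout_policy(a)]
    print([a.label for a in renderable])  # ['navigation arrow']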
Conclusion

In order to solve the problem of the traditional account of knowledge, according to which justification is the ability to provide reflectively accessible positive reasons in support of one's beliefs, a number of epistemologists have suggested that knowledge is true belief that is the product of a cognitive ability. According to this alternative, a belief-forming process may count as a knowledge-conducive cognitive ability if and only if it has been cognitively integrated on the basis of processes of mutual interactions with other aspects of the agents' cognitive system. One of the advantages of this approach is that it allows knowledge and justification to be extended to such artifacts as telescopes, microscopes, smartphones and AR systems. AR systems, however, rely on deceptive reality augmentations that could significantly deteriorate the epistemic efficiency of users' cognitively integrated natures. This could lead to a form of 'augmented skepticism', whereby it will be impossible to tell augmented from physical reality apart. In order to solve this problem, epistemology should play an active role in the design of future AR systems and practices. To this end, this chapter has put forward some initial suggestions concerning the training of AR users and the design of certain reality augmentation features. This is but a first step to ensuring that our everyday epistemic practices won't be easily disrupted by the advent of AR technologies. To avoid such and similar threats it is important not to undermine the input that philosophical engineering (Halpin 2013; Hendler & Berners-Lee, 2010; Halpin et al. 2010; Palermos forthcoming), in general, and epistemological design, in particular, can provide to the development of emerging and future technologies.

13 For further considerations on how the hypothesis of extended cognition might invite a reconceptualisation of current legal theorising and practices, and especially of how we should perceive the right against personal assault, see (Carter and Palermos, forthcoming).
g:Profiler—a web server for functional interpretation of gene lists (2016 update)

Functional enrichment analysis is a key step in interpreting gene lists discovered in diverse high-throughput experiments. g:Profiler studies flat and ranked gene lists and finds statistically significant Gene Ontology terms, pathways and other gene function related terms. Translation of hundreds of gene identifiers is another core feature of g:Profiler. Since its first publication in 2007, our web server has become a popular tool of choice among basic and translational researchers. Timeliness is a major advantage of g:Profiler as genome and pathway information is synchronized with the Ensembl database in quarterly updates. g:Profiler supports 213 species including mammals and other vertebrates, plants, insects and fungi. The 2016 update of g:Profiler introduces several novel features. We have added further functional datasets to interpret gene lists, including transcription factor binding site predictions, Mendelian disease annotations, information about protein expression and complexes and gene mappings of human genetic polymorphisms. Besides the interactive web interface, g:Profiler can be accessed in computational pipelines using our R package, Python interface and BioJS component. g:Profiler is freely available at http://biit.cs.ut.ee/gprofiler/.

INTRODUCTION

Next-generation sequencing and other high-throughput technologies have revolutionized the characterization of life at molecular resolution. While the collection of omics data has become dramatically cheaper and more accessible over the past decades, its interpretation remains a significant challenge. Functional enrichment analysis is a common technique to interpret gene lists. It takes advantage of previous knowledge of gene function and uses a battery of statistical techniques to determine biological processes and pathways characteristic of the genes of interest. Information about biological processes, molecular functions, cell components and phenotypes is organized into structured vocabularies such as the Gene Ontology (GO) (1) and the Human Phenotype Ontology (HPO) (2). Databases such as Reactome (3) and KEGG (4) maintain well-curated collections of known molecular pathways. Other functional annotations, including protein complexes (5), transcription factor (TF) binding sites (6), microRNA target sites (7) and disease associations (8), can also be used to interpret gene lists. We commonly refer to all of these potential annotations as features or terms that help to interpret the shared properties of the genes in the input lists. Functional enrichment analysis is a common component of every omics analysis and such resources are in demand in the research community. Many tools of variable quality are available. While data are frequently updated in some tools such as g:Profiler (9), GOstats (10) and Babelomics (11), many popular tools like DAVID (12) and Bingo (13) have not been updated in years. Tools such as Panther (14), FuncAssociate (15) and GOrilla (16) aim to support the analysis of many species, while others such as WebGestalt (17) focus on the convenient mapping of diverse gene identifiers. Babelomics provides functional enrichment analysis as part of a larger platform (11). While the majority of available tools are web services, functional enrichment analysis can also be performed using Java applications (18), R packages (10) and Cytoscape plugins (13). Thus users have many alternatives to interpret their gene lists with functional information.
With ten years of continuous development of g:Profiler, we aim to address the needs of diverse research communities. Our web server provides access to a toolbox of statistical techniques, intuitive interactive analyses, numerous species and a multitude of options.

g:PROFILER WEB SERVER

The g:Profiler web server (http://biit.cs.ut.ee/gprofiler/) comprises several tools to perform functional enrichment analysis and mine additional information. These tools analyse flat or ranked gene lists for enriched features (g:GOSt, g:Cocoa), convert gene identifiers of different classes (g:Convert), map genes to orthologous genes in related species (g:Orth) and find similarly expressed genes in public microarray datasets (g:Sorter). An additional tool, g:SNPense, maps human single nucleotide polymorphisms (SNP) to gene names, chromosomal locations and variant consequence terms from the Sequence Ontology (19,20). g:Profiler regularly synchronises with the Ensembl database for gene annotations and identifiers. It supports all species whose genomes are available in Ensembl and Ensembl Genomes (19,21), except for bacterial, archaeal and protist genomes. GO ontologies and some gene annotations are downloaded from the GO website (1). Other functional resources are updated regularly from the corresponding databases. The latest versions and dates of each update are documented on our main page.

g:GOSt: functional enrichment analysis

g:GOSt performs pathway enrichment analysis and is the central tool of the g:Profiler web server. It maps a user-provided gene list to various sources of functional information and determines significantly enriched pathways, processes and other annotations. The GO (1,22) is the richest of the supported ontologies and is available for many species. We also use molecular pathways from the KEGG (4) and Reactome (3) databases, target sites of miRNAs from the miRBase (7) database, and predicted target sites of TFs using the TRANSFAC resource (6). Information about protein complexes and protein-protein interaction networks from the CORUM database (5) and BioGRID (23) is also used to interpret gene lists. In this update we have included protein expression data from the Human Protein Atlas (HPA) (24). Gene annotations of physiological and disease phenotypes from the HPO (2) and the Online Mendelian Inheritance in Man (OMIM) resource (8) allow users to interpret their gene lists in the context of human health. g:GOSt supports the majority of gene identifiers used by the basic and biomedical research community. This includes all identifiers that have been linked to genes in the Ensembl database (19): genes, proteins, transcripts, accession numbers in genome databases, probesets of experimental platforms, etc. For example, g:GOSt recognises 116 types of identifiers of human genes that can be presented as input, even as a mixed list. This flexible feature allows the user to easily navigate the jungle of numerous omics platforms and gene databases. The gene query can also be presented as a list of chromosomal coordinates. For each chromosomal region we extract all genes that are at least partially located in the given region. Analysis of genes in chromosomal regions is a useful feature for analysing GWAS and epigenomics data. g:GOSt allows researchers to analyse flat and ranked gene lists. Ranked list analysis is more powerful and is recommended in the majority of cases.
In the case of ranked gene lists, the first genes in the input list are more important than the following genes (e.g. they have a stronger signal in the underlying experiment). g:GOSt then computes a minimum hypergeometric statistic for every term. This technique starts from the top-ranked genes in the list and determines the subset where the enrichment is the strongest. This method provides more resolution to pathway enrichment analysis, as it detects both small and highly significant pathways among top-ranked genes as well as broader terms representative of the entire gene list. We enable and encourage users to provide a custom background for their query when necessary. This statistical technique is essential when the number of genes studied in the specific case is a considerably small subset of all known genes in the genome of the studied organism. For example, certain experimental platforms such as ProtoArrays only cover one third of all human genes, and thus the remaining genes are not part of the analysis by design. Providing this fraction of genes as the statistical background yields a more accurate estimate of functional enrichment and reduces the bias towards over-interpretation. g:GOSt applies the widely used hypergeometric distribution to estimate the significance of enriched pathways and processes in gene lists. Each default analysis of human gene lists considers more than 30 000 gene sets corresponding to a large variety of features. Thus multiple testing correction is required to reduce false positive findings. With the first release of g:Profiler in 2007 we introduced an ontology-focused multiple testing correction method, g:SCS (9). We showed that the most common multiple testing correction methods incorrectly estimate the expected number of false positive results in enrichment analysis: the Benjamini-Hochberg False Discovery Rate tends to find more false positives, while the Bonferroni correction is overly conservative. The g:SCS method is used by default in g:Profiler and users can choose to use the other two correction methods. g:GOSt allows users to filter the resources used to interpret gene lists. For example, one may choose to use only biological processes of GO and Reactome pathways and filter out other databases and ontologies. Similarly, one may focus on relatively small biological processes (more than five and less than five hundred genes) and discard other gene sets prior to analysis. Such filtering speeds up calculations, improves statistical power, reduces the effect of multiple testing and provides easier interpretation. We recommend that users consider the data resources beforehand and select the most interesting ones for their particular analysis. The main output of g:GOSt comprises a visual matrix of functional annotations of genes. Each gene in the input list is highlighted with a coloured square if it belongs to the respective enriched term. Colours represent different evidence codes for GO as well as gene annotations to other functional resources. Several metrics are also reported for each of the enriched results, including the size of the gene set in question, the overlap with the input gene list and the statistical significance (P-value). Individual results are grouped by their hierarchy relative to other results, or alternatively ordered by P-value. Additional details about the query are given below the visualization, along with statistical background sizes, the input gene list and involved protein interaction networks.
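For a single term, the underlying significance estimate can be reproduced with the standard hypergeometric survival function. The sketch below shows the basic per-term test only; g:Profiler's ranked-list minimum hypergeometric statistic and the g:SCS correction are not reproduced here, and the example numbers are made up.

    from scipy.stats import hypergeom

    def enrichment_pvalue(N: int, K: int, n: int, k: int) -> float:
        """P-value of seeing at least k annotated genes in a query of size n,
        given a background of N genes of which K are annotated to the term."""
        return hypergeom.sf(k - 1, N, K, n)   # P(X >= k) = survival function at k-1

    # Example: background of 20 000 genes, a term annotating 150 of them,
    # and a query of 80 genes containing 12 term members.
    print(enrichment_pvalue(N=20_000, K=150, n=80, k=12))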
The results are provided either in graphical format (PNG, PDF), as a text file or as an Excel spreadsheet. We also look for enriched modules in the BioGRID protein-protein interaction (PPI) network (23). Input genes that have at least one common interaction partner in the PPI network are visualized with all their partners. We use the Cytoscape.js JavaScript library (25) to visualize these networks and provide all interaction data as text. g:GOSt results can be easily integrated with the Enrichment Map (26) method that provides network visualization of functional enrichment analysis. Enrichment Map is a useful method for simplifying complex results with many redundant processes and gene functions. g:GOSt provides a special output format (generic enrichment map) that can be directly uploaded into Cytoscape for visual network analysis.

g:Cocoa: simultaneous enrichment analysis of multiple gene lists

g:Cocoa provides the means to analyse several gene lists at the same time and compare their characteristic enriched terms. This is useful in scenarios where an experimental design involves many comparisons of samples or individuals, or when one wants to directly compare different clusters of genes arising from the analysis. Each gene list is analysed for functional enrichments similarly to g:GOSt and the resulting terms are aligned vertically into a matrix highlighting the strongest findings for every gene list.

g:Convert: automatic conversion of gene identifiers

g:Convert provides a convenient service to translate identifiers (IDs) of genes, proteins, microarray probesets and many other types of namespaces. The seamless translation process works on a mixed set of diverse identifiers and maps these through Ensembl gene identifiers (ENSG) as a reference. In cases of multiple identifiers, all relevant combinations are highlighted. At least 13 types of IDs are supported for all of the 213 species available in g:Profiler, and at least 40 types of IDs for more than 50 species.
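The translation logic, mapping arbitrary input identifiers through ENSG identifiers and enumerating all combinations when a name is ambiguous, can be sketched as a simple lookup. The alias table below is invented for illustration (only the TP53 mapping is a real Ensembl identifier); the actual service resolves names against the Ensembl database.

    # Illustration of g:Convert-style translation through ENSG identifiers.
    alias_to_ensg = {
        "TP53": ["ENSG00000141510"],
        "P53": ["ENSG00000141510"],                        # alias of the same gene
        "AMBIG1": ["ENSG00000000001", "ENSG00000000002"],  # made-up ambiguous name
    }

    def convert(ids):
        """Return every (input, ENSG) pair; ambiguous inputs yield several rows."""
        return [(query, ensg)
                for query in ids
                for ensg in alias_to_ensg.get(query, [None])]  # None = unmapped

    print(convert(["TP53", "AMBIG1", "UNKNOWN1"]))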
g:Orth: mapping related genes across species

g:Orth allows the user to map a list of genes of interest to homologous genes in another related organism. Many experiments are conducted in model organisms and knowledge from such experiments is transferred to other organisms to compare or complement previous findings. g:Orth uses g:Convert to map gene IDs to Ensembl ENSG identifiers. Further mapping to orthologous genes in other organisms is also based on Ensembl data (19,21). We provide cross-references between all organisms in g:Profiler. Queries are limited to tiers according to classes of species (animals, plants, fungi).

g:Sorter: finding similar genes in transcriptomics data

g:Sorter is a tool for finding lists of co-expressed genes in public transcriptomics datasets. Thousands of microarray and RNA-seq experiments have been conducted in the past decades. The majority of published studies have been accumulated in databases like ArrayExpress (27) and Gene Expression Omnibus (28). We have downloaded 7878 datasets for 18 species from ArrayExpress and provide gene co-expression similarity searches using the six most common statistical methods. The datasets can be searched rapidly with keywords. The input of g:Sorter is a single gene and a dataset of interest; the result is a sorted list of genes that are similarly expressed to the gene of interest. These lists can be integrated into functional enrichment analysis. For comprehensive global gene expression similarity queries, as well as support for more species and platforms, we suggest using the Multi Experiment Matrix (MEM) tool (29).

gProfileR package in R for automated analyses

The g:Profiler web server can be accessed in GNU R using the dedicated R package gProfileR, available in CRAN. R is a core asset of the bioinformatics community, with hundreds of resources and analysis packages available. We provide the R package to enable the integration of our tools into diverse automated pipelines. The package accesses our web server via the internet and covers the functionality of g:GOSt, g:Cocoa, g:Convert and g:Orth.

NEW DEVELOPMENTS IN G:PROFILER IN 2016

Since our previous publication in 2011 (30), we have added several new resources for interpreting gene lists and implemented new technologies. With our data update and archiving policy, we aim to maximize the reproducibility and timeliness of research.

Mapping of ambiguous gene identifiers

Gene identifier mapping is a complex problem, as the community continuously replaces earlier identifiers with newer ones and multiple aliases are the rule rather than the exception. This creates ambiguities in gene list interpretation and may cause genes to be excluded. To remedy this situation, we now provide semi-manual mapping of gene identifiers in addition to our automated annotation pipeline. We determine input genes that cannot be mapped to single ENSG identifiers and present these to the user in an optional form where correct identifiers can be selected manually or excluded from the analysis. This approach guarantees that important genes are always included in the enrichment analysis.

Transcription factor binding site predictions with TRANSFAC

TF binding in regulatory DNA determines the regulation of gene expression. Thus information about TF binding sites (TFBS) can be used to interpret gene lists, and enrichment of TFBS in gene promoters may indicate common regulation and biological function. We have updated our binding site predictions in gene promoters by systematically mapping TF binding motifs to regulatory DNA in multiple species including human, mouse, chicken, fly and yeast. The promoter sizes depend on the species and are depicted in Figure 1. We have updated the TFBS data in g:Profiler and changed our definitions of potential regulation events. We used regulatory motifs from the TRANSFAC database version 2015.3 to make computational predictions of binding sites in gene promoters. We use the TRANSFAC internal threshold for limiting false positive matches (minFP) of TFBS. On average, each promoter has between 1.4 and 2.5 TFBS for the included species. Thus we introduce a two-step hierarchy of terms where the upper, more lenient category covers all genes that have at least one match of the given TFBS in their promoter, while the second, more stringent category covers promoters where the motif is present at least twice. The second category with stronger binding sites suggests a stronger regulatory relation.

Figure 1. We predicted regulatory motifs from the TRANSFAC database for nine species shown on the x-axis. We used 6 kb promoter regions (±3 kb around the transcription start site) for vertebrates, 2 kb promoters for fly and worm and 1000 bp promoters for yeast. TFBS matches per promoter are given as boxplots on the y-axis, where the mean number of sites per promoter per TF is depicted with a black diamond.
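The two-tier TFBS hierarchy just described amounts to thresholding motif match counts per promoter. A minimal sketch with made-up counts:

    # Motif hits per promoter; the counts are invented illustration data.
    match_counts = {"GENE_A": 0, "GENE_B": 1, "GENE_C": 3}

    lenient = {g for g, hits in match_counts.items() if hits >= 1}    # >= 1 match
    stringent = {g for g, hits in match_counts.items() if hits >= 2}  # >= 2 matches

    assert stringent <= lenient   # the stringent term is nested within the lenient one
    print(sorted(lenient), sorted(stringent))  # ['GENE_B', 'GENE_C'] ['GENE_C']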
Enriched protein expression patterns from HPA The HPA is a compendium of protein expression in 44 normal human tissues measured by immunohistochemistry (24). Protein expression levels are categorized into four groups (not detected, low, medium and high expression) with two evidence terms of presence (uncertain, supportive). To allow interpretation of gene lists using this information, we have mapped gene sets corresponding to protein expression signatures into a hierarchy of terms that reflects their tissue-specific level of expression. The most stringent terms include only highly expressed proteins per tissue, while lenient terms include highly as well as lowly expressed proteins. The HPA resource provides 713 tissue-specific groups of genes covering 15 000 genes. Enrichment of Mendelian disorders OMIM is a collection of human genes and their relationships with Mendelian disorders and other genetic phenotypes (8). Although the majority of OMIM descriptions are already included in g:Profiler via the HPO, we have also directly added more than 4500 OMIM annotations to 3500 genes and provide methods to search for over-represented disorders. Disorders have been organized hierarchically into parental terms using information on genetic heterogeneity in OMIM data records. Genomic and functional data for 213 species The 2016 version of g:Profiler supports the analysis of data from 213 different organisms from Ensembl (19) and Ensembl Genomes (21). g:Profiler covers 67 vertebrate, 38 plant and 52 fungal species among others, nearly doubling the 126 species in our previous update in 2011 (30) (Figure 2). This makes g:Profiler the most species-rich functional enrichment analysis tool, serving different research communities in the life sciences. g:SNPense--SNP identifier mapping With the rapid growth of whole genome sequencing technology, researchers are uncovering extensive genetic variation, and large collections of known SNPs are available for human and other species. In order to easily map SNP identifiers (e.g. rs4244285) to gene names and chromosomal coordinates and to retrieve their functional consequences, we now provide a new service called g:SNPense. Information about genome variants is retrieved from dbSNP (31) and mapped to NCBI Genes (32). Potential functional consequences of the variants are retrieved from Ensembl Variation data (19) and grouped into 35 Sequence Ontology terms of varying severity (20). g:SNPense is a potential entry point to g:GOSt and its functional annotation pipeline and enrichment analysis. Programmable access to g:Profiler The research community increasingly requires automatic and programmable access to web tools, as basic and biomedical science is becoming more data intensive. In addition to the CRAN-supported R package gProfileR that we have been providing for years, we are now expanding the programmable access capabilities to new technologies. We provide an application programming interface (API) for Python that can be included in user-friendly bioinformatics analysis software such as Chipster (33) and Galaxy (34), allowing users to set up their own custom analytical pipelines for large-scale analysis. We already provide the g:Profiler utility as part of the Galaxy ToolShed (35). BioJS component for visualizing g:Profiler results as word clouds BioJS is an open source bioinformatics project comprising a library of JavaScript components for visualising biological data in web applications (36).
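The word-cloud summarization that the following paragraphs describe boils down to tallying keywords across significant term names and scaling font sizes by frequency. A minimal illustration of that idea in Python (the real component is JavaScript, and the term names here are invented):

```python
from collections import Counter
import re

def keyword_weights(term_names, stopwords={"of", "to", "the", "and", "in"}):
    """Tally words across enriched term names; weights drive font sizes."""
    words = []
    for name in term_names:
        words += [w for w in re.findall(r"[a-z]+", name.lower())
                  if w not in stopwords]
    return Counter(words)

# Invented significant terms, as an enrichment analysis might return them
terms = ["regulation of cell cycle", "mitotic cell cycle",
         "DNA replication", "cell division"]
print(keyword_weights(terms).most_common(3))  # [('cell', 3), ('cycle', 2), ...]
```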
We have developed a BioJS component for g:Profiler (biojs-vis-gprofiler) that performs g:GOSt analysis and represents the most significant keywords as word clouds. Clicking on keywords reveals associated biological processes and enrichment statistics. This simplified solution can be used in web applications and pipelines such as our MEM tool (29) where a comprehensive visual representation is not required. Tag clouds provide an easily interpretable visualization of the most common terms, highlighted with different colours and font sizes (Figure 3). Data maintenance policy for reproducibility Since the publication in 2011 (30), all 13 previous releases of g:Profiler have been saved on our web server and are accessible on the web site through the dedicated Archive link. This allows users to continue or verify their analysis on the same set of data that was available at the time of the original analysis. With this policy, we aim to increase the transparency and integrity of bioinformatics data analysis. Since December 2014, g:Profiler has been updated on a quarterly basis following each Ensembl release (Figure 4). This assures that central gene identifier indexes and GO annotations are never older than 6 months, rendering g:Profiler one of the most up-to-date functional enrichment tools available today. While resources such as Reactome, KEGG, HPO, OMIM and others follow their own release schedules and add new genes to their databases, we check for updates in these resources when we conduct the main update cycles with Ensembl. Thus some more static resources are updated less frequently (e.g. regulatory motifs of microRNAs and TFs). (Figure 4 caption: The number of mapped human genes has also grown by 20% from 26 000 to 31 000. As an exception, the Reactome dataset has decreased, as we filtered out gene sets corresponding to reactions starting from g:Profiler version R1353.) In addition to the stable g:Profiler version, we also provide public access to a development server called g:Profiler Beta for power users who benefit from the latest developments and newest data sources. DISCUSSION With the continuous development of g:Profiler, combining further resources and releasing programmable access points (web, R, Python API, BioJS), we aim to provide a state-of-the-art functional profiling and identifier mapping service. Our tool combines up-to-date functional and genomic data with sophisticated algorithms and serves the community through an intuitive and freely accessible website with efficient visualization techniques. With this update we introduce more datasets for the functional interpretation of gene lists. These include physiological and disease-related gene sets from OMIM, tissue-specific protein resources from the HPA and a new release of gene regulatory predictions using the TRANSFAC resource. We also have a new SNP identifier translation service, g:SNPense. Since the last publication in 2011 we have more than doubled the number of supported species to 213. This is the largest number of species supported by any of the publicly available functional enrichment tools. In this update we focus on programmable ways to access our service. We have developed the Python API to g:Profiler that provides the means to include our analysis in bioinformatics pipelines using solutions like Galaxy or Chipster (33,34). We have increased the number of output formats on our web portal for easier downstream analysis. We have also developed the g:Profiler BioJS JavaScript component that can be incorporated into independent websites.
We consider data timeliness our highest priority. Novel annotation terms and findings about gene functions appear daily, and it is therefore important to promptly incorporate this information into functional annotation. Annotating gene lists with data from five years ago, as when using DAVID (12), provides different conclusions than current data would. This is especially important in fields of research where technological improvements have only recently enabled high-throughput analysis (e.g. single cell analysis, embryonic stem cells, precision medicine). The g:Profiler service has recently proven to be useful to a broad user community studying anything from insects to wolves and plants (37)(38)(39). The most frequent use cases of g:Profiler probably relate to cancer genomics (40)(41)(42)(43)(44), stem cell research (45,46) and ageing (47,48). g:Profiler is a recommended tool for interpreting cancer genomes with pathway information (49). Several bioinformatics tools have incorporated the functionality of g:Profiler through dedicated APIs. For example, the global gene expression similarity analysis tool MEM (29) uses g:Convert for identifier mapping and the BioJS library for summarising enrichment results. Our online multivariate data clustering and visualization tool ClustVis (50) uses data and name mapping services from g:Profiler. A similar approach to GO-based word clouds is also used in the R package GOsummaries (51), which combines enrichment analysis of gene lists from g:GOSt with principal component analysis of gene expression data. Future developments of g:Profiler will focus on precision medicine research and on supporting as many species as possible. Advances in whole genome sequencing technology create requirements for novel tools that analyse genome variation for functional enrichments and its relation to drugs and diseases, protein domains or phosphorylation sites (52,53). As enrichment analysis and identifier mapping services are highly needed for many species, our goal is to support the research of common and uncommon model organisms. SUPPLEMENTARY DATA Supplementary Data are available at NAR online.
5,544
2016-04-20T00:00:00.000
[ "Biology" ]
Nanoplasmonics and its Applied Devices Nanoplasmonics connects conventional optics to the nanoworld. With capabilities ranging from subwavelength focusing to invisibility cloaking, nanoplasmonics has profound applications across science and engineering, from biophotonics to nanocircuitry. Metals have free electrons. When a metal and a dielectric of different refractive indexes come into contact, these free electrons accumulate in a region at the metal-dielectric interface, forming nanoplasmons. The practical implementation of nanodevice fabrication is the most challenging task, due to the dissipative losses in metals. The optimum operating condition can be achieved by the efficient use of optical gain. We review here the ongoing progress in the field of nanoplasmonic research. Introduction This paper is primarily based on the concepts of nanoplasmonics and their important applications. Nanoplasmonics has been a new research field for scientists over the last couple of decades. Scientists are exploring nanostructured materials for novel properties at the nanoscale. The interaction of light with free electrons at a metal-dielectric interface causes the electrons to vibrate. In optics, metals were long believed to have dull optical properties. After the discovery of surface-enhanced Raman scattering [1], metals were recognized to have appreciable optical properties. Nanoplasmonic devices may offer considerably exciting optical properties in the near future. When two materials of different refractive indexes come into contact, free electrons in the materials gather at the surface boundary of the metal-dielectric interface. When an incident electromagnetic field exerts a force on these free electrons at the interface, they start oscillating. Depending on the nature of the oscillation, surface plasmons can be of two types: Localized Surface Plasmons (LSPs) and Surface Plasmon Polaritons (SPPs). Typically, in LSPs, electrons vibrate back and forth near their position; they do not propagate. In SPPs, by contrast, electrons gather a considerable amount of energy and propagate through the medium. These free electrons resonate at specific frequencies of operation; this particular frequency is defined as the resonance frequency of the device. Depending on the materials used, the resonance behavior can differ even when the structure, size and shape are the same. Plasmon-based dielectric lenses and resonators can confine extremely intense fields within subwavelength volumes. Optimum light confinement in nanoparticles can be achieved through plasmon-based devices like modulators, switches, detectors, lenses and resonators. Dissipative losses from the interaction of light with free electrons need to be traded off against the localization of the incident light. This dissipative loss is more significant at optical frequencies, of the order of 1,000 cm^-1. Researchers have developed various ways to mitigate these dissipative losses. Costas M. Soukoulis et al. explained that the larger the material, the lower the loss. At optical frequencies, the constituent metal is responsible for the major losses. Part of the losses can be eliminated by avoiding nearby resonances and sharp edges in the current flow [3,4]. An atom has a dimension of about 1 angstrom, or 10^-10 meter.
Nanoscale (10^-9 meter) materials can be considered assemblies of several atoms and molecules. Scientists have explored microstructure-based materials for decades, but nanostructured materials of size 1-100 nm still need to be explored. Characteristics of nanostructured materials, such as the lack of symmetry in electron confinement with size, hinder such explorations. Material properties depend on the shape and size of the material. Quantum dots are made of atoms, and the size of quantum dots is on the nanoscale. Hence CdSe quantum dots of different sizes emit at different wavelengths throughout the visible spectrum [5]. As shown in the figure, the emission spectrum blue-shifts with decreasing quantum dot size. There is a direct relation between the peak of the emission spectrum and the size of the quantum dot. Nanomaterial and Nanotechnology (Figure 2 caption: The fluorescence peak of CdSe with different sizes of quantum dots. The magnitude of the intensity spectrum does not depend on the size of the quantum dots, but the operating wavelength changes with the dimension of the quantum dots. The spectrum shifts towards higher frequency (blue shift) as the dimension reduces [5]. Reprinted from Advances in Biomedical Engineering, Vol. 9 (2012), IERI (Open Access).) Materials used so far in nanoscale technology research include copper (Cu), silver (Ag), gold (Au), lead (Pb), indium (In), mercury (Hg), tin (Sn) and cadmium (Cd). Among these materials, considering optical performance and reliability, gold and silver are regarded as noble materials, while copper, lead, indium, mercury, tin and cadmium are considered secondary nanomaterials. Gold and silver nanostructures exhibit an absorption spectrum in the visible region. As the free electrons reside in the vicinity of the metal-dielectric surface, the optical properties are controlled by the surface type: a flat surface or a surface with nanoparticles. Researchers have demonstrated a blue-shifted absorption spectrum for nanorods relative to nanospheres. Not all materials are suitable for nanodevices. Materials selected as nanomaterials should have robustness, controllable properties, unusual target binding and, of course, a size on the nanoscale. Nanostructured materials have advantages over bulk materials due to their target-binding phenomena, which can change both the chemical and physical properties of the nanomaterial. Noble nanoparticles: Nanorod over nanosphere Color changes with nanoparticle size. Gold nanospheres are characteristically red, while silver nanospheres are characteristically yellow. This color formation is due to the oscillation of free electrons at the metal-dielectric interface. This free-electron oscillation lies in the visible spectrum, and the oscillation is in strong resonance in this frequency band. At this resonance the absorption peak is at its maximum, as shown for gold nanoparticles [43]. (Figure 3 caption: Absorption spectra of various sizes and shapes of gold nanoparticles. The absorption spectra do not change significantly with the size of the nanospheres, while for nanorods the spectra move towards longer wavelengths as the aspect ratio (AR) increases. The higher the AR, the larger the spectral bandwidth and the greater the sensitivity [43]. Reproduced with permission from Chemical Society Reviews, 35, 209-217 (2006), Royal Society of Chemistry.) Gold nanospheres have a single absorption resonance, and the peak of this resonance is relatively independent of the size of the nanospheres. With the enlargement of a nanosphere, its optical properties change negligibly.
A gold nanorod, by contrast, has two absorption resonances: one along its shorter axis, called the transverse resonance, and one along its longer axis, called the longitudinal resonance. The optical properties change markedly if anisotropy is added to the geometry. With a decrease in nanorod length, the absorption spectrum shifts towards shorter wavelengths, making the device operate at higher frequency, i.e. producing a blue shift. Since an absorption peak that is orders of magnitude wider prevails, it promotes better sensitivity, allowing the device to operate over a wide range. Surface Plasmon Resonance Modes Non-propagating, vibrating electromagnetic excitations bound to material surfaces are called localized surface plasmons (LSPs). LSPs show resonance characteristics, and these resonances can be transversal and longitudinal resonance modes, dipolar or multipolar resonance modes, or the Fano resonance mode. An incident electric field perpendicular to the nanostructure axis corresponds to the transversal resonance mode, while an electric field parallel to the axis of the nanostructure corresponds to the longitudinal resonance mode (Figure 5(c)). L. M. Liz-Marzan [44] investigated transversal and longitudinal resonances of one-dimensional nanostructures arising from their optical anisotropy. Generally, the transversal resonance mode frequency is higher than the longitudinal resonance mode frequency [6]. (Figure 5: reproduced with permission, 32-41 (2006), American Chemical Society [44].) Dipolar and multipolar resonance modes can be obtained by changing the size of one- and zero-dimensional nanostructures. Generally, small nanostructures offer dipolar resonance modes, while those with large sizes exhibit multipolar resonance modes. Moreover, the frequency of a multipolar resonance mode is higher than that of a dipolar resonance mode. An exceptional phenomenon, the Fano resonance, appears with an asymmetric line shape owing to the interaction between a superradiant "bright" mode and a subradiant "dark" mode. The interaction between dipolar and quadrupolar resonances gives rise to the Fano resonance [6]. Surface Plasmon Resonance Modes Resonance modes can be adjusted through various nanostructure parameters like spacing, aspect ratio and length. Wurtz et al. investigated transversal and longitudinal localized surface plasmon resonances (LSPRs) of Au nanostructures engineered by electrodeposition in anodic aluminium oxide (AAO) templates [45]. Figure 6(b) shows the experimental extinction spectra of Au nanostructures for various incidence angles. With the incident electric field perpendicular to the nanostructure axis, the extinction spectrum gives rise to a single transversal LSP peak at 520 nm. At oblique incidence, the incident electric field, which includes both s-polarized and p-polarized components, exhibits two resonance peaks centered at 520 and 650 nm for the transverse and longitudinal resonance modes respectively. The longitudinal resonance is excited more effectively at large incidence angles, on which it depends strongly. This angular sensitivity is a sign of the strong anisotropy of the nanorods in the array (Figure 6(b)). The resonance peak of the longitudinal mode shifts towards shorter wavelengths with increasing incidence angle, while the angular dispersion depends on the coupling strength between nanorods [6]. Moreover, the resonance modes depend strongly both on the rod aspect ratio and on the distance between the nanorods in the array.
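The contrast between the size-insensitive sphere resonance and the aspect-ratio-tunable rod resonance follows from quasistatic (Gans) theory: the longitudinal resonance condition generalizes the sphere condition to eps_m = eps_d(1 - 1/L), with L the depolarization factor of the spheroid. A minimal sketch under assumed parameters (a lossless Drude metal with illustrative silver-like values eps_inf = 5, wp = 9 eV, in a water-like environment), not fitted to the measurements cited above:

```python
import numpy as np

def depolarization_factor(aspect_ratio):
    """Longitudinal depolarization factor L of a prolate spheroid
    (L = 1/3 for a sphere; L decreases as the rod gets longer)."""
    if aspect_ratio == 1.0:
        return 1.0 / 3.0
    e = np.sqrt(1.0 - aspect_ratio**-2)  # eccentricity
    return (1 - e**2) / e**2 * (np.log((1 + e) / (1 - e)) / (2 * e) - 1)

def resonance_wl(aspect_ratio, eps_d=1.77, eps_inf=5.0, wp_ev=9.0):
    """Quasistatic longitudinal resonance: eps_m = eps_d * (1 - 1/L),
    with Drude metal eps_m(w) = eps_inf - wp^2 / w^2 (losses ignored)."""
    L = depolarization_factor(aspect_ratio)
    eps_target = eps_d * (1.0 - 1.0 / L)
    w = wp_ev / np.sqrt(eps_inf - eps_target)  # solve eps_m(w) = eps_target
    return 1239.84 / w                         # photon energy (eV) -> nm

for ar in (1.0, 2.0, 3.0, 4.0):
    print(f"aspect ratio {ar}: longitudinal resonance ~ {resonance_wl(ar):.0f} nm")
# The longitudinal mode redshifts with aspect ratio, as in Figure 3.
```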
An increase in the nanorod aspect ratio splits the resonance into two resonance frequencies: the transverse mode undergoes a blueshift, moving towards the higher-frequency region, while the longitudinal mode undergoes a redshift, moving towards the lower-frequency region. (Figure 6 caption: (b) Zero-order optical extinction spectra for different incidence angles. The nanorod length is 300 nm, the diameter is 30 nm, and the interrod distance is about 100 nm. (c) Zero-order optical extinction spectra of Au nanorods in AAO as a function of rod aspect ratio. The nanorod length is 400 nm and the interrod distance is about 100 nm. The plots are labeled according to the nanorod aspect ratio, corresponding to diameters from about 15 to 30 nm. Reproduced with permission from Opt. Express 16(10), 7460-7470 (2008), OSA [45].) Dipolar and Multipolar Resonance Modes Dipolar and multipolar localized surface plasmon resonance modes depend on nanostructure size. For instance, spherical nanoparticles of 5-50 nm diameter correspond mainly to dipolar resonance, as the conduction electrons in the metal are in phase with the incident electromagnetic field. However, when the dimensions become long enough, multipolar resonance modes can be excited as a result of phase retardation of the applied field inside the material [46]. For example, small and large nanorods display dipolar and multipolar resonances respectively [47][48][49]. Fano Resonances: For some systems the amplitude of an oscillator increases up to its maximum when its frequency is in phase with the driving force, while for other systems the opposite phenomenon can occur under certain resonance conditions. Consider a system of weakly coupled harmonic oscillators and an external applied force; there will then be two resonances near the eigenfrequencies ω1 and ω2 of the oscillators [51]. A standard enhanced resonance exists near eigenfrequency ω-, while another, unusually sharp peak resonance is at eigenfrequency ω+. The first enhanced resonance is described by a symmetric Lorentzian profile known as a Breit-Wigner resonance, while the second, unusual resonance is characterized by an asymmetric profile. In 1961, Ugo Fano discovered that the Fano resonance exhibits a distinctly asymmetric shape resulting from the constructive and destructive interference between narrow and broad discrete resonances [52]. Due to the destructive interference between the oscillation driven by the external force and that driven by the second oscillator, the amplitude of the first oscillator reduces to zero. When the coupled-oscillator system is at the resonance of the second oscillator, there are essentially two forces acting on the first oscillator, which are out of phase and cancel each other. This phenomenon, resonant destructive interference, describes the basic property of the Fano resonance [51]. Field Enhancement through Surface Plasmons The near-field intensity is strongly enhanced by the LSP resonance near the interface between metals and dielectric materials, and the enhancement depends mainly on the shape and size of the metal nanostructures. Metal nanostructures such as nanorods, nanotips and nanogaps show strong near-field enhancement effects. Free charge carriers are displaced by the applied external electric field of the propagating light. These separated charge carriers then introduce an additional field, which oscillates at the same frequency as the external field. As a result, an extremely strong field develops near the interface of the nanostructures [54].
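The coupled-oscillator picture of the Fano resonance described above is easy to reproduce numerically. The sketch below computes the steady-state amplitude of the driven ("bright") oscillator weakly coupled to a narrow ("dark") one; all parameter values are illustrative choices, not taken from refs. [51,52]. The response shows a broad Breit-Wigner peak near one eigenfrequency and, near the other, a sharp asymmetric Fano profile with a near-zero where the two driving contributions interfere destructively.

```python
import numpy as np

# Driven oscillator 1 (bright), coupled to undriven oscillator 2 (dark):
# x1'' + g1 x1' + w1^2 x1 + k x2 = F exp(-i w t)
# x2'' + g2 x2' + w2^2 x2 + k x1 = 0
w1, w2 = 1.0, 1.4      # eigenfrequencies (illustrative)
g1, g2 = 0.20, 0.01    # dampings: bright mode broad, dark mode narrow
k, F = 0.30, 1.0       # coupling and drive strength

w = np.linspace(0.5, 2.0, 2000)
D1 = w1**2 - w**2 - 1j * g1 * w
D2 = w2**2 - w**2 - 1j * g2 * w
x1 = F * D2 / (D1 * D2 - k**2)   # steady-state response of oscillator 1

# |x1| nearly vanishes where D2 ~ 0 (w ~ w2): resonant destructive
# interference, flanked by the sharp asymmetric Fano peak.
i_min = np.argmin(np.abs(x1))
print(f"response minimum at w = {w[i_min]:.3f} (dark-mode frequency w2 = {w2})")
```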
Near-field enhancement effects are of great interest in applications such as surface-enhanced Raman spectroscopy (SERS) [55][56][57], nonlinear optics [58][59][60][61] and nanophotonics [62][63][64]. Transmission Enhancement Holes with sizes smaller than the wavelength of the incident light reveal distinctive optical properties in an opaque metal film. These holes strongly enhance the transmission of light; these fascinating effects take place due to the interaction of the light with electronic resonances at the metal surfaces [65]. The output surface of the metal nanoholes acts as a new point source for the light propagating through them. These transmission-enhancement phenomena through tiny holes are of great importance in applications ranging from subwavelength optics, nanophotonics, optoelectronics and sensing to biophysics [6]. For a perfect conductor these phenomena are reversed. Consider a single hole milled in a free-standing, infinitely thin Ag film. The transmission efficiency of normally incident light can be approximately expressed as [66] T = (64/27π²)(kr)⁴, where the propagation constant k = 2π/λ, and r and λ are the hole radius and the wavelength of the incident light, respectively. T is proportional to (r/λ)⁴, which indicates that very little light is transmitted through a hole that is very small compared with the wavelength. Nanoslits Optical transmission can also be enhanced through metal nanoslits, just as through metal nanoholes. Garcia-Vidal et al. theoretically and experimentally demonstrated strongly enhanced optical transmission through a single nanoslit flanked by a finite array of grooves made on a thick Ag film [67]. A single nanoslit of width 40 nm, surrounded by ±5 grooves of length 10 μm (Figs. 10(a) and 10(b)), was fabricated by a focused-ion-beam technique. A wide transmission maximum was revealed at around 725 nm (Figure 10(c)). This maximum corresponds to transmission through the nanoslit with an enhancement factor of about 6. The transmission peak of groove-surrounded nanoslits with periods ranging from 500 to 800 nm and nominal depth of 40 nm shifts to longer wavelengths with enlarged period; the peak is strongest at 650 nm. Consequently, the peak appears at the wavelength agreeing with the nanoslit waveguide mode position. For transmission enhancement the optimum groove depth is 40 nm. Garcia-Vidal et al. suggested three main ways to enhance optical transmission: groove cavity mode excitation (depth control of the grooves), in-phase groove re-emission (period control of the groove array) and the nanoslit waveguide mode (thickness control of the metal film). Two orders of magnitude of transmission enhancement can be attained by adjusting these geometrical parameters [6]. Surface Plasmon Resonance Spectroscopy The optical setup for a hydrogel optical waveguide spectroscopy (HOWS) biosensor [68] is depicted in Figure 11. A He-Ne laser with a power of 2 mW at a wavelength of λ = 633 nm is transmitted through a polarizer, which polarizes the beam to the transverse magnetic (TM) mode, and is passed to a high-refractive-index (n_p = 1.845) 90° prism and through a sensor chip. The sensor chip consists of a glass slide with a PNIPAAm hydrogel film; the glass is coated with a gold layer of thickness between 37 and 45 nm. A flow cell inserted on the chip has a volume of 10 μL, a length of 10 mm and a depth of 0.1 mm. The flow rate of the liquid sample over the chip is 200 μL min^-1. For the present analysis, 45 nm gold and a thiol self-assembled monolayer (SAM) were used for the sensor.
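A quick numerical check of the Bethe expression given above shows just how weak the transmission through a deep-subwavelength hole is; the radii below are arbitrary illustrative values, and the formula strictly applies to a hole in a thin, perfectly conducting screen.

```python
import numpy as np

def bethe_transmission(radius_nm, wavelength_nm):
    """Bethe aperture transmission T = 64/(27*pi^2) * (k r)^4, valid
    for a hole much smaller than the wavelength in a thin screen."""
    kr = 2 * np.pi * radius_nm / wavelength_nm
    return 64.0 / (27.0 * np.pi**2) * kr**4

for r in (20, 50, 100):  # hole radii in nm, illustrative
    print(f"r = {r} nm: T = {bethe_transmission(r, 633):.2e}")
# T scales as (r/lambda)^4: halving the radius cuts transmission 16-fold.
```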
To control the angle of incidence θ of the laser beam, the whole setup was mounted on a rotating stage. The laser beam reflected from the sensor was measured by a photodetector. Reflectivity is determined as the ratio of two light intensities: the light reflected from a sensor chip and that from a blank glass slide. The reflectivity variation σ(R) ranges between 7×10^-5 and 2×10^-4. The evanescent wave is first internally reflected at the sensor surface and then penetrates through the gold layer, where it interacts with the surface plasmon (SP) and hydrogel waveguide (HW) modes. This evanescent wave propagates along the metal interface. The SP and HW modes excite two distinct dips, as can be seen in the angular reflectivity spectrum. The angles θ associated with these dips can be related to the propagation constant β of the reflected laser beam component as β = (2π/λ) n_p sin θ. Principle of plasmon-based concentrators Scientists are exploring nanostructures to effectively concentrate light in nanoscale devices. The structures can be of two types: resonant and non-resonant. In resonant structures, the electric field associated with the light wave applies a force on the negatively charged electrons inside the metal, and under this applied force the electrons oscillate, creating surface plasmons inside the material. At a particular frequency this oscillation is resonant, producing a huge charge displacement at the metal-dielectric interface. The resonant characteristics of quasistatic and retardation-based structures will be discussed first, followed by non-resonant characteristics. When the size a of a nanostructure is much smaller than the free-space wavelength, i.e. the λ/a ratio is very high, the nanoparticle can be called quasistatic: a quasistatic nanoparticle experiences a uniform electric field everywhere at any instant of time. With the help of a potential function, one can determine the resonance characteristics of a given geometry. Spherical nanoparticles are in resonance at wavelengths where ε_m = -2ε_d, where ε_m and ε_d are the metal and dielectric permittivities respectively. Since quasistatic resonance frequencies are independent of particle size, the resonant frequency of nanoparticles can be tuned over a wide frequency spectrum by changing the metal, the shape or the dielectric environment [69] (Figure 12a, b). The frequency depends on the energies in the metal and its surrounding dielectric, and is in resonance when they are equal. The quality factor Q at the resonant frequency depends on metal losses and does not change with geometry. Combined subwavelength particles can enhance the field by at least a couple of orders of magnitude more than a single subwavelength particle; a single subwavelength particle offers enhancements in the range of 10-100. When the nanostructure dimensions approach the wavelength of the externally applied light, i.e. the wavelength of the external light is comparable to or even smaller than the nanostructure, the system is considered retarded and the effect is called the retardation effect. The retardation principle is based on scaled radio-frequency antenna design concepts. Truncated SPP waveguides in wavelength-scaled structures are metal nanowires [70,71] or strips [72]. Surface plasmon polaritons oscillate back and forth inside the metal, creating a standing wave in the metal. This back-and-forth oscillation of free electrons in the metal constitutes a Fabry-Perot resonator for SPPs.
The resonant length of this Fabry-Perot structure equals nλ_SPP/2, where n is an integer (n = 1 for the first resonance mode) and λ_SPP is the SPP wavelength in the resonator (Figure 12c-e). (Figure 12 caption: Resonance characteristics of different geometries and materials. a, Effects of geometry and materials on electrostatic resonances of deep-subwavelength metal nanostructures. As the surrounding dielectric constant increases, the resonance of a spherical nanoparticle (shown on the left) redshifts. As the aspect ratio of a nanorod is increased, the longitudinal resonance is redshifted (shown on the right). Minus (plus) signs indicate regions of high (low) electron density. b, Resonant condition (ε_m/ε_d) as a function of the aspect-ratio parameter L for quasistatic spheroidal particles, shown in the inset. The major and minor axes of the spheroid are represented by a and b, respectively. c-e, Retardation-based strip resonators. Field-intensity distributions, normalized to the incident intensity, for the lowest odd-order resonances of 30-nm-thick silver strips that are top-illuminated with light at a wavelength of λ_0 = 550 nm. E_0 denotes the incident electric-field strength. Three different resonant antennas are shown that measure approximately one (c), three (d) and five (e) times λ_SPP/2 [69]. Reproduced with permission from Nature Materials, 9, 193-204 (2010), Nature Publishing Group.) As the structure size of this resonator is very small, dielectric lenses are used to efficiently couple free-space light to the structure of interest. Plasmonic structures can collect light from areas that are sometimes considerably larger than the wavelength of light. Non-resonant effects can also be utilized to concentrate light inside materials. Various structured nanodevices, such as plasmonic tapers (metal cones or wedges), can offer broadband, non-resonant enhancements. As a wave propagates, the group velocity in these structures decreases towards the apex while, at the same time, the wave vector increases towards the apex. Hence, if an SPP is launched at the base of such a structure, the structure will experience a strong field at its tip. Photovoltaic Devices For complete absorption of light, a photovoltaic device needs to be thick enough. Figure 13 shows the AM1.5 solar spectrum and the light absorbed in a single pass through a 2 μm thick crystalline Si film. The figure shows that in the 600-1,100 nm spectral range, light absorption is considerably low. Traditional wafer-based Si solar cells, however, are 180-300 μm thick. For high efficiency, the diffusion length of minority carriers has to be several times larger than the actual material thickness. The physical thickness of a solar cell can be reduced in three ways. First, subwavelength nanoparticles interact with the propagating sunlight, and the semiconductor thin film absorbs these electromagnetic waves completely by folding them several times before they are absorbed (Figure 14a). (Figure 13 caption: Absorption spectrum of a solar cell. AM1.5 solar spectrum, together with a graph that indicates the solar energy absorbed in a 2-μm-thick crystalline Si film (assuming single-pass absorption and no reflection). Clearly, a large fraction of the incident light in the spectral range 600-1,100 nm is not absorbed in a thin crystalline Si solar cell [73]. Reproduced with permission from Nature Mater., 9, 205-213 (2010), Nature Publishing Group.)
Second, subwavelength nanoparticles can be placed at the metal-semiconductor interface; interacting with light, these nanoparticles excite a plasmonic near field and increase the solar cell's effective absorption (Figure 14b). Third, a corrugated metallic film can be installed at the back of the solar cell. Due to the refractive index mismatch between metal and semiconductor, surface plasmon polariton (SPP) modes are generated at their interface. Absorbed sunlight can couple to these SPP modes as well as to the guided modes of the semiconductor slab (Figure 14c). The physical thickness of a photovoltaic cell can be reduced considerably by applying these three techniques, in the range of 10- to 100-fold, while the optical absorption remains constant. For nanoparticles embedded inside a homogeneous medium, the forward and reverse waves propagate symmetrically from the nanoparticles. But when these nanoparticles reside at the interface between metal and semiconductor, light scatters preferentially into the medium of higher permittivity. When light is scattered beyond the critical angle, total internal reflection takes place and the light remains trapped. The Si-air interface has a critical angle of 16°. Due to the corrugated metallic surface at the back of the photovoltaic cell, the light is reflected back towards the surface, interacts again with the nanoparticles and is reflected again towards the corrugated back surface. Thus light bounces back and forth several times before being absorbed in the semiconductor film. The absorption efficiency depends on the metal nanoparticles' shape and size, and it has been shown that smaller nanoparticles can increase light absorption owing to their increased cross-sectional areas [74]. Optical Antenna The optical antenna, similar to microwave and radiowave antennas, is an interesting concept to scientists: it manipulates optical radiation at the subwavelength scale. Optical antennas can be used to enhance the efficiency of photodetection [75,76], light emission [77,78], sensing [79], heat transfer [80,81] and spectroscopy [82]. Conventional optics handles optical propagation using elements like mirrors, lenses, fibres and diffractive elements, while an optical antenna, like its radiowave and microwave counterparts, deals with electromagnetic fields at the subwavelength scale. An optical antenna converts optical radiation into localized energy, and vice versa. Fabrication accuracy for optical antennas must be down to a few nanometers. So far, optical antennas have been fabricated by top-down nanofabrication techniques such as focused ion beam milling [83,84] or electron-beam lithography [85,86], and also by bottom-up self-assembly schemes [87,88]. The size of a receiver or transducer is generally much smaller than the radiation wavelength λ, normally of the order of λ/100, and at optical frequencies an antenna requires dimensions of ~5 nm [89]. Optical antennas interface with both quantum systems and pure photon sources, which in turn introduces new physics such as the breaking of selection rules and strong coupling. Directed emission and reception concepts can now be imposed on photon emitters. Photodetectors White J. S. et al. explored a deep-subwavelength-volume nanoplasmonic structure: a single isolated slit in a metallic film on an absorbing substrate [90]. They carried out their analysis based on finite-difference frequency-domain (FDFD) simulations [91] of slits generated in an Al film on a Si substrate. Figure 16(a) shows the energy density distribution of a slit.
The slit dimensions are 50 nm wide and 100 nm long. A plane wave of wavelength 633 nm excites the structure from the top, with polarization along the x direction. Strong energy concentration is observed both below the diffraction limit and within the semiconductor. White J. S. et al. attributed this enhanced energy density below the slit to resonance phenomena. They identified this resonance as a surface plasmon polariton (SPP) mode supported by the slit (see Figure 16(b)). The resonant geometry works as a truncated metal-dielectric-metal (MDM) plasmonic waveguide [92]. A strong reflection is observed from the truncated edge terminal, and the cavity is termed a resonant cavity. Their proposed geometry can offer absorption enhancements of up to 352% for λ = 633 nm and is quite amenable to fabrication. Using commercially available FDFD simulations, they calculated the absorption enhancement in a region of dimensions 1.5w × 50 nm below the slit, where w is the slit width (see Figure 17(b)). Figure 17(a) shows the absorption spectrum as a function of slit length and slit width, normalized to bare silicon without any metallic structure on its back; the absorption enhancement decreases by 34.8% when compared with bare silicon carrying a perfect antireflection coating. White J. S. et al. [90] investigated the scattering coefficients of the metal-dielectric-metal (MDM) system (Figure 17(b)) with a Fabry-Perot model. A plane wave with electric field polarized along the x direction strikes the top surface, in a medium with permittivity ε_1. A cavity (ε_2) of length L and width w is formed in the metal film (ε_M). The plane wave couples to the plasmon mode supported by the cavity with a transmission coefficient t_12. The plane wave also couples to surface plasmon polaritons at the ε_1/ε_M interface, but these have very little effect on the isolated cavity and can be ignored. The incident electromagnetic waves bounce back and forth several times at the top and bottom interfaces, with complex reflection coefficients r_21 and r_23. The propagating plasmon mode outcouples to induce absorption, described by a coupling coefficient k_23: the ratio of the absorption in the 1.5w × 50 nm region to the magnitude of the propagating electric field. The scattering parameters (transmission and reflection spectra) as well as the coupling coefficients can be calculated from FDFD simulations. They discovered a width-independent first-order resonance at L ≈ 100 nm, and the resonance length decreases with decreasing slit width as k_MDM increases. They also found that the lowest-order resonance length is L_res ≈ λ_MDM/5. If losses in the aluminum film could be eliminated, the absorption could be increased by 19% (w = 100 nm) to 82% (w = 30 nm). Metamaterial Metamaterials are artificial materials engineered to achieve specific electric and magnetic characteristics not present in natural materials. Exciting optical characteristics can be tuned in these man-made materials. J. B. Pendry et al. reviewed gain-enhanced plasmonic nanostructures, such as metamaterial emitters, nanolasers, spasers and so on [93]. They dealt with the problems and limitations associated with these structures and resolved them both analytically and experimentally. They later explained the experimental success associated with loss-compensated negative-index and double-negative metamaterials. These materials are also termed left-handed materials.
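Returning to the Fabry-Perot description of the slit resonator above: the field circulating in the truncated MDM cavity builds up as a geometric series in the round-trip factor r_21 r_23 exp(2iβL). The sketch below scans the cavity length for resonances; all coefficient values are invented for illustration and are not the FDFD-extracted parameters of ref. [90].

```python
import numpy as np

# Illustrative (assumed) Fabry-Perot parameters for a truncated MDM cavity
t12 = 0.6                                   # in-coupling amplitude at the top
r21, r23 = 0.4, 0.7 * np.exp(1j * np.pi)    # complex reflection coefficients
lam_mdm = 200.0                             # effective MDM wavelength, nm
beta = 2 * np.pi / lam_mdm + 1j * 0.002     # propagation constant (with loss)

def cavity_intensity(L_nm):
    """|field|^2 at the cavity bottom from the resonant round-trip buildup."""
    rt = r21 * r23 * np.exp(2j * beta * L_nm)          # one round trip
    return np.abs(t12 * np.exp(1j * beta * L_nm) / (1 - rt))**2

L = np.linspace(20, 400, 1000)
I = cavity_intensity(L)
print(f"strongest resonance near L = {L[np.argmax(I)]:.0f} nm")
# Maxima recur whenever the round-trip phase (2*beta*L plus the
# reflection phases) is a multiple of 2*pi.
```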
The effective parameters of these materials, specifically the effective permittivity and effective permeability, can be controlled over a wide frequency range. Metamaterial research is motivated by areas such as high-resolution imaging [94], invisibility cloaks [95], small antennas [96] and quantum levitation [97]. In the last couple of decades, different types of metamaterials have been introduced by numerous researchers globally. All of these metamaterials are operational in the RF and optical frequency ranges. However, these materials are lossy in the visible band, so researchers are still working to fabricate low-loss metamaterials for the visible and higher-order spectrum. One of the key measures of metamaterial performance is the figure of merit (FOM), defined as FOM = Re{n}/Im{n}. The higher its value, the better the performance, the lower the loss and the easier the fabrication. (Figure caption from [93]: ... shades represent high (low) areas of inversion; right, light colors represent the local field enhancement. h_m, h_c and h_d denote the height of the metal, cladding and dielectric layers, respectively; a_x and a_y are the widths of the rectangular holes in the x and y directions; p is the periodicity. The incident optical pump and probe pulses are indicated by red and blue waves. b, Real and imaginary parts of the extracted effective refractive indices n for different pump amplitudes. The peak electric-field amplitude of the pump increases in steps of 0.5 kV cm^-1, from no pumping (cyan) to a maximum of 2.0 kV cm^-1 (black). The inset shows the real and imaginary parts of the effective permeability (black and red lines, respectively) and the corresponding results of Kramers-Kronig calculations (dotted lines) for the highest peak electric-field amplitude of 2.0 kV cm^-1. c, The figures of merit (FOM = Re{n}/Im{n}) for the same pumping amplitudes as in b. d, Rate dynamics during probing in the amplifying regime of the metamaterial: the net-gain rate Γ_g (blue), dissipative-loss rate Γ_f (green), outflux/radiative-loss rate Λ (red) and energy-decay rate Γ_t (black). e, Dynamics of the probe-pulse intensity I_s (black) and energy U inside the metamaterial (red) in the regimes of continuous excitation (CE) and free decay (FD) for the active optical metamaterial of d. Reproduced with permission from Nature 11, 573-584 (2012), Nature Publishing Group.) Modulators and Directional-Coupler Switches For rapid light routing and switching in optical communication, high-speed and power-efficient ICs have been in increasing demand for the last couple of decades. In these devices light passes through a guiding waveguide. The waveguide is made of a core and a cladding, where the core has a higher refractive index than the cladding. Total internal reflection takes place in the core material, and thus light propagates through the core. Waveguide modes can be controlled with an external electric field via electro-optic (EO) effects and with a magnetic field via magneto-optic (MO) effects. The positioning of the electrodes for modulators or switches needs special care. A thin metal nanostripe embedded inside a dielectric can support propagation of a long-range surface plasmon polariton (LRSPP) mode, but Thomas Nikolajsen et al. [98] showed rigorously by experiment that such a stripe can also carry electrical signals that influence the LRSPP mode. They were the first to demonstrate electrically controlled plasmonic components, opening new areas of research interest in photonic modulators and switches.
They detailed the design, fabrication and characterization of thermo-optic Mach-Zehnder interferometric modulators (MZIMs) and directional-coupler switches (DCSs). These devices require low driving powers (<10 mW for modulators and <100 mW for switches), offer high extinction ratios (>30 dB) and have moderate response times of ~1 ms. The operation of a thermo-optic MZIM is based on changing the LRSPP propagation constant in a heated arm, resulting in a phase difference between the two LRSPP modes that interfere in the output Y-junction. The characteristic curves presented realize the feature associated with LRSPPs that allows optical power to be controlled and guided through the same material. The thermo-optic effect depends on the type of material being used, and proper material choice will further enhance system efficiency. The characteristics presented here can be improved, and the components could be made attractive for the communications industry. This design concept can also be used for other designs, like Y- and X-junction based DCSs. LRSPP components are fabricated with true planar processing technology, which simplifies the development process, makes large-scale integration possible and allows fabrication alongside photonic devices. Plasmon-based MDM waveguides can be manufactured at the nanoscale. High-confinement modes in such cavities are strongly limited by rapidly attenuating SPP waves. Materials used today in all-optical applications have modulation amplitudes of ~3 dB and transmission losses of about 3 dB for IC applications. Those materials are used in all-optical switching configurations [99,100] and for all-optical control devices [101]. Researchers have also demonstrated a modulator application for the terahertz frequency spectrum [102]. Nanoplasmons in Chemical and Thermal Reactions Nanoplasmons have a profound effect on both chemical and thermal reactions. The enhanced electromagnetic field induced by nanoplasmons increases chemical and thermal reaction rates. Abraham Nitzan and L. E. Brus [103] investigated photochemical reactions enhanced by this electromagnetic field. They described and investigated, both experimentally and numerically, a simple theory of ultraviolet, visible and infrared photochemical enhancement near rough dielectric and metallic surfaces. The noble metals Ag, Au and Cu, due to their low plasma frequencies, are the most efficient enhancers. Nitzan and Brus observed the same characteristics with alkali metals as with the noble metals. Silver is the best enhancing substrate found to date, because of its narrow, pronounced plasmon resonance. Chemical and thermal processes can be controlled by the temperature induced in nanostructured particles. Heat induced in nanoplasmons has various applications, such as the detection and killing of cancer cells [104], drug delivery [105], photothermal melting of DNA [106,107], growth of semiconductor nanowires and carbon nanotubes [108], nanofluidics and chemical separation [109], polymer surface modification [110] and phase-change memory [111,112]. Due to their large cross-sectional areas, metallic nanoparticles are effective sources of heat generation. The absorption and scattering of light can be manipulated by changing the shape, size and dielectric environment [113]. Different methods have been developed to measure the temperature of these nanoparticles [114].
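The thermo-optic Mach-Zehnder operation described above reduces to a simple transfer function: heating one arm by ΔT changes its effective index through dn/dT, and the two arms interfere at the output. The sketch below uses illustrative polymer-like values for dn/dT and the arm length; these numbers are assumptions, not the device parameters of ref. [98].

```python
import numpy as np

LAM = 1.55e-6    # operating wavelength, m (telecom band, illustrative)
DN_DT = -1e-4    # thermo-optic coefficient of the cladding, 1/K (assumed)
L_ARM = 5e-3     # heated arm length, m (assumed)

def mzim_transmission(delta_T):
    """Output power fraction of a balanced Mach-Zehnder interferometer:
    P_out/P_in = cos^2(delta_phi / 2), with the phase from one heated arm."""
    delta_phi = 2 * np.pi / LAM * DN_DT * delta_T * L_ARM
    return np.cos(delta_phi / 2) ** 2

for dT in (0, 1, 2, 3):  # heater-induced temperature rise, K
    print(f"dT = {dT} K -> transmission = {mzim_transmission(dT):.2f}")
# Full extinction (a pi phase shift) needs |dn/dT| * dT * L = lambda / 2.
```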
Conclusion Nanoplasmonics has become one of the most exciting research areas due to the ability to manipulate free-electron oscillations at metal-dielectric interfaces in various fields and geometric configurations. These oscillations carry us to the forefront of modern technology. The ability to guide and concentrate light in deeply subwavelength regions of a nanostructure is the key interest of plasmonic devices. A conventional solar cell is much thicker than a plasmon-based solar cell, which combines a high optical path length with a reduced physical thickness. As the physical thickness is reduced significantly, plasmonic solar cells can be manufactured much more cheaply. Modern high-resolution cameras use plasmon technology to provide vivid pictures of objects. In the recent past the communication sector used very large antennas for transmitting and receiving information; with the help of nanoplasmonic concepts, researchers are now capable of producing extremely small antennas. Plasmons act as a primary agent that can change chemical and thermal reaction rates drastically. Techniques have been developed to detect cancer cells and then kill them with thermal plasmonic treatment, and infected cells can be treated with the help of plasmon-assisted drug delivery. In the fabrication of semiconductor nanowires and carbon nanotubes, nanoplasmons play a key role. In the 1960s, moviegoers watched actors appear on screen in disguise; now researchers have built invisibility cloaks based on nanoplasmons that can make things invisible. Despite the many remarkable properties of nanoplasmonic devices, dissipative losses in all conventional optical devices are considerably high. Scientists have shown that engineered metamaterials can reduce this dissipative loss significantly: the larger the volume of the engineered material, the lower the dissipative loss.
8,405.6
2013-08-08T00:00:00.000
[ "Physics" ]
Probing many-body localization in a disordered quantum dimer model on the honeycomb lattice We numerically study the possibility of a many-body localization transition in a disordered quantum dimer model on the honeycomb lattice. By using the peculiar constraints of this model and state-of-the-art exact diagonalization and time evolution methods, we probe both eigenstate and dynamical properties and conclude on the existence of a localization transition on the available time and length scales (system sizes of up to N=108 sites). We critically discuss these results and their implications. I. INTRODUCTION Localization in disordered, interacting quantum systems 1,2 is a topic that has recently received wide attention due to its very peculiar phenomenology [3][4][5][6], the foundational issues about quantum integrability and ergodicity involved 7,8, and the increased precision and control of experimental realizations 9,10. Systems with a many-body localization (MBL) transition typically exhibit two phases: one at low disorder, which obeys the eigenstate thermalization hypothesis (ETH), and one at high disorder, which exhibits no transport, no thermalization [11][12][13][14] and emergent integrability due to an extensive number of quasi-local integrals of motion [15][16][17][18][19]. Furthermore, localized states have low entanglement at any energy and obey an area law, a property usually valid for ground states only 20,21. Finally, localization in interacting systems is characterized by the very slow spreading of information, namely of entanglement [22][23][24], and the total absence of transport for local observables 1,2. All these features have contributed to making MBL a compelling physical phenomenon, including with respect to quantum information processing protocols 20,25-27. In the context of the study of MBL transitions, a wide range of results outlining the phenomenology described above has been produced for one-dimensional (1D) systems [3][4][5][6]. Remarkably, a proof of the existence of the MBL transition has been obtained for a 1D quantum Ising model with a transverse field 28,29. In higher dimensions, however, no such proof exists. One generally expects that in higher dimensions delocalization is favoured due to the increase in channels for the delocalizing terms, similarly to the phenomenology of Anderson localization in higher dimensions. More specifically, general arguments based on the existence and size scaling of thermalizing bubbles support the absence of localization at large enough times 30,31, even though no rigorous proof has been obtained either. A number of results on 2D systems have notably been presented. Experimental results obtained in cold-atom setups interestingly show absence of dynamics and localization at high disorder 10,32. At present, this experimental evidence is arguably of higher quality than the analytical and numerical modeling of MBL in 2D. Numerically, a number of approaches have been explored in 2D lattice models, using both unbiased and biased methods, and showing indications of a localized phase [33][34][35][36][37][38]. Other simulations conclude in favor of the absence of MBL 39. However, the main limit of numerical approaches is the small system sizes and/or time scales that are reachable in the computations. The size of the Hilbert space, and thus of the quantum problem, grows exponentially with the number of particles N in the system, while the physical length scale of the sample grows as the square root of N.
For unbiased methods this is an especially strong constraint, effectively limiting the analysis to systems of up to around 20 spins-1/2. While in one dimension several different lattice sizes can fulfill this requirement, thus allowing in principle finite-size scaling to be performed, this is no longer the case in two dimensions, where the number of accessible system sizes is greatly limited. While larger system sizes can be reached using methods geared towards capturing properties of an MBL phase [33][34][35][40][41][42], these methods are not unbiased and by construction will miss the ergodic phase or the phase transition. Here, we aim to investigate an MBL transition in a specific system up to a real-space size as large as possible and with unbiased methods. We do this by considering a highly constrained model and state-of-the-art numerically exact methods 43. Specifically, we consider a disordered quantum dimer model (QDM) on a honeycomb lattice, where each lattice link is either free or occupied by a dimer, with the constraint that each lattice site is touched by one and only one dimer [44][45][46]. An immediate consequence of this is that the dynamics of such a model is very constrained: single-dimer moves are not allowed, and the simplest move involves a hexagonal plaquette. Moreover, this constraint also automatically encodes strong interactions, which for the honeycomb lattice already imply long-range correlations in the statistical ensemble of dimer coverings. The interplay between a constrained dynamics, which favors slow dynamics and localization 47, and the strong interactions, which favor delocalization, creates an ideal situation for an MBL transition to exist. Finally, we note that such models are based on Hilbert spaces that, due to the constraints, have considerably lower dimension compared to spin systems: for N spins-1/2, the Hilbert space size is 2^N, while it scales only as ≈1.175^N for a dimer system on an N-site honeycomb lattice 44, giving an obvious numerical advantage for large system sizes. A previous work has analyzed a similar disordered QDM on a square lattice 48. Here, we substantially push forward this analysis, almost doubling the maximum system size reached, by turning to the honeycomb lattice instead. The article is structured as follows. In Section II we detail the model Hamiltonian, the symmetry sectors and the lattices used, as well as the procedures used to obtain the numerical results. Such results are outlined in Section III, first considering observables within exact mid-spectrum eigenstates and, secondly, the dynamical properties obtained with Krylov time evolution. Finally, we provide conclusions in Section IV. In the appendix we discuss in detail the lattice clusters used in the numerical analysis (Appendix A), further energy-resolved quantities (Appendix B) and comparisons with the entanglement properties of specific states (Appendix C). II. MODEL We consider the following quantum dimer model on the honeycomb lattice 45,46,49 with a random potential: H = Σ_p [ -( |α_p⟩⟨β_p| + |β_p⟩⟨α_p| ) + v_p ( |α_p⟩⟨α_p| + |β_p⟩⟨β_p| ) ], where the sum runs over the hexagonal plaquettes p, and |α_p⟩ and |β_p⟩ denote the two dimerizations of a flippable hexagon. The first term, a hexagon "flip", is a kinetic term. The second term is a disordered potential on each flippable hexagon; the v_p are drawn from a uniform distribution in [-V, V]. We construct lattices with N = 42, 54, 72, 78, 96 and 108 sites; in Fig. 1(a) we show the N = 72 lattice, and we refer the reader to the Appendix for more details on the other clusters.
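The scalings quoted above make the numerical advantage of the constrained Hilbert space concrete. A minimal sketch comparing 2^N with the ≈1.175^N dimer-covering growth for the cluster sizes used here (the dimer count is the asymptotic estimate from the text, not the exact zero-winding sector dimension of Table I):

```python
# Compare Hilbert-space growth: spins-1/2 (2^N) versus honeycomb
# dimer coverings (~1.175^N, asymptotic estimate).
for N in (42, 54, 72, 78, 96, 108):
    spins = 2.0 ** N
    dimers = 1.175 ** N
    print(f"N = {N:3d}: 2^N = {spins:.2e}, 1.175^N = {dimers:.2e}, "
          f"ratio = {spins / dimers:.1e}")
# At N = 108 the dimer estimate is ~4e7 states versus ~3e32 for spins:
# this is what makes numerically exact methods feasible on 108 sites.
```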
On the honeycomb lattice with periodic boundary conditions, the constraints due to the dimers and to the allowed plaquette moves are such that two conserved quantities, the winding numbers, exist. The winding numbers w_x and w_y are defined as signed sums of the dimer occupations along a line parallel to the x or y axis of the labeled honeycomb lattice (see Fig. 1(c)). Among the sectors with conserved total winding number, we select the one with w_x = w_y = 0, which is the largest one. We remark that, for finite lattices, not all lattice shapes allow the existence of this zero-winding sector; we discard lattice shapes that do not satisfy this requirement 50. Table I displays the number of allowed coverings in the zero-winding sector, which corresponds to the dimension N_H of the Hilbert space. The number of nonzero elements in the matrix is also noted, which, in addition to the matrix size, contributes to limiting the feasibility of the numerical calculations. We perform exact diagonalization on some of these lattices (up to size 78). We use either full diagonalization or shift-invert methods 43 to obtain around 100 eigenstates at the center of the spectrum. We also study the dynamics of nonequilibrium initial states through Krylov subspace time evolution methods for all lattice sizes 51. In all cases, we average over disorder realizations of the random potential (at least 1000 for most system sizes and around 100 for the dynamics on the largest one). III. RESULTS We consider various quantities with known different behaviors in the MBL and ETH phases. We analyze spectral, eigenstate and entanglement properties as well as the dynamics of the system. A. Spectral properties Spectral gap ratio We start by analyzing the spectral properties of the two phases. Specifically, we consider the energy level gap ratio 14: r_i = min(s_i, s_{i+1}) / max(s_i, s_{i+1}), where s_i = E_{i+1} - E_i is the gap between two adjacent eigenvalues. We average in a small window of about 100 eigenstates around the center of the spectrum as well as over disorder realizations. Depending on the level gap statistics, r ≈ 0.39 for a Poisson distribution in the localized phase and r ≈ 0.53 for a Wigner-Dyson distribution 52 corresponding to the ETH phase. In Fig. 2, top panel, we show the value of r as a function of the disorder for various system sizes. It appears that both the localized and ETH phases are captured with the available cluster sizes. The transition value can typically be inferred from where the curves for increasing size cross, as the crossing denotes opposite flows in the system-size scaling in the two phases. We note that here the crossing point has a noticeable drift towards higher V values. In the bottom panel of Fig. 2 we show the probability distributions of the gaps s of the unfolded spectrum for various values of the disorder V, showing excellent agreement with a Poissonian or a Wigner-Dyson distribution (shown in black) for high and low V respectively. For the smallest sizes, N = 42 and N = 54, we additionally computed the gap ratio as a function of the energy density (not just for the middle of the spectrum); see Appendix B. B. Eigenstates Kullback-Leibler divergence for energy-adjacent eigenstates We now consider quantities characterizing eigenstate properties which have been shown to be good indicators of localization. In the localized phase, eigenstates and local observables close in energy are very different in structure, as opposed to the ETH phase.
Thus, we consider the Kullback-Leibler divergence for two consecutive eigenstates |ψ and |ψ in the spectrum, defined as where the sum runs over the N H elements |b i of the Hilbert space basis. We expect KL to approach to KL GOE = 2 (the value obtained for the Gaussian orthogonal ensemble of random matrices) in an ETH phase and to diverge with system size in a localized phase 12 . We show the results for KL in Fig. 3 as a function of the disorder strength V . The limit value KL = 2 is well captured at small disorders V , as well as a crossing point between the N = 54, N = 72 and N = 78 clusters (N = 42 appears to show stronger deviations due to the small size), with some drift due to finite-size scaling, suggesting a localization transition around V ≈ 22 − 25. Eigenstate participation entropy In a similar manner, we consider the participation entropy of the eigenstates, which gives information about localization in the Hilbert space 12,53 . It is defined as For a state which is localized in the Hilbert space, S p is of O(1). For many-body localized states, a multifractal behavior is expected in this computational basis 53 , with a participation entropy behaving as S p ∝ a ln N H , with a < 1. For extended states in the ETH regime, S p will scale as ln N H , with a = 1. In Fig. 4 we show the participation entropy, rescaled by ln N H (i.e. this ratio is the coefficient a up to higher order corrections), as a function of the disorder V . At low disorder we see that a has a high value which is likely to scale to 1 with increasing size. A different behavior onsets at around V ≈ 20 − 25: the curves for different system sizes join and collapse, suggesting a finite a < 1 asymptotically for disorders larger than this value. Eigenstate imbalance We next consider the imbalance of the eigenstates with respect to a specific configuration where the state used as a reference is chosen as the basis element of the so-called star configuration displayed in Fig. 1b. We define the (complex) imbalance as The phases φ p assume three possible values: 0, 2 3 π and − 2 3 π, depending on the dimer configuration on the plaquette p (see Fig. 1b). With this definition, the imbalance of the reference basis state in Fig. 1b has I = 1 and maximum amplitude |I| 2 = 1. Delocalized eigenstates will have a probability distribution for the modulus squared imbalance which is sharply peaked in 0. On the other hand, if states are localized and close to basis states, the imbalance will be peaked around the values corresponding to dimer configurations of the basis states. The probability distribution of the modulus squared of the imbalance is shown in the two panels of Fig. 5 respectively for low (top panel) and high (bottom panel) values of disorder for the largest system size (N = 78). As expected, the imbalance distribution is sharply peaked at 0 for very small values of disorder with an increasing variance for higher disorders. Between V = 15 and V = 20 the distribution broadens and develops peaks at values |I| 2 > 0 which, at higher disorder (V ≥ 25), are shown to closely correspond to the imbalance of configuration basis states (shown in dashed lines). As for 1D systems, the imbalance is a quantity especially useful for characterizing the dynamical properties of the system in different phases. We will further analyze dynamics of the imbalance after a quench in Sec. III D. 
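The two Hilbert-space diagnostics used above, the Kullback-Leibler divergence between energy-adjacent eigenstates and the participation entropy, reduce to sums over the basis-state probabilities p_i = |⟨b_i|ψ⟩|²; a minimal sketch, assuming these standard definitions:

```python
import numpy as np

def kl_divergence(psi: np.ndarray, psi_next: np.ndarray, eps: float = 1e-30) -> float:
    """KL = sum_i p_i ln(p_i / q_i) for two energy-adjacent eigenvectors;
    expected to approach the GOE value 2 deep in the ETH phase."""
    p = np.abs(psi) ** 2 + eps
    q = np.abs(psi_next) ** 2 + eps
    return float(np.sum(p * np.log(p / q)))

def participation_entropy(psi: np.ndarray, eps: float = 1e-30) -> float:
    """S_p = -sum_i p_i ln p_i, to be compared with ln N_H (fully extended)."""
    p = np.abs(psi) ** 2 + eps
    return float(-np.sum(p * np.log(p)))
```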
Eigenstate dimer bond occupation We finally consider a local observable, the dimer bond occupation, and specifically the probability distribution of where the operator n k acts on the basis vectors |b i as n k |b i = 1 if bond k is occupied in b i and 0 otherwise. In the limit of a uniformly extended state, all three bonds belonging to a site have the same probability to be occupied, i.e. 1/3. As shown in the main panel of Fig. 6, for a delocalized eigenstate, this translates into a probability distribution of O sharply peaked at 1/3 for low disorder values, with an increasing variance for higher disorders. At disorder between V = 15 to V = 20 the distribution becomes bimodal with two peaks at O = 0 and O = 1, meaning that the eigenstates start to resemble some given dimer configurations. In the limit of infinite disorder, where the eigenstates coincide with the configuration states, the distribution is 2/3 δ(0)+1/3 δ(1), given that one bond per lattice site is occupied. In the inset of Fig. 6 the expected behavior is further evidenced by the computation of the integral of the peaks in small intervals near 0 (solid lines) and 1 (dashed lines) respectively; for increasing system size and disorder strength, the peaks approach 2/3 and 1/3 respectively. C. Half-system entanglement entropy Next, we consider the entanglement properties of eigenstates through their von Neumann entanglement entropy where A is a region comprising half of the sample and ρ A = Tr B ρ is the reduced density matrix obtained from an eigenstate by tracing out the complementary region B. The analysis of the entanglement entropy has been especially useful in the study of MBL transitions given the low, area law entanglement of all localized states, to be compared with a volume law scaling in the extended phase 12,20,54,55 . In the clusters taken into consideration, there is some freedom in the choice of the two regions A and B; here, where possible, we consider a cut that runs parallel to the lattice vectors. The two regions are shown in red and blue respectively in Fig. 1 of the main text and in Fig. 14 in the appendix. In the top panel of Fig. 7 we show the entanglement entropy Eq.(6) as a function of the disorder strength V for different sample sizes. For low-V values, we see that S approaches the value obtained for random states (shown in the figure as a dashed line) as V → 0, thus making evident a volume law entanglement. At high disorder, on the other hand, we observe an area law growth; specifically, by considering S/A where A is the length of the boundary between the two subsections, we observe a collapse (see Fig. 7 bottom panel). Interestingly, as seen for other quantities, the curves for different system sizes collapse in pairs, at around V = 18 for sizes N = 42 and N = 54 and at V = 20 for sizes N = 72 and N = 78, with both sets of curves collapsing only for larger V . Given the relatively arbitrary choice of the boundary of the bipartition, as an additional comparison and justification for adequateness of the use of volume and A area laws, we considered the entanglement entropy of some special states. One class is the already mentioned random states with volume law entanglement growth. We additionally considered the uniform ('Rokshar-Kivelson' 45 ) state ψ RK = 1/ √ N H |1 1 . . . 1 and the ground state of the model (1) with no disorder and a small constant field V c , which both have an area law entanglement scaling. 
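A schematic of how the von Neumann entropy defined above can be evaluated once every basis configuration has been assigned an integer label for region A and one for region B; how boundary dimers are attributed to the two regions is model-specific, so the labels_A and labels_B arrays below are assumed inputs rather than part of the published procedure.

```python
import numpy as np

def entanglement_entropy(psi: np.ndarray, labels_A: np.ndarray, labels_B: np.ndarray) -> float:
    """S = -Tr(rho_A ln rho_A) from the Schmidt decomposition of an eigenstate.

    psi      : amplitudes over the N_H basis configurations
    labels_A : region-A label of each configuration (integers starting at 0)
    labels_B : region-B label of each configuration (integers starting at 0)
    """
    M = np.zeros((labels_A.max() + 1, labels_B.max() + 1), dtype=complex)
    for amp, a, b in zip(psi, labels_A, labels_B):
        M[a, b] += amp                      # amplitude matrix psi_{a,b}
    schmidt = np.linalg.svd(M, compute_uv=False)
    p = schmidt ** 2
    p = p[p > 1e-14]
    return float(-np.sum(p * np.log(p)))
```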
The entanglement entropy computed for the clusters and the cuts under consideration indeed scales with A as expected (see Appendix C and Fig. 16). In order to better understand the position of a transition point, we consider the variance of the entanglement entropy distribution as a function of disorder. The vari- ance is expected to have a peak at the transition value (with possibly strong finite-size corrections) 54,55 . In the main panel of Fig. 8 we show the standard deviation σ S of S for the eigenstates in the energy window around E = 0 and for different disorder realizations. A peak is present, although with a substantial drift towards higher disorder values. The position of the peak rescaled with cluster size shows an approximately linear increase with respect to the system size (see bottom panel). Thus, for the entanglement entropy, system sizes up to N = 78 do not show convergence to a finite transition value. This might be an indication that the system sizes that we considered are still within the non-universal scaling regime or that the transition does not hold asymptotically in the thermodynamic limit. D. Dynamics We finally consider the dynamical properties of the system. Starting from a product state, which is taken as an element of the computational basis, we perform a quench to the disordered model: The chosen initial state |ψ(0) is the same as the reference state for the imbalance calculation in Sec. III B. In the 1D MBL phase, transport of local quantities is absent and entanglement has a well-understood slow logarithmic growth 22,56,57 . We look for these markers of localization in the present model at high disoder. We consider the same clusters that have been used in the exact diagonalization analysis, that is N = 42, 54, 72 and 78, with the addition of the N = 96 and 108 clusters. The time evolution is performed through full exact diagonalization for the clusters N = 42 and 54, and with the Krylov method for the larger ones. We average over 10 4 ÷ 10 3 disorder realizations for clusters up to N = 96 and around 100 realizations for the largest cluster N = 108. Imbalance We start by considering the imbalance of the time-evolved state with respect to the initial state, as defined in Sec. III B and Eq. (4). In Fig. 9 we show the modulus-squared imbalance as a function of time for various values of the disorder V for the system sizes N = 78 and N = 108. An imbalance value |I| 2 > 0 indicates that some memory of the initial state is kept after the time evolution. From Fig. 9, we see a decrease and, for most disorder values, a saturation of the imbalance (namely for the N = 78 cluster for which longer times are available). For this reason, we look at the asymptotic value, estimated from the last available point of the time evolution, and look at its scaling with the system size. We remark that, while for the smallest clusters N = 42 and N = 54 we are able to obtain the evolved states at very large times, for the larger sizes and with the Krylov time evolution we are only able to reach times of order t = 1000 (in units of the inverse of the plaquette flip energy scale τ ) according to the system size and the disorder strength. An appropriate error is attributed to the points that are not sufficiently close to the saturation value. In Fig. 10, we show the asymptotic values as a function of the disorder highlighting their dependence on the system size. For finite size systems it is expected that |I| 2 > 0 and one should therefore look at the thermodynamic limit. 
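For the quench dynamics described above, a minimal propagation sketch; the sparse Hamiltonian H in the zero-winding dimer basis and the initial product state psi0 are assumed inputs, and scipy's expm_multiply is used only as a stand-in for the Krylov-subspace propagator mentioned in the text.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import expm_multiply

def evolve(H: csr_matrix, psi0: np.ndarray, times):
    """Step-by-step evolution |psi(t)> = exp(-iHt)|psi(0)>, recording the
    overlap |<psi(0)|psi(t)>| (the return probability considered later)."""
    psi, t_prev, out = psi0.astype(complex), 0.0, []
    for t in times:
        psi = expm_multiply(-1j * H * (t - t_prev), psi)
        t_prev = t
        out.append((t, abs(np.vdot(psi0, psi))))
    return out
```

Observables such as the imbalance or the time-dependent participation entropy can then be accumulated from psi at each recorded time.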
We extrapolate the infinite-size imbalance I 0 (V ) from a scaling function of the form |I| 2 = I 0 +a/N , and we observe (see inset) that it is 0, or reasonably close to it, for V 20, while it increases to non-zero values for V > 20, indicating a localized state where some memory of the initial state is kept at infinite time. Entanglement entropy A known remarkable feature of the localized phase in one dimension is a slow growth of the entanglement, which spreads logarithmically in time as opposed to a ballistic (linear in time) spread in the extended phase. In finite systems the growth is eventu- ally limited by the corresponding volume law in the two phases [22][23][24][56][57][58] . We remark that given the geometry imposed by the entanglement cut of the 2d system (see Fig. 1b and 14), entanglement can spread only in the direction perpendicular to the cut, and we thus expect a spread similar to a 1D localized phase in this case. In Fig. 11, we show the bipartite entanglement entropy, as defined in Eq. (6), as a function of time, for the cluster sizes N = 78 and N = 108. For low disorder, a fast saturation to the volume law value can be readily observed. As disorder increases, the entanglement entropy continues to quickly reach a size-dependent limiting value. For disorders V 20, a logarithmic growth appears to be present, consistent with the existence of a MBL phase. We note that this feature is only visible in the largest clusters, N = 96 and N = 108, highlighting the need of analysing very large system sizes in order to obtain evidence of a localized phase. Return probability We then consider the return probability R = | ψ(t)|ψ(0) |. Being an overlap of two vectors in the Hilbert space, one expects that it will be exponentially small (scaling as the inverse of the Hilbert space size) at long times in both ergodic and localized phases, but its time dependence may reveal non-trivial differences. To account for the system size scaling, we consider (minus) the logarithm of the return probability rescaled with the (log of the) Hilbert space size − ln R/ ln N H which is displayed as a function of time in Fig. 12 for six values of the disorder V . For low disorder (V = 1, V = 10 and V = 15, top row in Fig. 12) the rescaled return probability quickly reaches a limiting value, which is smaller in absolute value as the disorder increases. At larger disorder (V 20), a logarithmic increase appears for larger system sizes, indicating a slow spreading in a range consistent with the one obtained from entanglement entropy and the participation entropy results shown below. We finally note that there is reasonable collapse between different system sizes (except the smallest two N = 42 and N = 54). Participation entropy Finally, we consider the participation entropy, as defined in Eq. (3), of the time evolved state. In Fig. 13 we show the participation entropy, rescaled by the logarithm of the Hilbert space size, as a function of time, for six values of the disorder strength. For the small disorders V = 1, V = 10 and V = 15, shown in the top row, a quick saturation to system-size dependent values can be readily observed, with notably a saturation to a value very close to 1 for very small disorder; for higher, V ≥ 20 disorders, shown in logarithmic scale in the bottom row of Fig. 4, a slow, logarithmic growth suggesting localization becomes apparent for the two largest system sizes. The behavior of the participation entropy thus closely resembles the one of the bipartite entanglement entropy. IV. 
CONCLUSIONS The analysis of the eigenstates and of the dynamics after a quench suggest the same conclusions than the ones reached in a similar disordered QDM on the square lat- tice 48 : the presence of an extended and a many-body localized phase at low and high disorder respectively. This conclusion is justified by the study of a large system size, as large as possible with unbiased numerical methods, up to N = 78 for exact diagonalization and N = 108 for dynamics. The analysis of large sizes is essential in finding some characteristic features of MBL, such as the slow logarithmic growth of entanglement entropy. From our analysis on finite systems at finite times, however, it cannot be excluded that, in the thermodynamic limit, there is no transition but a crossover to increasingly slow dynamics. This is hinted by e.g. the linear scaling with system size of the maximum of the entanglement entropy variance (even though this quantity does not accurately locate the transition, already in the standard model of MBL in 1D 12 ). We remark that in that case the time scales for thermalization at high disorder would likely still be so long for the system to be effectively localized for practical purposes, in particular potential experimental platforms. We also attempted a scaling analysis (done through the bayesian method 59 ) on some of the quantities presented in Sec. III. With the available system sizes, it was not possible to obtain a collapse. This tends to suggest that, if a true transition exists, the finite systems considered are not large enough to be in the universal scaling regime. The analysis presented in this work makes use of dimers on a peculiar lattice: in other words, we use a very constrained model in order to numerically study the 2D MBL problem in the largest physical system attainable with the current numerical capabilities. Considering the current lack of theoretical arguments for MBL in 2D, alternative opportunities come from possible experimental realizations in specifically arranged experimental setups. There has been a lot of recent effort devoted to perform analog quantum simulations of lattice gauge theories (see Ref. 60 for a recent review), in order to implement experimentally e.g. the Gauss law equivalent to the dimer constraint. Let us for instance highlight explicit proposals for implementing QDMs with different possible setups using Rydberg atoms [61][62][63] . Finally, it would be interesting to see whether the constraints and the non-tensor product structure in QDM could allow the existence of quantum scar states 64 , similar e.g. to what happens in the 1D constrained PXP model. These scar states have been argued to realize intermediate scenarios between the extended and localized paradigms. In this work we have used the honeycomb lattices with N = 42, 54, 72, 78, 96 and 108 sites shown in Fig. 14. These were all considered with periodic boundary conditions and are constructed with the following basis vectors 71 , written in the basis {u 1 , u 2 } where u 1 = (1, 0) and The separation into two subsystems used for the calculation of the bipartite entanglement entropy is shown in different colors in each cluster. The boundary has been chosen parallel to one of the basis vectors. We note that in some cases (namely, clusters N = 54 and N = 78) this was not exactly possible but was chosen as close as possible to the parallel boundary line. Appendix B: Mobility edge We present here an additional analysis of the gap ratio defined in Sec. 
III A, this time resolved in energy. The purpose is to identify a possible dependence of the localization transition value on the energy, i.e., the presence of a so-called mobility edge [12]. For the smallest system sizes, we consider full exact diagonalization. As customary, we introduce the normalized energy density ε = (E − E_min)/(E_max − E_min). From the whole spectrum, we compute the gap ratio in ten windows of fixed width in ε and average over around 1000 disorder realizations. The result for cluster sizes N = 42 and N = 54 is shown in Fig. 15. Having only the two smallest system sizes available, we cannot definitively conclude on the existence of a mobility edge in the model (1), although Fig. 15 does show an indication of enhanced localization at the spectrum extrema, which appears more marked for N = 54 than for N = 42. Given the different symmetries and aspect ratios of the clusters, dividing them into two subsystems for the purpose of computing the bipartite entanglement entropy should be done respecting the basis vectors of each cluster, as outlined in Appendix A. In order to check that the chosen cut is sufficiently general, we computed the entanglement entropy of some reference states which are known to have an area law as the system size increases. The entanglement entropy, rescaled by the area of the cut, is shown in Fig. 16. The reference states are: the ground state |ψ_GS⟩ of the non-disordered model with constant potential V_e = 0.1; the 'Rokhsar-Kivelson' [45] state |ψ_RK⟩ = (1/√N_H) Σ_i |b_i⟩, the equal-amplitude superposition of all basis configurations; and two localized states at high disorder, obtained respectively at disorder strengths V = 30 and V = 50. For all states, S/A is approximately constant with respect to the system size N, thus showing the correct area-law scaling for the selected cut in all the clusters shown in Fig. 14.
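For completeness, a sketch of the windowed, energy-resolved gap-ratio average used in Appendix B, assuming the normalized energy density introduced there; the average over disorder realizations is omitted.

```python
import numpy as np

def gap_ratio_vs_energy_density(eigvals: np.ndarray, n_windows: int = 10):
    """Gap ratio averaged in equal windows of eps = (E - E_min)/(E_max - E_min),
    computed from the full sorted spectrum of a single disorder realization."""
    eps = (eigvals - eigvals[0]) / (eigvals[-1] - eigvals[0])
    gaps = np.diff(eigvals)
    r = np.minimum(gaps[:-1], gaps[1:]) / np.maximum(gaps[:-1], gaps[1:])
    centers = eps[1:-1]                      # energy density attached to each ratio
    edges = np.linspace(0.0, 1.0, n_windows + 1)
    idx = np.clip(np.digitize(centers, edges) - 1, 0, n_windows - 1)
    return [float(r[idx == w].mean()) if np.any(idx == w) else np.nan
            for w in range(n_windows)]
```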
7,105
2020-05-20T00:00:00.000
[ "Physics" ]
GEOMETRIC AND PROBABILISTIC RESULTS FOR THE OBSERVABILITY OF THE WAVE EQUATION . — Given any measurable subset ! of a closed Riemannian manifold and given any T > 0 , we define ` T ( ! ) 2 [0 , 1] as the smallest average time over [0 ,T ] spent by all geodesic rays in ! . Our first main result, which is of geometric nature, states that, under regularity assumptions, 1 / 2 is the maximal possible discrepancy of ` T when taking the closure. Our second main result is of probabilistic nature: considering a regular checkerboard on the flat two-dimensional torus made of n 2 square white cells, constructing random subsets ! n " by darkening cells randomly with a probability " , we prove that the random law ` T ( ! n " ) converges in probability to " as n ! + 1 . We discuss the consequences in terms of observability of the wave equation Introduction and main results Let (M, g) be a closed connected Riemannian manifold.We denote by Γ the set of geodesic rays, that is, the set of projections onto M of Riemannian geodesic curves in the co-sphere bundle S * M .Given any T > 0 and any Lebesgue measurable subset ω of M , we define (1) T (ω) = inf Here, χ ω is the characteristic function of ω, defined by χ ω (x) = 1 if x ∈ ω and χ ω (x) = 0 if x ∈ M ω.The real number T (ω) ∈ [0, 1] is the smallest average time over [0, T ] spent by all geodesic rays in ω.This quantity appears naturally when studying observability properties for the wave equation on M with ω as an observation subset. In this article we establish two properties of the functional T , one is geometric and the other is probabilistic.Let us describe them in few words. The first geometric property is on the maximal discrepancy of T when taking the closure.We may have T (ω) < T (ω) whenever there exist rays grazing ω and the discrepancy between both quantities may be equal to 1 for some subsets ω.We prove that, if the metric g is C 2 and if ω satisfies a slight regularity assumption, then T (ω) T (ω) + 1 .We also show that our assumptions are essentially sharp; in particular, surprisingly the result is wrong if the metric g is not C 2 .As a consequence, if ω is regular enough and if T (ω) > 1/2 then the Geometric Control Condition is satisfied and thus the wave equation is observable on ω in time T . The second property is of probabilistic nature.We take M = T 2 , the flat twodimensional torus, and we consider a regular grid on it, a regular checkerboard made of n 2 square white cells.We construct random subsets ω n ε by darkening each cell in this grid with a probability ε.We prove that the random law T (ω n ε ) converges in probability to ε as n → +∞.As a consequence, if n is large enough then the Geometric Control Condition is satisfied almost surely and thus the wave equation is observable on ω n ε in time T . Observability and Geometric Control Condition. -The condition T (ω) > 0 means that all geodesic rays, propagating in M , meet ω within time T .This condition, usually called Geometric Control Condition (in short, GCC), is related to observability properties for the wave equation where g is the Laplace-Beltrami operator on M for the metric g.More precisely, denoting by dx g the canonical Riemannian volume, we define the observability constant C T (ω) 0 as the largest possible nonnegative constant C such that the inequality is satisfied for any solution y of (2), that is, When C T (ω) > 0, the wave equation ( 2) is said to be observable on ω in time T , and when C T (ω) = 0 we say that observability does not hold for (ω, T ). 
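For readability, a hedged reconstruction of the three displays referred to above: the time-averaged occupation functional, the homogeneous wave equation, and the observability inequality. The energy norm on the initial data in the last line is written schematically, since its precise form is not reproduced here.

```latex
% (1) smallest average time spent in \omega by geodesic rays over [0,T]
\ell_T(\omega) \;=\; \inf_{\gamma\in\Gamma}\ \frac{1}{T}\int_0^T \chi_\omega\bigl(\gamma(t)\bigr)\,dt ,
% (2) homogeneous wave equation on the closed manifold (M,g)
\partial_{tt} y \;-\; \triangle_g y \;=\; 0 ,
% (3) observability inequality defining C_T(\omega); \|\cdot\|_E is a placeholder
%     for the energy norm of the initial data used in the paper
C\,\bigl\|\bigl(y(0,\cdot),\partial_t y(0,\cdot)\bigr)\bigr\|_E^2
  \;\leqslant\; \int_0^T\!\!\int_\omega |y(t,x)|^2 \, dx_g \, dt .
```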
The converse is not true: GCC is not a necessary condition for observability.It is shown in [8] that, if M = S 2 (the unit sphere in R 3 endowed with the restriction of the Euclidean structure), if ω is the open Northern hemisphere, then T (ω) = 0 for every T > 0, and however one has C T (ω) > 0 for every T > π.The latter fact is established by an explicit computation exploiting symmetries of solutions.This failure of the functional T to capture the observability property is due, here, to the existence of a very particular geodesic ray which is grazing the open set ω, namely, the equator.In this example, considering the closure ω of ω, it is interesting to observe that T (ω) = 0 for every T π (take a geodesic ray contained in the closed Southern hemisphere) and T (ω) > 0 for every T > π, with T (ω) = 1 2 when T 2π.The latter equality is in contrast with T (ω) = 0: there is thus a discrepancy 1/2 in the value of T for T 2π when taking the closure of ω.In this specific case, this discrepancy is caused by the equator, which is a geodesic ray grazing the open subset ω. Our first main result below shows that 1/2 is actually the maximal possible discrepancy. 1.1. A geometric result on the maximal discrepancy of T .-In general, one can always find subsets ω for which the difference T (ω) − T (ω) is arbitrary close to 1. Surprisingly, under slight regularity assumptions, this maximal discrepancy is 1/2 only. Theorem 1. -Let T > 0 be arbitrary and let ω be a measurable subset of M .We make the following assumptions: We give more details and a number of comments on this theorem in Section 2. At the opposite, as an obvious remark, if there is no geodesic ray grazing ω then T (ω) = T (ω).Here and throughout the paper, we say that a geodesic ray γ is grazing ω if T 0 χ ∂ω (γ(t)) dt > 0, where ∂ω = ω ω.In more general, the existence of grazing rays adds a serious difficulty to the analysis of observability (see [1]).It is noticeable that, if one replaces the characteristic function χ ω of ω by a continuous function a, in the integral at the left-hand side of (3) (i.e., T 0 M a(x)|y(t, x)| 2 dx g dt) as well as in the definition (1) of the functional T , this difficulty disappears and the condition T (a) > 0 becomes a necessary and sufficient condition for observability of (2) on ω in time T (see [4]). J.É.P. -M., 2022, tome 9 By the way, for completeness, we provide in the appendix some semi-continuity properties of the functional T , which may be of interest for other purposes. The issue of the observability on a general measurable subset ω ⊂ M has remained widely open for a long time.Recent advances have been made, which we can summarize as follows.It has been established in [7] that observability on a measurable subset ω in time T is satisfied if and only if α T (ω) > 0. The quantity α T (ω), defined in [7] as the limit of high-frequency observability constants, is however not easy to compute and we have, in general, the inequality T (ω) α T (ω) T (ω).In particular, the condition T (ω) > 0 becomes a necessary and sufficient condition for observability as soon as there are no geodesic rays grazing ω.It has also been shown in [7] that lim T →+∞ C T (ω)/T is the minimum of two quantities, one of them being T (ω) and the other being of a spectral nature. We have the following corollary of Theorem 1, using the fact that, since ω is open, the condition T (ω) > 0 implies observability for (ω, T ), and thus C T (ω) C T (ω) > 0. 
Note that Corollary 1 does not apply to the (limit) case where M = S 2 and ω is the open Northern hemisphere.It does neither apply to the case where M is the two-dimensional torus and ω is a half-covering open checkerboard on it, as in [3,5] (see next section).Indeed, in these two cases, we have T (ω) = 0 for every T > 0 but C T (ω) > 0 (i.e., we have observability) for T large enough.This is due to the fact that trapped rays are the weak limit of Gaussian beams that oscillate on both sides of the limit ray, spreading on one side and on the other a sufficient amount of energy so that indeed observability holds true.In full generality, having information on the way that semi-classical measures, supported on a grazing ray, can be approached by high-frequency wave packets such as Gaussian beams, is a difficult question.In the case of the sphere, symmetry arguments give the answer (see [8]).In the case of the torus, a much more involved analysis is required, based on second microlocalization arguments (see [3,5]). Anyway, Corollary 1 can as well be applied for instance to any kind of checkerboard domain ω on the two-dimensional torus, as soon as the measure of ω is large enough so that T (ω) > 1/2. Since the case of checkerboards (in dimension two) is interesting and challenging, following a question by Nicolas Burq, in the next section we investigate the case of random checkerboards on the flat torus and we establish our second main result. A probabilistic result for random checkerboards on the flat torus In this section, we take M = T 2 = R 2 /Z 2 (flat torus) which is identified to the square [0, 1] 2 , class of equivalence of R 2 under the identifications (x, y) ∼ (x + 1, y) ∼ (x, y + 1), inheriting of the Euclidean metric.Given any subset A of M , we denote by |A| the (two-dimensional) Lebesgue measure of A. We consider a regular grid G n = (c n ij ) 1 i,j n in the square, like a checkerboard, made of n × n closed squares: Defining c i j in the same way for all (i , j ) ∈ Z 2 , we identify the square c i j to the square c n ij of the above grid with (i, j) ∈ {1, . . ., n} 2 such that i = i mod n and j = j mod n. Construction of random checkerboards.-Let ε ∈ [0, 1] be arbitrary.Considering that all squares in the grid are initially white, we construct a random checkerboard by randomly darkening some squares in the checkerboard as follows: for every (i, j) ∈ {1, . . ., n} 2 , we darken the square c n ij of the grid with a probability ε.All choices are assumed to be mutually independent.In other words, we make a selection of squares (that are paint in black) in the grid by considering n 2 independent Bernoulli random variables denoted (X n ij ) 1 i,j n , each of them with parameter ε.The total number of black squares follows therefore the binomial law B(n 2 , ε). We denote by ω n ε the resulting closed subset of [0, 1] 2 that is the union of all (closed) black squares (see Figure 1).Given any fixed T > 0 and ε ∈ (0, 1], our objective is to understand how well the random set ω n ε is able to capture all geodesic rays propagating in M [0, 1] 2 , in finite time T .In other words, we want to study the random variable T (ω n ε ).Of course, the random variable |ω n ε | follows the law (1/n 2 )B(n 2 , ε) and thus its expectation is equal to ε, and so, when ε is small, ω n ε covers only a small area in [0, 1] 2 .And yet, our second main result below shows that, for n large, almost all such random sets meet all geodesic rays within time T .Theorem 2. 
-Given any T > 0 and any ε ∈ [0, 1], the random variable T (ω n ε ) converges in probability to ε as n → +∞, i.e., Theorem 2 is proved in Section 3. As mentioned above, this issue has emerged following a question by Nicolas Burq.In [3,5], the authors also consider checkerboard domains, as above, but not in a random framework.As a consequence of their analysis, given any T > 0, any ε ∈ [0, 1] and any n ∈ N * fixed, if all geodesic rays of length T , either meet the interior of ω n ε (i.e., the interior of some black square), or follow for some positive time one of the sides of a black square on the left and for some positive time one of the sides of a black square (possibly the same) on the right, then C T (ω n ε ) > 0, i.e., the wave equation on the torus M = T 2 is observable on ω n ε in time T .Let T > 0 and let ε ∈ (0, 1] be arbitrary.According to Theorem 2, for n large enough, almost every subset ω n ε (constructed randomly as above) is such that T (ω n ε ) > 0. This implies that every geodesic ray, that is neither horizontal nor vertical, meets the interior of ω n ε within time T , and that every horizontal or vertical geodesic ray meets the closed subset ω n ε within time T (for some positive time, not less than T (ω n ε )).In the latter case, moreover, by construction of the random set ω n ε , the probability that vertical grazing rays follow for some positive time one of the sides of a black square on the left and for some positive time one of the sides of a black square on the right, converges to 1 as n → +∞. All in all, combining Theorem 2, the result of [3,5] and the above reasoning, we have the following consequence in terms of observability of the wave equation. Corollary 2. -Given any T > 0 and ε ∈ (0, 1], may they be arbitrarily small, the probability that the wave equation on the torus M = T 2 be observable on ω n ε in time T tends to 1 as n → +∞. In other words, observability in (any) finite time is almost surely true for large n, despite the fact that the measure of ω n ε may be very small!Note that, for ε > 1/2, almost sure observability follows from Corollary 1 (indeed, the random sets constructed above are piecewise C 1 and thus Theorem 1 can be applied).But the result is more striking when |ω n ε | is small.Note also Theorem 2 provides an answer to an issue raised in [6], which we formulate in terms of an optimal shape design problem in the next corollary. where the supremum is taken over all possible measurable subsets ω of M = T 2 having a Lipschitz boundary. Corollary 3 is proved in Section 3.4. J.É.P. -M., 2022, tome 9 We finish this section by a comment on possible generalizations of Theorem 2. Some of the steps of its proof remain valid for any closed Riemannian manifold, like the fact that it suffices to prove the theorem for T small and thus, we expect that, to some extent, the result is purely local.However, in some other steps we instrumentally use the fact that we are dealing with a regular checkerboard in the square.Extending the result to general manifolds, even in dimension two, is an open issue. Acknowledgments.-The authors are indebted to the referees for their very careful reading and comments, and for having pointed out a mistake in a proof in a first version. Additional comments and proof of Theorem 1 2.1.Comments on Theorem 1. -Theorem 1 states that, given any T > 0 and any measurable subset ω of M , we have under the two following sufficient assumptions: (i) the metric g is C Remark 1. 
-Assumption (ii) may be weakened as follows: • If M is of dimension 2, it suffices to assume that ω is piecewise C 1 .More precisely, we assume that ω is a C 1 stratified submanifold of M (in the sense of Whitney). • In any dimension, the following much more general assumption is sufficient: given any grazing ray γ, for almost every t ∈ [0, T ] such that γ(t) ∈ ∂ω, the subdifferential at γ(t) of ∂ω∩γ(•) ⊥ is a singleton.This is the case under the (much stronger) assumption that ω be geodesically convex. Comments.-It is interesting to note that the assumptions made in Theorem 1 are essentially sharp.Remarks are in order. • The inequality (4) gives a quantitative measure of the discrepancy that can happen for T when we take the closure of a measurable subset ω or, conversely, when we take the interior (this is the sense of Corollary 1).The inequality is sharp, as shown by the example already discussed above: take M = S 2 and ω the open Northern hemisphere; then T (ω) = 0 for every T > 0 and 2π (ω) = 1/2 for T = 2π.Hence, here, ( 4) is an equality. • As a variant, take ω which is the union of the open Northern hemisphere and of a Southern spherical cap, i.e., a portion of the open Southern hemisphere limited by a given latitude −ε < 0. Then we have as well T (ω) = 0 for every T > 0 and 2π (ω) = 1/2 for T = 2π. • Note that, taking ε = 0 in the previous example (i.e., ω is the unit sphere M = S 2 minus the equator), we have T (ω) = 0 and T (ω) = 1 for every T > 0 and thus (4) fails.But here, ω is not an embedded C 1 submanifold of M with boundary: J.É.P. -M., 2022, tome 9 Assumption (ii) (which implies local separation between ω and M ω) is not satisfied.More generally, the result does not apply to any subset ω that is M minus a countable number of rays.This is as well the case when one considers any subset ω that is dense and of empty interior (one has T (ω) = 0 and T (ω) = 1 for every T > 0).This shows that the discrepancy 1/2 is only valid under some regularity assumptions on ω. • The result fails in general if ∂ω is piecewise C 1 only, on a manifold M is of dimension n 3.Here is a counterexample. Let γ be a geodesic ray.If T > 0 is small enough, it has no conjugate point.In a local chart, we have γ(t) = (t, 0, . . ., 0) (see the proof of Theorem 1).Now, using this local chart we define a subset ω of M as follows: the section of ∂ω with the vertical hyperplane γ(•) ⊥ is locally equal to this entire hyperplane minus a cone of vertex γ(t) with small angle 2πε > 0, less than π/4 for instance (see Figure 2). Locally around γ(t), ∂ω ∩ γ(t) ⊥ is the complement of the hatched area.Now, we assume that, as t > 0 increases, these sections rotate with such a speed that, along [0, T ], the entire vertical hyperplane is scanned by the section with ω.If the speed of rotation is exactly T /2π then it can be proved that T (ω) = 0 and This example shows that Assumption (ii), or its generalization given in Remark 1, cannot be weakened too much.The idea here is to consider a subset ω such that the section of ∂ω with the vertical hyperplane γ(•) ⊥ has locally the shape of the hypograph of an absolute value, which is rotating along γ(•). Similar examples can as well be designed with checkerboard-shaped domains ω, thus underlining that in [3,5] it was important to consider checkerboards in dimension 2. J.É.P. -M., 2022, tome 9 • Surprisingly, the result is wrong if the metric g is not C 2 .A counterexample is the following. 
Let M be a pill-shaped two-dimensional manifold given by the union of a cylinder of finite length, at the extremities of which we glue two hemispheres (domain also obtained by rotating a 2D stadium in R 3 around its longest symmetry axis; or, take the unit sphere in R 3 , cut it at the equator, separate the two hemispheres and glue them with, in between, a cylinder of arbitrary length), and endow it with the induced Euclidean metric (see Figure 3).Then the metric is not C 2 at the gluing circles.Now, take ω defined as the union of the open cylinder with two open spherical caps (i.e., the union of the two hemispheres of which we remove latitudes between 0 and some ε > 0).Then T (ω) = 0 for every T > 0, because ω does not contain the rays consisting of the circles at the extremities of the cylinder.In contrast, T (ω) may be arbitrarily close to 1 as T is large enough and ε is small enough, and thus (4) fails.This is because any ray of M spending a time π in M ω spends then much time over the cylinder.This shows that Assumption (i) is sharp.In the above example, the metric is only The example above is rather counter-intuitive.The assumption of a C 2 metric implies in some sense a global result on geodesic rays. Our proof, given in Section 2.2 hereafter, uses only elementary arguments of Riemannian geometry.It essentially relies on Lemma 2, in which we establish that, given a grazing ray (i.e., a ray propagating in ∂ω), thanks to our assumption on ω, we can always construct neighbor rays, one of which being inside ω and the other being outside of ω for all times. 2.2. Proof of Theorem 1. -Without loss of generality, we take ω ⊂ M open.We will use several well known facts of Riemannian geometry, for which we refer to [2]. for any k.In all cases, we have obtained the inequality for every t ∈ [0, T ].By the Fatou lemma, we infer that The lemma follows. If the ray γ given by Lemma 1 is not grazing ω, i.e., if ω) and hence T (ω) = T (ω).So in this case there is nothing to prove. Lemma 2. -There exists a continuous path of points s → x s ∈ M , passing through x 0 at s = 0, such that, setting γ s (t) = π • ϕ t (x s , ξ 0 ), we have Proof.-To prove this fact, we assume that, in a local chart, γ(t) = (t, 0, . . ., 0).This is true at least in a neighborhood of x 0 = γ(0) = 0, and this holds true along γ(•) as long as there is no conjugate point.We also assume that, in this chart, any other geodesic ray starting at (0, x 0 2 , . . ., x 0 n ) in a neighborhood of γ(0) = (0, . . ., 0), with codirection ξ 0 , is given by (t, x 0 2 , . . ., x 0 d ) (projection onto M of the extremal field).Here, we have set d = dim M .This classical construction of the so-called extremal field can actually be done on any subinterval of [0, T ] along which there is no conjugate point.Note that the set of conjugate times along [0, T ] is of Lebesgue measure zero. (1) Let us search an appropriate (d − 1)-tuple (x 0 2 , . . ., x 0 d ) ∈ R d−1 {0} such that the family of points x s = (0, sx 0 2 , . . 
., sx 0 d ), s ∈ (−1, 1), gives (5).Note that the geodesic ray starting at (x s , ξ 0 ) is γ s By assumption, in a neighborhood U of any point of N , the set N ∩U is a codimensionone hypersurface of M , written as It suffices to prove that, for almost every time t at which γ 0 (t) = γ(t) ∈ N and γ(t) ∈ T γ(t) N , the points γ s (t) and γ −s (t) are on different sides with respect to the (locally) separating manifold N for s small enough.This is obvious when γ is transverse to N .We set is important, we assume that v(t) ∈ P d−2 (R), the projective space.We claim that: With this result, setting V = (x 0 2 , . . ., x 0 d ), the points x s defined above give the lemma.Let us now prove the claim.We define 0 for every t ∈ Ω, where we have endowed P d−2 (R) with the Hausdorff measure H n−2 .Therefore, by the Fubini theorem, 0 = and thus Ω χ A (t, V ) dt = 0 for almost every V ∈ P d−2 (R).Fixing such a V , it follows that χ A (t, V ) = 0 for almost every t ∈ Ω, and the claim is proved. In view of proving Remark 1, note that the argument above still works in dimension 2 with ω piecewise C 1 (but not in dimension greater than or equal to 3: see the counterexample given in Section 1).In more general, in any dimension, the argument above still works if ω is such that, for almost every time t, the subdifferential at γ(t) of ∂ω ∩ γ(•) ⊥ is a singleton.equation, such times must be isolated, for otherwise the Jacobi field would vanish at the second order and thus would be identically zero. At this step, we have embedded the ray γ given by Lemma 1 into a family of rays γ s which enjoy a kind of transversality property with respect to N = ∂ω.Let us consider the partition into three disjoint measurable sets, with Since γ s (•) converges uniformly to γ(•) as s → 0 and since ω and M ω are open, we have: By the Lebesgue dominated convergence theorem, we infer that Now, on the one part, by the first step we have 1 On the other part, since A 1 and A 3 are disjoint we have 1 Since T (ω) )dt for every s by definition, we infer that 2 T (ω) T (ω) + 1. Theorem 1 is proved. Proof of Theorem 2 Theorem 2 states that, given any T > 0 and any ε ∈ [0, 1], the random variable T (ω n ε ) converges in probability to ε as n → +∞, i.e., lim This section is organized as follows.We make a preliminary remark in Section 3.1.In Section 3.2, we give the successive steps of the proof, involving intermediate lemmas that are proved.One of the main ingredients of the proof of Theorem 2 is a large deviation property which is established in Section 3.3.In Section 3.4, we also provide a proof of Corollary 3. Given any ω that is a union of closed squares from the grid G n with n fixed, the mapping Γ γ → m T γ (ω) is continuous at every γ ∈ Γ that is neither horizontal nor vertical, or, that is horizontal or vertical but meets no corner (by definition, a corner is a point (i/n, j/n) in [0, 1] 2 , for some (i, j) ∈ {0, . . ., n} 2 ). Proof.-Let γ ∈ Γ and let (γ k ) k∈N be a sequence of Γ converging to γ.Let us prove that Let us prove the second part of the lemma.Let γ ∈ Γ that is neither horizontal nor vertical, or, that is horizontal or vertical but meets no corner.Let (γ k ) k∈N be a sequence of geodesic rays converging to γ ∈ Γ. 
Writing where τ (i, j) = 1 if c n ij ⊂ ω and τ (i, j) = 0 otherwise, and noting that we have to prove that m T γ k (c n ij ) converges to m T γ (c n ij ) as k → +∞.This follows from the dominated convergence theorem and from the fact that (χ c n ij (γ k )) k∈N converges almost everywhere to χ c n ij (γ).The latter claim can be shown by distinguishing between two cases: if γ(t) ∈ cn ij then for k large enough we have The same conclusion remains true if γ(t) / ∈ c n ij .Since the set of t such that γ(t) ∈ ∂c n ij is finite (this follows from the assumptions on γ), the lemma follows. Note that γ → m T γ may fail to be continuous at some γ ∈ Γ that is horizontal or vertical and meets a corner.For such geodesic rays γ, we will see that it is relevant to define the following quantity.Given any γ ∈ Γ, we set m T γ (ω) = m T γ (ω) if γ is neither horizontal nor vertical, or is horizontal or vertical but meets no corner.When γ is horizontal or vertical and meets a corner, we set where the infimum is taken over the set of sequences of geodesic rays (γ k ) k∈N converging to γ such that, for every k ∈ N, γ k ([0, T ]) is obtained by rotating γ([0, T ]) around a corner of the grid through which γ passes.Of course, we have T (ω) m T γ (ω) m T γ (ω) and thus Note that γ → m T γ (ω) may also fail to be continuous.We will see that this quantity m T γ (ω) is important to obtain the inequality (25) in the proof of Lemma 5, item (b), and in order to treat the case of horizontal or vertical rays in the proof of Lemma 6. 3.2. Proof of Theorem 2. -For every (i, j) ∈ {1, . . ., n} 2 , let X n ij be the random variable equal to 1 whenever c n ij ⊂ ω n ε and 0 otherwise.Recall that, assuming that all square cells of the grid are initially white, when ranging over the grid, for each cell we randomly darken the cell with a probability ε (Bernoulli law) and in this case we set X n ij = 1; otherwise we let X n ij = 0.By construction, the random laws (X n ij ) 1 i,j n are independent and identically distributed (i.i.d.), of expectation ε and of variance ε(1 − ε). Given any fixed geodesic ray γ ∈ Γ, we denote by t n ij (γ) the time spent by γ in the square cell c n ij .We have and we also note that In particular, the random variable m T γ (ω n ε ) is a weighted sum of independent Bernoulli laws, and thus its expectation is where we have used (9) and the fact that 0 t n ij (γ)/T 1.To prove Theorem 2, we have to prove that, given any T > 0 and any ε ∈ [0, 1], for every δ > 0 we have J.É.P. -M., 2022, tome 9 Proof of (i).-Using ( 6) and ( 10), we have Applying the Bienaymé-Tchebychev inequality to the random variable m T γ (ω n ε ), we have and thus, using (11) and ( 12), ( 14) Finally, (i) follows from ( 13) and ( 14). Proof of (ii).-Establishing (ii) is much more difficult.We proceed in several steps, by proving the following successive lemmas that are in order. Thanks to Lemma 4 (proved below), we now assume that 0 < T < 1.This has the following pleasant consequence: any geodesic ray γ ∈ Γ crosses a given cell c ij at most one time over [0, T ], i.e., {t ∈ [0, T ] | γ(t) ∈ c ij } is connected.This will make easier the computation of crossing times, in the proofs of the forthcoming lemmas. Let us introduce some notations.Let Γ 0 , Γ 1 and Γ 2 be the sets of geodesic rays of M = T 2 meeting respectively zero, at least one and at least two corners of the grid G n (by definition, a corner is a point (i/n, j/n) in [0, 1] 2 , for some (i, j) ∈ {0, . . ., n} 2 ).Note that Γ 2 ⊂ Γ 1 . 
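As an aside before continuing with the proof, the concentration of the time fraction around ε can be illustrated numerically; the following crude Monte Carlo sketch samples finitely many rays, so the minimum it returns is only an upper bound on ℓ_T(ω^n_ε), and grazing or corner-touching rays receive no special treatment.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_fraction(dark: np.ndarray, x0: float, y0: float, theta: float,
                  T: float, n_steps: int = 2000) -> float:
    """Fraction of [0, T] a unit-speed geodesic on the flat torus spends in the
    darkened cells, estimated by a simple time discretization."""
    n = dark.shape[0]
    t = (np.arange(n_steps) + 0.5) * (T / n_steps)
    i = np.minimum(((x0 + t * np.cos(theta)) % 1.0 * n).astype(int), n - 1)
    j = np.minimum(((y0 + t * np.sin(theta)) % 1.0 * n).astype(int), n - 1)
    return float(dark[i, j].mean())

def sampled_ell_T(n: int, eps: float, T: float, n_rays: int = 5000) -> float:
    """Minimum time fraction over randomly sampled rays for one random
    checkerboard; an upper bound on the true infimum over all rays."""
    dark = rng.random((n, n)) < eps          # darken each cell with probability eps
    return min(time_fraction(dark, *rng.random(2), rng.uniform(0.0, 2.0 * np.pi), T)
               for _ in range(n_rays))
```

Increasing n should push the sampled minimum towards ε, in line with Theorem 2.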
Given any ω ⊂ M that is the union of disjoint closed square cells of G n , we have We also define Of course, we have Lemma 5. -There exists C > 0 such that, for every n ∈ N * , for every subset ω of [0, 1] 2 that is a union of square cells of the grid G n , we have: Finally, (ii) follows from the above lemmas, that are proved hereafter.Let us prove that (15) Given any ρ > 0, let γ be a geodesic ray such that Setting γ k (•) = γ(kT + •) for every k ∈ {0, . . ., m − 1}, we have Letting ρ tend to 0, we obtain (15).Since 0 < T < 1, (ii) is true for this final time T , i.e., Therefore, using (15), we obtain (ii) for the final time T . Proof of Lemma 5 Proof of item (a).-Let γ ∈ Γ be such that ( 16) If γ ∈ Γ 1 then we are done.Hence, in what follows we assume that γ ∈ Γ 0 , i.e., that γ meets no corner of the grid.Without loss of generality, we can assume that the ray γ is neither horizontal nor vertical.Indeed, if γ is horizontal or vertical, since γ meets no corner, it follows from Lemma 3 that m T γ is continuous at γ. Hence, it is possible to rotate slightly γ so that γ is neither horizontal nor vertical and still satisfies (16). Let n ∈ R 2 be a unit vector orthogonal to γ (0).For every s ∈ R, we denote by T sn the translation of vector sn and we define the translated geodesic ray γ s = T sn • γ (which is neither horizontal nor vertical).By continuity, γ s meets the same square cells as γ if |s| is small enough.Let I(γ) denote the subset of all pairs (i, j) ∈ {1, . . ., n} 2 such that γ crosses the cell squares c n ij .For |s| small enough, we have where t n ij (γ s ) is the time spent by γ s in c n ij .Denoting by I (γ) the set of (i, j) ∈ I(γ) such that γ(0), γ(T ) / ∈ cn ij (note that #I(γ) − 2 #I (γ) #I(γ) and that these conditions do not depend on s for |s| small enough, hence I (γ s ) = I (γ)), an easy J.É.P. -M., 2022, tome 9 geometric argument shows that, for (i, j) ∈ I (γ), the function s → t n ij (γ s ) is affine and nonconstant with respect to s.Hence, is also an affine function of s.Replacing s by −s if necessary, we infer the existence of a threshold s 0 > 0 such that γ s ∈ Γ 0 for s ∈ [0, s 0 ) and γ s0 ∈ Γ 1 , and the mapping s → M γs (ω) is continuous and nonincreasing on [0, s 0 ] (continuity is because, for every s, γ s is neither vertical nor horizontal since it is a translation of γ).Since T Γ 1 (ω) m T γs 0 (ω).Besides, we have, for every s ∈ [0, s 0 ], Hence, in particular, using that (by Lemma 3) Since the mapping s → M γs (ω) is nonincreasing on [0, s 0 ] and γ 0 = γ, we have M γs 0 (ω) M γ (ω) and thus, using (19), (21) M γs 0 (ω) m T γ (ω).We finally infer from ( 16), ( 18), ( 20) and (21) that The conclusion follows. If γ ∈ Γ 2 then we are done.Hence, in what follows we assume that γ C n for some C > 0 not depending on γ and n.Indeed, then, it follows from ( 22) and ( 23) that C + 1 n and the item is proved. Note that the rotation R θ can be applied to γ until the geodesic ray γ θ meets a new corner.It is then easy to see that for some C > 0 neither depending on n nor on γ. According to (26), we have with for θ ∈ J.Moreover, we have √ 2/nT and #I (γ) 2n, the latter inequality following from the obvious observation that, when γ leaves a square c n ij , then the next square that γ enters must be of the form c i j with i = i + 1 or j = j + 1. 
From now on, we let n tend to +∞.In particular, θ 0 depends on n.Let us prove that A similar argument will show that f (θ + ) − min J f C/n.These two estimates imply (24) and hence complete the proof of the item b.It thus remains to prove (30).By the mean value theorem, there exists θ ∈ (0, |θ − |) such that J.É.P. -M., 2022, tome 9 Since 0 θ |θ − |, using that | sin(θ 0 ) cos(θ 0 − θ)| 1, we get from (27) that Now, by the mean value theorem, Using (29), there exists C > 0 neither depending on γ nor on n such that . This implies that Following the same reasoning, we prove that Actually, this last estimate is much easier since | cos(θ 0 )| C cos(θ 0 + θ − ).This leads to (30). 3.2.3. Proof of Lemma 6. -Let (i 1 , j 1 ), (i 2 , j 2 ) ∈ {1, . . ., n} 2 be such that the two square cells c i1j1 and c i2j2 of G n are distinct and let x 1 and x 2 be two distinct corners of the grid G n .Any geodesic ray such that will be denoted by γ ci 1 j 1 ,ci 2 j 2 ,x1,x2 .The set of all geodesic rays γ ci 1 j 1 ,ci 2 j 2 ,x1,x2 is denoted by Γ ci 1 j 1 ,ci 2 j 2 ,x1,x2 .Given any T ∈ (0, 1), this set is nonempty as soon as n is large enough.In order to obtain a finite set, we make the following observation.Considering a ray γ ci 1 j 1 ,ci 2 j 2 ,x1,x2 , any other ray γ passing through x 1 and x 2 and such that γ(0) ∈ c i1j1 and γ(T ) ∈ c i2j2 is obtained from γ ci 1 j 1 ,ci 2 j 2 ,x1,x2 by a time translation.This creates an equivalence relation and we define Γ as the quotient of Γ by this equivalence relation.Any element γ of Γ is uniquely determined by a choice of distinct pairs (i 1 , j 1 ), (i 2 , j 2 ) ∈ {1, . . ., n} 2 and of distinct x 1 , x 2 ∈ G n .Therefore, by construction, we have ) can be expressed by (10) and, using (9) and applying the large deviation result established in Proposition 1 in Section 3.3, we get where m is the number of square cells having a nontrivial intersection with γ([0, T ]).Indeed, m n because T < 1 and thus, by (9), If γ is horizontal or vertical, the argument is more complicated.We consider m T γ (ω n ε ) defined by ( 8) in Section 3.1.Obviously, we have where L is the set of all possible limits (closure points) of m T γ k (ω n ε ) as k → +∞ over all sequences (γ k ) k∈N of geodesic rays converging to γ such that, for every k ∈ N, γ k is obtained by rotating γ around a corner of the grid through which γ passes, with angle ±1/k.There are at most n/T + 1 corners belonging to γ([0, T ]) and thus 2( n/T + 1) possible limits of m T γ k (ω n ε ) by considering positive and negative rotations.Hence #L 2 n/T + 2. We claim that any m ∞ ∈ L can be expressed by ( 10) so that we can still apply Proposition 1. Indeed, assume for instance that γ k is obtained from γ by a rotation of angle 1/k around a corner c of the grid through which γ passes.Then, it is easy to see that where t n ij ( γ) is the time spent by γ in c n ij if γ k crosses c n ij (it does not depend on k for k large enough) and 0 otherwise.More precisely, by Proposition 1, we have Hence, using that #L 2 n/T +2 and modifying the constant C ε,δ , we obtain In all cases, let us estimate m, the number of square cells met by γ.Since the diagonal length of each square cell c n ij is √ 2/n, the length of γ([0, T ]) (which is equal to T because the speed of the geodesic is 1) is bounded above by m √ 2/n.Therefore m nT / √ 2. We have therefore proved that, for every geodesic ray γ ∈ Γ (horizontal or vertical or not), ( 31) Assuming that n is large enough, there exist distinct pairs (i 1 , j 1 ), (i 2 , j 2 ) ∈ {1, . . 
., n} 2 and two distinct corners x 1 , x 2 in G n such that γ(0) ∈ c i1j1 , γ(T ) ∈ c i2j2 and x 1 , x 2 ∈ γ([0, T ]).Since γ and γ ci 1 j 1 ,ci 2 j 2 ,x1,x2 are in the same equivalence class, and since the diagonal length of a square cell of G n is √ 2/n, we have We infer that J.É.P. -M., 2022, tome 9 Therefore, recalling that m T γ (ω n ε ) m T γ (ω n ε ) (see (8)), there exists C > 0 (not depending on n) such that, if n is large enough, then Cn 9 e −2nT 2 δ 2 /9(T +1) because # Γ = O(n 8 ).The conclusion follows.and it suffices to prove that because the estimate on P (−Y m −ε + δ) is obtained similarly.Let s > 0 to be chosen later.By the Markov inequality, we have Using and the independence of the Bernoulli variables X i (whose expectation is ε), we infer that Proof of Lemma 7. -Let (a 1 , . . ., a m ) ∈ Σ m be a point at which the continuous function F reaches its maximum over the compact set Σ m .Let (j, k) ∈ {1, . . ., m} 2 be such that j = k.We define the function α on R by α(u) = F (a 1 , . . ., a j + u, . . ., a k − u, . . ., a m ). Setting I j,k = [max(−a j , a k − c/m), min(a k , c/m − a j )], for every u ∈ I j,k (i.e., 0 a j + u c/m and 0 a k − u c/m, we have (a 1 , . . ., a j + u, . . ., a k − u, . . ., a m ) ∈ Σ m .Note that 0 ∈ I j,k and that, since (a 1 , . . ., a m ) is a maximizer of F , we have α(u) α(0) for every u ∈ I j,k .We have two possible cases: (i) a j = 0 or a j = c/m or a k = 0 or a k = c/m; (ii) 0 < a j < c/m and 0 < a m < c/m. In the case (ii), we must have α (0) = 0, and since, by computing this derivative, we have it follows that a j = a k . Since the pair (j, k) of distinct integers was arbitrary, we conclude that there exists λ ∈ (0, c/m) such that a j ∈ {0, λ, c/m} for every j ∈ {1, . . ., m}.Let J be the set of indices such that a j = λ for every j ∈ J. Denote by F J the restriction of F to the set of all (λ 1 , . . ., λ m ) ∈ Σ m such that λ i = a i for every i / ∈ J. Observing that F J is the product of separate variables positive strictly convex functions, its Hessian d 2 F J (a 1 , . . ., a m ) must be positive definite.But, by maximality of (a 1 , . . ., a m ) and since m i=1 a i = 1, d 2 F J (a 1 , . . ., a m ) has at most one positive eigenvalue.Therefore J contains at most one element. Note that f (a) 4 for every a ∈ (0, 1).Integrating two times this inequality and using that f (ε) = f (ε) = 0, we get f (a) 2δ 2 and therefore The lemma is proved. J.É.P. -M., 2022, tome 9 Appendix.Some properties of the functional T Recall that, given any T > 0 and any Lebesgue measurable subset ω of M , denoting by χ ω the characteristic function of ω, we have defined The functional T can be extended by replacing χ ω by any measurable function a on M .It can even be extended further: any geodesic ray γ ∈ Γ is the projection onto M of a geodesic curve on S * M , that is, γ(t) = π • ϕ t (z) for some z ∈ S * M .Here, we denote by (ϕ t ) t∈R the Riemannian geodesic flow, where, for every t ∈ R, ϕ t is a symplectomorphism on (T * M, ω) which preserves S * M .Now, given any bounded measurable function a on (S * M, µ L ) and given any T > 0, we define where a T (z) = 1 T T 0 a•ϕ t (z) dt and where the unit cotangent bundle S * M is endowed with the Liouville measure µ L .Note that T (a) = T (a • ϕ t ), i.e., T is invariant under the geodesic flow. 
It can also be noted that for a fixed the function T → T T (a) is superadditive.Of course, we recover the initial definition of T by pushforward to M under the canonical projection π : S * M → M : given any bounded measurable function f on (M, dx g ), we have that we simply denote by T (f ).When f = χ ω , we recover T (ω). Remark 3. -Setting a t = a • ϕ t , and assuming that a ∈ C ∞ (S * M ) is the principal symbol of a pseudo-differential operator A ∈ Ψ 0 (M ) (of order 0), that is, a = σ P (A), we have, by the Egorov theorem (see [10]), where σ P (•) is the principal symbol.Accordingly, we have a T = σ P (A T ) with We provide hereafter a microlocal interpretation of the functionals T and we give a relationship with the wave observability constant. Microlocal interpretation of T and of the wave observability Note that R f T = 1, i.e., equivalently, f T (0) = 1.We denote by X the Hamiltonian vector field on S * M of the geodesic flow (we have e tX = ϕ t for every t ∈ R), and we define the selfadjoint operator S = X/i.Using that a•e tX = (e tX ) * a = e tL X a = e itS a, we get Besides, setting A = Op(a) (where Op is a quantization), we have where P λ is the projection onto the eigenspace corresponding to the eigenvalue λ of √ , i.e., √ = λ∈Spec( √ ) λP λ .Restricting to half-waves, the wave observability constant is therefore given (see [7]) by Note that, for every y = λ P λ y = λ y λ φ λ ∈ L 2 (M ), where φ λ is an eigenfunction of norm 1 associated with λ, for every smooth function a on M (i.e., A is the operator of multiplication by a), we have and we thus recover the expression of C T (a) by series expansion (see [7]). Note also that, as said before, the principal symbol of A f T = A T (a) is Semi-continuity properties of T .-Note the obvious fact that if a and b are functions such that a b and for which the following quantities make sense, then T (a) T (b).In other words, the functional T is nondecreasing.If moreover χ ω h k then lim sup k→+∞ T (h k ) T (ω) T (h k ) and the result follows. Note that, in the above proof, we use the fact that h k (x) → χ ω (x) for every x.Almost everywhere convergence (in the Lebesgue sense) would not be enough. Remark 4. -We denote by d the geodesic distance on (M, g).It is interesting to note that, given any subset ω of M : • ω is open if and only if there exists a sequence of continuous functions h k on M satisfying 0 h k h k+1 χ ω for every k ∈ N * and converging pointwise to χ ω . • ω is closed if and only if there exists a sequence of continuous functions h k on M satisfying 0 χ ω h k+1 h k 1 for every k ∈ N * and converging pointwise to χ ω . Indeed, if ω is closed, then one can take h k (x) = max(0, 1 − k d(x, ω)).Conversely, since h k (x) = 1 for every x ∈ ω, by continuity of h k it follows that h k (x) = 1 for every x ∈ ω, and thus χ ω h k 1.Now take x ∈ ω ω.We have h k (x) = 1 and h k (x) → χ ω (x), hence χ ω (x) = 1 and therefore x ∈ ω.Hence ω is closed.Indeed, either γ(t) / ∈ ω and then χ ω (γ(t)) = 0 and the inequality is obviously satisfied, or γ(t) ∈ ω and then, using that ω is open, for k large enough we have γ k (t) ∈ U , where U ⊂ ω is a compact neighborhood of γ(t).Since h k is monotonically nondecreasing and χ ω is continuous on U , it follows from the Dini theorem that h k converges uniformly to χ ω on U , and then we infer that h k (γ k (t)) → 1 = χ ω (γ(t)).The claim is proved.Now, we infer from the Fatou lemma that and the equality follows. Figure 3 . Figure 3. M is pill-shaped and ω is the complement of the hatched area. 
Figure 4. Particular geodesics issued from O and meeting the square with bold boundary.

σ_P(A_{f_T}) = σ_P(A_T(a)) = a_{f_T} = ∫_R f_T(t) a ∘ e^{tX} dt = f_T(S) a, and thus T(a) = inf σ_P(A_T(a)).

Lemma 10. - Let ω be an open subset of M and let T > 0 be arbitrary. For every sequence of continuous functions h_k on M converging pointwise to χ_ω, satisfying moreover 0 ≤ h_k ≤ h_{k+1} ≤ χ_ω for every k ∈ N*, we have T(ω) = lim_{k→+∞} T(h_k) = sup_{k∈N*} T(h_k).

Proof. - Since h_k ≤ χ_ω, we have T(h_k) ≤ T(ω). By continuity of h_k and by compactness of geodesics, there exists a geodesic ray γ_k such that T(h_k) = (1/T) ∫_0^T h_k(γ_k(t)) dt. Again by compactness of geodesics, up to some subsequence γ_k converges to a ray γ in C^0([0, T], M). We claim that lim inf_{k→+∞} h_k(γ_k(t)) ≥ χ_ω(γ(t)) for all t ∈ [0, T].
12,444.6
2019-12-05T00:00:00.000
[ "Mathematics" ]
Intelligent Dumbbell Based on Multiple Sensors In recent years, with the economic development, people’s living standards have continuously improved, and obesity, suboptimal health, and other issues have continued to appear and have become the focus of people’s attention. To maintain a healthy body, more people are participating in fitness exercises. However, traditional fitness equipment can be boring and inflexible in its use and does not enable the recording of fitness data, making it difficult for people to monitor fitness data and manage their health information. The design of miniaturized intelligent equipment with comprehensive management will be of great significance in meeting the fitness needs of users and have a wide range of applications. As small fitness equipment, dumbbells are suitable for people of different ages, genders, and physical fitness levels. To increase the fun of dumbbell fitness and realize guidance in dumbbell fitness exercise, we have designed an intelligent dumbbell based on multiple-sensor information fused with wireless communication technology. The intelligent dumbbell can automatically recognize its weight through a pressure sensor and judge fitness exercise through an acceleration sensor. The information is then transmitted by a wireless transmission module to a mobile phone application (app) to realize the intelligent monitoring of dumbbell fitness. At the same time, the user can also set the fitness mode through the mobile phone app to guide the user in exercise and more systematically improve the user’s fitness. Introduction With the rapid development of society, the health problems of young people are becoming more serious due to the rapid pace of life, high work pressure, and lack of exercise. The sudden death of young people can often be seen in the news, which also reminds young people to exercise more. Indoor fitness equipment such as treadmills, spinning bikes, and fitness chairs have entered millions of households, and smart sports bracelets, smart running shoes, smart weight scales, and sports management apps have gradually become part of people's lives. The introduction of sports auxiliary facilities such as intelligent sports equipment and management software has greatly improved people's interest in sports. At the same time, we also found that although existing sports equipment can record the number of steps, the speed of the user, and other data, it is difficult to accurately monitor posture during exercise and give guidance accordingly. However, this also motivated us to design intelligent fitness products using sensors and their related materials and technology. Sports state recognition provides a technical basis for sports monitoring and providing sports status reminders. (1) Motion state recognition records some motion characteristics of people, infers people's movement state, provides people with exercise suggestions, and even guides people to make health plans. (2) At present, the research on motion state recognition is mainly based on the following two approaches: motion state recognition based on visual equipment and image processing and motion state recognition based on wearable sensors, such as motion sensors, wearable devices with gyroscopes, or other sensors. In the motion status recognition of pictures and videos, (3,4) a camera is used to take pictures and monitor people in real time, and then image and video analysis technology is used to identify people's activities. 
However, this type of technology is mainly used in the home monitoring of elderly people and children. Owing to the restrictions of the camera, the camera sensor can only monitor a limited area, which is more useful in identifying problems such as the fall of the person being monitored. (5) However, it is difficult to promote the use of long-term health monitoring and self-quantification of users. (2) In the recognition of wearable sensor motion status, motion status data are collected by attaching wearable sensor devices on people, including acceleration sensors, gyroscopes, and heart rate monitoring devices. Users only need to carry a set of equipment based on widely available sports state recognition technology to collect their own movement data. Mathie et al. (6) described a novel system for objectively and continuously monitoring movement by accelerometry. Biagetti et al. (7) presented a low-cost wearable wireless system specifically designed to acquire surface electromyography (sEMG) and accelerometer signals for monitoring human activity when performing sports and fitness activities, as well as in healthcare applications. Li (8) combined wearable technology, signal processing technology, and wireless communication technology by using acceleration sensors, biosensors, Bluetooth modules, smartphones, and back-end servers to build a "wearable health monitoring system based on recognition of human movement status" to monitor human movement status. On the basis of real-time recognition, alarms are issued for dangerous fall movements and abnormal physiological signals in different motion states. Han et al. (9) proposed a method of recognizing upper-limb motion gestures for a human-computer interface (HCI) using electronic textile sensors, which consist of a double-layered structure with complementary resistance characteristics. Huang et al. (10) designed a wearable wrist goniometer (WWG) composed of an Arduino Nano and two GY-521 accelerometer-gyroscopes. The WWG can get the carpal postures in six directions and measure actual movement angles. According to the above references, an acceleration sensor can identify movement state information, (11) and two acceleration sensors can achieve accurate monitoring of wrist movements. (9,10,12) Therefore, to solve the problem of inaccurate monitoring of sports equipment, we have designed an intelligent dumbbell as fitness equipment that incorporates film pressure sensor technology, acceleration sensor technology, a Bluetooth wireless communication module, and an Arduino MEGA 2560 control system. Dumbbells have the advantages of occupying a small space, are easy to use, and have fixed movements, making it easy to implement intelligent monitoring. To accurately monitor dumbbell fitness and increase the interest in fitness exercise, the dumbbells use two acceleration sensors to accurately determine the change in the exercise angle and a film pressure sensor (13) to identify the weight of each dumbbell, determine the intensity of the fitness exercise, and connect to the user's mobile phone via Bluetooth to achieve intelligent management of fitness. System Structure and Sensors To achieve accurate monitoring and the feedback of exercise information, the intelligent dumbbell can automatically determine its weight, actively identify the user's fitness exercise posture, record fitness data, and judge the fitness intensity on the basis of the exercise data to act as a fitness sports monitoring guide. 
At the same time, users can also set up a fitness mode through their mobile phones to carry out targeted training. The intelligent dumbbell system framework mainly includes seven parts: a film pressure sensor, an acceleration sensor module, an Arduino control board, a voice playback module, an LED display, a wireless transmission module, and a mobile phone app. A block diagram of the system is shown in Fig. 1. Arduino control system In order to realize the intelligent dumbbell, the motion data obtained through the acceleration sensor module needs to be processed by the control system. The controller used in the dumbbell is the Arduino MEGA 2560 microcontroller. The microcontroller is an 8-bit microcontroller with 54 digital inputs/outputs, of which 15 can be used as PWM outputs; 16 analog inputs, each with 10-bit resolution; and four UART interfaces to meet the needs of dumbbell status data processing and system control. (14,15) Thin-film pressure sensor As shown in Fig. 2, the thin-film pressure sensor selected for the dumbbell is a piezoresistive flexible pressure sensor. Through the voltage divider circuit in Fig. 2, we can obtain the voltage value of the pressure sensor, and then calculate the weight of the dumbbell. Accelerometer technology For the acceleration sensor, we choose the JY16 acceleration sensor module. This module uses a high-precision MPU6050 accelerometer. The processor reads the measurement data of the accelerator and combines the module's internal attitude solver to obtain the threedimensional acceleration, angular velocity, and angle. The single-chip microcomputer can read the measurement data through the serial port connection. The connection diagram is shown in Fig. 3 and the pin functions are shown in Table 1. To achieve accurate monitoring of dumbbell movements, we designed two acceleration sensors that are installed on both sides of the dumbbell to collect user movement information. By comparing and processing the information from the two accelerator sensors, we can calculate the posture data of the dumbbell, and then judge the user's movement state. Figure 4 shows the acceleration sensor data in the static state. Hardware After deciding the scheme and each part of the module, we designed the hardware circuit. Figure 5 shows the hardware connection of the intelligent dumbbell control system, which includes the above-mentioned module, a music player module, a sound and light alarm module, and buttons. Software The intelligent dumbbell control system includes two parts: the Arduino information acquisition control system and the mobile phone app. The Arduino information acquisition control system calculates the user's movement status through the dumbbell sensor data, and makes corresponding sound and light prompts, and then transmits this movement status information to the mobile phone. The mobile phone app can realize real-time data monitoring, exercise status judgment and management, and can also control the intelligent dumbbell to play music and give reminders. The flowchart of the microprocessor system is shown in Fig. 6. Data analysis Dumbbell exercises can be used for strength training and muscle compound training. Muscle training requires fixed standard movements and repeated training, so dumbbell exercises mainly include push, flexion and extension, pull-ups, and curls. 
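As a concrete illustration of the weight-recognition step described above, the following Python sketch converts a raw ADC reading of the voltage-divider output into an estimated dumbbell weight. It is not the authors' firmware: the supply voltage, the fixed divider resistance, the position of the sensor in the divider, the ADC resolution, and the linear conductance-versus-weight calibration constants are all assumed values used only for illustration.

```python
# Illustrative sketch (not the authors' code): converting a thin-film
# piezoresistive sensor reading, taken through a voltage-divider circuit,
# into an estimated dumbbell weight. Supply voltage, fixed resistor,
# ADC resolution, and calibration constants are assumed values.

VCC = 5.0          # assumed supply voltage of the divider (V)
R_FIXED = 10_000   # assumed fixed divider resistor (ohm)
ADC_MAX = 1023     # 10-bit ADC, as on the Arduino MEGA 2560

# Assumed calibration: sensor conductance (1/R) vs. applied weight is
# roughly linear over the working range; k and b come from calibration.
K_CAL = 2.5e-5     # conductance per kg (1/ohm/kg), assumed
B_CAL = 1.0e-5     # zero-load conductance offset (1/ohm), assumed


def adc_to_weight_kg(adc_value: int) -> float:
    """Estimate weight (kg) from a raw ADC reading of the divider output."""
    v_out = VCC * adc_value / ADC_MAX          # divider output voltage
    if v_out <= 0.0 or v_out >= VCC:
        return 0.0                             # open or saturated reading
    # Divider assumed as: v_out = VCC * R_FIXED / (R_FIXED + R_sensor)
    r_sensor = R_FIXED * (VCC - v_out) / v_out
    conductance = 1.0 / r_sensor
    return max(0.0, (conductance - B_CAL) / K_CAL)


if __name__ == "__main__":
    for raw in (120, 400, 800):
        print(raw, "->", round(adc_to_weight_kg(raw), 2), "kg")
```

In practice the calibration constants would be obtained by loading the dumbbell with known weights and fitting the readings, which is why they are flagged here as assumptions.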
To accurately recognize the attitude of the movement, after obtaining the data of the two acceleration sensors, we perform separate calculations to obtain the attitude information, fuse the two sets of data, and compare the characteristic data of different movement states to identify and determine the attitude of the movement. Figures 7-9 show the motion posture information acquired by the sensors under different motion states. As shown in the figures, the x-axis is acceleration data during motion, the y-axis is time, the data of the three axes of each acceleration sensor are displayed in different colors, and different motion states correspond to different acceleration data. Figures 7(a), 8(a), and 9(a) show the data of the three axes of the acceleration sensor on the left side of the dumbbell. Display and interaction The app interface (Fig. 10) includes Bluetooth connection, disconnection, start, stop, exercise time, exercise quantity, exercise mode selection, music selection, data recording, and exercise analysis. Through the mobile app, users can obtain dumbbell fitness information and make judgments on fitness activities, such as whether the amount of exercise is too much and whether the posture is standardized, etc., to achieve intelligent guidance of dumbbell fitness. Actual product As shown in Fig. 11, the control system is installed on the side of the dumbbell so that the user's movement information can be obtained more accurately. Results and Discussion The intelligent dumbbell can directly measure the pressure data through the pressure sensor and then calculate the dumbbell weight. Through the two symmetrical acceleration sensors, we can obtain the acceleration, angular velocity, and angle information of the acceleration sensors in different states, and the dumbbell status can be recognized by the judgment of the state information. Table 2 shows the attitude data of the dumbbell in the horizontal and vertical states, where Z is the yaw angle, X is the roll angle, and Y is the pitch angle. As shown in Table 2, through the two symmetrical acceleration sensors, we can calculate the angle change in dumbbell movement. We can identify the movement in the horizontal direction through the Z angle data and the movement in the vertical direction through the judgment of the Y angle data. After identifying the horizontal and vertical key data, combined with the acceleration information described in Sect. 4.1, the system can accurately give the number and status of dumbbell movements. This paper reports the first ever incorporation of posture recognition and gravity detection technologies into dumbbells, where the characteristics of dumbbell fitness are combined to achieve more detailed fitness monitoring and guidance. This has significance for the development of professional sports equipment design and precise motion-monitoring research. Conclusions Fitness exercise has become popular. However, it has been difficult for traditional fitness equipment to meet people's needs, and intelligent fitness equipment has become a trend. To realize an intelligent dumbbell, we combined a thin-film pressure sensor, an acceleration sensor, an Arduino controller, and Bluetooth wireless communication with a mobile phone app. Its functions include recording the time and frequency of dumbbell exercise, recognizing exercise status, and correcting exercise posture. The dumbbell can basically meet the needs of users' fitness.
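To make the angle-based judgment described above more concrete, here is a minimal Python sketch that fuses the yaw (Z) and pitch (Y) angle traces reported by the two symmetrically mounted sensors and labels a movement as horizontal or vertical. The 45-degree thresholds, the averaging-based fusion, and the data layout are illustrative assumptions, not the authors' published algorithm.

```python
# Illustrative sketch (assumed thresholds and data layout, not the authors'
# algorithm): deciding whether the dumbbell is moving in the horizontal or
# vertical direction from the yaw (Z) and pitch (Y) angles reported by the
# two symmetrically mounted attitude sensors.

from statistics import mean

PITCH_THRESHOLD_DEG = 45.0   # assumed: large pitch swing -> vertical movement
YAW_THRESHOLD_DEG = 45.0     # assumed: large yaw swing -> horizontal movement


def classify_motion(left_angles, right_angles):
    """left_angles / right_angles: lists of (yaw_z, roll_x, pitch_y) tuples
    sampled over one movement. Returns 'vertical', 'horizontal' or 'unknown'."""
    # Fuse the two sensors by averaging their angle traces sample by sample.
    yaw = [mean((l[0], r[0])) for l, r in zip(left_angles, right_angles)]
    pitch = [mean((l[2], r[2])) for l, r in zip(left_angles, right_angles)]

    yaw_range = max(yaw) - min(yaw)
    pitch_range = max(pitch) - min(pitch)

    if pitch_range > PITCH_THRESHOLD_DEG and pitch_range >= yaw_range:
        return "vertical"
    if yaw_range > YAW_THRESHOLD_DEG:
        return "horizontal"
    return "unknown"


if __name__ == "__main__":
    left = [(0, 0, 0), (2, 1, 40), (3, 1, 80), (2, 0, 35), (0, 0, 0)]
    right = [(1, 0, 0), (2, 1, 42), (4, 2, 78), (1, 0, 30), (1, 0, 2)]
    print(classify_motion(left, right))   # -> vertical
```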
In the future, we can replace the Arduino controller with other control modules according to requirements to achieve miniaturization and low power consumption, which is expected to be significant for the subsequent development of professional fitness equipment.
2,935.8
2020-06-10T00:00:00.000
[ "Computer Science" ]
Fluid structure interaction simulations and experimental validation of a pipeline immersed in liquid Abstract. In launch vehicles, there are a number of fluid systems to cater to various requirements like propellant filling, draining to/from tanks, venting of ullage gases, pressurization of the tanks, feed systems to convey propellants to the engine from tanks, lines to convey command gas to actuate various valves, etc. These pipe lines are subjected to environmental loads, i.e. vibration loads, when the launch vehicle starts its course to place the satellite into its designated orbit. It is essential to study the dynamic characteristics of these pipe lines in order to design them for vibration loads. The response of these pipelines depends on the fluid environment in which they are immersed. If a pipe line is immersed in liquid, its dynamic characteristics vary largely from those of a pipeline vibrating in the ambient environment (air). If the pipe line is immersed in fluid, its natural frequency reduces and its damping increases due to the added mass and the fluid viscosity, respectively. In order to study these effects, FSI studies are carried out on a pipe line immersed in water using ANSYS CFD and Mechanical software. Convergence studies with respect to time scale are carried out to benchmark the simulation procedure. This simulation procedure is validated by conducting experiments. 1. Introduction Launch vehicles generally consist of solid, liquid and cryogenic stages. Liquid and cryogenic stages use fuel and oxidizer for propulsion. Fuel and oxidizer are stored in two different propellant tanks. The gaseous medium required for actuation of valves and pressurization is stored in spherical gas bottles. There are a number of fluid systems to cater to various requirements like propellant filling, draining to/from tanks, venting of ullage gases, pressurization of the tanks, feed lines to convey propellants to the engine from tanks, lines to convey command gas to actuate various valves, etc. A fluid system consists of a set of pipelines of different diameters, their supporting brackets and other elements like valves. In stages, four surrounding environment conditions arise.
They are: (i) pipeline carrying ambient gas immersed in ambient air, (ii) pipeline carrying ambient gas immersed in liquid, (iii) pipeline carrying fluid immersed in fluid, and (iv) pipeline carrying fluid in ambient air. These pipelines are subjected to different environmental loads, i.e. vibration loads, when the launch vehicle starts its course to place the satellite into its designated orbit. It is essential to study the dynamic characteristics of these pipelines together with the surrounding environment conditions for their design. The dynamic characteristics vary with respect to the conditions in which the pipelines are immersed. The response of a pipeline immersed in a fluid environment varies largely from that of a pipeline vibrating in ambient air. This paper focuses on quantifying the effects of added mass and fluid viscosity on the natural frequency and damping of a pipeline immersed in water and subjected to vibration, through fluid structure interaction simulations. Simulation studies are carried out by sequential coupling of the fluid and structural domains. Convergence studies with respect to time scale are done in order to benchmark the simulation procedure. Experimental studies are also carried out to validate the simulation procedure. Literature Survey Amin Zare et al. used Euler-Bernoulli beam theory for the mathematical formulation for dynamic characterisation of pipelines subjected to fluid flow induced effects [1]. H. S. Simha et al. analyzed fluid conveying pipes with simply supported, cantilever and fixed boundary conditions [2]. Fluid structure interaction can be studied in two ways, one-way coupling and sequential coupling, where the latter gives more accurate results [3]. When a structure is immersed in a fluid environment, fluid structure interaction takes place, which alters the response of the structure [4]. Vipin Kumar et al. discussed different methods for solving the dynamic equations. The Ritz method was used to obtain dynamic equations for pipelines under different boundary conditions [5]. Added mass Generally, added mass is defined technically as a matrix which correlates the interaction of the mechanical structural elements through changes in fluid pressure. The expression for added mass for a perfect fluid at rest can be derived from the case of a simple harmonic oscillator. Added mass for a moving body inside an incompressible fluid does not depend on the viscosity [6]. The reduction in natural frequency of pipelines when immersed in an incompressible fluid is due to axial added mass coefficients caused by the external fluid. Numerically, the natural frequency can be obtained using the Galerkin method [7]. The liquid layers surrounding a structure impose an asymmetry, which produces a difference in natural frequency between vertical and horizontal polarizations. The added mass coefficient is evaluated from the ratio between the resultant force and the acceleration acting on the wall [8]. Damping There are various factors which govern the damping of pipe lines immersed in fluid. Fluid viscosity and flow velocity are the main factors which cause an increase in the damping of the system. The solution for this system is obtained using the spectral element method in the frequency domain [9]. When fluid is flowing around a structure, it induces an additional damping effect as a result of its viscosity. Depending on an initial condition, the added stiffness matrix and the external force are modeled together to obtain the viscous force [10]. The transfer matrix is a matrix that represents the motion of a single pipeline section.
The matrix incorporating the boundary conditions is called a point matrix. Both these matrices are combined to form the overall transfer matrix [11]. To study the effects of added mass and viscosity, fluid structure interaction simulations of a pipeline made of polyimide immersed in water are carried out using ANSYS Mechanical and ANSYS CFD codes [12]. Numerical Methodology Numerical simulations are done by CFD (computational fluid dynamics), which is the science of predicting fluid flow, heat transfer, mass transfer and chemical reactions. The pattern of fluid flow around a body immersed in liquid depends on the geometry of the body. For the FSI studies, fluid and structure models are prepared. The interaction between the flow and the structure is based on sequential coupling of both. Structural Domain The structural domain represents the structure, its walls and boundary conditions. Based on the basic equation of dynamic structural analysis for the structural domain, static and transient analyses have to be carried out. Fluid Domain The fluid domain represents the fluid medium and its boundary conditions. Both the fluid and solid domains have common boundaries where the fluid structure interaction takes place. The Arbitrary Lagrangian-Eulerian (ALE) method is used to study the dynamics of slowly varying waves in a moving medium. This method can be used for both linear and nonlinear systems. The momentum and continuity equations are also solved. Numerical Implementation and Mesh Statistics FSI studies are carried out on a polyimide pipeline immersed in water (Figure 1). Analysis Procedure Water and air contained in the tank are idealized as incompressible viscous fluids. Initial volume fractions have been set using a step function as specified by the free surface modeling of the Volume of Fluid (VOF) method. The liquid height is chosen as 0.6 m from the tank bottom. A coupled solver with appropriate convergence control parameters and different time step sizes is used for carrying out the convergence studies. The homogeneous Eulerian-Eulerian multiphase model available in ANSYS/CFD is used for obtaining the numerical solution, in which the time-dependent momentum (Reynolds Averaged Navier-Stokes, RANS) equations along with the continuity and volume fraction equations are solved. The initial pressure condition, set at the opening boundary condition, is P_initial = P_air + ρ_water × g × (0.6 − z) × vf_water. Coupled field analyses are those analyses in which the input of one analysis depends on the results obtained from another analysis. There are two ways of coupling the results. For the present FSI studies, sequential coupling is employed. For solving the sequentially coupled problem, the ANSYS Multifield Solver (MFX solver) is used to obtain robust and accurate solutions. It can also be used to solve complex and larger models. Results and Discussions Convergence studies are carried out for different time scales (Δt) with structural damping of 0%. The analysis was carried out for time steps of 0.005, 0.002, 0.001, 0.0002 and 0.0001 sec. Figure 2 gives the plot of damping obtained with respect to different pipe line tip displacements. From the figure, it is observed that damping increases with tip displacement. As the time step reduces, the damping reduces and converges to a similar result for two successive time steps. From Figure 2, it is observed that the result converges to a particular value for time steps of 0.0002 sec and 0.0001 sec. The damping of the pipeline is estimated by using the logarithmic decrement method.
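The logarithmic decrement method mentioned above can be summarized with a short Python sketch: the damping ratio is estimated from the decay of successive positive peaks of the free tip-displacement response. The peak amplitudes used in the example are made up for illustration only.

```python
# Minimal sketch of the logarithmic-decrement estimate used above to obtain
# the damping ratio from a decaying tip-displacement signal. The sample
# peak amplitudes below are assumed values for illustration.

import math


def damping_ratio_log_decrement(peaks):
    """Estimate the damping ratio from successive positive peak amplitudes
    of a free-decay response (lightly damped single-degree-of-freedom model)."""
    if len(peaks) < 2:
        raise ValueError("need at least two successive peaks")
    n = len(peaks) - 1
    # Logarithmic decrement averaged over n cycles.
    delta = math.log(peaks[0] / peaks[-1]) / n
    # Damping ratio from the decrement.
    return delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)


if __name__ == "__main__":
    tip_peaks_mm = [5.0, 4.1, 3.4, 2.8, 2.3]   # assumed decay of tip displacement
    zeta = damping_ratio_log_decrement(tip_peaks_mm)
    print(f"estimated damping ratio: {zeta:.3%}")
```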
Experimental studies are also carried out for different tip displacements. Figure 3 gives the typical test data without and with a filter (low pass filter of 25 Hz) for a particular pipe line tip displacement. Figure 3. Typical test data without and with filter. [13] Experiments are also carried out for the same pipe line in air in order to determine the structural damping. From the experimental tests, a damping ratio of 1% is obtained. This structural damping is included and the FSI simulations are repeated for the time scale of 0.0002 sec. The results obtained are compared with the experimental values and given in Figure 4. From Figure 4, it is concluded that the experimental and numerical damping values match fairly well. Thus the simulation procedure is validated. This can be taken as a benchmark criterion for obtaining convergence of the simulation procedure. Conclusion Fluid structure interaction studies are carried out using the ANSYS Multifield (MFX) solver for a polyimide pipe line immersed in water. It is found that the damping ratio increases with increase in the amplitude of the test specimen. Convergence with respect to time scale (Δt) is achieved for a time step of 0.0002 sec after several iterations without incorporating structural damping. The FSI simulations are repeated for the converged time step after including the experimentally determined structural damping. Experiments are also carried out on a similar pipe line immersed in water. The simulation results are compared with the experimental results and are found to match fairly well.
2,616.6
2019-11-01T00:00:00.000
[ "Engineering" ]
Machine Learning Algorithm for Delay Prediction in IoT and Tactile Internet : The next-generation cellular systems, including fifth-generation cellular systems (5G), are empowered with the recent advances in artificial intelligence (AI) and other recent paradigms. The internet of things (IoT) and the tactile internet are paradigms that can be empowered with AI solutions and integrated with 5G systems to deliver novel services that impact the future. Machine learning technologies (ML) can understand examples of nonlinearity from the environment and are suitable for network traffic prediction. Network traffic prediction is one of the most active research areas that integrates AI with information networks. Traffic prediction is an integral approach to ensure security, reliability, and quality of service (QoS) requirements. Nowadays, it can be used in various applications, such as network monitoring, resource management, congestion control, network bandwidth allocation, network intrusion detection, etc. This paper performs time series prediction for IoT and tactile internet delays, using the k -step-ahead prediction approach with nonlinear autoregressive with external input (NARX)-enabled recurrent neural network (RNN). The ML was trained with four different training functions: Bayesian regularization backpropagation (Trainbr), Levenberg–Marquardt backpropagation (Trainlm), conjugate gradient backpropagation with Fletcher–Reeves updates (Traincgf), and the resilient backpropagation algorithm (Trainrp). The accuracy of the predicted delay was measured using three functions based on ML: mean square error (MSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). Introduction Artificial intelligence (AI) is emerging as a critical ingredient needed to understand multiple datasets collected and make them commercially valuable. AI can support data analysis from the internet of things (IoT), where the system can perform tasks or improve intelligent information. Moreover, AI in IoT devices can identify data, adopt decisions, and operate on that information without user intervention [1]. Machine learning (ML) is the method that deals with developing algorithms that can learn from information and make predictions. Moreover, modern central processing unit (CPU) technology enables an effective implementation of AI algorithms. However, the traffic volume is increasing, and the heterogeneity of traffic is increasing. IoT and ultra-reliable, low-latency communications bring a variety of demands that require greater efficiency in quality of service (QoS)-related decisions, as current QoS technologies cannot achieve the desired level. Most provisions are necessary to predict load and delay for specific facilities, considering geographic location and dynamics, such as subscriber movement and incredible speeds. Operators also need an overall system that allows for predictions Predicting historical information refers to predicting the following values of the system from previous and current information. Typically, predefined computational modeling (or hybrid modeling) is used to achieve the expected data. A recurrent neural network (RNN) essentially has a memory that can predict time-dependent targets. RNNs can store the previously sensed state of the inputs to determine the future time step. Recently, many variations were introduced to adapt recurrent networks to a variety of domains. 
Nonlinear autoregressive with external input (NARX) is a powerful RNN used to solve the problem with nonlinear historical information. The widely used NARX model provides promising results for time series problems based on lagged input and output variables and prediction errors, as discussed in these studies. Unlike the conventional RNN, the NARX network provides optimal prediction performance for almost any nonlinear function with little or no computational losses. The NARX model [8][9][10]20,22,23] has been used for various applications in previous studies. There are several uses for the NARX network, which can be used for predicting future information. It can be used for nonlinear cleanup operations, where the desired result is noise-free input information. The use of the NARX network brings several essential features, namely, the representation of dynamic systems. ANN contains a feedback and delay mechanism that allows information to flow between the neurons of the different layers. Feedback is a way of storing a temporary memory that allows the network to retrieve old data. While delay provides direct sets of past data for the current period, response methods perform processing (filtering) of past data [8][9][10].
Accurate network traffic prediction enables efficient resource management that ensures appropriate QoS measures. Traffic prediction can be used for various applications, such as network monitoring, resource management, congestion control, network bandwidth allocation, network intrusion detection (e.g., anomaly detection), etc. With the evaluation of 5G networks, the traffic volume and network complexity have increased. Many network traffic prediction algorithms have been proposed, such as autoregressive integrated moving average (ARIMA), k-nearest neighbors (KNN), random forest, support vector machine (SVM), linear regression, etc. However, when we have a large amount of contaminated data, these methods do not seem to work [24,25]. Therefore, we use a k-step prediction approach with NARX-RNN for accurate network traffic prediction in this work. The motivations behind this study include the following: 1. Optimize the QoS requirements and network monitoring to manage resources and ensure security. 2. The NARX-RNN technique predicts time series data, remembers the historical data, and accurately estimates future time series data. It also has the advantage over other time series prediction approaches in that it serves to maximize the accuracy of the learning method over the training iterations. As more data are added to the model, the model becomes smarter and can better predict network traffic, which is significant for actual traffic information forecasting. 3. There is insufficient knowledge about the delays in IoT communication parameters. 4. There is a lack of accurate ML analysis to achieve reasonable prediction accuracy. 5. The challenging problems in optimizing QoS measures are computationally complex. 6. Monitor network availability and activity to detect security and operational issues. Due to the limitations of the available solutions, this work focuses on forecasting delays in IoT and tactile internet networks. Several application domains are delay aware and necessary (e.g., healthcare, security, and medical emergency). In these applications, e.g., intensive remote patient monitoring, a momentous case must be revealed to a healthcare agency within a specific period to determine appropriate steps. Moreover, this might lead to different amounts of information relying on the number and type of observations performed. To perform predictive modeling of delay, we use an ML-based technique, such as the NARX-RNN model with a k-step prediction approach. The key contributions of the proposed work are summarized as follows: • We proposed a k-step-ahead time series prediction approach with a NARX-enabled RNN for delay prediction in IoT and tactile internet. • The ML model was trained using four ML algorithms: Bayesian regularization backpropagation (Trainbr), Levenberg-Marquardt backpropagation (Trainlm), conjugate gradient backpropagation with Fletcher-Reeves updates (Traincgf), and resilient backpropagation algorithm (Trainrp). • The prediction accuracy was measured using three ML-based functions: mean square error (MSE) loss function, root mean square error (RMSE), and mean absolute percentage error (MAPE). • The above training algorithms were compared depending on the proposed time series prediction approach using RMSE and MAPE. • Finally, the results of the simulation-based tests demonstrate that: o The model trained with the Trainbr training algorithm outperforms its competitors and is the best predictive model in both 1-step prediction and 15-step prediction. 
o Moreover, the model trained with the algorithm Traingf outperforms its competitors for the 10-step prediction case. o On the other hand, the model trained with the algorithm Trainrp has poor prediction accuracy, compared to its competitors. The outline of the article is structured accordingly: the related literature review is presented in Section 2; machine learning for time series prediction is presented in Section 3; the problem formulation and system model are presented in Section 4; the performance evaluation is presented in Section 5; the simulation results are shown in Section 6; and the conclusions and directions for future work are presented in Section 7. Literature Review Recently, several researchers have focused on the time series prediction of wireless network traffic using ML methods and in 5G technology. This paper aims to predict a delay in IoT and tactile internet traffic using the ML approach, especially the k-step ahead prediction approach using NARX-enabled RNN. Therefore, in this section, we introduce the previous works that relate to our field of study. The author of [9] presented a method for time series prediction for IoT data streams with multi-step prediction (MSP), using NARX-RNN. He calculated the prediction performance using different loss functions, such as MSE, sum squared error (SSE), Mean absolute error (MAE), and MAPE. IoT delay prediction was also performed using a deep neural network (DNN): a multiparameter method in [13]. In [8], traffic prediction was performed using a NARX-enabled RNN based on a software-defined networking (SDN) infrastructure. The ANN training algorithms were trained using three ANN training algorithms: Trainlm, Traincgf, and Traincgp. The prediction accuracy, the MAPE, was measured. ML-based low latency bandwidth prediction was performed for H2M networks [14]. Ali R. Abdellah et al. [10] performed IoT delay prediction using the ML method based on the NARX recurrent neural network. The author proposed two methods for time series prediction: MSP and single-step prediction (SSP). The model ANN was trained with three ANN training algorithms: Traincgf, Trainlm, and Trainrp. The prediction performance was measured using RMSE and MAPE. The packet transmission delay prediction in an ad hoc network was analyzed using ANN [15]. White et al. [16] studied QoS prediction in IoT systems regarding transmission time and traffic capacity. In [17], the short-term prediction was discussed using different ANN methods. Research directions for further applications of ANN-based techniques were also proposed. The evaluation of a graph structure DL was studied for network traffic flow prediction with superior performance [18]. In research [19], the multilayer neural network (MLNN) was proposed for speed prediction by integrating the convolutional neural network (CNN) and gated recurrent units (GRU) to estimate the predicted speed performance with the network performance. Haviluddin et al. [22] studied the performance of time series modeling with the NARX network for network traffic prediction. Khedkar et al. [26] studied the problem of predicting IoT traffic using ML, DL, and statistical time series-based prediction technologies, including long short-term memory (LSTM), ARIMA, Vector autoregressive moving-average (VARMA), and feedforward neural networks (FFN). Li Run et al. [27] predicted subway traffic based on ARIMA and the gray prediction model, but this model has the drawback of being unable to capture rapid changes in traffic data. Alsolami et al. 
[28] gave an overview of techniques for traffic forecasting and examined the limitations of each technique. Additionally, they provided an overview of the different types of raw traffic data. Finally, they described a hybrid approach for traffic forecasting. The hybrid system is based on ARIMA and SVM techniques. Machine Learning for Time Series Prediction Predicting historical time sequences is an essential aspect of ML; it belongs to supervised learning approaches and is widely used in data science, applied in various domains. Several ML techniques, including regression, ANN, KNN, SVM, random forest, and XG-Boost, can be used to predict time series. ML-based forecasting models have found wide application in time series projects required by various organizations to facilitate predictive allocation of time and resources. ANN can help historical series prediction by eliminating the instantaneous need for extensive feature technology processes, data scaling procedures, and the need for stationarity and differentiation of historical series data. RNN is suitable for supervised learning tasks when data are available in a temporal sequence. It can remember the historical information to estimate future time-series data. The RNN algorithm is trained based on the previous data of the historical series into the input level. The network's connectivity is adapted depending on the difference between the actual and expected outputs over the network. Before configuring the network, the operator must determine the network hidden layers to size and the training termination process. In prediction, the past information is used to predict what will follow, and the following information is predicted, relying on what happened. The temporal sequence adds a temporal dependency between historical information. This dependency is under limitations and is structured to provide an additional source of information. Historical time sequence prediction is a method of predicting information about a historical series. It expects the following information by analyzing the past information if the future information is similar to the historical information. It can be applied in several use cases, such as resource allocation, network traffic, weather forecasting, control engineering, statistics, signal processing, and business planning. These are just a few of the many possible applications for time series forecasting. In real-world time series-i.e., forecasting weather, air quality, and network traffic flow are scenarios based on IoT devices, such as detectors-abnormal time form, missing data, high noise, and complicated correlations are multivariate and current limitations of classical prediction techniques. Such methods usually depend on noise-free and perfect information for good performance: missing data, outliers, and other erroneous features are generally not supported. Time series prediction starts with a historical time sequence, and experts investigate the historical information and temporal decomposition models, such as tendencies, seasonal models, periodic models, and symmetry. Various sectors, such as commerce, utilize historical time sequences prediction to assess potential technological complexity and customer needs. Temporal series data models can have many variations and perform various random processes [7][8][9][10][11][12][13][14][15][16][17][18][19][20][21]. NARX Neural Network NARX represents a nonlinear autoregressive network with external inputs. 
NARX are dynamical RNN, have return paths surrounding various network connections. NARX networks are based on the auto-regressive with exogenous input (ARX) time series models, commonly used for time series operations, and are considered a nonlinear form of the ARX model. NARX models can simulate various nonlinear dynamic methods; they have been used for multiple problems, including time series simulation. The NARX network uses prior measures of the existing historical time series to make predictions and the previous values of other inputs to make predictions for the target series. NARX is a robust tool suitable for nonlinear modeling systems. Moreover, NARX learns more efficiently than other neural network time series, using a gradient descent learning algorithm. NARX networks have been successfully used in many applications to predict future values of the input signal. NARX networks perform better on predictions, where the desired output depends on inputs that exist at absolutely past the time points. NARX is also used as a nonlinear filter whose training data are trained with a noiseless form of input information. When applied to the historical time sequences [20], potential values for the time sequence y(n + 1) are predicted from past values for that historical time sequence and the final measure for the historical time sequence u(n). The model is mathematically represented as follows: In the compact form, we can rewrite Equation (1) as follows. Here, y(n) ∈ R and u(n) ∈ R define the input and output parameters of the network at the time (n), respectively. Moreover, y(n) and y(n + 1) are the desired and predicted output elements, respectively. Accordingly, d y and d u are the time delays of the output and input variables, respectively (z −1 = time delay unit), and e(n) is the model error between the desired and predicted output [22,23,29,30]. Figure 2 illustrates NARX neural network architecture [22,23,30]. K-Step-Ahead Prediction The primary purpose of modeling is to predict the subsequent output values using the previous and current values of the estimation unit. This unit was first determined using test information, matching several models to the dataset based on historical values, from which the optimal one was selected. Then, a new dataset was used in the unit, and the forecasting performances were compared. It was analyzed how the model can adapt to the output pattern. In this study, many prediction models were compared, k-step-ahead prediction (one-step, …, k-steps), using four different training algorithms-Trainlm, Traincgf, Trainbr, and Trainrp-to investigate which is the optimal performance generated based on these algorithms. In general, time series forecasting determines the prediction of the observed data for the following time step. This is known as one-step prediction, open-loop, or SSP, as only a single time step can be predicted. One-step (n + 1) prediction is performed by passing the present and previous data points (n, n − 1,…, n − k). In this method, the current model is always replaced by updated input values. That is, the model is permanently changed depending on the adjusted parameter values. For multi-step predictions, the same model is used repeatedly to predict all output values. In the simplest case, one-step prediction models have the following form: where y(n + 1) is the predicted output; y(n) and u(n) are the observation of the target and exogenous inputs, respectively; n is the time step; k is the number of inputs (k-step prediction); and f(.) 
is the prediction function learned by the model. In multi-step or k-step prediction, the same model is used repeatedly to predict all output values. The following equation can describe the prediction model in this method: where y(n + k) is the predicted output, kx is the number of input delays (unit delay), and ky is the number of output delays. The multi-step-ahead method predicts the following values of a historical time sequence step by step. First, u(n + 1) is expected based on the previous x-values, u(n + 1 − x), …, u(n − 1), then u(n + 2) is predicted depending on the past x-values that contain the expected value for u(n + 1). The process repeats until the final value, u(n + k), is measured. There are some tasks involving time series where multiple time steps must also be predicted. In contrast to SSP, these are called MSP or multi-step time series prediction problems. MSP is an exciting forecasting method that uses an autoregressive model to form a closed loop in the forecast. It is a forecasting method that predicts potential measures in the historical time series based on past values.
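The recursive strategy just described can be sketched in a few lines of Python: a one-step model is applied repeatedly and each prediction is fed back as if it were an observed value, which is also why the error-accumulation effect mentioned in the text arises. The one-step model below is a toy stand-in with a hypothetical interface, not the NARX network used in this work.

```python
# Illustrative sketch of recursive k-step-ahead prediction with a one-step
# model whose predictions are fed back in place of observations (closed loop).
# The one-step model is a toy stand-in, not the authors' NARX network.

from typing import Callable, List, Sequence


def k_step_ahead(one_step: Callable[[Sequence[float], Sequence[float]], float],
                 y_hist: List[float],
                 u_hist: List[float],
                 u_future: Sequence[float],
                 k: int,
                 lags: int) -> List[float]:
    """Predict y(n+1), ..., y(n+k) by closing the loop around a one-step model.

    one_step(y_lags, u_lags) -> next y, where each argument holds the most
    recent `lags` values (newest last). u_future supplies the exogenous input
    for each future step. Prediction errors accumulate across steps.
    """
    y = list(y_hist)
    u = list(u_hist)
    out = []
    for step in range(k):
        y_next = one_step(y[-lags:], u[-lags:])
        out.append(y_next)
        y.append(y_next)            # feed prediction back (closed loop)
        u.append(u_future[step])    # exogenous input assumed known or planned
    return out


if __name__ == "__main__":
    # Toy one-step model: a fixed linear combination of the last two lags.
    toy = lambda yl, ul: 0.6 * yl[-1] + 0.3 * yl[-2] + 0.1 * ul[-1]
    print(k_step_ahead(toy, [1.0, 1.2, 1.1], [0.5, 0.4, 0.6],
                       u_future=[0.5, 0.5, 0.5], k=3, lags=2))
```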
The multi-step forecasting technique applies a stepwise forecasting model that uses the expected measures in the actual period to determine their values in the following step. Many temporal series problems involve predicting successive potential values, using only the previous values, such as multi-step time sequence prediction, which predicts the historical time sequence for many problems, such as wireless network traffic, energy consumed, etc. Aware of the series of subsequent values, we can extract fascinating features of the historical time sequence, such as expected bandwidth, delay, throughput, energy consumption, uncertainties, attack time, and higher or lower measures with unusual frequency. In particular, predicting the historical time sequence in multiple steps allows us to predict the cropping season for next year, the delay in a wireless network for the next hour, low and high weather temperatures for the coming months, etc. A standard method to solve these problems is to build a specific model from the past values of the time sequence and then apply it gradually to predict their following values. This method is called multi-step prediction. Since it uses the expected information from the previous one, it can be experimentally demonstrated that multi-step prediction is prone to rounding errors, i.e., errors made in the previous one are carried over to the following prediction [8][9][10]21,31]. Problem Formulation and System Model Recently, various ML algorithms were used for network traffic prediction, such as random forest [32], ARIMA [33], LSTM [7,11,12], etc. In this work, the NARX-RNN was proposed for IoT delay prediction as proven in many applications. Moreover, we used the k-step ahead prediction approach with NARX neural network for IoT traffic prediction. First, we built the IoT system to generate the dataset; we also modeled the IoT system using the AnyLogic simulator. Second, after collecting, analyzing, and processing the dataset, we used it as input to ANN for the prediction process. After loading the dataset as input for the network, the dataset was divided into two subsets in the columns Input (I) and Output (O) and then split into training, testing, and validation subsets accordingly. The normalization of the input data must be in the interval [−1, 1] compatible with the actual maximum or minimum values. The model ML was trained with four different training algorithms: Trainbr, Trainlm, Traincgf, and Trainrp. In addition, the prediction accuracy was measured using ML-based functions: MSE, RMSE, and MAPE. IoT System Modeling This section introduces the IoT model used to produce the dataset for the ML training. We built the IoT system, using the AnyLogic simulator [9,10]. Figure 3 depicts the structure of an IoT system for creating traffic data and modeling a set of IoT tools. The model contains an IoT traffic system that models the process of an IoT device or group of IoT devices, a traffic source for conventional communication facilities, and TI traffic referred to as H2H + TI (H2H-human to human, TI-tactile internet). The accomplished entry traffic moves to the connection node, and the architecture is introduced as a queuing system G/G/1/k with the service disciplines (with delay and failure basis system). The arrival time is denoted by t. The number of arrivals per time of IoT traffic is defined by λ IoT , H2H traffic λ H2H , total flow λ = λ H2H + λ IoT . 
Assume that with probability p, a message reaches the model entry in which all locations in the queue are busy, and an attack (such as DoS) occurs. The entire traffic flow at the outcome of the system is defined by λ. The characteristics of the two flows determine the whole traffic flow at the system's entry; therefore, it is generally different from the characteristics of normal traffic and IoT traffic. The above queuing system (QS) can be described as a G/G/1/k system. However, this system does not have accurate computational modeling to evaluate the probability of the packet loss and delayed arrival. Therefore, in [34,35], the diffusion approximation is used to assess the probability of packet loss, given the aware allocation criterion that describes the incoming and packet processing. ANN Training We used NARX-RNN to predict the IoT delay. For training NARX-RNN, we loaded the training data and used a tapped delay line (TDL) with two input and output delays. This work assumed that the output and input delays are the same, where d_y = d_u = d; also, there are two inputs to the NARX network, u(n) and y(n). We created a NARX network using the narxnet functionality. The network consists of three layers, with a hidden layer containing 20 hidden neurons. For training the network, we proposed four different training algorithms, Trainlm, Traincgf, Trainbr, and Trainrp, for the training tasks and then the preparets function for preparing the data. Finally, we performed the IoT delay prediction using the k-step-ahead prediction approach. The NARX network is trained in two methods: using an open-loop training architecture and a closed-loop architecture. Closed-loop networks produce a multi-step prediction. In other words, they continue to predict when inner returns lose outer returns (responses). The multi-step-ahead forecasting method is often helpful for simulating a network in an open-loop form where the output data are known and then transitioning to a closed-loop form, where the output is returned to the network input via the NARX network to implement multistage prediction, even though the feedback was just provided. First, all stages, except the k-time stages of the input sequence and the desired output sequence, are used to model the network in the open-loop architecture, as shown in Figure 4, to take advantage of the high precision provided by introducing the desired output sequence. Then, the network and its final stage are transferred to the closed-loop architecture, as shown in Figure 5, to make k-step predictions with only the k inputs.
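The NARX model above is built in MATLAB with narxnet and preparets; the following NumPy sketch only illustrates what the tapped delay line does to the data, turning the input and output series into lagged feature rows with a next-step target (here with two input and two output delays, matching the setting above). The linear least-squares fit is a stand-in for the neural network, and the generated series is synthetic.

```python
# Minimal NumPy illustration of the tapped-delay-line (TDL) idea: the input
# series u(n) and output series y(n) are turned into lagged feature rows
# [y(n), y(n-1), u(n), u(n-1)] with target y(n+1). A linear least-squares fit
# stands in for the MATLAB narxnet model; only the data preparation is shown.

import numpy as np


def make_tdl_dataset(u: np.ndarray, y: np.ndarray, delays: int = 2):
    """Return (X, t): rows of lagged y and u values and the next-step target."""
    rows, targets = [], []
    for n in range(delays - 1, len(y) - 1):
        lagged_y = y[n - delays + 1:n + 1][::-1]   # y(n), y(n-1), ...
        lagged_u = u[n - delays + 1:n + 1][::-1]   # u(n), u(n-1), ...
        rows.append(np.concatenate([lagged_y, lagged_u]))
        targets.append(y[n + 1])
    return np.asarray(rows), np.asarray(targets)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u = rng.uniform(0.0, 1.0, 200)              # stand-in exogenous input
    y = np.zeros(200)
    for n in range(1, 199):                     # toy delay-like output series
        y[n + 1] = 0.5 * y[n] + 0.2 * y[n - 1] + 0.3 * u[n] + 0.01 * rng.normal()

    X, t = make_tdl_dataset(u, y, delays=2)
    A = np.c_[X, np.ones(len(X))]               # add a bias column
    w, *_ = np.linalg.lstsq(A, t, rcond=None)   # linear stand-in for the ANN
    one_step_pred = A @ w
    print("training RMSE:", float(np.sqrt(np.mean((one_step_pred - t) ** 2))))
```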
ANN Training We used NARX-RNN to predict the IoT delay. For training NARX-RNN, we loaded the training data and used a tapped delay line (TDL) with two input and output delays. This work assumed that the output and input delays are the same, where d_y = d_u = d; also, there are two inputs to the NARX network, u(n) and y(n). We created a NARX network using the narxnet functionality. The network consists of three layers, with a hidden layer containing 20 hidden neurons. For training the network, we proposed four different training algorithms, Trainlm, Traincgf, Trainbr, and Trainrp, for the training tasks and then used the preparets function for preparing the data. Finally, we performed the IoT delay prediction using the k-step-ahead prediction approach. The NARX network is trained in two ways: using an open-loop training architecture and a closed-loop architecture. Closed-loop networks produce a multi-step prediction; in other words, they continue to predict when the external feedback (the measured responses) is replaced by internal feedback. For multi-step-ahead forecasting, it is often helpful to simulate the network in open-loop form, where the desired output data are known, and then transition to closed-loop form, where the output is returned to the network input, to carry out the multistage prediction once the true feedback is no longer provided. First, all stages, except the last k time stages of the input sequence and the desired output sequence, are used to train the network in the open-loop architecture, as shown in Figure 4, to take advantage of the high precision provided by supplying the desired output sequence. Then, the network and its final states are transferred to the closed-loop architecture, as shown in Figure 5, to make k-step predictions with only the k remaining inputs. The output is also an input, returned with a unit delay to the network within the usual NARX structure. Here, the network is simulated as a closed loop only. The ANN performs multiple predictions for the external input sequence and the initial terms, as the input sequences have known periods. Note that the "y" sequence is a response signal that is also the output (desired output). After closing the feedback path, the corresponding output is delivered to the corresponding input. The one-step prediction for multiple instances helps to obtain the next-time-step prediction. The prediction of the observed data is in the next time step, since only one time step can be predicted at a time. The one-step (n + 1) prediction is obtained by processing the actual and preceding information (n, n − 1, ..., n − k) and obtaining the expected output y(n + 1) (Algorithm 1). In many cases, such as decision making, it would be helpful to have a prediction of y(n + 1) before the actual y(n + 1) appears. The network can return its output one time step early by eliminating a delay, so that its minimum delay unit is 0 rather than 1. The resulting network produces a similar output to the primary network, but the output is shifted one step to the left, as shown in Figure 4.
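Since the excerpt reproduces neither Algorithm 1 nor the MATLAB code, the following Python sketch only illustrates the idea of the open-loop/closed-loop procedure described above: a NARX-style regressor is trained in open loop (teacher forcing with the measured y) and then run in closed loop, feeding its own predictions back, to produce a k-step-ahead forecast. The toy data, the two-delay choice, and the helper names are assumptions for illustration; the paper itself uses narxnet, preparets, and the Trainlm/Traincgf/Trainbr/Trainrp algorithms in MATLAB.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_regressors(u, y, d=2):
    """Open-loop (series-parallel) design matrix: [u(n-d..n-1), y(n-d..n-1)] -> y(n)."""
    X, T = [], []
    for n in range(d, len(y)):
        X.append(np.r_[u[n - d:n], y[n - d:n]])
        T.append(y[n])
    return np.array(X), np.array(T)

def k_step_forecast(model, u, y_hist, k, d=2):
    """Closed loop: feed predictions back in place of the measured y for k steps."""
    y_buf = list(y_hist[-d:])
    preds = []
    for n in range(k):
        u_lags = u[len(y_hist) + n - d: len(y_hist) + n]   # future exogenous inputs are known
        x = np.r_[u_lags, y_buf[-d:]]
        y_hat = model.predict(x.reshape(1, -1))[0]
        preds.append(y_hat)
        y_buf.append(y_hat)                                 # feedback of the prediction itself
    return np.array(preds)

# toy data: a delay series y driven by a traffic-intensity series u
rng = np.random.default_rng(0)
u = rng.random(300)
y = np.zeros(300)
for n in range(2, 300):
    y[n] = 0.6 * y[n - 1] + 0.3 * u[n - 1] + 0.05 * rng.standard_normal()

d, k = 2, 15
X, T = make_regressors(u[:-k], y[:-k], d)          # open-loop training on all but the last k steps
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0).fit(X, T)
print(k_step_forecast(model, u, y[:-k], k, d))     # 15-step-ahead closed-loop forecast
```

The closed-loop stage makes visible why multi-step prediction accumulates error: each y_hat is built on earlier y_hat values rather than on measurements.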
Performance Evaluation ML algorithms have proven successful in network traffic prediction. The prediction error quantifies how far a prediction deviates from the observation; a prediction method with no error would be optimal. Therefore, our goal of reducing the errors by adjusting the connection weights can be evaluated by how far the errors are minimized. In this work, we used three error functions, MSE, RMSE, and MAPE, to estimate the prediction accuracy. MSE (Equation (5)) is a loss function that measures the mean squared error, where the error is the difference between the expected and observed values: MSE = (1/n) ∑_{i=1}^{n} (y_i − ŷ_i)². (5) RMSE is equal to the square root of the MSE, as shown in Equation (6): RMSE = √[(1/n) ∑_{i=1}^{n} (y_i − ŷ_i)²]. (6) Moreover, MAPE measures the average absolute percentage error (Equation (7)): MAPE = (100%/n) ∑_{i=1}^{n} |y_i − ŷ_i| / |y_i|. (7) Here, n is the number of data points, y_i is the observed value, and ŷ_i is the predicted value. MAPE has two benefits: first, the absolute values prevent the positive and negative errors from canceling each other. Second, because the percentage error is independent of the scale of the measured variables, this measure can be used to compare predictive accuracy among time-sequence data of different magnitudes. Simulation Results In this paper, we used the MATLAB environment for simulating IoT delay prediction using k-step-ahead prediction with the time-series NARX-based RNN. The training data were produced from the IoT network; we modeled the IoT system using the AnyLogic simulator. Before the training phase, the collected data were analyzed and processed. After loading the dataset as input to the network, the dataset was divided into input (I) and output (O) columns and split into training, testing, and validation subsets accordingly. The input data were normalized to the interval [−1, 1] according to the actual maximum and minimum values. The prediction accuracy was measured using the error functions MSE, RMSE, and MAPE. The ML model was trained with four different training algorithms: Trainbr, Trainlm, Traincgf, and Trainrp. 1. Traincgf is often significantly faster than Traingda and Traingdx and sometimes more rapid than Trainrp, although results vary from problem to problem. Traincgf requires only slightly more memory than the simpler algorithms, so it is usually better suited for networks with a large number of weights. When using Traincgf, the loss function decreases fastest along the negative gradient, but this does not necessarily result in the fastest convergence. 2. Trainrp is usually faster than the standard steepest descent algorithm. It also has the special property that it requires only a small increase in memory: we only need to store the update values for each weight and bias, which is equivalent to storing the gradient. The purpose of the Trainrp training algorithm is to eliminate the harmful effects of the magnitudes of the partial derivatives. 3. Trainlm is a training function that adjusts the weights and biases according to Levenberg-Marquardt optimization. It is often the fastest method for training medium-sized networks, but it tends to be less efficient for training large networks.
It has better training performance than other algorithms in some applications. It is also very efficient, and its advantages are even more apparent in a MATLAB environment. The main drawback of Trainlm is that it requires the storage of some matrices, which can be very large for certain problems. 4. Trainbr is a training function that adjusts the weights and biases according to Levenberg-Marquardt optimization. It minimizes a combination of squared errors and weights and then determines the correct combination so as to achieve better generalization capability than early stopping; it has the best accuracy in training. Table 1 demonstrates the IoT traffic prediction accuracy in four cases corresponding to the Trainbr, Trainlm, Traincgf, and Trainrp training algorithms, using RMSE and MAPE. Table 1 shows the prediction accuracy for delays in IoT traffic using four different training functions with respect to the k-step prediction models, considering the MSE loss function as a performance measure. The prediction accuracy was measured in RMSE and MAPE to investigate which prediction model provides optimal accuracy and maximum average improvement. From the tabulated results, the Trainbr algorithm outperforms its competitors and has the best performance in both the 1-step and 15-step predictions. The maximum average improvement is 0.7325% and 6.6% in the two cases, respectively. However, the Trainrp algorithm has the lowest performance in both cases compared to the others. The performance of the Traincgf algorithm is almost equivalent to that of the Trainbr algorithm for the 15-step prediction, with an RMSE of 1.970 and a MAPE value of 0.1580%. Thus, the maximum average improvement, in this case, is 6.5%. Moreover, the Trainlm algorithm has reasonably equivalent accuracy to the Trainbr algorithm for the one-step prediction case, with an RMSE of 0.0672 and a MAPE of 0.0851%; the maximum average improvement, in this case, is 0.7%. Moreover, the Traincgf algorithm outperforms its competitors in the 10-step prediction, and the maximum improvement is 6.6%. Furthermore, the Trainlm and Trainbr algorithms have approximately the same performance as the Traincgf algorithm; the maximum improvement, in this case, is 6.5% and 6.4%, respectively. On the other hand, the Trainrp algorithm also performs poorly in this case and has the lowest prediction accuracy compared to its competitors. Figures 6-9 show the prediction models with the above training algorithms with respect to the k-step prediction model used. As can be seen in Figure 6, for the k-step prediction models obtained with Trainbr, the prediction models in the case of 1-step and 15-step prediction are identical to the observed model. However, the prediction model in the case of 10-step prediction deviates slightly from the observed model. Moreover, in all cases, we found that the resulting model increases gradually with time. As illustrated in Figure 7, the prediction models in the Traincgf case are similar to the observed models. In all cases, we found that the expected delay increases with time until time 15, which provides the best accuracy. In Figure 8, it can be seen that when the Trainlm algorithm is used, the resulting predicted models deviate from the observed model in the case of 15-step prediction; in the case of 10-step prediction, there is a slight deviation from the observed model, but in the case of 1-step prediction, the predicted model is identical to the observed model.
As shown in Figure 9, the prediction model based on the Trainrp algorithm deviates significantly from the observed model in both the 15-step and 10-step predictions, and it also deviates slightly from the observed model in the one-step prediction.
The result of the ANN training performance is shown in Figures 10-13. The figures indicate the relationship between the MSE (loss) and the number of epochs during network training; the training is considered successful due to the low errors in the training, validation, and testing curves. The error generally decreases over more training epochs but may rise in the validation dataset when the network overfits the training data. By default, training terminates after six sequential increases in the validation error, and the best performance is taken from the epoch with the minimum validation error. Figure 11. The best validation performance in the case of using the Traincgf algorithm. As shown in Figure 10, when the Trainlm algorithm is used, the error decreases over more training epochs. The best validation performance is 0.0053946 at the 3rd iteration of the training network. Nevertheless, the error in the validation dataset may increase when the network begins to overfit the training process. The training is terminated after six consecutive increases in the validation error. In Figure 11, the model was trained using the Traincgf algorithm, where the best validation performance is 0.025272 at the 23rd iteration. The error then starts to rise in the validation set as the network begins to overfit the training process, and the training again terminates after six sequential increases in the validation error. On the other hand, Figure 12 shows that the Trainrp training algorithm has the best validation performance of 0.15745 at the 14th iteration. There is a slight rise in the validation error as the network overfits the training process until the training ends; after six consecutive increases in the validation error, the training is also terminated. Figure 13 shows that the best training performance is 0.060604 at the 105th iteration when Trainbr is used. It is noticeable that the Trainbr curve does not have a validation curve because the validation stop is disabled by default: validation essentially acts as a form of regularization, and Trainbr has its own built-in regularization.
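The validation-stop rule described above can be summarized in a few lines of generic pseudologic. This Python sketch is an illustration, not MATLAB's internal implementation; the train_epoch and validate callables are hypothetical placeholders for one epoch of weight updates and for the validation-set MSE.

```python
def train_with_validation_stop(train_epoch, validate, max_epochs=1000, max_fail=6):
    """Stop after max_fail consecutive non-improvements; keep the best-validated weights."""
    best_err, best_state, fails = float("inf"), None, 0
    for _ in range(max_epochs):
        state = train_epoch()              # one epoch of weight updates, returns the weights
        val_err = validate(state)          # error on the validation subset
        if val_err < best_err:
            best_err, best_state, fails = val_err, state, 0
        else:
            fails += 1                     # validation error did not improve
            if fails >= max_fail:          # six sequential increases -> stop training
                break
    return best_state, best_err

# toy demonstration: validation error falls, then rises until the stop triggers
errs = iter([0.9, 0.5, 0.3, 0.31, 0.32, 0.33, 0.34, 0.35, 0.36, 0.4])
state, best = train_with_validation_stop(lambda: "weights", lambda s: next(errs))
print(best)  # 0.3, taken at the epoch with the minimum validation error
```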
Conclusions This paper proposes ML methods for delay prediction in IoT and tactile internet networks, using the k-step prediction approach with the NARX-enabled RNN technique. The ANN was trained using four different algorithms, Trainbr, Traincgf, Trainlm, and Trainrp, considering the MSE loss function as a performance measure to investigate which prediction model provides optimal accuracy and maximum average improvement. The prediction accuracy was measured in terms of RMSE and MAPE. The results show that the model predicted by the Trainbr training algorithm outperforms its competitors and has the best prediction accuracy for both 1-step prediction and 15-step prediction. Moreover, the model trained with the Traincgf algorithm outperforms its competitors for the case of 10-step prediction. On the other hand, the model predicted by the Trainrp algorithm has poor prediction accuracy compared to the others. For future work, the authors suggest the following research plans.
• The development of algorithms that can consider all the dynamic parameters of the IoT environment and more accurately predict upcoming traffic.
• The development of deep learning algorithms to predict and study the performance based on LSTM, Bi-LSTM, GRU, stacked autoencoder (SAE), and simple recurrent unit (SRU) using the loss functions cross-entropy, MSE, MAE, and SSE.
• The development of deep learning based on robust loss functions using robust statistical estimators, such as Cauchy, Huber, and Fair, in the presence of outliers (anomalies).
• The development of deep reinforcement learning for network prediction and security.
10,946.6
2021-11-26T00:00:00.000
[ "Computer Science", "Engineering" ]
Investigating the Attitudes of Adolescents and Young Adults Towards JUUL: Computational Study Using Twitter Data Background Increases in electronic nicotine delivery system (ENDS) use among high school students from 2017 to 2019 appear to be associated with the increasing popularity of the ENDS device JUUL. Objective We employed a content analysis approach in conjunction with natural language processing methods using Twitter data to understand salient themes regarding JUUL use on Twitter, sentiment towards JUUL, and underage JUUL use. Methods Between July 2018 and August 2019, 11,556 unique tweets containing a JUUL-related keyword were collected. We manually annotated 4000 tweets for JUUL-related themes of use and sentiment. We used 3 machine learning algorithms to classify positive and negative JUUL sentiments as well as underage JUUL mentions. Results Of the annotated tweets, 78.80% (3152/4000) contained a specific mention of JUUL. Only 1.43% (45/3152) of tweets mentioned using JUUL as a method of smoking cessation, and only 6.85% (216/3152) of tweets mentioned the potential health effects of JUUL use. Of the machine learning methods used, the random forest classifier was the best performing algorithm among all 3 classification tasks (ie, positive sentiment, negative sentiment, and underage JUUL mentions). Conclusions Our findings suggest that a vast majority of Twitter users are not using JUUL to aid in smoking cessation nor do they mention the potential health benefits or detriments of JUUL use. Using machine learning algorithms to identify tweets containing underage JUUL mentions can support the timely surveillance of JUUL habits and opinions, further assisting youth-targeted public health intervention strategies. Background Although the overall use of any tobacco product among high school students decreased from 24.2% in 2011 to 19.6% in 2017 [1], overall use increased to 27.1% in 2018 [2] and further to 31.2% in 2019. This increase was primarily influenced by the use of electronic nicotine delivery systems (ENDS). Current use of ENDS among high school students increased from approximately 1.5% in 2011 [1] to approximately 27.5% in 2019 [3]. This rise in ENDS usage appears to be associated with the increasing popularity of the brand JUUL, a compact pod mod device with a disposable or refillable pod typically containing artificial flavors, nicotine salts, and either vegetable glycerin or propylene glycol and whose sales represented 76% of the ENDS market at the end of 2018 [4]. JUUL's popularity stems from 3 main features of the product: appearance, flavors, and nicotine delivery [5,6]. JUUL's sleek "USB-like" design has assisted in the normalization of public ENDS usage and serves to facilitate inconspicuous use in smoking-prohibited areas such as schools and other public places [7]. JUUL was previously available in a variety of youth-appealing flavors, including but not limited to mango, mint, Crème brûlée, and menthol [8]. As of October 2019, JUUL Labs had removed all flavors except for the classic tobacco, Virginia tobacco, and menthol flavors in an attempt to address concerns regarding the appeal of the product to underage users [9]. Where the nicotine concentrations of combustible tobacco products range from 1.5% to 2.5% by weight [10,11], nicotine concentrations in JUUL pods range from 3% (35 mg/mL) to 5% (59 mg/mL) by weight. 
Although JUUL pods contain a fraction of the total nicotine that a pack of cigarettes does, JUUL users absorb roughly the same amount of nicotine in a single pod as a pack of cigarettes [12]. This suggests that nicotine is being absorbed more efficiently through JUUL pods than through combustible cigarettes -likely a result of cigarette nicotine being combusted into sidestream smoke and JUUL pods' nicotinic formulation [13]. JUUL pods contain a protonated form of nicotine known as nicotine salts [14], of which the absorption resembles freebase nicotine seen in cigarettes [15,16] but has a smoother feel when inhaled and does not taste as bitter [13,17]. A recent study on youth awareness of JUUL's nicotine strength demonstrated that 37.4% of adolescents believed JUUL to contain low or medium nicotine strength and 31.4% were unaware of the nicotine strength [18]. These findings suggest that adolescents are unaware of the relatively high nicotine content in a single JUUL pod. Additional research has documented the emergence of JUUL-compatible pods, some containing nicotine concentrations as high as 6.5% [13]. With approximately 90% of adult daily ever smokers beginning before 18 years of age [19] and a lack of public understanding regarding JUUL's highly concentrated nicotine levels [20], it has been hypothesized that JUUL poses a risk to younger populations for developing nicotine dependency [21,22]. Consequently, nicotine dependency developed in adolescence may result in addiction and potentially a later transition to traditional combustible cigarettes [23]. With the ENDS market rapidly changing in terms of products and patterns of use (ie, pod mods, box mods, vape pens), there are crucial knowledge gaps in understanding underage ENDS use and its consequences [24]. Studies of JUUL Use Using Social Media Free and publicly available data obtained from Twitter can provide insight into public perceptions and knowledge of health behaviors. As reported in 2018 and 2019 Pew Research Center surveys, 32% of teenagers between the ages of 13 and 17 years [25] and 44% of adults between the ages of 18 and 24 years [26] use Twitter. Given this age distribution, the platform serves as a promising source of data for understanding adolescent and young adult JUUL use. Previous studies that have utilized Twitter data on JUUL have identified a number of experiences and insights into the product and its users such as the use of JUUL in prohibited environments (eg, schools) [27], the acquisition of JUUL devices and JUUL pods [28], and the correlation between JUUL mentions on Twitter and JUUL sales [29]. In addition to these studies, there is a growing body of work assessing how JUUL is promoted and used by underage individuals on various social media platforms. Not only does the literature suggest a heavy presence of youth JUUL-related content [30], but younger users are also sharing their opinions and experiences with other users and are talking about the various aspects associated with JUUL use [31][32][33]. However, a large-scale analysis of JUUL-related tweets that utilizes computational methods has, to the best of our knowledge, not been conducted to understand underage patterns of use and perceptions towards JUUL. Using machine learning algorithms to classify tweets allows for the automatic categorization of tweets and eliminates the time-consuming and resource-consuming burden that comes with the labor-intensive manual annotation process. 
While the application of machine learning to tweets has shown promise in several public health subdisciplines [34,35], these methods are greatly underutilized in ENDS research. Objectives Our primary objective was to further understand salient themes and topics related to JUUL use on Twitter with particular foci on underage JUUL use and health perceptions. Our secondary objective was to use natural language processing (NLP) methods to develop machine learning-based classifiers capable of automatically identifying and evaluating underage-related JUUL mentions as well as positive and negative sentiments towards JUUL. In doing so, we hoped to provide optimally performing classifiers to be further validated and applied to additional work relating to underage JUUL use and its representation on Twitter. Data Collection Using the free Twitter application programming interface (API) [36], we collected a sample of 28,590 tweets from July 2018 to August 2019. To query the Twitter API, appropriate JUUL-related keywords were determined with the aid of a tobacco control researcher (SZ). We used the case-insensitive keywords JUUL, Phix, Sourin, myblu, Aspire Breeze, vaping pod, pod mod, and vape pod, as these terms are all common to pod mod ENDS devices. As we were primarily interested in the organic perspective of individuals regarding JUUL use, we removed all retweets from the dataset. After retweet removal, our dataset was comprised of 11,556 unique English language tweets. Ethical Considerations This study was determined to be exempt from review by the University of Utah Institutional Review Board (IRB#00076188). To protect user privacy, we refrained from including usernames in this paper. Further, all quotations used are synthesized from multiple examples. Manual Twitter Content Analysis To analyze the various themes of our collected tweets, we carried out a manual annotation process in which we categorized each tweet according to its content. We used the classification scheme developed by Myslin et al [34] for emerging tobacco product Twitter surveillance as a starting point, modifying the classification categories to more appropriately reflect our scope of interest in JUUL. We initially included 39 categories to code for tweet relevancy (ie, whether the tweet was JUUL-related), type, content, and sentiment. At this point, an initial annotation coding round was carried out on 200 tweets to determine the interrater agreement between 2 annotators (RB and MC) and refine the annotation scheme. With consensus among annotators, categories deemed extraneous and irrelevant to our analysis of JUUL (eg, hookah) were excluded from the annotation scheme. Additionally, categories deemed too specific were consolidated with closely related categories. For instance, the separate categories "Industry" and "Policy" were combined to form a singular "Industry and Regulation" category. The final annotation scheme was comprised of 22 categories related to themes of JUUL use, its perceptions among users, and an "Unrelated" category. Our final annotation scheme is available in Multimedia Appendix 1, and synthetic examples of these annotation categories are presented in Figure 1. In an attempt to limit our analysis to JUUL use exclusively, tweets that contained keywords other than JUUL were annotated as "Unrelated" unless the tweet also contained the keyword "JUUL." 
Further, we restricted the underage label to those tweets that contained explicit contextual evidence regarding underage elements (eg, "My parents still don't know I JUUL at school," "FDA warns of JUUL use in high school," "For my 16 th birthday, I want mango JUUL pods"). Once the interrater agreement exceeded an acceptable Cohen kappa level [37] (ie, >0.7 [38]), the remaining manual annotation process was carried out by one annotator (RB). Excluding the tweets used for interrater agreement, a total of 4000 tweets were annotated during the manual annotation to ensure there was a sufficient number of tweets for training the machine learning classifiers. Data Preprocessing Using the Natural Language Toolkit (NLTK) [39] -a widely used Python toolkit for analyzing text data -our manually annotated tweets were tokenized using the TweetTokenizer tool. This tool splits characters into individual tokens while also removing punctuation, @ characters, and other extraneous characters. TweetTokenizer is also capable of handling and tokenizing emojis and emoticons. Since these characters are often used in modern text when conveying emotion and sentiment, they are imperative in understanding tweet content. Consequently, we retained emojis and emoticons in the tweets, and they were tokenized as if they were words themselves. All tokens were then converted into n-gram text sequences. An n-gram (ie, unigram, bigram, trigram) is a contiguous sequence of n features used in NLP to transform raw text into features that can be readily processed by a machine learning algorithm ( Figure 2). Figure 2. Visualization of n-grams. n-grams can be described as a sequence of n-items, can encode additional semantic content beyond individual words, and once vectorized, can be used as features in machine learning algorithms. Machine Learning Classification In an attempt to automatically classify JUUL related tweets, we applied supervised machine learning algorithms to identify tweets related to underage JUUL use, positive sentiment, and negative sentiment. The goal of this machine learning-based approach was to identify a predictive function of the data in which unseen data can be accurately classified as containing either underage JUUL use, positive sentiment, or negative sentiment. The efficient and automatic classification of JUUL-related tweets provides a snapshot into the perceptions and use patterns of JUUL and the potential to scale up the analysis beyond what can be realistically performed by manual annotation alone. The algorithms we used for classification were a logistic regression, Bernoulli naïve Bayes, and random forest classifier. Descriptions of the 3 classification algorithms are available in Figure 3. These models were selected because of their computational simplicity and efficiency in Twitter-based classification tasks [34,[40][41][42]. The input of each classifier consisted of the most salient features determined by feature selection (ie, a process in which the essential terms for model performance are identified automatically, with the rest being discarded). This feature selection was carried out using Sci-Kit Learn (sklearn) [43], another Python toolkit that is frequently used for text analysis. The tool SelectKBest was used to compare chi-square statistics for each feature and retain the most discerning features of the dataset. In addition to reducing the chance of overfitting the models, feature selection improves model performance due to the removal of features deemed irrelevant. 
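As a concrete illustration of the preprocessing and feature-selection steps just described, the following Python sketch chains TweetTokenizer, n-gram vectorization, chi-square SelectKBest, and a classifier. It is a hedged sketch rather than the study's exact configuration: the study retained the 500 most relevant features, while a much smaller k and a handful of toy tweets are used here so the example runs; the tokenizer settings and classifier parameters are assumptions.

```python
from nltk.tokenize import TweetTokenizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline

tok = TweetTokenizer(strip_handles=True, reduce_len=True)  # emojis survive as tokens

pipeline = Pipeline([
    ("ngrams", CountVectorizer(tokenizer=tok.tokenize, ngram_range=(1, 3))),
    ("select", SelectKBest(chi2, k=20)),                   # keep the most discerning n-grams
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])

# toy labels (1 = underage-related, 0 = not); the real models were trained on the
# manually annotated tweets
tweets = [
    "my parents still don't know i juul at school",
    "quit smoking thanks to my juul",
    "for my 16th birthday i want mango juul pods",
    "the juul flavor ban is all over the news today",
]
labels = [1, 0, 1, 0]
pipeline.fit(tweets, labels)
print(pipeline.predict(["sneaking my juul into class again"]))
```

Bundling the steps in a Pipeline has the practical benefit that the same chain can later be handed to a hyperparameter search, so that tokenization, feature selection, and classification are tuned and cross-validated together.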
Once a range of suitable features had been selected, the hyperparameters for each algorithm were optimized. This hyperparameter optimization was carried out with sklearn's GridSearchCV tool, which iterates through specified model parameters and determines the optimally performing model using 10-fold cross-validation. Finally, we applied the optimally performing model to the remaining unannotated tweets. The following 4 metrics were used to evaluate the performance of the various models: accuracy, precision (positive predictive value), recall (sensitivity), and F1 score (the harmonic mean of precision and recall). These metrics are standard in NLP and reflect a classifier's ability to classify the task at hand effectively [44,45]. Our goal was to develop classifiers capable of performing well across all 4 metrics, and all 4 metrics were considered when evaluating overall performance. Manual Twitter Content Analysis Of the 4000 tweets analyzed during the annotation process, 3152 (78.80%) were relevant to JUUL and explicitly mentioned JUUL or JUUL-related accessories such as JUUL pods and chargers. Of the relevant tweets, the most prevalent category was first person usage or experience (1792/3152, 56.85%). The least prevalent categories were using JUUL as a cessation method (45/3152, 1.43%) and using JUUL for the first time (see Table 1 for the proportions and frequencies obtained in the manual annotation). Machine Learning Classification of Underage JUUL Mentions and Sentiment Using supervised machine learning algorithms, we created models to classify underage JUUL mentions and sentiment towards JUUL among Twitter users. To evaluate the different models, we compared the test metrics for all 3 algorithms using the 500 most relevant features for each model (Table 2). In all 3 classification tasks, the random forest model outperformed the logistic regression and Bernoulli naïve Bayes models. When classifying tweets related to underage usage of JUUL, the random forest model yielded a higher accuracy (99% accuracy) when compared to the logistic regression model (94% accuracy) and substantially higher accuracy than the Bernoulli naïve Bayes model (78% accuracy; Figure 4). When comparing the models' performance for classifying positive and negative tweet sentiment, the random forest model performed considerably better (82% and 91% accuracy, respectively) than the logistic regression model (72% and 78% accuracy, respectively) and the Bernoulli naïve Bayes model (69% and 62% accuracy, respectively). When applying our random forest classifier to additional unseen data (7356 unannotated tweets), our model classified 109 of 7356 tweets as underage-related (1.48%). This proportion is lower than that of the tweets classified as underage-related during the manual annotation process (190/3152, 6.03%), perhaps due to the presence of previously unseen terms related to underage JUUL use. Principal Findings In addition to supporting previous JUUL research using Twitter [27-29], our findings identified critical factors in the understanding and usage of JUUL among Twitter users. In our study, only 1.43% (45/3152) of annotated tweets mentioned using JUUL as a method of smoking cessation. This finding seems incongruent with JUUL's stated mission of improving the lives of smokers by eliminating combustible cigarette use and replacing it with the purportedly less harmful JUUL product [46].
This observation is also inconsistent with the results of a 2019 survey reporting that around 20% of individuals aged 18-24 years initiated JUUL use in an attempt to quit combustible tobacco [47]. Additional research has suggested that youth not only appear to be experimenting with JUUL but are also habitually using the device [48]. Such results, in addition to our findings, suggest that Twitter may be seen as a method of obtaining information to facilitate JUUL use and procurement among youth. Additionally, only 6.85% (216/3152) of our annotated tweets mention the potential health benefits or detriments of using JUUL, a result consistent with that found by Morean et al [18], which poses the question of whether JUUL users recognize the known effects of high-level nicotine exposure and the potential for developing nicotine dependency and subsequent nicotine addiction. While the long-term effects of JUUL use are yet to be ascertained, there is evidence to support the view that adolescent nicotine exposure may play a significant role in the detrimental alteration of neurochemical, structural, cognitive, and behavioral processes [49]. After removing underage tweets that contained news and media related content, 47% (56/118) of the remaining underage tweets mentioned first-person experiences with JUUL, with 21% (12/56) of those tweets mentioning JUUL pods and flavors, findings consistent with previous literature [28]. Moreover, of those underage first-person mentions, 32% (18/56) contained positive sentiment (eg, "I love my JUUL so much"), compared to 23% (13/56) containing negative sentiment (eg, "Juul is so disgusting"), a finding that we expected due to the popularity of the pod mod device among youth as compared to other ENDS devices [50]. Although a majority of the tweets that we annotated contained a neutral sentiment towards JUUL (1416/3152, 44.92%), overall tweets contained a more positive sentiment (1052/3152, 33.37%) than negative sentiment (683/3152, 21.67%). And with nearly 20% (586/3152, 18.59%) of the JUUL-related tweets mentioning JUUL pods or flavors, Twitter appears to be regularly used for sharing opinions on various JUUL accessories such as pods or flavors as well as a means to gather information regarding the procurement of such accessories. At face value, it appears that Twitter may be used by individuals to share information about JUUL, thus facilitating its use; additional qualitative research would be necessary to understand the level of exposure of individuals to this content. This finding also suggests the potential for educational campaigns employing Twitter to inform the public about JUUL use, as noted in prior work [16]. Of all the machine learning models we developed, our random forest model performed best in all 3 classification tasks. The performance of the random forest can be primarily attributed to the nature of the algorithm itself. Because a random forest is an ensemble of decision trees containing random subsets of the input features, this algorithm is resilient to outlier data, and the final classification is based on the "majority vote" of the constituent decision trees [51]. Additionally, the random forest's relatively easy implementation and computational simplicity make it a viable candidate for tobacco control researchers to use in Twitter-based ENDS surveillance. Limitations Our work has some limitations to be considered.
First, our data were obtained via the free 1% Twitter API using keyword search rather than the entire Twitter "firehose" dataset; therefore, there is the possibility that not all JUUL-related tweets in the study period were collected. Additionally, our list of keywords (JUUL, Phix, Sourin, myblu, Aspire Breeze, vaping pod, pod mod, and vape pod) is not exhaustive and does not include all pod mod devices available in the United States. We also cannot assume that Twitter users nor their tweets are entirely representative of the general population regarding personal health behaviors. Second, the frequency of some annotation categories is relatively low, and our models may risk overfitting. In machine learning, overfitting can be described as a model that accurately recognizes patterns and performs well on the training data, but performance decreases when applied to previously unseen data [52]. For instance, our algorithms may fit the data that it was trained on, but if presented with data it has never seen before, it may not be able to maintain this accuracy as the algorithm cannot recognize patterns in the new data. Additionally, the interpretation of tweet content during the manual annotation process is often subjective due to the brevity of tweet content, lack of grammatical structure, and usage of hyperbole, idioms, and so on. With manual annotation being an inherently interpretive task, we attempted to retain the consistency among our annotations by calculating interrater agreement between annotators, while also focusing on explicit contextual language when assigning labels to tweets. Finally, the results of this study are preliminary, and in order to derive policy implications from our work, these classification algorithms should be further studied and validated using additional unseen data. Future work should look to apply these classifiers on unlabeled data, conduct error analysis, and refine the algorithms as needed. Pending further validation, these classifiers can be used to automatically categorize large quantities of tweets, allowing researchers to further understand how JUUL is disseminated among youth populations and propose policy change to combat underage ENDS use. Conclusions Our analysis provides a snapshot of the representation of JUUL on Twitter and brings forth several interesting observations for future research endeavors. Our work suggests that the majority of JUUL users on Twitter do not use JUUL as a method of smoking cessation. Additionally, there is a paucity of tweets in which users talk about the potential health effects of using JUUL. Using this manually annotated corpus as training data, we developed 3 supervised machine learning models to accurately classify tweets related to underage JUUL use as well as sentiment towards JUUL. Of the 3 models, our random forest classifier most accurately predicted underage JUUL-related tweets and their sentiment. The application of this algorithm is a novel analytic approach to understanding underage JUUL use on Twitter and, with further research and validation, can promote future research on underage JUUL use patterns as manifested on Twitter.
5,023.2
2020-05-07T00:00:00.000
[ "Computer Science" ]
Chimera states and frequency clustering in systems of coupled inner-ear hair cells Coupled hair cells of the auditory and vestibular systems perform the crucial task of converting the energy of sound waves and ground-borne vibrations into ionic currents. We mechanically couple groups of living, active hair cells with artificial membranes, thus mimicking in vitro the coupled dynamical system. We identify chimera states and frequency clustering in the dynamics of these coupled nonlinear, autonomous oscillators. We find that these dynamical states can be reproduced by our numerical model with heterogeneity of the parameters. Further, we find that this model is most sensitive to external signals when poised at the onset of synchronization, where chimera and cluster states are likely to form. We therefore propose that the partial synchronization in our experimental system is a manifestation of a system poised at the verge of synchronization with optimal sensitivity. Our inner ear relies on internal nonlinearities and active, energy-consuming processes in order to detect faint sounds and comprehend speech in noisy environments. Identifying the dynamical states and mechanisms that this system utilizes to achieve such remarkable sensitivity is a long-standing open question. In this study, we identify two forms of partial synchronization in networks of coupled, living hair cells. Partial synchronization has been observed in other biological systems such as the abnormal electrical oscillations in cardiac myocytes, 1 and electrocorticography recordings preceding epileptic seizures. 2,3 In these examples, partial synchronization is an undesirable state. However, our experiments and simulations suggest that the inner ear may rely on partial synchronization in order to optimize its ability to detect weak signals. I. INTRODUCTION The auditory and vestibular systems are extraordinary signal detectors. These end organs can reliably detect sound and mechanical vibrations that induce displacements as small as a few angstroms, comparable to or below the amplitude of motion caused by thermal fluctuations in the surrounding fluid. 4 These sensory systems also exhibit remarkable temporal resolution, frequency selectivity, and dynamic range of detection. How these biological sensors achieve their signal detection properties is a long-standing open question, and the physics of hearing remains an active area of research. 5 Mechanical detection of sound waves, vibrations, and accelerations is performed by hair cells. These specialized cells are named after the rod-like stereovilli that protrude from their apical surfaces. The cluster of inter-connected stereovilli is named the hair bundle and performs the essential task of transducing the mechanical energy of sound into electrical signals that take the form of ionic currents into the cell. [6][7][8] A perturbation caused by sound or acceleration results in a deflection of the hair bundles and an increase in the tension of the tip links that connect adjacent rows of stereovilli. A change in tension of the tip links modulates the open probability of the transduction channels that are embedded at the tops of the stereovilli and connected to the tip links. Auditory detection has been shown to require an active, energy-consuming process in order to achieve such remarkable signal detection.
9 This active process manifests itself in a number of phenomena, including the appearance of autonomous motion of the hair bundles, observed in vitro in several species. [10][11][12] These spontaneous oscillations have amplitudes well above the noise induced by thermal fluctuations, and they have been shown to be active, as they violate the fluctuation dissipation theorem. 13 The role of these spontaneous oscillations is not yet fully understood, but prior studies have suggested that they could be utilized as an amplification mechanism for weak signals. 14 These spontaneous oscillations also serve as a probe for studying the active cellular mechanics underlying auditory detection. Another manifestation of this active process is the spontaneous emission of sound, observed in vivo in many species. 15 These spontaneous otoacoustic emissions (SOAEs) exhibit several sharp peaks in their power spectra and are metabolically sensitive, indicating an underlying energy-consuming process. Although SOAEs serve as a diagnostic for hearing-related disorders in humans, there is currently no consensus on the mechanism responsible for generating them. [16][17][18] One theory suggests that they arise from frequency clustering of actively oscillating coupled hair cells. 19,20 In vivo, hair bundles are attached to overlying structures, which provide coupling between the individual active oscillators. The strength and extent of the coupling vary across species and organs. 8 In the bullfrog sacculus, several thousand hair cells are coupled together by the otolithic membrane. The sacculus is responsible for detecting low-frequency ground-borne and airborne vibrations. In contrast to auditory organs, the sacculus does not display a high degree of frequency selectivity, nor any tonotopic organization of the hair cells: there is no correlation between the characteristic frequencies of the hair cells and their location in the sensory epithelium. 21 It does, however, demonstrate extreme sensitivity of detection. 22 We previously demonstrated that, despite frequency dispersion as large as five-fold, groups of coupled hair bundles can fully synchronize. 23 Our experimental and theoretical studies indicated that the presence of chaotic dynamics in individual oscillators enhances synchronization in the coupled system and allows for highly sensitive detection, even in the presence of biological levels of noise. Other work has proposed that systems of coupled bundles can also exhibit an amplitude death regime, with quenching caused by strong coupling and significant frequency dispersion. 24 In this study, we identify two additional dynamical states that can occur in networks of coupled hair bundles and explore their potential role in the detection capabilities of the auditory system. The first is the chimera state, defined as a system in which a subset of the coupled oscillators shows mutual synchronization, while the rest oscillate incoherently. 25 Previously it had been believed that identical oscillators, coupled through a mean field, could occupy only two dynamical states: full synchronization or incoherence. This assumption was disproven by the observation of chimeras, first seen in numerical simulations of identical oscillators. 26,27 As the presence of chimera states depends strongly on the initial conditions of the dynamical system, it was believed that they were too unstable to be observed in an experimental system.
However, a decade after their discovery in numerical simulations, chimera states were observed in coupled chemical oscillators 28 and in coupled-map lattices. 29 Chimera states were also shown to arise from heterogeneity in the parameters of the coupled oscillators. 25 As hair cells inherently possess heterogeneity in their size, structure, and time scales of ion-channel dynamics, systems of coupled hair bundles can support chimera states. We here demonstrate signatures of chimeras in experimental recordings obtained from hybrid preparations, in which artificial coupling structures are interfaced with live hair cells. We further explore their potential role in signal detection, with theoretical models that simulate hair bundle dynamics. The second dynamical phenomenon explored in this study is the occurrence of cluster states, another form of partial synchronization in a coupled system, in which each oscillator synchronizes with one of several clusters. We identify states of frequency clustering in our in vitro experiments, lending support to the theory that SOAEs may be generated by frequency clustering of actively oscillating hair bundles. Both types of partial synchronization, chimeras and cluster states, can be reproduced by our simple numerical model of hair cell dynamics, with the introduction of heterogeneity in the set of parameters. Lastly, we use our numerical model to test its sensitivity to external signals, when the system resides in different dynamical regimes. The system is most frequency selective when poised in the regime of strong coupling, where all oscillators synchronize. However, consistent with a previous numerical study, 30 we find that the sensitivity of the system is maximized in the regime of intermediate coupling strength, near the onset of synchronization, in which chimera and cluster states are likely to arise. As hair cells have been shown to utilize a number of adaptive mechanisms, we therefore speculate that the coupled systems within auditory and vestibular end organs may poise themselves at the onset of synchronization in order to optimize their sensitivity to weak, external signals. II. CHIMERA AND CLUSTER STATES IN VITRO We explore the occurrence of partial synchronization in experimental systems of coupled hair cells. To introduce artificial coupling, we use mica flakes and attach them to the tops of groups of hair-cell bundles, following our previously developed methods. 23 The artificial membranes are dispersed atop the bundles by introducing them into the endolymph solution, which bathes the apical surface of the sensory epithelium. The mica membranes are thin and transparent, allowing for precise imaging and tracking of the motion of the underlying hair bundles. Further, the mica flakes only minimally modify the mass and drag of the system, allowing us to explore the dynamics of a wide range of system sizes. As mentioned previously, the bullfrog sacculus is not tonotopically organized, and adjacent hair bundles exhibit up to tenfold differences in their frequencies of spontaneous oscillation. Despite this large frequency dispersion, we find that groups of neighboring hair bundles routinely synchronize upon coupling by the artificial membranes (Fig. 1). We characterize the degree of synchronization by calculating the cross-correlation coefficient (Eq. A.1) between each pair of hair bundles in the system. Due to the differences in heights of the stereovilli, not every hair bundle within a group makes contact with the artificial membrane above it. 
Therefore, we define a threshold to determine which hair bundles are coupled to a network of others. To find this threshold, we first calculate the cross-correlation coefficient between many unique pairs of uncoupled hair bundles, with no artificial membranes in the vicinity. The distribution of cross-correlation coefficients is centered around 0 and has a standard deviation ≈ 0.02 (Fig. 7). We then set the threshold to be 0.1, more than five standard deviations above the mean and hence unlikely to occur by chance without coupling. In addition to fully synchronized states, we also observe cases of partial synchronization. We use several techniques to characterize these states. First, we generate space-time plots, where the traces of all the oscillators in the coupled system are plotted as a function of time and the amplitude is represented by color, providing a visual observation of synchronization between the oscillators (Figs. 2a, 3a). Next, we view the power spectra of the oscillators to confirm that the dominant peaks align at a common frequency for the synchronized portion of the chimera states (Fig. 2b) and that multiple common peaks are present for the oscillators of the cluster states (Fig. 3b). Finally, we plot the correlation matrices of all of the traces within the coupled systems, in order to give another visual representation of the partial synchronization states. Chimera states contain one group of oscillators with large cross-correlation coefficients between each pair, while all other cross-correlation coefficients are low (Fig. 2c). In contrast, cluster states contain multiple groups, where oscillators within a group are strongly correlated with each other but not with those outside of the group (Fig. 3c). III. NUMERICAL MODEL OF COUPLED HAIR CELLS The dynamics of the j-th oscillator in the coupled system are described using the complex variable z_j(t) = x_j(t) + iy_j(t) and are assumed to be governed by the normal form equation for the supercritical Hopf bifurcation. This simple model reproduces many of the experimentally observed phenomena of hair-cell dynamics, such as the autonomous oscillations and the compressive nonlinear response to external signals. 31,32 The real part of z_j(t) represents the hair bundle position, while the imaginary part reflects internal parameters of the cell and is not assigned a specific, measurable quantity. µ controls the proximity to the Hopf bifurcation, and ω_j represents the natural frequency at this bifurcation (µ = 0) in the absence of coupling. β_j characterizes the degree of nonlinearity and controls the level of nonisochronicity of the oscillator. 33,34 In the absence of coupling, and for µ > 0, the system exhibits limit cycle oscillations at radius √µ and frequency Ω_j = ω_j − β_j µ. We set the frequency dispersion in our model to approximate that of our experimental data. The limit cycle frequencies are hence uniformly spaced from Ω_1 = 1 to Ω_N = 2√5 ≈ 4.47. We select an irrational number to avoid spurious mode-locking between oscillators. We set the control parameter to be µ = 1 throughout the study, poising the system deep into the oscillatory regime. The system is subject to external real-valued forcing, F(t), representing acoustic stimulus or linear acceleration, both of which elicit deflection of the hair bundles in the sacculus.
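The display equation referenced above is not reproduced in this excerpt. A plausible reconstruction, assuming the standard supercritical Hopf normal form that yields the stated limit-cycle radius √µ and frequency ω_j − β_j µ, and assuming a mean-field coupling term of strength K (the noise η_j(t) and the weighted mean field are described in the following paragraph), is:

```latex
\frac{dz_j}{dt} = (\mu + i\omega_j)\, z_j
                 - (1 + i\beta_j)\,\lvert z_j\rvert^{2} z_j
                 + K\left(\bar{z} - z_j\right) + F(t) + \eta_j(t)
```

The exact form of the coupling term, for instance whether the oscillator's own state is subtracted from the weighted mean field z̄, is an assumption made here and is not specified in the excerpt.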
Each oscillator is subject to independent, additive white Gaussian noise, η_j(t), with independent real and imaginary parts: ⟨Re(η_j(t)) Re(η_j(t′))⟩ = ⟨Im(η_j(t)) Im(η_j(t′))⟩ = 2Dδ(t − t′), where D is the noise strength of the system. The dynamics of this system occur at low Reynolds number, 35 and we have previously shown that the drag of the artificial membranes is small in comparison to that of the entire coupled system. 23 Further, the mica flakes exhibit little compliance, as we observe coupling and synchronization between pairs of hair bundles with large spatial separation. For these reasons, we have chosen to model the system with mean-field coupling, where each oscillator is weighted by its degree of attachment to the artificial membrane, k_j. The weighted mean field then takes the form

\[ \bar{z}(t) = \frac{1}{N} \sum_{l=1}^{N} k_l\, z_l(t), \]

where N is the number of oscillators in the system. IV. CHIMERA AND CLUSTER STATES IN THE NUMERICAL SIMULATIONS OF HAIR CELL DYNAMICS To explore the dynamic states that can occur in the system of coupled hair cells, we perform numerical simulations based on the theoretical model described above. We vary parameters over a physiologically plausible range, to reproduce the dynamical states observed in the experimental system and determine their potential mechanisms. As mentioned earlier, chimera states can arise from heterogeneity of the model parameters. In the biological system, it is unlikely that the level of attachment to the membrane is identical for all oscillators; hence, we randomly select each k_j value from a uniform distribution spanning 0.2-3.0. This heterogeneity tends to produce the chimera state in our simulations, as only some of the oscillators synchronize, while others oscillate incoherently (Fig. 4). Next, we explore the effects of introducing dispersion into the selection of the β_j parameters, which control the degree of nonisochronicity of the individual oscillators. We have previously shown that this parameter, which renders the oscillation frequency of an oscillator dependent on its oscillation amplitude, can lead to chaotic dynamics in the presence of noise. 36 Further, we demonstrated that it can enhance synchronization in a system of coupled nonlinear oscillators, as it allows for greater shifts in the innate frequencies of oscillation. 23 Here we show that random dispersion in this parameter can also result in multiple frequency clusters (Fig. 5). We find that the system forms a 2-cluster state, where oscillators with positive and negative β_j values form separate clusters. The clustering results from coupling, which tends to restrict the dynamics to a smaller region of phase space, thus reducing the amplitude. Since the sign of β_j determines whether an oscillator's frequency increases or decreases with amplitude reduction, two distinct frequencies emerge, forming stable clusters. V. OPTIMIZATION OF SIGNAL DETECTION To achieve reliable signal detection, a group of coupled detectors may utilize synchronization. The inherent noise of each component is thus averaged out, and the signal-to-noise ratio (SNR) increases with increasing number of detectors. 37 The drawback to complete synchronization is that the system is then sensitive to only a small range of frequencies surrounding the characteristic frequency. Therefore, total synchronization would be an unfavorable state for the groups of coupled hair cells in auditory systems, as they are responsible for detecting frequencies that span several octaves.
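To make the sensitivity analysis that follows concrete, the model above can be integrated numerically in a few lines. The sketch below is our own illustration, not the authors' published code, and uses a simple Euler-Maruyama scheme; parameter choices follow the text where given (μ = 1, frequencies uniformly spaced on [1, 2√5], attachment weights k_j drawn from U(0.2, 3.0)), while the noise strength D, the time step, the β_j dispersion, and the unforced setting F(t) = 0 are illustrative assumptions.

```python
import numpy as np

def simulate_coupled_hopf(N=20, mu=1.0, D=0.1, dt=1e-3, T=100.0, seed=0):
    """Euler-Maruyama integration of N mean-field-coupled, noisy Hopf oscillators."""
    rng = np.random.default_rng(seed)
    omega = np.linspace(1.0, 2.0 * np.sqrt(5.0), N)   # natural frequencies (Sec. III)
    beta = rng.uniform(-1.0, 1.0, N)                  # nonisochronicity dispersion (assumed range)
    k = rng.uniform(0.2, 3.0, N)                      # membrane attachment weights (Sec. IV)
    z = np.exp(2j * np.pi * rng.random(N))            # random initial phases on the unit circle
    n_steps = int(T / dt)
    x = np.empty((n_steps, N))
    for step in range(n_steps):
        zbar = np.mean(k * z)                         # weighted mean field
        drift = (mu + 1j * omega) * z \
                - (1.0 + 1j * beta) * np.abs(z) ** 2 * z \
                + k * (zbar - z)                      # F(t) = 0: unforced case
        eta = np.sqrt(2.0 * D * dt) * (rng.standard_normal(N)
                                       + 1j * rng.standard_normal(N))
        z = z + drift * dt + eta
        x[step] = z.real                              # real part = bundle position
    return x
```

Space-time plots and pairwise correlation matrices analogous to Figs. 2-5 can be produced directly from the returned positions.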
Given this trade-off, we propose that these systems may utilize a low degree of synchronization in order to improve the SNR without compromising the frequency range of detection. To visualize this inherent trade-off as a function of the coupling strength, we construct maps that display the sensitivity of every oscillator to a wide range of stimulus frequencies (Fig. 6a-h). We characterize the degree of phase locking by calculating the vector strength,

\[ r = \bigl| \langle e^{\,i(\phi_i - \phi_j)} \rangle \bigr|, \]

where φ_i and φ_j are the phases of two time series and the angle brackets denote the time average. To quantify the sensitivity, we calculate the vector strength between the stimulus waveform and the response of an oscillator. To characterize the degree of synchronization within the coupled system, we calculate the average vector strength between all pairs of oscillators in the absence of stimulus. This synchronization index is 1 for perfectly synchronized oscillators and approximately 0 for incoherent motion. We see that the transition to synchronization occurs at a coupling strength around k_j = k = 1.5 and becomes more abrupt for larger system sizes (Fig. 6i). The sensitivity maps show the strongest response at intermediate levels of coupling strength, near the onset of synchronization. If the coupling is too weak, the dynamics are incoherent, and the oscillators are more susceptible to noise. However, if the coupling is too strong, the system is limited to detecting only a small range of frequencies. To see this trade-off more explicitly, we average the vector strengths over all oscillators and take the maximum across all stimulus frequencies (maximum oscillator-averaged vector strength). We plot this measure as a function of coupling strength, along with the fraction of stimulus frequencies at which at least one oscillator has a vector strength above 0.2 (Fig. 6j). This trade-off between maximum vector strength and frequency range of detection produces a peak in the average vector strength across all stimulus frequencies and detectors (Fig. 6k). These results suggest that a coupled system responsible for detecting a wide range of frequencies will achieve optimal performance when poised at the onset of synchronization. Vestibular end organs display varying degrees of coupling between hair cells, likely involving the response of multiple oscillators to achieve reliable signal detection. The bullfrog sacculus is innervated in a way that supports this assumption, with afferent fibers synapsing onto multiple hair cells. 21 Further, to achieve reliable detection, the system should be sensitive to different frequencies, as the airborne and ground-borne vibrations of interest contain energy distributed across a range of low frequencies. We therefore propose that the sacculus is poised at the onset of synchronization in order to optimize signal detection. VI. DISCUSSION The auditory and vestibular systems have provided a testing ground for concepts from bifurcation theory and nonlinear dynamics. 31,38-40 These sensory systems serve different purposes but all rely on hair cells to perform detection of sound, vibration, or acceleration. Active hair cells of these sensory systems have displayed Hopf bifurcations, saddle-node on an invariant circle (SNIC) bifurcations, 41 and the quasiperiodic transition to chaos. 42 The dynamics of hair cells have been described by limit cycles, stable fixed points, chaotic attractors, and amplitude-death states. How these dynamical states shape the response of the full system to external signals remains an open question.
As these sensory systems impose different requirements on the sensitivity, frequency selectivity, temporal resolution, and dynamic range of detection, the various organs may have developed different dynamical regimes in which to reside, in order to achieve the signal detection properties of interest. The individual hair cells that comprise these systems have been shown to be versatile, displaying different response characteristics under different mechanical loads or other perturbations. 43 For example, while the bullfrog sacculus is not a frequency-selective organ, hair cells within it, when subject to appropriate experimental manipulation, were shown to be capable of the frequency-selective detection expected of auditory organs. Hence, we expect that differences in the detection properties of the sensory organs lie not only in different properties of individual cells, but also in the coupling conditions and emergent dynamical states of the full system. It is therefore important to understand the different dynamical states that these coupled oscillators can exhibit, in order to understand the full range of signal detection properties that the sensory systems can display. In the present work, we observe two dynamical states that, to the best of our knowledge, have not previously been observed in auditory or vestibular systems. We measure the response of active hair bundles, coupled together with artificial membranes of different sizes, and obtain experimental observations of chimera states and cluster states. Both of these dynamical states can be reproduced with a simple numerical model upon the inclusion of heterogeneity in the parameters. One of the signatures of the active process in the inner ear takes the form of otoacoustic emissions, which have been used as a probe of the auditory nonlinearities and internal dynamics in vivo. The mechanism of their generation by the auditory system is, however, not yet fully established. Our experimental data support the theory that they arise from frequency clustering of coupled active oscillators within the inner ear, as we observe frequency clustering in vitro in small groups of mechanically coupled hair bundles. Both the cluster states and chimera states are forms of partial synchronization and arise at intermediate levels of coupling strength, near the onset of total synchronization. We find that the numerical model achieves the greatest sensitivity to external stimulus when poised in this regime. It exhibits a balance in the inherent trade-off between the frequency range of signal detection and the number of oscillators that phase lock to the external signals. Therefore, we propose that these partial synchronization states may occur in vivo in systems of coupled hair bundles, if these systems are poised in the optimal regime for signal detection. ACKNOWLEDGMENTS The authors gratefully acknowledge the support of NSF Biomechanics and Mechanobiology, under Grant No. 1916136. The authors thank Dr. Sebastiaan Meenderink for developing the software used for tracking hair bundle movement. DATA AVAILABILITY The data that support the findings of this study are available from the corresponding author upon reasonable request. Appendix: Experimental Methods Biological Preparation Experiments were performed in vitro on hair cells of the American bullfrog (Rana catesbeiana).
Sacculi were excised from the inner ear of the animal and mounted in a two-compartment chamber with artificial perilymph and endolymph solutions 10, mimicking the natural conditions for this tissue. Hair bundles were accessed after digestion and removal of the overlying otolithic membrane 13. All protocols for animal care and euthanasia were approved by the UCLA Chancellor's Animal Research Committee in accordance with federal and state regulations. Artificial Membranes Mica powder was added to a vial of artificial endolymph solution, thoroughly mixed, and then filtered through several steel mesh gratings. These gratings served as filters to extract only the desired size of mica flakes, typically 20-50 µm. The solution was then pipetted into the artificial endolymph solution above the biological preparation, causing mica flakes to settle on top of the hair bundles. The hair bundles adhered to the surface of the mica, resulting in coupling. Data Collection Hair bundle motion was recorded with a high-speed camera at frame rates between 250 and 1000 frames per second. The raw images were analyzed in MATLAB to determine the position of the center of the hair bundle in each frame. The motion was tracked along the direction of increasing stereovilli height. Typical noise floors of this technique, combined with stochastic fluctuations of the bundle position in the fluid, were 3-5 nm. Cross-Correlation Coefficient We characterize synchronization between spontaneously oscillating hair bundles using the cross-correlation coefficient

\[ C\bigl[x_1(t), x_2(t)\bigr] = \frac{\langle \tilde{x}_1(t)\, \tilde{x}_2(t) \rangle}{\sigma_1 \sigma_2}, \tag{A.1} \]

where x̃_1(t) = x_1(t) − ⟨x_1(t)⟩ and x̃_2(t) = x_2(t) − ⟨x_2(t)⟩ represent the time traces of the motion with zero mean, σ_1 and σ_2 represent their respective standard deviations, and the angle brackets denote the time average. C = 1 indicates perfectly correlated motion, while C ≈ 0 indicates uncorrelated motion. We do not consider hair bundles to be coupled unless their cross-correlation coefficient with other hair bundles in a network exceeds our threshold of 0.1. This threshold is based on the values of the measure in the absence of coupling (Fig. 7).
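For completeness, the quantities used throughout the analysis (the cross-correlation coefficient of Eq. (A.1), the 0.1 coupling threshold, and the vector strength of Sec. V) are straightforward to compute from tracked bundle positions. The following is a minimal sketch under our own naming conventions, not the authors' analysis code:

```python
import numpy as np

def cross_correlation(x1, x2):
    """Eq. (A.1): zero-mean, normalized cross-correlation of two equal-length traces."""
    return np.mean((x1 - x1.mean()) * (x2 - x2.mean())) / (x1.std() * x2.std())

def coupled_pairs(traces, threshold=0.1):
    """Pairs of bundles exceeding the coupling threshold derived from the null
    distribution of uncoupled bundles (mean 0, s.d. ~0.02; Fig. 7)."""
    C = np.corrcoef(traces)              # pairwise version of Eq. (A.1); traces is (N, T)
    n = traces.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if C[i, j] > threshold]

def vector_strength(phi_i, phi_j):
    """Phase-locking measure: |< exp(i(phi_i - phi_j)) >| averaged over time."""
    return np.abs(np.mean(np.exp(1j * (phi_i - phi_j))))
```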
5,657.4
2021-05-06T00:00:00.000
[ "Physics", "Biology" ]
MeURep: A novel user reputation calculation approach in personalized cloud services User reliability is notably crucial for personalized cloud services. In cloud computing environments, large numbers of cloud services are provided for users. With the exponential increase in the number of cloud services, it is difficult for users to select the appropriate services from equivalent or similar candidate services. The quality-of-service (QoS) has become an important criterion for selection, and users can conduct personalized selection according to the observed QoS data of other users; however, it is difficult to ensure that those users are reliable. Unreliable users may provide unreliable QoS data and thus have negative effects on personalized cloud service selection. Therefore, how to determine reliable QoS data for personalized cloud service selection remains a significant problem. To measure the reliability of each user, we present a cloud service selection framework based on user reputation and propose a new user reputation calculation approach, named MeURep, which includes L1-MeURep and L2-MeURep. Experiments are conducted, and the results confirm that MeURep has higher efficiency than previously proposed approaches. Introduction In the age of the Internet of Things (IoT), cloud services have received widespread attention in many realms [1][2][3]. In cloud environments, large numbers of services are provided for users, such as computing power, storage, platforms, software, data storage services, and data access services [4][5][6]. Specifically, based on service-oriented architecture (SOA), cloud services have become the underlying components in building high-quality cloud computing applications [7], [8]. With the exponential increase in the number of cloud services, many equivalent or similar candidate services are provided for users, which causes great difficulty in selecting the services that provide the best performance for each user. Therefore, it is necessary to explore efficient techniques for personalized service selection. To select the optimal services from multitudinous services, the quality of service (QoS) is generally used as an important criterion [9], [10]. As a nonfunctional requirement, QoS is an important criterion for selecting candidate cloud services [11], [12]. QoS properties include the response time, invocation failure rate, etc. Commonly, different users observe different QoS values when they invoke the same cloud service, which is termed personalized QoS [13], [14]. In addition, similar users observe similar QoS data when they invoke the same services. Based on these QoS data, a user can select an optimal service if this user knows in advance the QoS data of the services observed by other users [15]. For example, suppose users U1 and U2 are located in the same city, U1 has invoked services S1 and S2, which have similar functions, and the response time of invoking S1 is longer than that of invoking S2; thus, S2 is preferable. When U2 wants to select between services S1 and S2, U2 will give priority to S2. However, if U1 is an unreliable user, U2 may make a wrong choice. Under these circumstances, when making service selections, it is unreasonable to assume that all users are reliable. Because of the complexity of real networks, many users on the network provide unreliable QoS data.
For example, if U1 and U2 are competitors, then each of them may provide malicious data about the other. In this circumstance, the users may simultaneously be service providers; then, they may provide good QoS data for themselves and bad QoS data for their competitors. In other cases, some users are pranksters and may provide false data (random, maximal or minimal values) instead of real data. Therefore, unreliable users are very detrimental to service selection. Malicious information provided by unreliable users may disrupt the service choices of other users. On this account, we consider the users' reputation. Generally, reputation concerns the global opinions of a specific social community about a specific target, and it reflects the capability and will of the target to fulfill its promise [16]. Regarding personalized service selection, a higher reputation of a user corresponds to more accurate service selection performance, and vice versa. Users with high reputation will provide reliable QoS data, which creates more reliable conditions for other users to invoke suitable services. On the contrary, users with low reputation hinder further service invocations because of the relatively high risk of such invocations. Therefore, accurate reputation values will help users to make suitable decisions and promote the development of cloud services. However, given the variability and uncertainty of user behavior, it is meaningless for users to directly report their own reputation; it is necessary to explore a reasonable method to obtain user reputation. As mentioned, unreliable users strongly affect cloud service selection: based on unreliable QoS data, users may select unsuitable or bad services. To address this issue, it is necessary to evaluate the reliability of users in the cloud services environment. In this paper, we present a user reputation calculation model, named MeURep. In our approach, the reputation calculation model is based on the historical QoS data submitted by other users. Our model assumes that each user has invoked the services and observed the QoS data. The user reputation is calculated based on the differences among the QoS data of the users. Iteratively, MeURep computes the user reputation until it converges to fixed values. Based on MeURep, we develop two algorithms and make the following contributions: 1. A cloud service selection framework based on user reputation is presented for personalized cloud services. 2. The reputation calculation model named MeURep is presented to calculate the reputation of each user based on the historical QoS data provided by the users in personalized cloud services. 3. Theoretical and experimental analyses show that our approach is simpler and more effective. The remainder of this paper is organized as follows. In Section 2, we review the related work. Then, our reputation model is proposed in Section 3. In Section 4, we conduct the experiments and show the results. The conclusion and future work are summarized in Section 5. Background and related work In this section, we review the background and related work from three aspects: cloud services, personalized QoS of cloud services, and reputation calculation approaches. Cloud services In recent years, cloud services have become increasingly popular; tens of thousands of cloud services are emerging on the Internet. Generally, cloud services can be classified into three service models according to the needs of IT users [17], [18]:
1. SaaS (Software as a Service): provides users with the provider's applications, which are accessible from various client devices through either a thin client interface, such as a web browser, or a program interface. 2. PaaS (Platform as a Service): provides a platform for users to deploy consumer-created or acquired applications (e.g., programming languages, libraries, services, etc.) onto the cloud infrastructure. 3. IaaS (Infrastructure as a Service): provides an environment for users to deploy, run and manage virtual machines and storage. With the vigorous development of cloud services, many identical or similar services are offered by IT companies. For storage services, there are many offerings, including Amazon Simple Storage Service, Google Cloud Storage, and Microsoft Azure Storage. For database services, there are Google BigTable, Amazon SimpleDB, FathomDB, Microsoft SDS, etc. These services are offered online, and their number is growing [19]. With the expansion of services on the Internet, selecting a suitable service from a set of equivalent cloud providers has become an important challenge for users. Personalized QoS of cloud services Personalized QoS is an important research topic in cloud computing and service computing. Besides functional QoS requirements (e.g., computation, database, storage, document management, etc.), nonfunctional QoS properties (e.g., response time, throughput, etc.) have also been extensively studied in recent years [20], [21]. Many QoS-based approaches have been proposed for cloud service composition, cloud service selection, etc. Pan et al. [22] proposed a trust-enhanced cloud service selection model based on QoS analysis; they used trust-enhanced similarity to find similar trusted neighbors and predict missing QoS data as the basis of cloud service selection and recommendation. Wu et al. [23] focused on selecting skyline services in dynamic environments for QoS-based service composition; they proposed a skyline service model and a novel skyline algorithm to maintain dynamic skyline services. Zheng et al. [10] aimed to assist cloud users in identifying services; they proposed a collaborative filtering approach using the Spearman coefficient to recommend cloud services. However, many previous studies did not consider the reliability of personalized QoS. Thus, to make cloud service selection more reasonable, we propose a QoS-based user reputation calculation approach. Reputation calculation approaches The reputation calculation approach has received wide attention from scholars. Generally, reputation calculation approaches can be divided into two types: content-driven and user-driven. In the content-driven approach, a user's reputation is calculated according to the quality and quantity of the user-generated content and the survival time of that content. In the user-driven approach, the system performs a credit or reliability analysis according to the ratings in user feedback. Clearly, user reputation calculation for cloud services is of the user-driven type. In current research in the area of service computing, most works have focused on the service side's reputation and studied how to avoid adverse effects from the feedback data of unreliable users. To handle feedback data from unreliable users, [24] introduced a reputation measurement approach based on user similarity and cumulative sum to detect unreliable user feedback. Li et al.
[25] also considered the effect of similarity among users and proposed the peer trust model to evaluate the reliability of users. Su et al. [21] studied a trust-perception approach for service recommendation and used QoS values to calculate user reputation based on clustering algorithms. Wang et al. [26] introduced feedback verification, validation, and feedback testing to evaluate service reputation; they calculated users' reputation through a statistical average approach using the QoS values of user feedback. To minimize the number of malicious services, Abdel Wahab et al. [27,28] proposed a trust framework that allows services to establish credible trust relationships. Li et al. [29] presented a trust assessment framework for the security and reputation of cloud services; in this framework, they present a reputation-based trust assessment method based on feedback ratings derived from the cloud service providers. From the protocol perspective, Dou et al. [30] presented a distributed trust evaluation protocol for inter-cloud environments. From different perspectives and viewpoints, these approaches can be effective for service reputation. Unlike the service perspective, in this paper we mainly focus on the perspective of the users. Our approach is based on the users' QoS data and can be applied to personalized service selection, service composition, and service recommendation. In related studies, Rong-Hua Li et al. [31] introduced six reputation calculation methods based on convergence algorithms. Baichuan Li et al. [32] proposed a topic-biased model (TBM) to estimate user reputation in rating systems. In our preliminary study [33], we used the L1-AVG algorithm to calculate users' reputation. However, these approaches are affected by their parameter settings; thus, there is still room for improvement in terms of effectiveness. Building on this prior work, we attempt to explore a more effective and direct approach to obtain users' reputation. Approach This section describes the approach and algorithms for user reputation. First, we present the notations and definitions. Then, we present a system framework and propose two algorithms. Finally, we analyze the time complexity of our MeURep algorithms. Notations and definitions Let there be m different users U = {u_1, u_2, ..., u_m} and n services S = {s_1, s_2, ..., s_n}. In this case, service invocations produce a user-service QoS matrix with respect to each QoS property. We represent the user-service matrix as an m×n matrix Q ∈ R^{m×n}. In this matrix, each entry q_ij (i ≤ m, j ≤ n) denotes the QoS value generated when the i-th user invokes the j-th cloud service. Each row and column denotes a service user and a candidate service, respectively, and each entry in the matrix denotes the QoS data observed by a user when invoking a service. If the i-th user has not invoked the j-th cloud service before, then q_ij = null. The reputation of the users can be represented as R = {r_1, r_2, ..., r_m}. We assume that users' reputation values range from 0 to 1 (0 ≤ r_i ≤ 1). The most unreliable user's reputation is 0, whereas the most reliable user's reputation is 1. The goal of the reputation calculation is to extract this reliability information from the QoS property values of each user. System framework We present a framework for cloud service selection based on user reputation in Fig 1.
In this framework, the reputation calculation plays an important role. As Fig 1 shows, there are many types of cloud services on the Internet, each of which has many similar or equivalent services. The users invoke the cloud services and submit their observed QoS data to the QoS database, and the cloud service selection module performs the service selection after calculating the user reputation. Notably, QoS data can be measured at the server side or at the user side. In this framework, QoS data are provided by the user side and are therefore personalized. In contrast to rating values in rating systems, QoS data fluctuate over an uncertain range. Therefore, reputation calculation models designed for rating systems may not be suitable for cloud services. The user reputation can also be applied in cloud service recommendation, prediction, etc. As Fig 2 shows, the entire process of applying user reputation contains four parts: observing QoS data, collecting and storing the QoS data, analyzing and calculating the user reputation, and applications. The first three parts can be accomplished in real time. This paper mainly focuses on the user reputation calculation. User reputation calculation model For comparison with our approach, we first introduce a reputation calculation algorithm proposed in [31], named L1-AVG. This algorithm can be expressed as:

\[ A_j^{k+1} = \frac{\sum_{i \in H_j} r_i^k\, q_{ij}}{\sum_{i \in H_j} r_i^k}, \qquad r_i^{k+1} = 1 - \frac{d}{|O_i|} \sum_{j \in O_i} \bigl| q_{ij} - A_j^{k+1} \bigr| \tag{1} \]

In (1), q_ij denotes a certain QoS value, k is the iteration index, r_i^k is the reputation r_i in the k-th iteration, and A_j is the average QoS value for the j-th service; after the (k+1)-th iteration, A_j is updated to A_j^{k+1}. When the j-th service is invoked, the invocation is recorded in H_j, and |H_j| is the number of users who have invoked the j-th service. Similarly, the services invoked by the i-th user are recorded in O_i, and |O_i| is the number of services that have been invoked by the i-th user. To ensure that r_i ranges from 0 to 1, the damping coefficient d regulates the calculation result. For good results, L1-AVG has to adjust its parameter d for different data; in our experiments, for example, d is set to 0.02 for the response time datasets and 0.01 for the throughput datasets, which is inconvenient. From (1), the reputation value is obtained from the degree of deviation in each convergence step. However, this approach has a limited scope of application: it works only when the values lie within a certain range. In reality, QoS data are highly skewed with large variances, and an unreliable user may supply unbounded values. If an unreliable user submits negative data, the average value may become negative, and the reputation calculation result may be negative, which is outside the defined range of reputation. Meanwhile, although the L1-AVG algorithm uses a damping coefficient to adjust the calculation result in each convergence step, it is not convenient to determine the value of this coefficient. To address this problem, we propose a user reputation calculation approach based on median value analysis, named MeURep. MeURep includes two algorithms: L1-MeURep and L2-MeURep. The L1-MeURep algorithm is represented as:

\[ r_i^{k+1} = 1 - \frac{1}{|O_i|} \sum_{j \in O_i} \frac{\bigl| q_{ij} - T_j^{k+1} \bigr|}{\max_{l \in H_j} \bigl| q_{lj} - T_j^{k+1} \bigr|} \tag{2} \]

In (2), T_j is the median QoS value for the j-th service invoked by the users, and T_j^{k+1} is T_j after the (k+1)-th iteration. The meanings of r_i^k, H_j, O_i and |O_i| are identical to those in (1). Specifically, in the worst case, the median may be negative when half of the users' data are negative;
moreover, the median itself may come from an unreliable user when more than half of the users are unreliable, in which case the system becomes meaningless. Therefore, our method is suitable for situations in which the percentage of unreliable users is less than half. Like L1-AVG, the calculation process of L1-MeURep is based on convergence. Unlike L1-AVG, we use the median value instead of the average value and normalize by the maximum of |q_ij − T_j^{k+1}|; r_i is largely determined by q_ij and T_j^{k+1}. The main idea of L1-MeURep can be summarized simply: if the QoS data provided by a certain user differ greatly from the median, then this user is probably not reliable. However, in extreme cases, if unreliable users outnumber reliable users, the median value will come from an unreliable user, and the QoS data provided by a reliable user may differ greatly from the median. The methodology of L1-MeURep is given in Algorithm 1 (L1-MeURep algorithm). In Algorithm 1, we first initialize the parameters: k = 0 and r_i^0 = 1. Then, the median QoS value for the j-th service and the reputation of the i-th user are calculated according to (2) using the iterative approach. When k exceeds RMaxI (the maximum number of iterations) or the absolute value of r_i^{k+1} − r_i^k satisfies the required accuracy (less than the threshold), the algorithm terminates and outputs the user reputation. In (2), one of the key steps is to calculate the absolute value of q_ij − T_j^{k+1}; we also try another computation mode, as follows. The L2-MeURep algorithm is represented as:

\[ r_i^{k+1} = 1 - \frac{1}{|O_i|} \sum_{j \in O_i} \frac{\bigl( q_{ij} - T_j^{k+1} \bigr)^2}{\max_{l \in H_j} \bigl( q_{lj} - T_j^{k+1} \bigr)^2} \tag{3} \]

Unlike L1-MeURep, we change the absolute mode to the square mode for q_ij − T_j^{k+1} in (3). Since the pseudocode of the L2-MeURep algorithm is similar to that of L1-MeURep, we omit its details, which parallel Algorithm 1. From (2) and (3), there is no damping coefficient; thus, MeURep is more convenient than the L1-AVG algorithm. The complexity analysis of L1-MeURep and L2-MeURep is as follows. We assume that the amortized cost of a single iteration is C(|G|), where |G| is the total number of edges in the bipartite user-service graph. As a result, for k iterations, the total running time of the MeURep algorithms is k·C(|G|). Experiment In this section, we conduct experiments to validate our MeURep approach. Our experiments are intended to verify the rationality of our proposed methods and compare our approach with the existing approach. Experimental setup The purposes of the experiments are to use the data to calculate each user's reputation value and to verify the validity of our algorithms. In our experiments, we use the real-world reliable users' datasets released by Zheng et al. [34]. From these datasets, we use two matrices, each of which is a 339×5825 matrix, i.e., 339 users and 5825 services. In these two matrices, the entries are the QoS values observed by a service user on a Web service, namely response time and throughput, respectively. To make the experiments more realistic, we mixed many randomly generated unreliable users into these 339 users. Furthermore, the number of added unreliable users may also affect the algorithms' performance; thus, we adjusted the proportion of unreliable users in the datasets to different levels: 2%, 4%, 6%, 8%, and 10%. Table 1 briefly describes the 379×5825 throughput matrix, which contains approximately 10% unreliable users.
In our datasets, the range of the response time is 0-20 s, and the range of the throughput is 0-7000 kbps. Due to page limitations, we do not describe the response time and throughput matrices for the other proportions. In this way, we believe the conducted experiments are more persuasive. It is worth noting that our matrices are off-line records of the response time and throughput properties, so their density is not sparse relative to real-time data. If data are missing at a certain position in the matrix, we randomly assign a non-negative number. According to the range of reputation values defined in Section 3, we further define the average error of the reputation values as follows:

\[ E_{re} = \frac{1}{N_{re}} \sum_{i \in \text{reliable}} (1 - r_i) \tag{4} \]

\[ E_{ur} = \frac{1}{N_{ur}} \sum_{i \in \text{unreliable}} r_i \tag{5} \]

In (4) and (5), E_re and E_ur are the average errors of the reputation values for reliable users and unreliable users, respectively, and N_re and N_ur are the numbers of reliable and unreliable users, respectively. As mentioned before, RMaxI is the maximum number of iterations, whose aim is to avoid getting caught in endless iterations when the algorithm does not converge. In the following experiments, referring to [31] and our own results, we set RMaxI to 10 and the threshold to 0.001. To better present the experimental results, the figures show five randomly selected users from the 379×5825 matrices, of whom users 1-4 are reliable and user 5 is unreliable. Experimental results and discussion We present the performance of the different approaches in calculating user reputation in this section. Specifically, we conduct experiments not only with different approaches but also on diverse datasets containing varying proportions of unreliable users. The experimental results reflect the superiority of our methods in accuracy and efficiency, in terms of both the user reputation values and the iteration processes. For the experiments using L1-AVG, we vary the damping coefficient d over different values to achieve its optimal accuracy. Fig 3 shows the users' reputation values for different damping coefficients in L1-AVG. The value of the user reputation varies significantly with the damping coefficient. For example, when d = 0.02, the reputation values of users 1-5 are 0.9644, 0.9836, 0.9834, 0.9804, and 0.0233, respectively; the average error E_re is 0.0220, and E_ur is 0.0233. In Fig 3, the reputation value of user 5 looks identical to those of users 1-4 when d = 0.0005; since user 5 is unreliable, d = 0.0005 is unreasonable. When d = 0.05 or 0.1, the reputation value of user 5 is negative, which is outside the defined range of reputation and is also unreasonable. The reason can be explained as follows: the value of d · Σ_{j∈O_i} |q_ij − A_j^{k+1}| is too large, and even after dividing by |O_i|, the result may exceed 1. Therefore, to obtain satisfactory results, the damping coefficient must be adjusted many times. In this way, we conclude that the optimal value for response time is 0.02 and the optimal value for throughput is 0.001. Fig 4 illustrates the iteration process of L1-AVG. We can see the following: 1. As a whole, the iteration processes of users 1-4 are similar; they are first in an unstable state and subsequently converge to a fixed value after a few iterations. The number of iterations for users 2-5 is four. For reliable users 1-4, the reputation value curve rises until it reaches a stable value.
2. For user 5, the reputation curve descends until it reaches a stable value. In the iteration process, the initial iterative values of users 1-4 are larger than that of user 5 in the first step of the iterations. 3. When d = 0.05 (Fig 4(b)), the number of iterations is three for users 1, 3 and 5, whereas users 2 and 4 converge after a different number of iterations; the number of iterations is therefore not consistent. For different decay constants, the number of iterations to converge differs in the user reputation calculation process. Even for the same decay constant, the number of iterations to converge also varies. In the following, we conduct experiments to validate our MeURep approach. We use parts of the response time and throughput datasets and verify our approaches L1-MeURep and L2-MeURep. The experimental results obtained using the throughput dataset containing approximately 10% unreliable users are listed in Table 2, and the iteration processes are shown in Fig 7. In addition, based on the reputation values of all users, we compare the user reputation errors at different proportions of unreliable users (Table 3). We observe the following: 2. When we increase the proportion of unreliable users, E_re and E_ur both increase under the different approaches; however, comparing L1-MeURep and L2-MeURep with L1-AVG, the growth rate of these metrics is clearly much lower. 3. Fig 7 shows that our MeURep approaches are also faster than L1-AVG when using the throughput dataset; in fact, this conclusion remains valid on the other datasets. 4. Comparing L1-MeURep with L2-MeURep, we observe that the E_re of L2-MeURep is less than that of L1-MeURep, but its E_ur is larger. From the above experimental results, we find that our approach is simpler and more efficient than the L1-AVG algorithm. First, it does not require a damping coefficient to adjust the calculation result; therefore, it is unnecessary to tune parameters in the experiments. Second, for the L1-AVG algorithm, the average value A_j is strongly affected by unreliable user data (e.g., the data of user 379 in Table 1 increase the average value, and the value |q_ij − A_j^{k+1}| changes greatly). By contrast, because unreliable users account for a relatively small proportion in reality, our approach uses the median value to avoid being affected by specific abnormal data (e.g., regardless of how large or small the values of user 379 are, the median value does not change much). Since |q_ij − T_j^{k+1}| is then close or equal to max(|q_ij − T_j^{k+1}|) for such users, the reputation value of unreliable users is notably small. Third, our model is faster than L1-AVG: the reputations reach convergence after two iterations in our algorithm (Fig 5(b)) but after three iterations in L1-AVG (d = 0.02) (Fig 4(a)). Fourth, the experimental results show that our approach is more accurate than L1-AVG. In addition, for the response time experiments, the E_re of L2-MeURep is better than that of L1-MeURep, but its E_ur is worse; the throughput experiments show the same pattern, so L1-MeURep seems more suitable for identifying unreliable users. In a nutshell, it is difficult to rank L1-MeURep and L2-MeURep as strictly better or worse on our dataset; which one to choose in an implementation depends on the actual situation.
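Because Eqs. (2)-(5) above are reconstructed from the prose, the following sketch should be read with that caveat: it illustrates the L1-MeURep iteration and the error metrics as we have reconstructed them. The reputation-weighted median and the NaN-masked matrix layout are our own assumptions, not details given in the paper.

```python
import numpy as np

def weighted_median(values, weights):
    # Assumption: T_j is recomputed each iteration as a reputation-weighted median.
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def l1_meurep(Q, max_iter=10, tol=1e-3):
    """L1-MeURep sketch following the reconstructed Eq. (2). Q is an (m, n) QoS
    matrix with np.nan where user i never invoked service j; r_i^0 = 1."""
    m, n = Q.shape
    invoked = ~np.isnan(Q)
    r = np.ones(m)
    for _ in range(max_iter):                  # RMaxI = 10, threshold = 0.001
        T = np.array([weighted_median(Q[invoked[:, j], j], r[invoked[:, j]])
                      for j in range(n)])
        dev = np.abs(Q - T)                    # |q_ij - T_j^{k+1}|
        max_dev = np.nanmax(dev, axis=0)       # normalizing maximum per service
        max_dev[max_dev == 0] = 1.0            # guard against zero deviation
        r_new = 1.0 - np.nansum(dev / max_dev, axis=1) / invoked.sum(axis=1)
        if np.max(np.abs(r_new - r)) < tol:
            return r_new
        r = r_new
    return r                                   # L2-MeURep: square dev and max_dev

def reputation_errors(r, reliable):
    """E_re and E_ur per the reconstructed Eqs. (4)-(5)."""
    return np.mean(1.0 - r[reliable]), np.mean(r[~reliable])

# Check of Eqs. (4)-(5) against the reported L1-AVG values for d = 0.02:
r = np.array([0.9644, 0.9836, 0.9834, 0.9804, 0.0233])
reliable = np.array([True, True, True, True, False])
print(reputation_errors(r, reliable))          # -> (~0.0220, 0.0233)
```

On the five users shown in Fig 3, the reconstructed error formulas reproduce the reported E_re = 0.0220 and E_ur = 0.0233, which lends some support to the reconstruction.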
Conclusion and future work In the cloud service environment, users usually need to select optimal services according to other users' personalized QoS data in order to build various applications. However, in the complex network environment, some users may provide unreliable QoS data, which negatively affects service selection. Therefore, it is important to know the users' reliability, and to measure it, it is usually necessary to calculate the users' reputation values. In this paper, we present a user reputation calculation approach, namely MeURep. First, we present a cloud service selection framework based on user reputation. Then, we propose the MeURep algorithms L1-MeURep and L2-MeURep. Finally, to verify the validity of our approach, we conducted experiments on a real-world dataset. The experimental results show that our approaches have high efficiency compared to the baseline approach. Compared with L1-AVG, the average error E_re of our algorithm achieves an 89.43%-95.31% improvement for response time and an 85.41%-86.20% improvement for throughput across the different proportions of unreliable users. Similarly, the average error E_ur of our algorithm achieves a 93.44%-94.50% improvement for response time and a 47.05%-50.13% improvement for throughput. In the future, to achieve better performance, we plan to take subcategory information into consideration to improve the calculation quality. In addition, to improve real-time capability, we will consider the online environment.
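The improvement percentages quoted above are most naturally read as relative error reductions. Assuming that convention (the paper does not spell out the formula), they would be computed as follows:

```python
def relative_improvement(e_baseline, e_ours):
    """Relative error reduction of MeURep over L1-AVG, in percent.

    Hypothetical example (illustrative numbers, not taken from the paper):
    relative_improvement(0.0220, 0.0023) -> ~89.5
    """
    return 100.0 * (e_baseline - e_ours) / e_baseline
```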
6,718.6
2019-06-21T00:00:00.000
[ "Computer Science" ]
The UNITE database for molecular identification and taxonomic communication of fungi and other eukaryotes: sequences, taxa and classifications reconsidered Abstract UNITE (https://unite.ut.ee) is a web-based database and sequence management environment for molecular identification of eukaryotes. It targets the nuclear ribosomal internal transcribed spacer (ITS) region and offers nearly 10 million such sequences for reference. These are clustered into ∼2.4M species hypotheses (SHs), each assigned a unique digital object identifier (DOI) to promote unambiguous referencing across studies. UNITE users have contributed over 600 000 third-party sequence annotations, which are shared with a range of databases and other community resources. Recent improvements facilitate the detection of cross-kingdom biological associations and the integration of undescribed groups of organisms into everyday biological pursuits. Serving as a digital twin for eukaryotic biodiversity and communities worldwide, the latest release of UNITE offers improved avenues for biodiversity discovery, precise taxonomic communication and integration of biological knowledge across platforms. Introduction Knowledge on species identity is a cornerstone of biology and provides key information for understanding biodiversity changes driven by climate change and other human pressures. Such taxonomic knowledge has traditionally been obtained primarily from sources such as field surveys by skilled practitioners with substantial experience in morphological studies and taxonomy, but the last few decades have seen a steady increase in the use of molecular (DNA sequence) tools for characterization of biodiversity. DNA sequences from substrates such as soil and water invariably indicate a significantly larger extant biodiversity than known from traditional approaches. Indeed, many of the species and evolutionary lineages recovered in this way are, so far, known only from sequence data. Molecular surveys thus bring many pressing questions to the fore, notably how to root environmental DNA sequences at the species level if there is no other descriptive information, and how to communicate species that may lack formal names and taxonomic affiliations all the way up to the kingdom level. Furthermore, many of these studies suggest novel, poorly understood biological associations and co-occurrences among organisms across distinct groups, questioning the current practice of routinely singling out particular groups, such as fungi, for environmental sequencing.
The UNITE database (https://unite.ut.ee) was launched in 2003 as a Sanger sequence-oriented online resource for molecular identification of fungi. It is focused on the ∼600-base nuclear ribosomal internal transcribed spacer (ITS) region, the formal fungal DNA barcode (1), and includes all public ITS sequences from the International Nucleotide Sequence Databases Collaboration (INSDC; (2)) plus ITS sequences supplied by UNITE users and partners. The sheer number of unidentified, and for all practical purposes unidentifiable, fungal species recovered from environmental sequencing stimulated UNITE to devise the so-called species hypothesis (SH) concept. SHs represent an open and reproducible approach to unambiguously infer, identify and communicate described as well as undescribed species (3). UNITE defines SHs from public ITS sequences through a series of quality filtering and single-linkage clustering steps at successively more stringent threshold levels. All SHs, supplemented with their source metadata and trait information, are assigned a digital object identifier (DOI) to facilitate unambiguous scientific communication and ensure data interoperability across datasets and studies (Supplementary Figure S1). Over time, UNITE, together with its data management platform, PlutoF, has evolved along with DNA sequencing technologies into a fully-fledged online workbench and sequence management environment for handling not only sequence identification but most steps in DNA barcoding and metabarcoding studies. UNITE offers web-based third-party sequence curation and addition of metadata, and SH-based reference datasets are released for many popular metabarcoding (massive parallel sequencing of amplified genetic markers; (4)) pipelines, notably QIIME (5) and SINTAX (6). The rapid development of high-throughput sequencing methods and the scope of the biological questions that are being addressed in its wake provoke a reconsideration of many aspects of biological research. While assignment of a DOI to an otherwise nameless species ensures scientific reproducibility of that species and its metadata, it does little to address or clarify the higher-level classification of that species. As a result, metabarcoding and taxonomy are often pursued as two essentially distinct disciplines where progress in one is not being incorporated into the other. Furthermore, the fact that many of these nameless species cannot be grown away from their natural habitat hints at currently unknown biological associations, putatively across organism groups, and points to a limitation of the current routine use of single-taxon metabarcoding efforts and databases (7). In parallel, large international biodiversity informatics efforts converge on systems for information dissemination and data exchange about our living world; systems to which individual metabarcoding efforts typically do not contribute at present. In this study we report on recent UNITE developments to refine the discovery potential and maximize the scientific usefulness of metabarcoding data against the backdrop of the massive increase in the volume and read length of environmental sequencing data.
Databases Sequence data and quality control UNITE synchronizes with the INSDC to download and update reasonably full-length Sanger-derived eukaryotic ribosomal DNA sequences on a quarterly basis.

Figure 1. Diagram of the UNITE SH 9.0 calculation steps. The sequences are dereplicated using VSEARCH, and sequences that do not represent the full ITS region according to ITSx are dismissed. Following quality filtering, a series of successive clustering steps of generating subsets of 500 000 (500k) and 30 000 (30k) sequences and selecting core representative sequences (cRepS) is carried out. This yields what are termed 'compound clusters', which are sequence clusters roughly at the genus/subgenus level. These are further clustered into species hypotheses (SHs). All clustering steps in the SH calculation workflow are performed using the USEARCH tool. The similarity thresholds (97%-95%-90%-80%) for the nested pre-clustering (5c, 6) were chosen to yield clusters at approximately the genus/subgenus level. A dissimilarity threshold (0.5%) for the complete-linkage clustering (5d) was selected to trim the dataset of closely related sequences around the core representative sequences. The core representative sequences undergo the final single-linkage clustering within a dissimilarity range of 0.5-3.0% with a 0.5% step. These dissimilarity thresholds were selected as the most commonly applied in species delimitation and sequence identification. For each SH, a representative sequence is selected, either automatically or based on prior manual curation. The species hypotheses are aligned to form the final SH datasets.

Additionally, UNITE accepts user-provided Sanger-derived sequences and high-quality metabarcoding sequences, as long as certain criteria, such as minimum required length and detection of ribosomal gene regions, are met. At present, UNITE features > 2.4M Sanger-derived and > 7M metabarcoding sequences, the latter being representative sequences from operational taxonomic units (8), originating from the five large metabarcoding datasets (9-13) so far incorporated into UNITE. All sequences are subjected to a range of quality control steps, including the software tools ITSx (14) and UCHIME (15) to eliminate non-ITS and chimeric sequences, respectively. Other aspects of quality control are performed in a semi-automatic or manual way. For instance, sequences with clearly incorrect taxonomic annotations may be renamed automatically, whereas more subtle cases are flagged for manual examination. These various manual steps are very time-consuming, and artificial intelligence-based tools are currently being explored to speed up these processes.
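Several of these early steps are, at heart, simple set operations. As a toy illustration only (the production pipeline delegates dereplication to VSEARCH and full-ITS screening to ITSx, whose interfaces are not reproduced here), the dereplication and length-screening logic might look like the following:

```python
def dereplicate_and_screen(sequences, min_length=400):
    """Toy stand-in for two early steps of Figure 1: keep one representative per
    identical sequence (dereplication) and drop reads too short to span the full
    ITS region (a crude proxy for ITSx). `sequences` maps record identifiers to
    nucleotide strings; the 400 nt cutoff is an illustrative assumption, not a
    UNITE parameter."""
    representatives = {}
    for record_id, seq in sequences.items():
        seq = seq.upper()
        if len(seq) >= min_length and seq not in representatives:
            representatives[seq] = record_id
    return {record_id: seq for seq, record_id in representatives.items()}
```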
Other types of quality issues are presently not amenable to algorithmic interpretation. For instance, a sequence may be tagged with the wrong country of origin, or the name of a host may be misspelled. To facilitate the correction of such errors, UNITE offers web-based third-party sequence curation through the PlutoF biological data management environment (16). To date, > 600 000 third-party annotations have been contributed by UNITE users, including > 170 000 taxonomic re-annotations, > 107 000 specifications of collection locality and > 55 000 specifications of host and interacting taxa. Nearly 25 000 sequences have been identified as derived from nomenclatural types, and special weight is given to these sequences in subsequent sequence identification steps. Conversely, during the manual curation process, > 13 000 sequences have been flagged for exclusion from active use due to unsatisfactory technical quality. Intragenomic ITS variability, to the extent that distinct ITS copies end up in different SHs, may potentially add noise to the estimation of biodiversity (17). UNITE keeps track of these copies through (living) specimen-based searches, and cases of non-trivial ITS variability can be accounted for by manually designating a more inclusive clustering threshold on a case-by-case basis. More statistics on third-party sequence curation by the UNITE community can be found at https://unite.ut.ee/curation.php, and a list of type-derived as well as low-quality sequences can be downloaded through PlutoF. Species and taxon hypotheses From its sequence data, UNITE infers species hypotheses (SHs) at six clustering dissimilarity thresholds (0.5, 1.0, 1.5, 2.0, 2.5 and 3.0% nucleotide divergence between SHs) to accommodate the dynamic nature of species boundaries across the target group. The ever-growing data volumes, primarily from metabarcoding data, prompted redesign and optimization of the SH inference process in various ways, notably using USEARCH (18) to sequentially cluster sequences into ever-smaller subsets, temporary dereplication of identical and near-identical sequences using VSEARCH (19) and the use of highly parallelized software tools in a high-performance computing environment (Figure 1). UNITE taxon hypotheses (THs; (20)) are formed by mapping all SHs to the UNITE backbone classification through a taxon name selection algorithm that draws from all constituent sequences of each SH and tries to account for complications such as individual sequences with incompatible taxonomic annotations. Manually curated sequences are given extra weight in this process. Each TH is a dataset that contains all individuals and their ITS sequences from connected SHs. In addition, each dataset includes a distribution map, ecological traits and links to other associated THs. TH datasets are published with DataCite DOIs and are available as linkouts from SH DOIs. A visual example of a TH is shown as a screenshot in Supplementary Figure S2.
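The production pipeline relies on USEARCH/VSEARCH and nested pre-clustering for scalability (Figure 1), but the core operation, single-linkage clustering of sequences into SHs at fixed dissimilarity thresholds, can be sketched compactly. The sketch below is our simplification: it takes a precomputed pairwise dissimilarity matrix (e.g. 1 minus fractional identity) and ignores the compound-cluster stages.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def species_hypotheses(dissimilarity,
                       thresholds=(0.005, 0.010, 0.015, 0.020, 0.025, 0.030)):
    """Single-linkage clustering into SHs at the six UNITE thresholds
    (0.5-3.0% nucleotide divergence, in 0.5% steps).

    `dissimilarity` is a square matrix of pairwise sequence dissimilarities.
    Returns a dict mapping each threshold to an array of cluster labels,
    so the same sequence set yields one SH partition per threshold."""
    condensed = squareform(dissimilarity, checks=False)
    tree = linkage(condensed, method='single')
    return {t: fcluster(tree, t, criterion='distance') for t in thresholds}
```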
UNITE currently comprises 442 490 and 340 581 eukaryotic SHs at the 1.0% and 1.5% dissimilarity thresholds, respectively, based on 1 309 071 Sanger-derived sequences (of which 96% stem from the INSDC) and 6 825 264 metabarcoding sequences. The number of SHs grows rapidly over time (Figure 2). The share of metabarcoding sequences in the current UNITE release is 84%, and 47% and 45% (at 1.0% and 1.5% clustering dissimilarity, respectively) of all SHs are composed solely of metabarcoding sequences. Interestingly, 47% of all SHs consist of only Sanger-derived sequences, leaving a very modest 6-8% of the SHs composed of both metabarcoding and Sanger-derived sequences. Since all metabarcoding sequences in UNITE are representative sequences from non-singleton operational taxonomic units, no metabarcoding sequence in UNITE is a singleton in the strict sense of the concept (i.e. only one read in one sample). Even so, > 2% of the SHs at the 1.5% threshold are formed by single metabarcoding (representative) sequences (7 568 SHs). The corresponding share of SHs composed of singleton Sanger-derived sequences is 31% (103 928 SHs). Sequences that are singletons for technical rather than biological reasons are likely to behave differently as clustering thresholds are relaxed, and we are looking into artificial intelligence-powered tools to further enhance the data quality over time. UNITE taxonomy We have increased the taxonomic scope of UNITE from fungi to all eukaryotes, and UNITE now mirrors the INSDC for 'Eukaryota' rather than just 'Eukaryota:Fungi' (Figure 3). This makes UNITE useful for identifying more groups of organisms, for detecting and comparing the frequency of specific cross-kingdom associations in large datasets or sets of datasets, and for highlighting non-target cross-kingdom PCR amplifications in single-group datasets (7). The pan-eukaryotic scope means that all eukaryotic SHs known from ITS sequence data, regardless of which classification level they are identified at, now have a persistent DOI to facilitate communication and metadata assembly across studies and datasets. The most well-represented kingdom is Fungi, followed by Viridiplantae and Metazoa (Figure 3A). The number of fungal SHs exceeds the number of recognized fungal species names in the Catalogue of Life (CoL; (21)) (Figure 3B), thus allowing the identification and communication of many undescribed species for which referencing across time and projects would otherwise be highly challenging. We hope to see a similar trend for other groups of eukaryotes as the amount of data increases. UNITE uses CoL for overall eukaryotic taxonomy and classification. The taxonomic backbone of UNITE is flexible and allows web-based implementation of minor to major changes, such as those arising from publication of new or revised classification systems at any taxonomic level. For fungi, we use the Outline of Fungi (22) with some modifications (e.g.
(23)). Expert users have similarly adjusted the classification of other groups of organisms, such as plants and oomycetes, to better reflect recent scientific results. New names and classifications are imported and verified as far as possible during the quarterly INSDC sequence import process using MycoBank (24) for fungi and some fungus-like groups, and the CoL checklist, the World Register of Marine Species (WoRMS; (25)) and the Global Biodiversity Information Facility (GBIF; (26)) for the remaining groups of eukaryotes. In between these update sessions, users can add new names through a new import module available in PlutoF. This module fetches taxon names through a GBIF API (https://www.gbif.org/developer/summary). Database connectivity and data dissemination UNITE has led the development of a third-party curation service in PlutoF to improve the value of public DNA sequences and their source metadata (e.g. material source, geolocation and habitat, taxonomic re-identifications, interacting taxa and literature). In collaboration with the European Nucleotide Archive (ENA; (27)), improved or corrected annotations of INSDC sequences residing in UNITE are fed back to primary repositories through the ELIXIR Contextual Data ClearingHouse (https://www.ebi.ac.uk/ena/clearinghouse/api) and shown on their record pages next to the original data (28, 29). Searching and browsing of third-party annotations introduced by the UNITE community can be done via PlutoF and ENA web services or by using the search interfaces of PlutoF and UNITE (Supplementary Figure S3). During 2023 alone, UNITE has contributed > 4 000 annotations to ENA. Metabarcoding is a major source of biodiversity data, and beginning in 2019, UNITE users have been able to publish metabarcoding datasets they manage in PlutoF through the Global Biodiversity Information Facility to become discoverable at the GBIF.org portal (https://www.gbif.org). These DNA-derived taxon occurrences are linked to UNITE SH identifiers that are incorporated in the backbone taxonomy of GBIF, meaning that undescribed biodiversity is also opened up for biodiversity data reuse and policy making along with biodiversity data from all other sources mediated through GBIF. Successive versions of UNITE SH classifications have been published and included in the GBIF backbone classification during the last few years (30), allowing users to compare and analyse datasets published with SH identifiers from different versions (versions 7-9) over time. To date, 10 datasets with > 7M occurrence records linked to SH persistent identifiers have been published from PlutoF to GBIF. Re-annotations at the sequence level are also shared with the GBIF data portal, which facilitates the placement of SHs in the GBIF taxonomic backbone. This dual connection between the UNITE and GBIF systems enables constant improvement of the quality of sequence identification thanks to the evolving reference libraries.
The UNITE website
Bioinformatics underpins much of UNITE, but we strive to make the data in UNITE easy to interpret, interact with and download, also for non-bioinformaticians. While some expert tools and queries are reserved for users registered in PlutoF, a range of resources for sequence identification, query and analysis are openly available through the UNITE web portal. Our intention is to provide up-to-date, preformatted DNA sequence and metadata release files for any structured effort that needs these, and we offer such files for a number of tools, notably QIIME, mothur (31), BLAST (32), SINTAX and DADA2 (33). The underlying PlutoF sequence management platform offers registered users a comprehensive environment to manage biological collections, scientific studies and long-term datasets. All data and services of UNITE and PlutoF are provided free of charge.

The SH matching analysis (34) is a nascent digital service for global species discovery from environmental and other DNA sequence data. The tool places a user's unknown DNA sequences into existing UNITE species hypotheses or forms new SHs not yet present in the system, as applicable. Registered users can choose to imprint these (or some of these) new SHs into the SH system for public or personal use, thereby according taxonomic permanence to what would otherwise have been very short-lived detections restricted to individual studies. The SH matching analysis output includes DOI-based identifiers and, if applicable, binomial names for communication of species hypotheses recovered from metabarcoding or Sanger data. The development version of the SH matching analysis is available as an EOSC-Nordic (https://www.eosc-nordic.eu) service for registered users, and the source code is available at GitHub (https://github.com/TU-NHM/sh_matching_pub).

Outlook
A formidable challenge in eukaryotic microbiology is the immense number of dark taxa known exclusively from sequence data and defying any effort to isolate them. Current rules of nomenclature preclude formalization of these taxa (35,36), effectively curbing their inclusion in many biological contexts and pursuits. Integrating these taxa alongside formally recognized ones in a classification and naming system from the species to kingdom level, and possibly beyond, is needed to facilitate standardized and unambiguous communication. The UNITE taxon hypothesis system readily lends itself to this kind of representation, and we are currently exploring the use of artificial intelligence to produce a fully resolved paneukaryotic DOI-based taxon hypothesis release. Such a representation would ultimately allow plotting of metabarcoding datasets across the full eukaryotic tree of life. This, in turn, would allow numerous challenging and hotly pursued research questions to be addressed in an automated way, for instance repeated detection of cross-kingdom co-occurrences of species to indicate previously overlooked ecological associations, or identification of the most similar communities from the pool of all available metabarcoding datasets.
In the near future, the increasing read lengths of metabarcoding sequences will allow the full ribosomal operon, rather than any of its individual components (the SSU (18S rRNA) and LSU (28S rRNA) genes and the intercalary ITS region), to be routinely targeted. While ribosomal sequencing has a long history in environmental microbiology, the available resources and repositories are essentially compartmentalized and tailored for each ribosomal component. Bridging these resources under a common naming system is highly desirable. This entails virtual assembly of full ribosomal sequences along with their metadata scattered across several separate databases, an undertaking that risks producing chimeric sequences and data. Assembled fungal genomes may offer guidance in this process, and UNITE recently assisted the EUKARYOME database (https://eukaryome.org) in the generation of a pan-eukaryotic, full-ribosomal chimera control reference dataset. We are exploring other avenues for merging data and metadata together with, e.g. the BOLD database (https://boldsystems.org). At present, we use ITSx to extract the ITS region from long-read metabarcoding sequences, after which the ITS component is incorporated into UNITE. Long-read metabarcode reads are thus used in UNITE, but their information content is not maximized.

By storing sequence occurrence data along with rich metadata on, e.g. locality and substrate of collection as well as interacting taxa, UNITE essentially offers a digital twin of eukaryotic biodiversity and communities worldwide. This virtual representation certainly presents technical challenges, but above all it encourages the life science community to rethink many current standpoints. It calls for a seamless two-way flow of information between metabarcoding and taxonomy, stresses the need for inclusion of as yet undescribed species and groups in all biodiversity-related efforts, and signals that the era when individual groups of organisms were routinely studied in isolation may well be over. Policies and protocols may not change overnight, but the looming biodiversity crisis forms a backdrop against which haste, for once, seems vital.

Figure 2. The number of species hypotheses at the 1.0% and 1.5% between-species distance thresholds through the four latest major versions of UNITE. Each SH is assigned a unique DOI every time the SHs are recomputed, and a versioning system keeps track of DOI names and contents over time, allowing users to follow how individual SHs are populated with sequences over time.

Figure 3. (A) Treemap of the most abundant taxa (kingdom and phylum) based on the taxonomy of UNITE SHs at the 1.0% between-species distance threshold. (B) The number of UNITE SHs at the 1.0% distance threshold versus species names per fungal phylum in the Catalogue of Life (CoL) checklist from 2023-06-29.
Quantum Computing and Machine Learning on an Integrated Photonics Platform : Integrated photonic chips leverage the recent developments in integrated circuit technology, along with the control and manipulation of light signals, to realize the integration of multiple optical components onto a single chip. By exploiting the power of light, integrated photonic chips offer numerous advantages over traditional optical and electronic systems, including miniaturization, high-speed data processing and improved energy efficiency. In this review, we survey the current status of quantum computation, optical neural networks and the realization of some algorithms on integrated optical chips. Introduction 1.Background and Motivation The rapid development of technology has given rise to two fields that hold the potential to significantly reshape the landscape of computation: quantum computing and machine learning.Quantum computing (QC) is a computational paradigm that leverages the principles of quantum mechanics to perform complex computations more efficiently than classical computers, particularly for specific problem domains [1].Quantum computing has attracted much interest over the past decade due to possible quantum advantages in solving computationally complex problems using various models, including the qubit model on trapped ion systems [2,3] and super-conducting systems [4,5], measurement-based quantum computing [6,7], and Gaussian boson sampling (GBS) on a photonic platform [8].Researchers have identified several quantum algorithms that outperform their classical counterparts, including Shor's algorithm for integer factorization [9] and Grover's algorithm for unstructured search [10].By exploiting the quantum nature of multiple photons, such as quantum superposition, interference and entanglement, some quantum algorithms have been put forward to offer the potential to reduce computational time for problems in machine learning [11,12], chemistry [13,14] and other areas [15]. In parallel, machine learning (ML) has emerged as a type of artificial intelligence that can process large amounts of data and learn patterns from this data.This approach enables more accurate results in predicting outcomes without being explicitly programmed to do so.This technology is used in a wide range of applications, including recommendation systems, image recognition and autonomous vehicles [16,17]. The integration of quantum computing and machine learning can possibly unlock new opportunities and challenges for various application domains, such as healthcare and medical diagnosis, finance and risk assessment, telecommunications and networking, smart cities and transportation, environmental monitoring and climate modeling, etc.By combining the computational advantages of quantum computing with machine learning, this integrated approach has the potential to transform the way machine learning models are developed, trained and deployed. 
Although quantum computing has been systematically studied from different perspectives, there are few existing reviews focusing on quantum computing and machine learning on an integrated photonics platform.However, in comparison with other physical platforms, such as superconducting and trapped-ion systems, photonic systems operate at room temperature and are generally less susceptible to lossy errors.Therefore, the photonic systems are worthy of exploration for quantum computing and quantum machine learning.In addition, the integrated platforms have the advantages of ultracompact size, high-density integration and high programmability, which make them more appealing for realizing a large-scale programmable quantum microprocessor.We thus provide a detailed review on the intersection of quantum computing and machine learning from the perspective of the integrated photonics platform.It is the hope of the authors that this comprehensive review will allow researchers to understand the status and challenges of quantum computing on silicon photonics platforms and, thus, inspire and contribute to their further development. Objective and Scope of the Review The objective of this review is to provide an integrative understanding of quantum computing and machine learning, exploring their fundamental principles, state-of-the-art techniques and emerging applications.Our aims are as follows: • Discuss the current state of research in quantum computing and machine learning; • Present case studies and experimental results that demonstrate the potential to integrate quantum computing; • Examine the challenges and opportunities associated with integrating these technologies; • Outline future directions and open research questions in this rapidly evolving field. In this review, we aim to provide a comprehensive understanding of the principles, techniques and emerging applications of the integration of quantum computing and machine learning.We discuss the current state of research based on integrated photonic platforms in this rapidly evolving field, identify the challenges and opportunities associated with integrating these technologies and outline future directions and open research questions. 
Organization of the Review This review is organized into eight sections, and the structure is as follows: • Section 2 provides an overview of the quantum mechanics principles and QC basics, including quantum superposition, quantum entanglement, quantum measurements, qubit, quantum gates and circuits and quantum algorithms and complexity; • Section 3 provides an overview of quantum algorithms and complexity in terms of quantum machine learning and quantum optimization algorithms; Quantum computing essentially harnesses some unique properties of quantum mechanics to gain a speedup for some specific computational problems compared to similar tasks on classical computers [18].One such feature of quantum theory is superposition.Quantum superposition is a unique property of quantum mechanics [1] that allows a quantum state to be in multiple states at the same time until it is measured.This phenomenon is related to the wave-like nature of quantum particles, such as electrons or photons, which allows them to occupy different positions, energies or other properties at the same time.Mathematically, a quantum system's state is represented by a vector in a complex Hilbert space, and the superposition principle implies that any linear combination of these basis vectors is also a valid state for the system.Superposition is crucial for understanding the behavior of quantum systems and is a key concept underlying many quantum phenomena, such as the so-called "quantum parallelism" and quantum entanglement. Quantum entanglement is another unique phenomenon, in which the states of two or more qubits become intertwined, such that the state of one qubit cannot be described independently of the state of the other(s) [19].Quantum entanglement gives rise to nonclassical correlations.This property arises due to the superposition principle and has profound implications for quantum computing.Entangled qubits can be created through operations like the controlled-NOT (CNOT) gate and can be utilized to perform complex, correlated operations on multiple qubits simultaneously.Quantum entanglement may provide more efficient computation and communication, as well as novel protocols for secure information exchange and distributed computing [20], although the latter statement has never been rigorously proven. Quantum measurement, also known as the "measurement problem", is a key concept in quantum mechanics that describes the process of observing or measuring a quantum system [21].Due to the superposition principle, a quantum system can exist in multiple states simultaneously until a measurement is performed.Upon measurement, the quantum system collapses into one of the possible states, with probabilities determined by the squared magnitudes of the coefficients associated with each state.This collapse is inherently probabilistic, and the outcome cannot be predicted with certainty.Quantum measurement challenges our classical understanding of how physical systems behave, and it is still a topic of ongoing research and debate. 
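To make the notions of superposition, entanglement via the CNOT gate and probabilistic measurement concrete, the following minimal state-vector simulation (a generic NumPy sketch of our own, not tied to any particular hardware or software stack) prepares a Bell state with a Hadamard and a CNOT and samples measurement outcomes according to the Born rule.

```python
# Minimal state-vector illustration of superposition, entanglement and measurement.
# Basis ordering for two qubits: |00>, |01>, |10>, |11>.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # Hadamard: creates superposition
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)        # control = first qubit

psi = np.zeros(4)
psi[0] = 1.0                                        # start in |00>
psi = np.kron(H, I2) @ psi                          # (|00> + |10>) / sqrt(2)
psi = CNOT @ psi                                    # Bell state (|00> + |11>) / sqrt(2)

probs = np.abs(psi) ** 2                            # Born rule
rng = np.random.default_rng(0)
outcomes = rng.choice(4, size=10000, p=probs)       # repeated measurements "collapse" the state
counts = np.bincount(outcomes, minlength=4)
for idx, label in enumerate(["00", "01", "10", "11"]):
    print(f"|{label}>: prob = {probs[idx]:.2f}, sampled fraction = {counts[idx] / 10000:.3f}")
```

Only the outcomes 00 and 11 appear, each with probability close to 1/2, which is the nonclassical correlation referred to above: the state of either qubit alone is completely random, yet the two results always agree.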
Quantum Computing Basics
The fundamental unit of quantum computing is the quantum bit, or qubit, which, unlike classical bits, can represent not only 0 and 1 but also a superposition of both states [22]. Mathematically, a qubit can be described as a linear combination of its basis states |0⟩ and |1⟩ as |ψ⟩ = α|0⟩ + β|1⟩, where α and β are complex numbers satisfying |α|² + |β|² = 1. This unique property allows quantum computers to process a vast amount of information simultaneously by encoding multiple possibilities in a single qubit, thus enabling them to solve problems that are intractable for classical computers [23]. Quantum gates are the fundamental operations used to manipulate the states of qubits in a controlled manner [24]. Unlike classical gates, which operate on bits, quantum gates operate on qubits and are represented as unitary matrices. Some common quantum gates include the Pauli-X, -Y and -Z gates, the Hadamard gate and the CNOT gate. These gates can be combined to form quantum circuits, which can then be used to implement quantum algorithms. Notably, quantum gates are reversible, meaning that they can transform a quantum state back to its original state, and the inverse of a quantum gate can easily be computed [25].

Quantum Computing with Linear Optics
A qubit is often encoded in photonics using a single photon with two optical modes. These modes can encompass various degrees of freedom, including time, polarization, frequency and orbital angular momentum [26][27][28]. This survey specifically concentrates on path encodings of a photon. To represent a qubit, it is common to use two waveguides, where the upper waveguide indicates a logical state of |0⟩ when a single photon is present and the lower waveguide represents a logical state of |1⟩. Likewise, this definition can be extended to encompass the encoding of d-dimensional qubits when the photon can occupy d waveguides. In linear photonic quantum information processing, the core operation is the preparation of multipartite entangled states, considered as resources of quantum communication and computation. Due to the absence of nonlinearities, the generation of entanglement in photonics inherently relies on probabilistic methods [29]. A photonic implementation of a C-Phase two-qubit gate using interferometers is depicted in Figure 1b, whose scheme is developed in Refs. [30,31]. The interferometer in this setup has six modes and comprises three beam splitters with a transmissivity of 1/3. The two input qubits correspond to two photons that enter the four spatial modes of the interferometer. Specifically, the first qubit is associated with the top two spatial modes, while the second qubit is associated with the bottom two spatial modes. To ensure the proper definition of qubits in the output, only those output scenarios where one photon occupies the top two spatial modes and the other photon occupies the bottom two spatial modes are selectively considered, disregarding all other possible output results. This selective process, called post-selection, is a probabilistic way of generating an entangled output configuration. It is easy to see that the success probability of the C-Phase gate is 1/9. Another basic requirement in quantum computation is the generation of multiple pairs of entangled photons, which is core to realizing graph states and error-protected qubits [32]. Figure 1c shows a simple scheme to produce an entangled qubit-pair source. Four coherently pumped spiral waveguides (1.5 cm long) initially generate two pairs of maximally entangled photons. These photons are then spatially separated using
integrated filters of asymmetric Mach-Zehnder interferometers (AMZIs) and Mach-Zehnder interferometers (MZIs).The entangled source |00⟩ + |11⟩ is produced through the waveguide crossers.With these two simple examples, two key elements are identified in linear optical quantum computing: quantum interference in the linear optical circuits and post-selection.The measurements represent non-unitary operations, and such effective interaction is often called measurement-induced nonlinearity.However, this probabilistic post-selection limits the gate numbers and cascaded layers, which further limits the performance of universal quantum computation. The detection system is a multi-channel superconducting single-photon detector.It can absorb an amount of energy equivalent to a single photon and convert it into an electrical signal in the superconducting circuit.Then, the signal is amplified and processed by the time tagger to measure the coincidence count. Quantum Machine Learning Quantum computing uses entanglement, superposition and interference to perform certain tasks significantly faster than classical computing, sometimes exponentially.In fact, although such speedups have been observed for a well-designed problem, for data science, achieving such speedups is still uncertain, even at a theoretical level.This is precisely one of the main goals in building quantum machine learning (QML) [33].QML algorithms for universal quantum computers have been proposed and small-scale demonstrations have been implemented.Relaxing the requirement of universality, quantum machine learning for NISQ processors has emerged as a rapidly advancing field that may provide a plausible route towards practical quantum-enhanced machine learning systems.From the aspect of machine learning models, machine learning algorithms are classified into the three categories: supervised learning, unsupervised learning, reinforcement learning.From the aspect of quantum data encoding, the quantum machine learning is classified into discrete variable quantum computing and continuous variable quantum computing, as shown in Figure 2. In Table 1, we present a comprehensive summary of quantum machine learning algorithms along with their diverse applications across various platforms.The subsequent section provides a succinct yet informative introduction to these quantum neural networks, shedding light on their unique attributes and applications within the quantum computing landscape. Quantum Neural Networks For a classical neural network model, artificial neural networks (ANNs) are comprised of an input layer, one or more hidden layers and an output layer.The connections between layers have two parts: the linear part and the nonlinear part, as shown in Figure 3a.The linear part can be expressed by a vector-matrix multiplier.The nonlinear activation function is a nonlinear function.As a comparison, quantum neural networks (QNNs) combine the architecture of traditional neural networks with principles of quantum computing, thereby establishing a novel paradigm for data processing.QNNs are usually represented as variational circuits, which are parameterized quantum circuits that are optimized using classical optimization techniques (Figure 3b).The power of quantum neural networks is also an important open question, attracting significant attention.Currently, quantum neural networks have demonstrated their quantum advantage in specific tasks, as evidenced by recent studies [44,45]. 
Variational Quantum Classifier Variational Quantum Classifier (VQC) [46] is a type of quantum machine learning algorithm that leverages the principles of quantum computing to perform classification tasks on data.VQC is built on the concept of variational circuits.The goal of a VQC is to find the optimal parameters that minimize a cost function, which typically represents the difference between the predicted output and the actual output for a given dataset.As shown in Figure 3, the structure of VQC consists of three parts, including the encoding layer, circuit layer and measurement, which correspond to the input layer, hidden layer and output layer of classical neural networks, respectively.The VQC algorithm can be broken down into the following steps: • Data encoding: The classical data are encoded into a quantum state using a quantum feature map.This process translates the input features into a higher-dimensional Hilbert space, where quantum effects can be exploited for classification; • Variational circuit: The parameterized quantum circuit, often referred to as the ansatz, processes the encoded quantum data.The circuit's parameters are adjusted through the optimization process to minimize the cost function; • Measurement: The output of the variational circuit is measured, collapsing the quantum state into a classical probability distribution.This measurement provides the predictions for the input data.• Optimization: A classical optimization algorithm, such as gradient descent, is used to update the parameters of the variational circuit based on the cost function.This iterative process continues until the cost function converges to a minimum value, which signifies the best possible classification performance; • Evaluation: Once the optimal parameters are found, the VQC can be evaluated on unseen data for classification tasks.Overall, the research on VQC has provided insights into the theoretical foundations and practical applications of this algorithmic approach.VQC is frequently utilized to build a QNN, which is a counterpart to the conventional neural network. Variational Quantum Classifiers are promising for a variety of machine learning applications, particularly in cases where quantum advantages may lead to improved performance compared to classical ML algorithms. Quantum Convolutional Neural Networks (QCNN) QCNN [34] is an area of research that explores the potential of quantum computing to accelerate the training and inference of neural networks.Ref. [9] proposes a quantum version of the convolutional neural network (CNN), which is a widely used architecture in classical machine learning.The authors show that QCNN can achieve better performance than classical CNNs on certain image recognition tasks. Quantum Long Short-Term Memory Ref. [35] extends the classical LSTM into the quantum realm by replacing the classical neural networks in the LSTM cells with VQCs, which would play the roles of both feature extraction and data compression.In Ref. [36], the researchers demonstrate that a long short-term memory (LSTM) neural network can successfully learn to model quantum experiments by correctly predicting output state characteristics for given setups without the necessity of computing the states themselves. 
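The variational models above (VQC, QCNN and QLSTM) share essentially the same workflow: encode classical data into a parameterized circuit, measure an expectation value, and let a classical optimizer update the parameters. The sketch below is a deliberately tiny single-qubit version of that loop written in plain NumPy rather than a quantum SDK; the circuit, dataset and loss function are invented purely for illustration.

```python
# Minimal variational-circuit training loop (illustrative only):
# a single qubit classifies 1-D inputs by the sign of <Z> after an encoding
# rotation RY(x) followed by a trainable rotation RY(theta).
import numpy as np

def ry(angle):
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def model(x, theta):
    psi = ry(theta) @ ry(x) @ np.array([1.0, 0.0])   # encode, then variational layer
    return float(psi @ Z @ psi)                      # expectation value <Z> in [-1, 1]

# Toy dataset: inputs near 0 labelled +1, inputs near pi labelled -1.
xs = np.array([0.1, 0.3, 2.9, 3.0])
ys = np.array([1.0, 1.0, -1.0, -1.0])

def loss(theta):
    preds = np.array([model(x, theta) for x in xs])
    return float(np.mean((preds - ys) ** 2))

theta, lr, eps = 1.5, 0.2, 1e-4
for step in range(200):                              # finite-difference gradient descent
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
    theta -= lr * grad

print(f"trained theta = {theta:.3f}, loss = {loss(theta):.4f}")
print([round(model(x, theta), 2) for x in xs])       # predictions should track the labels
```

On real hardware the expectation value would be estimated from repeated measurements and the gradient obtained, e.g., via parameter-shift rules, but the encode-circuit-measure-update structure is unchanged.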
Quantum Generative Adversarial Network (QGAN) QGAN [37] is an emerging area of research that aims to apply the principles of quantum computing to the field of generative modeling.Refs.[37,38] introduce the notion of QGAN, where the data consist either of quantum states or of classical data, and the generator and discriminator are equipped with quantum information processors. Quantum Transfer Learning Ref. [39] extends the concept of transfer learning, widely applied in modern machine learning algorithms, to the emerging context of hybrid neural networks composed of classical and quantum elements.This paper proposes different implementations of hybrid transfer learning, but we focus mainly on the paradigm in which a pre-trained classical network is modified and augmented by a final variational quantum circuit. Quantum Reinforcement Learning Early versions of quantum reinforcement learning (RL) were based on the Grover algorithm, which resulted in a quadratic speedup compared to classical versions [47,48].However, these methods could only be used for tasks with discrete action and state spaces.Subsequently, with the development of quantum neural networks, the QRL algorithm was extended to continuous space, rendering it more compatible with contemporary NISQ devices [40,41]. Hybrid Classical-Quantum Neural Network Although there are many quantum analogs of the classical DNN, NISQ will be the only quantum devices that can be used in the near-term, where only a limited number of qubits without error-correcting can be used.For this reason, Ref. [42] introduces the quantum deep neural network (QDNN), which is a composition of multiple quantum neural network layers (QNNLs).Unlike other approaches of quantum analogs of DNNs, QDNN still keeps the advantages of the classical DNN such as the non-linear activation, the multi-layer structure and the efficient backpropagation training algorithm.The inputs and the outputs of the QDNN are both classical, which makes the QDNN more practical.Ref. [43] proposes a hybrid quantum-classical neural network architecture where each neuron is a variational quantum circuit. Fundamental Devices A silicon-based photonic chip typically comprises devices such as waveguides, beam splitters, optical couplers and modulators.In this section, a concise overview is provided. Waveguides The optical waveguide serves as a fundamental component in a quantum photonic chip, and the integration of optical elements onto a single chip is achieved through the fabrication of optical waveguides.Common optical waveguides include strip and ridge waveguides, used, respectively, for passive and active optical devices.The characteristics of waveguides are determined by the materials used and the manufacturing techniques employed.Presently, owing to continuous technological advancements, photon absorption and losses in silicon-based waveguides have reached notably low levels [78].Among these platforms, silicon-on-insulator (SOI) has emerged as a highly favored integrated quantum optics platform due to its compatibility with CMOS manufacturing techniques. 
Beam Splitters An optical beam splitter functions by dividing an incoming light beam into two or more separate beams, thereby distributing the input light across multiple output paths.The most widely employed beam splitter structure is the multimode interferometer (MMI).Other alternatives, such as directional couplers and Y-branch couplers, also exist.A typical photonic beam splitter is shown in Figure 5.It is a multi-mode interferometer (MMI) with specially designed interference length and multi-mode area that splits the photon into a superposition state.For a 50:50 beam splitter, its transformation matrix can be written as The advantage of the MMI lies in its less stringent manufacturing requirements, exhibiting robustness against manufacturing errors.In 2012, the first on-chip 1 × 2 MMI was experimentally demonstrated [79], followed by the design of optimized splitters to further enhance performance and reduce device size [80,81]. Phase Shifters In addition to the MMI, the phase shifter is another component required for constructing a linear optical interferometer.A photonic phase shifter (PS) is shown in Figure 6.It is simply a waveguide with a TiN resistor fused to it.A current can flow through it using the Digital-to-Analogue Converter (DAC), and the latter generates heat and changes the refractive index of the surrounding waveguide.The changes in the optical path induce a phase difference θ.Its transformation matrix is written as In dual encoding, if the PS θ moves to the lower arm of the waveguide, its transformation matrix is adjusted accordingly so that the element T 4,4 becomes e iθ . Modulator A photonic modulator is a core device of integrated quantum photonics that enables encoding information onto optical signals for various applications in quantum information processing.The plasma dispersion (PD) effect is utilized in silicon-based modulators to achieve electro-optic modulation.By controlling the density of free carriers through an applied electric field, the phase or amplitude of light passing through the material can be modulated.In particular, silicon-based electro-optic modulators manipulate carrier density in their active regions to leverage this effect for modulation purposes.The commonly used optical structure for modulators is the Mach-Zehnder interferometer (MZI), which consists of the beam splitters and the phase shifters, as previously introduced.It enables the manipulation of photons with arbitrary splitting ratios and phase differences.The unit for MZI is formed by two beam splitters and two tunable phase shifts, and its transformation can be written as According to Eular's formula, the matrix elements in Equation ( 4) can be simplified to Therefore, the T MZI can be written as The splitting ratio is determined by the inner PS angle, θ, to be sin 2 θ 2 : cos 2 θ 2 , and the phase difference between two output ports is e iϕ .When the PS position is changed to add the ϕ at the front of the MZI structure, its transformation matrix can then be expressed as The transformation matrices of BS and PS both satisfy the definition of a Unitary matrix, given by and it is obvious that T MZI is also a Unitary matrix. 
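For concreteness, the transfer matrices just described can be written down numerically and checked. The snippet below uses the standard textbook forms of a symmetric 50:50 beam splitter, a single-arm phase shifter, and the MZI composed of them; the sign and phase conventions are our assumption and may differ from the paper's own equations, but the two properties quoted in the text, unitarity and the sin²(θ/2) : cos²(θ/2) splitting ratio, come out as stated.

```python
# Hedged reconstruction of the standard 2x2 transfer matrices for a 50:50 beam
# splitter (BS), a phase shifter (PS) on one arm, and the MZI composed as
# BS * PS(theta) * BS, preceded here by an outer phase PS(phi).
import numpy as np

def bs_50_50():
    return np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def ps(angle):
    return np.diag([np.exp(1j * angle), 1.0])

def mzi(theta, phi=0.0):
    return bs_50_50() @ ps(theta) @ bs_50_50() @ ps(phi)

theta, phi = 0.7, 1.1
T = mzi(theta, phi)

# Unitarity check: T^dagger T = identity.
assert np.allclose(T.conj().T @ T, np.eye(2))

# Splitting-ratio check: injecting light into port 0 yields
# |T[0, 0]|^2 = sin^2(theta/2) and |T[1, 0]|^2 = cos^2(theta/2)
# (up to the port-labelling convention chosen here).
print(abs(T[0, 0]) ** 2, np.sin(theta / 2) ** 2)
print(abs(T[1, 0]) ** 2, np.cos(theta / 2) ** 2)
```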
An N-mode integrated quantum photonic circuit is composed of several MZI structures, and it can form a complicated N × N Unitary matrix, as shown in Figure 7.The nth MZI between modes i and j is denoted as M n .Its transformation matrix can be represented as an Identity matrix I N with four matrix elements {a i,i , a i,j , a j,i , a j,j } replaced by T MZI , which is expressed as a j,i a j,j . . . where a i,i = T MZI (1, 1), a i,j = T MZI (1, 2), a j,i = T MZI (2, 1) and a j,j = T MZI (2, 2).Therefore, the Unitary matrix of this N-mode photonic circuit U N can be represented as the product of MZI transform matrices in the designed orders as Coupler An optical coupler is used to efficiently couple light in and out of optical waveguides on a chip.Its design aims to facilitate the transmission of light signals between the chip and external optical components.Edge couplers are typically implemented at the periphery or sidewall of a chip, facilitating the ingress or egress of light into/from the waveguide, thereby offering notable advantages such as enhanced efficiency and expanded bandwidth.However, it presents challenges in terms of fabrication processes.Over the past decade, researchers have extensively studied edge couplers and proposed various structural transformations, including edge couplers based on inverse taper with different nonlinear profiles [82] or consisting of double-tip inverse taper [83].Grating coupling utilizes a grating structure to couple the light signal into the chip at a vertical angle.It offers advantages such as compact size and flexible coupling positions, but also has limitations like lower efficiency and narrower bandwidth.Currently, there are ongoing expansions in the applications of grating couplers, such as two-dimensional grating couplers [84] and polarization-splitting grating couplers [85]. Main Components By utilizing the aforementioned fundamental devices, it becomes feasible to achieve silicon-based photonic quantum chips, thereby enabling applications such as large-scale quantum computing and quantum simulation.All of these applications require functionalities encompassing photon generation, manipulation and detection.In this paper, we provide a comprehensive introduction to each of these pivotal components. Photon Source Photon sources find extensive use across various applications, including boson sampling, quantum computing, quantum communication, etc. 
Depending on their application, there exist three primary techniques for preparing quantum light sources: spontaneous parametric down-conversion (SPDC), stimulated four-wave mixing (SFWM) and quantum dots.The first two methods of single-photon sources are probabilistic in nature, employing nonlinear processes to generate inherently correlated photon pairs.These methods excel in photon production while preserving high indistinguishability between photons.However, the generation of photon pairs through these two approaches involves a probabilistic approach, with a trade-off between generation probability and multi-photon purity.For quantum dot photon sources, the main mechanism is based on the emission of semiconductor material.A pair of carriers, called the exciton, is excited by the injected laser pulse in the quantum dot.The decay of the exciton then emits a single photon via the spontaneous emission process.This is a deterministic single-photon source that each laser pulse would generate, theoretically, only one photon each time.There have been reports that the best single-photon source has reached the detection efficiency of 0.5 for each laser pulse, considering all the collection efficiency, system loss and detection efficiency.However, the single-photon source also possesses its own drawbacks; for instance, it requires a critical working environment, with ultra-low temperature and high-vacuum chambers.It is difficult to maintain the indistinguishability of photons generated from separated quantum dots, and people usually take active de-multiplexing technologies to separate a single-photon source as a multi-photon source.Quantum dot can only generate single photons; it is unable to generate other non-classical quantum states such as the squeeze state, which is another fundamental resource for quantum photonic computing. In this review, we focus on the χ (3) nonlinear material that induces an optical conversion process called spontaneous four-wave mixing (SFWM).It would absorb two pump photons and generate a pair of signal and idler photons.This process is widely used for heralded single-photon source, entangled photon pair and squeezed quantum light source with low phase or amplitude noise beneath the standard quantum limit. Based on the difference between signal and idler photon, the process can be divided into two categories: the non-degenerated SFWM as seen in Figure 8a, in which the two photons generated have the different wavelengths, and the degenerated SFWM as seen in Figure 8b, where the two photons have the identical wavelength.From the pump laser point of view, the non-degenerated SFWM is also called the single-pump scheme, as it only requires a single laser pulse to create the photon pair.The degenerated SFWM is called the dual-pump scheme, as the experimental set-up requires two laser pulses working simultaneously to create the photon pair.The relation between pump frequency and generated photon frequency satisfies the laws of energy conservation and momentum conservation: where k is called the wavevector.In waveguide modes, the momentum conservation is also called the phase-matching condition; these wavevectors are the propagation constant β(ω) = n e f f (ω)ω and n e f f (ω) is the effective index of the corresponding frequency decided by the material nonlinear property. 
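For reference, the standard conservation relations for spontaneous four-wave mixing, consistent with the description above, can be written as follows (conventions assumed; in a waveguide mode the wavevector is replaced by the propagation constant):

```latex
\begin{aligned}
  \omega_{p_1} + \omega_{p_2} &= \omega_s + \omega_i
  && \text{(energy conservation)}\\
  \mathbf{k}_{p_1} + \mathbf{k}_{p_2} &= \mathbf{k}_s + \mathbf{k}_i
  && \text{(momentum conservation / phase matching)}\\
  \beta(\omega) &= \frac{n_{\mathrm{eff}}(\omega)\,\omega}{c}
  && \text{(propagation constant of the guided mode)}
\end{aligned}
```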
To realize the photon generation, the main problem is to achieve the phase-matching condition.Assuming the non-degenerated SFWM condition of ω p 1 = ω p 2 = ω p and neglecting other nonlinear effects, the difference of propagation constant can be expressed as By taking the Taylor expansion of β(ω p ), the phase-matching condition can be expressed as where ∆β is expanded in β n .Due to the limitation of energy conservation law ω p 1 + ω p 2 = ω s + ω i , the difference of frequency can be written as δω = ω s − ω p 2 = −(ω i − ω p 1 ). Therefore, ∆β can be simplified as where the higher order of terms is ignored.The second-order derivative, β 2 , is known as the group velocity dispersion (GVD) of the waveguide.By selecting the point where GVD = 0, β 2 ≈ 0 can be achieved to meet the phase-matching condition.By considering the higher-order terms for the derivative of propagation constant [86], the phase-matching condition can be written as If the waveguide is designed to make β 2 and β 4 assume opposite signs and the magnitude is appropriately adjusted, phase matching can be achieved.When the pump photon and generated photon are propagating in different modes, their propagation constants are unrelated to each other [87].The phase-matching condition can thus be written as If the propagation constant difference ∆β matching with the group velocity β(ω) is found, the phase matching can be realized.Finally, the waveguide parameter can be modulated periodically with quasi-phasematching conditions [88], which are simplified as where Λ is the periodicity of poling designed to match ∆β = 0.For the simple case of β 2 ≈ 0, the approximation ω p ≈ ω s ≈ ω i is taken, and the energy conservation in the wavelength domain is expressed as The probability of the two-photon state is decided by the energy conservation and phase-matching condition, with expression given by This is interpreted as the distribution of two-photon state |11⟩ s,i at mode s and i, and the probability amplitude F(ω s , ω i ) is called the Joint Spectra Amplitude (JSA).The latter is dictated by the law of energy conservation and phase matching, and it can be expressed as where α(ω s + ω i − ω) is the complex amplitude of the pump ω p at the frequency ω s + ω i − ω.Usually, the pump spectrum is assumed to be a Gaussian distribution with a bandwidth decided by pump filter or laser property.ϕ(ω s , ω i , ω) is defined as where it is determined by the phase-matching condition.L is the interaction length of waveguide.|F(ω s , ω i )| 2 is the real measured probability of the photon pair and is called the Joint Spectra Intensity (JSI).Taking all these factors into consideration, the state can be expressed as where Ŝn s,i (ξ) is called the squeeze operator on the mode n, and ξ n is the squeeze parameter, determined by the material nonlinearity, interaction length, pump energy density and so on.Depending on whether the squeeze parameter condition is filtered or resonated SFWM, the output state is given by and, by writing ξ = re iϕ , the state in photon number basis is expressed as The probabilities for detecting n photons at mode s or mode i are the same, which can be expressed as Following Equation ( 22), a maximum entangled two-photon state from an SFWM process can be written as with different modes from 1 to n.And it is known that the state describing a composition system is decomposed as where {|u i ⟩} and {|v i ⟩} are orthonormal basis states called Schmidt modes.The Schmidt coefficients λ i are the "weights" of each subsystem satisfying ∑ i 
λ i = 1.The degree of factorizability is called the Schmidt number K and is defined as The photon purity P of this state is defined as where p = 1 represents K = 1 and λ 1 = 1, indicating a perfectly pure two-photon state.If P < 1 is measured, it means the state also contains other degrees of entangled photon pairs.For a maximally entangled state with the condition that λ n = 1 n and n − → ∞, P − → 0, which indicates that the state has almost no purity (maximally mixed) and is not suitable for a heralded single-photon source. In the weak pump regime, the multi-photon probability P(n) is relatively low and the purity P can be directly estimated as where g (2) (0) is called the second-order correlation.It is an experimentally measurable value that describes the statistics of photon pair correlations.From the reference [89], the g (2) can be written as where P ss (∆t) is the probability of measuring coincidence counts at the delay time of ∆t and P s is the probability of measuring signal photon at the detector. With the heralded photon measured, the remaining photon state can be used as a single-photon state, and the purity of this heralded single photon g h (t) describes the quantity of single-photon against the multi-photon emission.It can be written as where P ssi (∆t) and P si (∆t) are the probabilities of measuring coincidence count at the delay time of ∆t and P i is the probability of measuring signal photon.The noise of the measured photon counts is estimated by coincidence to accidental ratio (CAR).Coincidence counts between signal and idler photons from the same pair of photon generation are desired counts, while the spurious coincidence between time uncorrelated different pairs or other noises are called the accidental coincidences.The CAR is defined as where R si is the overall coincidence between signal channel and idler channel and R ac is the accidental coincidence.Currently, there are multiple platforms available for integrating SFWM, including UV-writing silica waveguides [69], Si [53,87] and SoI [90] platforms.To enhance the brightness of light sources and the purity of single-photon states, people have proposed long spiralled waveguides and microring resonators.Furthermore, to tackle the problem of non-deterministic photon production in parametric methods, various techniques such as time [91] or spatial [92] multiplexing have been implemented to enhance their performance. Manipulation Various degrees of freedom of photons such as path, polarization, frequency, spatial and temporal modes, etc., can be utilized for encoding quantum states.In particular, on silicon-based photonic chips, it is already possible to achieve encoding and manipulation of photon quantum states using multiple degrees of freedom.For instance, the path information of photons within parallel-transmitting multiple waveguides enables pathencoded quantum states.Different combinations of on-chip MZIs and phase shifters allow for arbitrary manipulation of path-encoded quantum states.As mentioned earlier, the optical circuit composing several MZIs are universal, meaning that the circuits can be programmed to achieve any Unitary evolution of quantum states encoded in m paths.There has been a study showing that an arbitrary N × N Unitary circuit can be decomposed by N(N − 1) 2 MZIs with specific orders.Two of the main decomposition schemes are the Triangle Circuit [93] and the Square Circuit [94]. 
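Picking up the circuit-composition picture behind the Triangle and Square decompositions just mentioned, the sketch below builds an N-mode unitary as a product of 2×2 MZI blocks embedded on neighbouring mode pairs and verifies that the result is unitary. The MZI convention is the same assumed one as in the earlier sketch; the specific (i, j) ordering and angle settings of a genuine Triangle (Reck-type) or Square (Clements-type) decomposition of a target unitary are not reproduced here.

```python
# Compose an N-mode circuit unitary as a product of 2x2 MZI blocks embedded on
# mode pairs (i, j). The MZI convention matches the earlier sketch and is an
# assumption; a real Triangle/Square layout prescribes the specific ordering
# and angles needed to reach a given target unitary.
import numpy as np

def bs():
    return np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def mzi(theta, phi):
    return bs() @ np.diag([np.exp(1j * theta), 1.0]) @ bs() @ np.diag([np.exp(1j * phi), 1.0])

def embed(t2, i, j, n):
    """Place a 2x2 block t2 on modes i and j of an n-mode identity."""
    m = np.eye(n, dtype=complex)
    m[np.ix_([i, j], [i, j])] = t2
    return m

n = 4
rng = np.random.default_rng(2)
U = np.eye(n, dtype=complex)
for (i, j) in [(0, 1), (1, 2), (2, 3), (0, 1), (1, 2), (0, 1)]:   # N(N-1)/2 = 6 MZIs for N = 4
    theta, phi = rng.uniform(0, 2 * np.pi, size=2)
    U = embed(mzi(theta, phi), i, j, n) @ U

print(np.allclose(U.conj().T @ U, np.eye(n)))   # the mesh is unitary by construction
```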
Compared to other encoding methods, the advantages of path encoding lie in its straightforward design, enabling high-precision programmable control.Moreover, it is currently extensively employed in the design of large-scale integrated silicon-photonics quantum chips. Single-Photon Detector Photon detection is the process of converting photon signals into electrical signals, which is a crucial step in quantum information processing that is aimed at retrieving information about quantum states.Single-photon detectors mainly include avalanche photodiodes (APDs) and superconducting nanowire single-photon detectors (SNSPD).Most APDs can operate at room temperature, but they exhibit low detection efficiency.At present, SNSPD is the most-studied device due to its advantages such as high detection efficiency, low time jitter, high signal-to-noise ratio, etc. Recent Advances in Chip-Based Quantum-Assist Computational Works With the progressive maturation of silicon-based integration technology, significant strides have been made in large-scale silicon quantum experiments, resulting in continuous enhancement of information processing capabilities and driving the advancement of optical quantum computing systems.In this section, we provide a comprehensive overview of recent advancements in silicon quantum photonics pertaining to the fields of quantum computing and machine learning. In the early stages, experiments based on integrated silicon chips primarily focused on demonstrating fundamental gates for universal computation.For instance, work [49] implemented single-qubit and two-qubit gates using path encoding on an integrated chip.Following this, a demonstration was conducted using two integrated CNOT gates to execute the Shor factorization algorithm on an integrated waveguide silica-on-silicon chip [63].However, these two examples implemented the unheralded CNOT scheme and and did not require auxiliary photons.A landmark achievement was the first implementation of the heralded quantum logic gates on a single SiO 2 chip in 2015 [72].This was also the first universal linear optical circuit to be realized on a silicon-based integrated chip, which is constructed by a cascade of 15 MZIs across 6 modes.Meanwhile, large-scale programmable integrated photonics quantum computing is gradually flourishing.In 2018, Qiang et al. achieved the first universal two-qubit silicon-based photonic quantum computing chip using large-scale silicon-based integrated optical technology [95].This work realized the generation of entangled photons, photon state preparation, manipulation and measurement on a single chip.This laid the foundation for the feasibility of largescale, high-precision, programmable photonic quantum computation using silicon-based photonic chip technology. With the advent of the noisy intermediate-scale quantum (NISQ) era, various platforms have emerged as choices to showcase quantum advantages in this period, and silicon-based quantum optical platforms are also important candidates.Currently, silicon-based photonic quantum computing has been widely applied in areas such as quantum neural networks [96], variational algorithms [97] and coherent Ising machines [98].In particular, neural networks stands as a crucial area in current quantum computing research.Here, we focus on optical neural networks that utilize the principle of optical coherence to perform linear matrix operations in photonics circuits [93,94].In 2017, Shen et al. 
proposed an on-chip integrated optical neural network constructed by a cascaded array of 56 programmable MZIs, where the parameters of this neural network were real numbers [99].This work successfully conducted experiments using a two-layer fully connected neural network to solve the vowel recognition task.However, due to the influence of noise, the accuracy was only 76.6%.In 2021, work [100] developed fully connected complex-valued neural networks based on an integrated silicon photonics platform.The optical neural networks presented here are capable of processing information in both phase and magnitude, resulting in significantly improved computational speed and energy efficiency.Simultaneously, various optical neural networks with Fourier transform and convolution structures have also been proposed [101,102].However, these approaches are limited by space consumption and the difficulty in real-time programming.To tackle these challenges, work [103] proposed an integrated diffractive optical network utilizing silicon chips with integrated ultracompact diffractive cells and programmable MZIs.This scheme enables parallel Fourier transformation and convolution operations.What originally required a linear matrix calculation using N 2 cascaded MZIs has now been reduced to using two ultracompact diffractive cells and N MZIs.This significantly minimizes the size of integrated photonic chips and reduces energy consumption.The effective training of these photon neural networks is another crucial issue that deserves attention.A gradient-free training scheme was proposed in Ref. [104], which is an efficient, physics-agnostic and closed-loop protocol for training optical neural networks on chip.In addition to the aforementioned neural networks, silicon-based optical chips can also be utilized for the implementation of machine learning models such as quantum autoencoders [105].Moreover, photonic neural networks are specifically tailored for addressing diverse machine learning tasks, encompassing prediction of molecular properties [106] and classification of financial data [107]. Boson sampling is also an important computational task [108].It is widely known that sampling from a distribution that is obtainable by photons propagating through a linear optical network becomes classically intractable as the photon number increases, which suggests that a photonic experiment implementing a Unitary evolution of input photons can be a viable candidate to demonstrate quantum advantage [109].At present, the experimental demonstration of boson sampling is mainly based on integrated photon platforms [66,[110][111][112][113][114][115].Recently, Paesani et al., achieved the generation of an eight-photon state and implemented the Gaussian boson sampling algorithm on a silicon-based photonic chip [75].Another set of the latest results is from Wang et al., who realized a large-scale programmable silicon-based photonic chip based on graph theory, integrating approximately 2500 components in a single device [50].This work demonstrates multi-photon high-dimensional quantum entanglement preparation and programmable boson sampling for specialized quantum computing.In addition, the application of photon sampling problems has been extensively studied in the fields of graph theory [116][117][118][119][120] and quantum simulation [121,122]. 
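The classical hardness underlying boson sampling comes from the fact that, for Fock-state boson sampling, the output probabilities are proportional to squared permanents of submatrices of the circuit unitary (Gaussian boson sampling involves hafnians instead), and no efficient classical algorithm for the permanent is known. The sketch below implements Ryser's formula, whose cost grows exponentially with matrix size; the random complex matrix merely stands in for a submatrix of an interferometer unitary.

```python
# Permanent of an n x n matrix via Ryser's formula, O(2^n * n) time.
# Boson-sampling output probabilities scale with |Perm(A)|^2 for submatrices A
# of the circuit unitary, which is why large instances are believed to be
# classically intractable.
import itertools
import numpy as np

def permanent(a):
    n = a.shape[0]
    total = 0.0 + 0.0j
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            row_sums = a[:, list(cols)].sum(axis=1)
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

rng = np.random.default_rng(1)
a = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
print(permanent(a))
```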
Furthermore, utilizing quantum algorithms for molecular simulation is an intriguing research direction.Typically, phase estimation [9] or variational quantum eigensolvers [97] are employed to find the eigenvalues and eigenvectors of a Hamiltonian.Both these algorithms have been implemented on silicon-based devices [123,124].Recently, an experimental realization of a combined scheme that incorporates these two algorithms has demonstrated remarkable fidelity, exceeding 99% in approximating ground-and excited-state eigenvalues [125].Scalability: Scaling up the number of qubits in a quantum computer is a significant challenge due to the need for error correction and fault tolerance; b.System size: The limited number of qubits impacts the size and complexity of quantum federated learning algorithms that can be executed, hindering the ability to solve larger problems; c. Challenges and Open Issues Resource-efficient algorithms: Designing quantum algorithms that are resourceefficient in terms of qubits; gates can help mitigate these limitations. (2) Coherence time: a.Quantum gate operations: The short coherence time limits the number of quantum gate operations that can be performed before the quantum state becomes decoherent, impacting the complexity of quantum federated learning algorithms; b.Qubit materials and designs: Investigating novel qubit materials and designs that exhibit longer coherence times can help overcome the limitations posed by decoherence in quantum computations; c. Environmental noise: Reducing the impact of environmental noise on quantum hardware can help extend coherence times and improve the performance of quantum algorithms; d.Dynamical decoupling: Exploring dynamical decoupling techniques, which involve applying a sequence of control pulses to mitigate the effects of noise, can contribute to the preservation of quantum states during computations. (3) Connectivity: a. Topology: Quantum hardware architectures may have different qubit connectivity topologies, which can impact the performance of quantum algorithms, including quantum federated learning; b.Hardware-aware algorithms: Developing hardware-aware algorithms that consider qubit connectivity can help optimize the implementation of quantum federated learning on various quantum devices.Addressing these challenges and open issues in greater detail will help drive significant advancements in the field of quantum machine learning.Ongoing research and development efforts will be essential for overcoming these obstacles and realizing the full potential of quantum-enhanced federated learning in various applications and industries. Open Opportunities and Future Directions Integrated photonics is a rapidly evolving field with several open opportunities and future directions.Some key areas include the following: (1) Higher Integration Technologies: a. Increased complexity: Developing more complex integrated photonics circuits with higher component counts to enable advanced functionalities; b.Multi-functional chips: Designing chips that serve multiple purposes, integrating various components on a single platform. 
(2) Novel Materials and Components with Explorative New Materials: Researching novel materials with unique optical properties to enhance device performance.In addition, it is possible to explore the implementation of heterogeneous integrated photonic chips based on multiple material systems; (3) Machine Learning Assistance Using machine learning technologies: Combining machine learning algorithms with integrated optical devices can improve the performance of quantum machines. Conclusions Integrated photonic quantum technologies provides a new pathway for quantum computing and machine learning, harnessing the innate properties of photons to achieve rapid information processing and transmission.The silicon-based photon platform exhibits significant promise in this domain, as evidenced by our comprehensive review summarizing the latest advancements.Furthermore, we highlight some opportunities and challenges faced by integrated photonic quantum technology currently, seeking to offer novel perspectives for future advancements in this field.With ongoing technological progress, we firmly anticipate that integrated chip technology will assume an increasingly pivotal role across diverse applications. Figure 2 . Figure 2. Summary of various quantum machine learning tasks. Figure 3 . Figure 3.The structure of classical neural networks and Variational Quantum Classifier. Figure 5 . Figure 5. Multi-mode interferometer to split the light passively with a fixed ratio of 1:1. Figure 6 . Figure 6.Phase shifter to induce relative phase change between two arms. Figure 7 . Figure 7.Typical schematic of an N-mode photonic integrated circuit to represent an arbitrary N × N Unitary matrix.The final Unitary matrix form is the product of the matrices for each MZI component. Figure 8 . Figure 8.(a) Non-degenerated and (b) degenerated spontaneous four-wave mixing process to generate photon pairs on chips by absorbing two pump photons. ( 1 ) Error correction: a.Fault-tolerant quantum computation: Developing fault-tolerant quantum computation techniques, which allow for the execution of quantum algorithms despite the presence of errors, is crucial for the practical implementation of quantum federated learning; b.Resource overhead reduction: Investigating methods to reduce the resource overhead associated with quantum error correction, such as optimized encoding schemes and error-correction-friendly quantum circuit designs, can enable the efficient integration of error correction into quantum federated learning algorithms.(2)Error-aware training: a. Noise extrapolation: Techniques such as Richardson extrapolation and zeronoise extrapolation can be used to estimate and mitigate the impact of noise on quantum federated learning algorithms; b.Error-aware training: Developing error-aware training techniques that incorporate noise models into the learning process can help enhance the performance of quantum federate learning algorithms in noisy environments. Table 1 . Summary for quantum machine learning algorithms.
Gravity Field Recovery Using High-Precision, High–Low Inter-Satellite Links : Past temporal gravity field solutions from the Gravity Recovery and Climate Experiment (GRACE), as well as current solutions from GRACE Follow-On, suffer from temporal aliasing errors due to undersampling of the signal to be recovered (e.g., hydrology), which arise in terms of stripes caused by the north–south observation direction. In this paper, we investigate the potential of the proposed mass variation observing system by high–low inter-satellite links (MOBILE) mission. We quantify the impact of instrument errors of the main sensors (inter-satellite link and accelerometer) and high-frequency tidal and non-tidal gravity signals on achievable performance of the temporal gravity field retrieval. The multi-directional observation geometry of the MOBILE concept with a strong dominance of the radial component result in a close-to-isotropic error behavior, and the retrieved gravity field solutions show reduced temporal aliasing errors of at least 30% for non-tidal, as well as tidal, mass variation signals compared to a low–low satellite pair configuration. The quality of the MOBILE range observations enables the application of extended alternative processing methods leading to further reduction of temporal aliasing errors. The results demonstrate that such a mission can help to get an improved understanding of different components of the Earth system. Introduction In times of a changing climate the need for innovative observation techniques for capturing geophysical processes in the Earth system becomes increasingly urgent. In this context, the observation of the temporal gravity field by satellites from space play an important role when investigating, e.g., rapid changes in the cryosphere, oceans, water cycle, and solid Earth processes on a global scale. For the determination of temporal gravity fields, in the last decade satellite missions such as GRACE [1] or Challenging Minisatellite Payload (CHAMP) [2,3] orbited around the globe and helped to get a better understanding of the Earth's mass flux signals. CHAMP was based on high-low satellite-to-satellite tracking (SST) exploiting the Global Positioning System (GPS) [4] over a time span of 10 years. The accuracy of the CHAMP orbit information of 2-3 cm [5] derived from GPS allowed for resolving only the long wave range of the time varying gravity field, with spatial scales of ≈1000 km, e.g., Baur 2013 [6]. Analyzing the perturbed orbit of other Low Earth Orbiters (LEO), such as the Swarm satellites, allows for a similar performance [7]. The GRACE mission reached spatial scales of the temporal gravity field of ≈300 km and below due to a combination of K-band microwave low-low inter-satellite ranging between two identical satellites following each other in the same orbit at a distance of about 220 km with micrometer precision, and high-low GPS satellite-to-satellite tracking plus accelerometer observations. These missions improved our knowledge of water mass variations on the continents, in the oceans, and the atmosphere to a great extent. Additionally, the static gravity observation equations and stochastic modelling. In Section 4 the estimated gravity field solutions are analyzed and assessed. The main conclusions are summarized in Section 5, and in Section 6, a short outlook is given. 
Observation Geometry In contrast to the past GRACE and the current GRACE Follow-On missions, which are mainly based on LEO satellites (several hundred km), the MOBILE minimum configuration consists of a constellation of two high and one low orbiting satellites. As done for GRACE and GRACE Follow-On, the main observable is the gravity-induced inter-satellite distance change, which is in case of MOBILE measured between medium orbiting satellites (MEO; several thousand km) and LEO satellites though. As a second gravity observation type, high-precision orbit positions based on Global Navigation Satellite System (GNSS) orbit determination are used. This idea of high-precision high-low tracking was first investigated by Hauk et al. 2017 [26] using the inter-satellite link technique as part of the payload on-board Galileo satellites of future generation in connection with LEO satellites, where the main error sources and the corresponding achievable performance were analyzed. Due to the fact that the MOBILE constellation presents a stand-alone concept without the need to place an additional payload on another space infrastructure, and the very large distance between the high-and the low orbiting satellites, which plays a crucial role in the framework of high-precision high-low tracking, dedicated MEO satellites were included in the concept. It should be emphasized that alternatively, a constellation of one MEO and two LEO satellites could be envisaged, which turns out to provide nearly the same performance as the proposed one, but might be more expensive due to the need to build and maintain at least two LEO satellites in orbit. Table 1. The MEO satellites orbit at an altitude of about 10,150 km in the same orbital plane, separated by an 180-degree mean anomaly as alternating targets of the LEO satellites, in order to maximize the visibility and thus observation time. The LEO satellite is orbiting in an altitude of about 360 km. Both LEO and MEO satellites are flying in polar orbits in order to maintain a long-term stable formation (no relative drifts of the orbit planes). Additionally, two LEO satellites with near-polar orbits flying in an altitude of about 470 km with an inter-satellite distance of 200 km are set up in order to perform comparability studies between the MOBILE constellation and a GRACE Follow-On-like mission. All orbits have certain repeat cycles, after which the satellites reach the same position on Earth again in order to maintain a stable ground track pattern and related stable gravity model quality. The choice of the orbit height of the MEO satellites underlies three major constraints: (1) A high altitude of several thousand kilometers is necessary in order to ensure long observation periods and preferably measurements of multi-directional distance variations, with a strong dominance of the radial component, resulting in a close to isotropic error behavior of the retrieved gravity field solution (see Section 4). (2) The distance between a MEO-LEO pair must not be too large, because the larger the distance, the more difficult it is to fulfill the 1-µm accuracy requirement for the inter-satellite link established by a laser range interferometer (see Section 2.2). (3) The third constraint is driven by solar radiation belts encircling the Earth in which energetic charged particles are trapped inside the Earth's magnetic field [27], which are of different intensities dependent on the solar cycle, altitude, and inclination of the satellite orbit. 
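As a rough sanity check on the orbit geometry quoted above, the Keplerian periods of circular orbits at the MEO and LEO altitudes can be computed from T = 2π√(a³/GM). The constants below are assumed textbook values, and perturbations (drag, oblateness) as well as the actual repeat-orbit design are ignored; this is only a back-of-the-envelope illustration.

```python
import math

GM_EARTH = 3.986004418e14   # m^3 s^-2, assumed standard gravitational parameter
R_EARTH = 6378.137e3        # m, assumed equatorial radius

def orbital_period(altitude_m):
    """Two-body (Keplerian) period of a circular orbit at the given altitude."""
    a = R_EARTH + altitude_m
    return 2.0 * math.pi * math.sqrt(a**3 / GM_EARTH)

print(orbital_period(10_150e3) / 3600.0)   # MEO at ~10,150 km: roughly 5.9 h
print(orbital_period(360e3) / 60.0)        # LEO at ~360 km: roughly 92 min
```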
As a result, altitude ranges of several thousand kilometers below the chosen orbit height drop out. These conditions connected with the repeat orbit lead to an altitude of about 10,000 km for the MEO satellites. The high-low tracking concept enables a multi-directional observation geometry with differing elevation angles from 3 • (assumed minimum elevation angle of visible MEOs observed by the LEO) up to a near-radial direction. However, due to the observation geometry of the MEO-LEO satellite pairs and the changing satellite links from one to the other MEO, data gaps arise for every satellite pair, leading to a non-continuous measurement time series of these pairs. For the simulated MOBILE constellation, this results in a ranging window maximum of 45 min, and a maximum data gap of 18 min. The separation of the two MEO satellites of 180-degree mean anomaly is chosen to keep the time period of the data gap as small as possible. In Figure 2, the LEO ground track of the MOBILE concept is displayed for 1 day together with the corresponding elevation angles. Instrumentation The main observable in the MOBILE mission are range measurements from the LEO to the MEOs, where the MEOs are alternating targets. The ranging accuracy is on the micrometer level in order to be sensitive for gravitational forces and its changes on Earth. For distances of several thousand kilometers, a laser-based distance measurement system can reach such an accuracy. The laser range interferometer is placed at the LEO satellite, while the MEOs are equipped with passive reflectors or transponders. In case of the GRACE Follow-On mission, the measurement of inter-satellite ranges by laser range interferometry (LRI) has been successfully established. The link between the two satellites was generated with an active laser on one satellite, and a phase-locked amplifying transponder on the second spacecraft [10]. For the MOBILE concept, the laser ranging instrument needs to be adapted due to the very large distance and the relative motion of the LEO and MEO satellites. In contrast to the GRACE Follow-On, the large distance and the relative speed lead to a range of Doppler shifts of several GHz compared to a few MHz, which causes the need of a reference laser source with a larger range of reference frequencies and a faster phase-tracking capability than implemented for the GRACE Follow-On. The required parameters (<10 GHz range, <10 MHz/s tracking) are within the range of existing, space qualified reference lasers (e.g., the one used for the ATmospheric LIDar (ATLID) instrument on the Earth Clouds Aerosols and Radiation Explorer (CARE) mission) [28], but their compatibility with the needs of an interferometric instrument has to be the subject of further studies. Due to the relative motion of the LEO and MEO satellites, pointing tracking capabilities are required, which requires a modified link implementation. The LEO satellite is selected to play the active part in the tracking mechanism, while the partner satellites (MEOs) are equipped with passive retroreflectors. This type of laser tracking and ranging has been successfully performed for decades with active laser systems on the ground and passive retroreflectors on satellites in orbit (e.g., Laser Geodynamics Satellite (LAGEOS), Ball Lens In The Space (BLITS)) [29,30]. The scientific benefit of deploying a passive payload in space is the significantly increased mission duration when compared to complex active payloads. 
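The GHz-scale Doppler shifts mentioned above follow from the first-order relation Δf ≈ f·v/c. The snippet below assumes a 1064 nm carrier (as used by the GRACE Follow-On LRI) and a few illustrative line-of-sight rates; the actual MOBILE relative velocities are not specified here, so the numbers are order-of-magnitude only.

```python
C = 299_792_458.0        # m/s
WAVELENGTH = 1.064e-6    # m; 1064 nm laser assumed, as on the GRACE Follow-On LRI
f_laser = C / WAVELENGTH # optical carrier frequency, ~2.8e14 Hz

# First-order Doppler shift df = f * v/c for a few illustrative line-of-sight rates.
for v_rel in (1.0e3, 4.0e3, 7.0e3):   # m/s (assumed, not mission values)
    print(f"{v_rel/1e3:.0f} km/s -> {f_laser * v_rel / C / 1e9:.1f} GHz")
```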
The main technological challenge in utilizing this setup for an LRI instrument is the need to achieve a sufficiently high level of retrieved power without the need for amplification between the two passes, ideally close to the 80 pW received by the GRACE Follow-On implementation, but at least to levels above ≈1 pW in order to allow phase tracking. The main design factors impacting the received power are the initial output power, the size of the retroreflector, and the size of the receiving telescope. Satellite on-board sensors play an important role in the gravity field retrieval by influencing satellite observations due to correlated noise. In our study, the error assumptions used for the laser ranging instrument in the MOBILE concept are based on the time-series provided by Schäfer et al. 2013 [31], which originated in connection with ESA's GETRIS study, and show micrometer ranging accuracy around 1 MHz. Due to simulation purposes, this time-series was adapted by means of cascaded second order Butterworth auto regressive moving average (ARMA) filter model. The spectral behavior of the LRI is shown in Figure 3 (light green curve) in terms of an amplitude spectral density (ASD). The relative distance measurement errors assumed for the low-low satellite pair are identical to those used in the frame of the ESA-Assessment of Satellite Constellations for Monitoring the Variations in Earth's Gravity Field (SC4MGV) project [32], provided from the consultancy support of Thales Alenia Space Italia, and show a performance of about several 10 nanometers. The corresponding analytical noise model of the used laser interferometer is given by the ASD in terms of range-rates ( Figure 3, light blue curve): The generation of all noise time-series was done by scaling the spectrum of normally distributed random time-series with their individual spectral model. The non-gravitational forces are typically sensed by the on-board accelerometers located in the center-of-mass of the satellite. In case of the LEO satellites, the implementation of an accelerometer is absolutely necessary due to air drag as the main contributor. For the low-low pair a GRACE-like electrostatic accelerometer is assumed with two highly sensitive axes oriented in the flight direction (largest signal) and in the radial direction, and one low-sensitive axis in the cross-track direction (see Figure 4, blue and red curves). The accuracy level in terms of accelerations is derived by Iran Pour et al. 2015 [32], and is expressed by: with x denoting along-track, y across-track, and z (close to) the radial direction. Based on the heritage of previous gravity missions for MOBILE, we seek a resolution on the level of 10 −11 m/s 2 , which is the same as assumed for the along-track and radial axes of the accelerometer on-board the low-low satellite pair, but ideally with the same performance in all three directions. Furthermore, the slope at frequencies from 10 −3 Hz and lower is pressed down from 1/f 2 for the low-low pair to 1/f for MOBILE. The performance of the relative acceleration measurement error is displayed in Figure 4 (green curve). While an accelerometer is mandatory for the MOBILE LEO satellite, for the MEOs, less stringent requirements might apply because of the substantially smaller amplitude of the signal and the fact that non-conservative forces can be modelled much more accurately in high altitudes. 
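The noise generation described above (scaling the spectrum of normally distributed random series with a spectral model) can be sketched as follows. The normalization convention and the placeholder `asd_model` are assumptions for illustration; the study itself approximates the sensor spectra with cascaded Butterworth ARMA filters rather than direct FFT shaping.

```python
import numpy as np

def colored_noise(n, fs, asd_model):
    """Time series whose one-sided amplitude spectral density follows asd_model(f)."""
    white = np.random.default_rng(0).standard_normal(n)
    spec = np.fft.rfft(white)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    f[0] = f[1]                                # avoid the DC bin when evaluating the model
    white_asd = np.sqrt(2.0 / fs)              # ASD of unit-variance white noise
    spec *= asd_model(f) / white_asd           # shape the spectrum to the target model
    return np.fft.irfft(spec, n)

# Hypothetical placeholder model (flat floor with a 1/f rise below 1 mHz), not the study's spectra.
asd = lambda f: 1e-6 * np.sqrt(1.0 + (1e-3 / f) ** 2)
x = colored_noise(17280, fs=0.2, asd_model=asd)   # one day of 5 s samples
```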
Also, the design of the MEO could be optimized for the high predictability of non-gravitational forces, e.g., by implementing very simple geometrical surfaces wherever radiative pressure is relevant. In spite of these facts, in the MOBILE concept, the implementation of accelerometers is proposed. Geo-location of satellite observations, as well as gravity retrieval, require highly accurate continuous orbit determination, making GNSS space receivers on all satellites obligatory. In our simulations, we assume an absolute kinematic positioning on a cm level. Using a laser ranging instrument as the main measurement system requires exact pointing of the tracking antenna in the order of 10 µrad or less, and therefore the implementation of systems for attitude determination and control. We assume star camera sensor errors for all satellites represented as rotation angles around the along-track (roll), cross-track (pitch), and radial (yaw) axes, expressed by the ASD of the following analytical noise models [32]: In addition, for the MOBILE LEO satellite, a drag-reduction system needs to be implemented in order to maintain the orbit, and not to saturate the accelerometers due to non-gravitational accelerations. The MEO satellites will very likely require an electrical propulsion system to move to their target orbit from the lower separation altitude achievable with a low-cost launcher. Simulation Environment All simulations were executed with a full numerical mission simulator [33,34], which has already been successfully applied to recover satellite-only gravitational field models from GOCE data [35]. The simulation environment is based on numerical orbit integration, following a multistep method for the numerical integration according to Shampine & Gordon 1976 [36], which applies a modified divided difference form of the Adams predict-evaluate-correct-evaluate (PECE) formulas and local extrapolation. According to this method, the order and the step size are adjusted to control the local error per unit step in a generalized sense. The generation of "true" dynamic orbits and, subsequently, the "true" GNSS high-low SST and low-low laser ranging SST observations, is done by adding different force models according to the "true" world of Table 2. The impact of orbit errors on the gravity field processing is taken into account as well by propagating 1 cm white noise of the integrated orbit positions of each satellite. The resulting erroneous dynamic orbits serve as computational points for the reference values of the observations and enable the computation of the GNSS high-low SST observations in three directions. In order to ensure the accuracy of the inter-satellite link, error-free dynamic orbits are used for the reference values of the low-low SST observations from the laser interferometer system, which are expressed in terms of range-rates. The adopted gravity field approach is based on a modification of the integral equation approach from Schneider 1969 [37] where the orbit is divided into continuous short arcs of 6 h length, and the position vectors at the arc node points are set up as unknown parameters, which are estimated together with the gravity field coefficients. This technique has already been successfully applied in real data applications to recover satellite-only gravitational field models for CHAMP and GRACE [38] (ITSG-Grace2016) [39]. The functional model follows the typical formulation used for low-low SST missions like GRACE, which comprises a high-low SST and a low-low SST component. 
Position differences between two satellites are used for the computation of the reference values for the high-low SST part of the observation system, whereas the reference values for the low-low SST part are derived by projecting position and velocity differences between two satellites onto the line-of-sight, leading to the computation of inter-satellite range-rates. Table 2 gives an overview of the force and noise models used in the processing for the "true" and "reference" world. The static gravity field model is represented by the GOCO03s model, which is a satellite-only gravity field model based on GRACE, GOCE, and LAGEOS [40]. In order to simulate geophysical signals, ESA's updated Earth system model [41] has been used, which contains the five main geophysical signal components atmosphere (A), ocean (O), hydrology (H), ice (I), and solid Earth (S) with a time resolution of six hours, linearly interpolated to the epochs. The Earth system model covers the time period 1995-2006, and contains plausible variability and trends in both low-degree coefficients and the global mean eustatic sea level. It depicts reasonable mass variability all over the globe at a wide range of frequencies including multi-year trends, year-to-year variability, and seasonal variability, even at very fine spatial scales, which is important for a realistic representation of spatial aliasing and leakage. The impact of ocean tide model errors is assessed by taking the difference of two tide models, EOT11a [42], and GOT4.7 [43]. The total stochastic model for the observations is approximated individually for both satellite formations by means of a cascade of digital Butterworth ARMA filters [44,45]. Filter coefficients are chosen in such a way that the cascade's frequency response optimally matches the inverse of the amplitude spectrum of the previously generated pre-fit residuals. They are estimated as a result of the computation of the linearized normal equations, which include differences between the "true" (only the static GOCO03s gravity field model and sensor noise are included) and the reference observations (only the static GOCO03s gravity field model is included), such that the error sources from the sensors are considered exclusively. Assuming uncorrelated high-low and low-low SST observations, weighting matrices are set up for all observation components separately. The goal is the retrieval of all spherical-harmonic (SH) coefficients up to a maximum SH degree of 100 from observations sampled every 5 seconds for the first 30 days of the year 2001. Due to the fact of non-linear observation equations, the "reference" observations are reduced from the "true" observations as a result of the linearization process. The gravity field parameters are estimated by solving full normal equations of a least squares system based on a standard Gauss-Markov model using weighted least squares with stochastic models in accordance with the simulated instrument noise levels. The resulting gravity field coefficients are analyzed and compared regarding quality and performance in terms of retrieval errors by removing a monthly average of the true mass transport model from the recovered signal. Gravity Field Retrieval Performance Due to Instrument Errors At first the impact of the instrument errors on the gravity field retrieval were quantified. For this task, we performed simulations where each error source according to the assumptions described in Section 2.2 was treated individually. 
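The estimation step described above reduces to a weighted least-squares adjustment with full normal equations. The toy sketch below assumes a small dense design matrix A (partials of the observations with respect to the unknowns), a weight matrix W from the stochastic model, and synthetic data; the real problem (SH degree 100, arc-wise assembly, ARMA decorrelation) is far larger but follows the same algebra.

```python
import numpy as np

def gauss_markov(A, y, W):
    """Weighted least-squares solution of y = A x + e with weight matrix W (toy dimensions)."""
    N = A.T @ W @ A                  # full normal equation matrix
    b = A.T @ W @ y                  # right-hand side
    x_hat = np.linalg.solve(N, b)    # estimated parameters (e.g. SH coefficients)
    formal = np.sqrt(np.diag(np.linalg.inv(N)))  # formal errors from the inverse normal matrix
    return x_hat, formal

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 20))            # stand-in design matrix
x_true = rng.standard_normal(20)
y = A @ x_true + 0.01 * rng.standard_normal(500)
W = np.eye(500) / 0.01**2                     # inverse observation covariance
x_hat, sigma = gauss_markov(A, y, W)
```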
Figure 5 shows the gravity field retrieval performance in terms of equivalent water height (EWH) errors per SH degree per coefficient for the low-low pair constellation and the MOBILE concept. Furthermore, the results were quantified using global RMS values of the errors in the recovered signal expressed in terms of cm of EWH, listed in Table 3 (see part: instrument errors). If only white-noise positioning errors were considered ( Figure 5, green curves), the gravity field retrieval performance mainly depended on the observation geometry. The comparison between both satellite concepts revealed strongly reduced retrieval errors for MOBILE, which benefited from multi-directional observations. In the case of accelerometer noise in combination with star camera errors ( Figure 5, blue curves), the MOBILE constellation showed reduced error behavior compared to the low-low pair as well. This was mainly caused by the observation geometry, but also by the improved accelerometers (≈23%) with 3D capabilities to certain parts in the case of MOBILE. In contrast, the retrieval performance of the low-low pair benefits from the nanometer accuracy of the laser interferometer compared to the micrometer accuracy of MOBILE's laser link sensor for the most part of the spectrum (>SH degree 10). This became evident when only laser interferometer noise was considered ( Figure 5, red curves). However, in the very low degrees, the MOBILE concept performed better than GRACE, which was again owed to the fact of an improved observation geometry. When considering all instrument error sources together, the retrieval errors of the low-low satellite pair are dominated by the accelerometer plus star camera sensor performance, while for the MOBILE constellation, the laser link error was the dominating error source for SH degrees higher than 20, and the accelerometer plus star camera noise only dominated the spectrum in the lower degrees. These results led to the conclusion that the gravity field retrieval based on instrument error sources showed smaller errors below SH degree 40 for the MOBILE concept compared to the low-low pair, but increased errors in the higher frequency spectrum due to the lower accuracy of the laser interferometer. Next to the estimation of SH coefficients, we estimated their formal errors as well, shown in Figure 6. The noise of the different sensors in combination with the observation geometry reveal the performance of a specific satellite concept. In our case, they demonstrated the impact of the MOBILE high-low tracking concept by showing an almost uniform (isotropic) error spectrum and a high sensitivity in the sectorial coefficients (SH degree equal to SH order). In case of the low-low pair configuration especially, the sectorial coefficients were less well-determined than the zonal coefficients (SH order equal to zero). Figure 6b,d gave a closer view of the formal errors located in the long wavelength (low-degree) spectrum. The comparison between MOBILE and the low-low pair led to the assumption that the determination of the very low SH coefficients could be accomplished with a higher sensitivity through the MOBILE concept. In contrast to the observations in the along-track direction of the low-low pearl-string configuration, the multi-directional observations of the high-low tracking concept with a strong dominance of the radial component enabled an improved estimation of the very low SH coefficients. 
The close to radial observation geometry of MOBILE was comparable to satellite laser ranging (SLR) observations, showing superior performance in observing the very long wavelength gravity field variations, in particular the zonal SH coefficient of degree 2, which physically represented the Earth's dynamic oblateness [46]. In order to make the effect of the different error spectra of both satellite concepts even more visible, spatial covariance functions were computed for a position at the equator and at 45 • latitude (see Figure 7). They describe the correlation of the computation point with its neighborhood in the normal equation system due to the used stochastic model and the observation geometry. The spatial characteristics and the pattern of the covariances provide information about the spatial behavior of the retrieved signals. In our case, the figures show the typical stripes for the low-low tracking concept caused by the north-south observation direction that are known from the GRACE temporal gravity models, while the MOBILE concept exhibited an isotropic error structure at both latitudes. Temporal Gravity Field Retrieval The retrieval of the temporal gravity field is dominated by temporal aliasing errors due to the undersampling of high frequency geophysical signals and imperfect de-aliasing models, which has already been shown by, e.g., References [47][48][49] for the GRACE mission. In order to analyze the impact of different time-varying mass signals on gravity field retrieval, we performed simulations by using signals that were subdivided into non-tidal AOHIS, HIS, and tidal signals, including the instrument errors described in Section 2.2. Figure 8 displays the corresponding retrieval errors for both satellite concepts. The results indicate that the errors with the highest signal amplitudes were related to AOHIS signals (Figure 8, red curves), and in particular to atmospheric and oceanic signals. Tidal aliasing effects played a key role in the total error budget as well (Figure 8, blue curves) by representing the highest aliasing errors next to non-tidal atmosphere and ocean aliasing errors. In this context it is important to mention that errors in ocean tide models are considered as one of the major sources of error in the determination of temporal gravity field models from GRACE data [50,51]. Our simulations show that the MOBILE configuration can reduce non-tidal aliasing errors (≈45%) as well as tidal aliasing errors (≈30%) over the whole spectrum significantly (see also Table 3, part: temporal aliasing errors). Despite the fact that for the high-low tracking concept the assumed arrangement of satellites causes incomplete data time series, the multi-directional observation geometry enables the sampling of time varying signals with reduced aliasing errors compared to the low-low pair configuration. Usually high-frequency mass signals are a priori reduced based on atmosphere and ocean de-aliasing (AOD) products [52], and ocean tide de-aliasing models. The resulting temporal gravity field models thus contain mainly information on sub-seasonal, seasonal, and secular continental hydrological mass variations and ice mass variations on Earth [53,54], and solid Earth signals related to glacial isostatic adjustment (GIA), and co-and post-seismic gravity changes of big earthquakes. For the analysis of such mass flux signals we performed simulations by using only the HIS signal. 
The resulting gravity field retrieval errors (Figure 8, green curves) again revealed smaller aliasing effects for the MOBILE concept (≈60%), which is even better visible when looking at the spatial domain, shown in Figure 9. As already suggested by Figure 7, the retrieved HIS fields demonstrate, that the error pattern of MOBILE was much more homogeneous, and the typical striping of a low-low along-track ranging system is significantly reduced, particularly in the equatorial regions where the orbit ground tracks were less dense. This resulted in a clearly improved free representation of hydrological and ice mass signals for the MOBILE concept. The high quality of multi-directional observations of the high-low tracking concept allows the application of an extended alternative processing method first proposed by Wiese et al. 2011 [23], which enables the mitigation of temporal aliasing effects due to non-tidal time varying signals, as it was stated in Section 1. Wiese et al. 2011 [23] demonstrated the benefit of the co-parameterization of additional daily low degree and order gravity field coefficients for Bender-type satellite constellations. We investigated the potential of this methodology regarding the MOBILE concept by simulating a monthly solution while co-estimating daily gravity fields up to SH degree and order 10, including HIS signal plus instrument errors. The resulting retrieval errors are displayed in Figure 8 (magenta curve). They revealed an error reduction of about 40% compared to the nominal solution (green curve), which led to an increased spatial resolution of about SH degree 50 (≈400 km) instead of 40 (≈500 km). The corresponding spatial plot (Figure 9, f and g) shows a global reduced aliasing pattern, especially in higher latitudes. The comparison between the true HIS signal and the MOBILE recovered signal (nominal and extended processed) displayed in Figure 9 shows that the quality of the solutions could be improved to such a level that de-striping and smoothing the solutions was no longer necessary when examining signals to degree and order 50. Therefore, a possible loss of signal by a posteriori filtering of the gravity field solutions recovered by MOBILE could be avoided. Conclusions In this study, we investigated the gravity field retrieval performance of the novel and innovative MOBILE high-low satellite tracking concept and compared it with a low-low GRACE Follow-On-like configuration qualitatively and quantitatively. Based on full numerical simulations, gravity field parameters were estimated in terms of SH coefficients by solving a least-squares system by inverting full normal equations over a time span of 1 month. The most important error sources affecting the gravity field retrieval performance, key instruments on-board the satellites, as well as time varying mass flux signals, were included in order to assess their impact on gravity field retrieval for both mission concepts. The results regarding the instrumental impact on the gravity field solution show that the performance of the MOBILE configuration was mainly limited by the assumed micrometer accuracy of the laser interferometer, especially in the short wavelength spectrum, while the performance in the lower wavelengths of the gravity field benefited from the multi-directional observation geometry and optimized 3D accelerometer. 
In contrast, the gravity field retrieval of the low-low pair constellation was limited mainly by the accelerometer, which predominated the nanometer accuracy of the assumed laser interferometer. The multi-directional observations of MOBILE mentioned above included a strong radial component and led to an almost uniform (isotropic) error spectrum, while the low-low tracking concept showed the typical stripes caused by the north-south observation direction. However, the high accuracy of the low-low satellite pair's inter-satellite link led to an improved gravity field performance from SH degree 40 and higher compared to MOBILE, which performed better in the long wavelength spectrum where the largest amplitudes of time varying gravity field signals occurred. The benefit of MOBILE's multi-directional observation geometry arose when including tidal and non-tidal mass variation signals into the simulation process. The results revealed significantly reduced temporal aliasing errors in the recovered gravity field signal compared to the low-low tracking concept over the whole spectrum. In the case of the separate treatment of the HIS signal, the resulting gravity field error performance of MOBILE improved even by about 60%, and the application of an extended processing method to reduce temporal aliasing errors by co-estimation of daily gravity field parameters, led to a further reduction of retrieval errors of about 40%. Furthermore, the results show that the quality of recovered MOBILE gravity field solutions could make a treatment of such solutions using a posteriori filtering techniques obsolete. The gravity field solutions retrieved using MOBILE can contribute to an improved understanding of different components of the Earth system, such as the estimation of continental water storage and freshwater fluxes, the quantification of large-scale flood and drought events and their monitoring and forecasting, or understanding the mass balance of ice sheets and larger glacier systems, just to name a few. The application of the extended processing method implied the co-parameterized gravity field parameters (which, in our case, are daily gravity fields) as a side product. These daily solutions with low spatial resolution could aid in improving atmospheric models, and possibly be beneficial to the oceanography community as well, as many of these short-term signals have large spatial scales. Outlook The gravity field solutions of the high-low tracking concept presented in this paper were based on the minimal configuration of MOBILE. On top of this scenario, the optional implementation of a third or fourth MEO satellite, but also a second LEO, could be considered to further increase the mission performance, but also significantly improve the temporal resolution. Due to the largely passive instrumentation of this mass transport mission, the function of the MEO satellites could be implemented as a backpack application of other MEO missions, such as the Galileo next-generation satellites, in order to extend and maintain the infrastructure for laser ranging payloads. Aside, one of the most important fields of research is the mitigation of temporal aliasing errors. In this context it is important to mention that aliasing effects due to imperfect ocean tide models represent one of the largest error sources in temporal gravity field retrieval. 
The capability of MOBILE's multi-directional observations to co-parameterize tidal parameters over long time spans, as proposed in Reference [24], in order to improve current ocean tide models will be the subject of further study.
7,415.8
2019-03-05T00:00:00.000
[ "Physics", "Environmental Science" ]
RNAtips: analysis of temperature-induced changes of RNA secondary structure Although multiple biological phenomena are related to temperature (e.g. elevation of body temperature due to an illness, adaptation to environmental temperature conditions, biology of cold-blooded versus warm-blooded organisms), the molecular mechanisms of these processes remain to be understood. Perturbations of secondary RNA structures may play an important role in an organism’s reaction to temperature change—in all organisms from viruses and bacteria to humans. Here, we present the RNAtips (temperature-induced perturbation of structure) web server, which can be used to predict regions of RNA secondary structures that are likely to undergo structural alterations prompted by temperature change. The server can also be used to: (i) detect those regions in two homologous RNA sequences that undergo different structural perturbations due to temperature change and (ii) test whether these differences are specific to the particular nucleotide substitutions distinguishing the sequences. The RNAtips web server is freely accessible without any login requirement at http://rnatips.org. INTRODUCTION Structural perturbations in RNA molecules induced by temperature change may have important biological implications. For instance, the stability of mRNA structural elements in 5′-untranslated regions correlates with the translation rate in Saccharomyces cerevisiae (1). Another example is the temperature sensitivity of cold-adapted influenza vaccine strains. For decades, it was a conundrum why wild-type influenza strains react differently to elevated temperature than their cold-adapted temperature-sensitive counterparts. Recently, it has been demonstrated that this difference in temperature sensitivity may be due to the difference in temperature-induced perturbations in mRNA secondary structures (2). Perhaps the most widely known example is RNA thermometers, which at a particular temperature alter their structure and regulate translation of heat-shock, cold-shock and virulence genes (3)(4)(5)(6)(7)(8). Usually, RNA thermometers are located in 5′-untranslated regions, and their structures melt at an elevated temperature, thereby permitting ribosomes to initiate the translation process. There are several experimental approaches to measuring the melting temperature of an RNA structure (9), including ultraviolet absorbance (10,11), fluorescence-based techniques (12,13) and thermal gradient electrophoresis (14)(15)(16). Recently, temperature stability of RNA structural elements was assessed on a genome-wide basis (17). The Parallel Analysis of RNA Structures with Temperature Elevation technique was applied to the yeast transcriptome, and relative melting temperatures for RNA structures were obtained by probing RNA structures at different temperatures from 23 to 75 °C. As a result of this assessment, thousands of potential RNA thermometers and highly temperature-stable structures were identified. Temperature-induced perturbations of RNA structures may play crucial, and yet unknown, biological role(s) in a variety of processes. Elevation of body temperature is the most common symptom of many illnesses. The effects of elevated body temperature on RNA structures both in pathogens and their hosts are still unknown, although they may constitute a defense mechanism.
Additionally, it would be interesting to assess whether RNA temperature sensitivity plays an evolutionary role in organism adaptation to different climate zones, as well as to seasonal and day–night temperature change. The latter question is especially important owing to global climate change. Is the temperature sensitivity of RNA structures in bacteria living in geysers different from that of bacteria living at sub-zero temperatures? Do RNA structures from warm-blooded organisms react to temperature change similarly to their counterparts in cold-blooded animals? These and many other questions could not be systematically addressed, however, as (to the best of our knowledge) there is no convenient instrument to identify and compare temperature-sensitive regions of RNA molecules. To close this gap, the RNAtips (temperature-induced perturbation of structure) web server has been developed. For a single RNA sequence, RNAtips identifies (i) those nucleotides for which temperature change causes appreciable alteration of the probability to form Watson–Crick (W-C) pairs and (ii) clusters of such temperature-sensitive nucleotides. If the research goal is to compare two RNA sequences and identify whether they react differently to a temperature change, the locations of temperature-sensitive clusters within the two RNAs are compared. If the two sequences are homologs with a limited number of base substitutions, an analysis can be performed to demonstrate whether the difference in location of the temperature-sensitive clusters between the two sequences is specific to these particular nucleotide substitutions, or if it could be achieved with the same number of random mutations (synonymous and/or non-synonymous). METHOD SUMMARY The methodology implemented in the RNAtips web server for assessing such impacts of temperature change was previously described and published by Chursov et al. (2). In short, each nucleotide within an RNA sequence has a probability of being paired via W-C bonds. This probability is temperature dependent; therefore, temperature changes influence the probability of forming W-C pairs for each and every nucleotide. However, some nucleotides change their pairing probabilities to a much greater extent than others. Moreover, these highly temperature-sensitive nucleotides may not be evenly distributed along the RNA sequence but rather form distinct clusters (2). Thus, the first task performed by the RNAtips web server is identification of those positions which are prone, through temperature elevation, to significantly change their probability of being paired. This task is performed through the following steps. Step 1: Probabilities of nucleotides to be coupled within a double-stranded conformation are assessed at each temperature within the given range by using the RNAfold tool of the ViennaRNA package (18). Step 2: For each nucleotide, RNAtips calculates the difference in its probability to be in a paired state at the lower temperature and at the higher one. These differences are calculated for the entire temperature range (t1 : t2) [i.e. for (t1+1) − t1, . . ., t2 − t1] and then combined into one data set. For example, if the temperature range is set to 32–39 °C and the length of the sequence is 1000, then the changes of probabilities are considered for 33 °C compared with 32 °C, 34 °C compared with 32 °C, . . ., 39 °C compared with 32 °C, and the final data set would contain 7000 values. Step 3: The server identifies the most temperature-sensitive positions.
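Step 2 amounts to simple array arithmetic once per-nucleotide pairing probabilities are available for each integer temperature in the range (for example, derived from RNAfold partition-function output). The helper below is a hypothetical illustration, not the server's actual code; for a 1000-nt sequence and a 32–39 °C range it would assemble the 7000-value data set of the example.

```python
import numpy as np

def probability_differences(p_by_temp, t1):
    """p_by_temp: {temperature: array of per-nucleotide pairing probabilities}.
    Returns the combined data set of p(t) - p(t1) for every t above t1 (Step 2)."""
    base = np.asarray(p_by_temp[t1], dtype=float)
    higher = sorted(t for t in p_by_temp if t > t1)
    return np.concatenate([np.asarray(p_by_temp[t], dtype=float) - base for t in higher])

# For a 1000-nt sequence and a 32-39 °C range this yields 7 x 1000 = 7000 values.
```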
For this purpose, the server selects those values (and their corresponding nucleotides) from the data set generated in Step 2, which are distant from the mean by more than three standard deviations (the default value can be changed by the user). The server then considers these positions to be the most temperature-sensitive, and they are then mapped on the original sequence. Furthermore, clusters of significantly changing positions are then identified by applying the density-based spatial clustering of applications with noise (DBSCAN) algorithm to the locations of such positions. The server default action is to apply the cluster analysis algorithm only to the highest temperature differences t 2 -t 1 , (32-39 C in the previous example) (19,20). It may be important to assess whether structures of RNA molecules sharing sequence similarity react (dis)similarly to temperature change. For simplicity of explanation, assume that one RNA sequence was derived from another sequence via some mutations. Then, the second task, which can be performed by RNAtips server, is to identify whether structures of two homologous RNA sequences react differently to the temperature change and, if they do, whether this difference can be attributed to the specific mutations separating the two homologous sequences. Thus, if a user inputs two sequences, RNAtips identifies clusters of temperature-sensitive positions, which could be either common for both sequences or uniquely present in only one of the two RNA molecules. If the clusters of temperature-sensitive positions are not identical for the two sequences, the server offers statistical analysis identifying whether the difference in temperature sensitivity is specific to the particular nucleotide substitutions naturally differentiating the sequences or whether any set of mutations comparable in size could lead to the same difference. Therefore, assume that N nucleotide substitutions differentiate sequence A from sequence B. The server generates a data set of derivative sequences for A introducing N substitutions into each derivative sequence. There are two different methods of introducing random substitutions into a sequence(s) depending on whether the sequence(s) is(are) non-coding or coding. If A is a coding sequence (default), mutants will be generated by introducing synonymous mutations only. If A is a non-coding sequence, the user should mark a checkbox: 'The input sequence(s) is(are) non-coding'. In this case, in silico mutations will be introduced at random positions mimicking frequencies of nucleotide substitutions naturally occurring between A and B (e.g. if 25% of nucleotide substitutions between A and B are T->C, then T->C substitutions will be introduced in 25% of random in silico mutations). For each computer-generated sequence, the server will calculate its clusters of the temperature-sensitive positions as described earlier in the text. If sequence B has a sequence-specific cluster of temperature-sensitive positions not present in A, some of in silico derivatives of A may possess clusters overlapping with the sequence-specific cluster observed in B. Let us assume that 1% or less of computer-generated sequences possess such clusters overlapping with the sequence-specific cluster in B. This means that 99% of random mutation sets did not lead to the appearance of this sequence-specific cluster of temperature sensitivity specific for sequence B, but not for A. 
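The selection of positions lying more than three standard deviations from the mean, and their grouping with DBSCAN, can be sketched as follows. The function names are hypothetical, scikit-learn's DBSCAN stands in for the server's own implementation, and eps=11 / min_pts=5 mirror the defaults described in the next section.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def significant_positions(diff_at_t2, n_sd=3.0):
    """Positions whose probability change deviates from the mean by more than n_sd standard deviations."""
    mu, sd = diff_at_t2.mean(), diff_at_t2.std()
    return np.flatnonzero(np.abs(diff_at_t2 - mu) > n_sd * sd)

def cluster_positions(positions, eps=11, min_pts=5):
    """Group significant positions into clusters; label -1 marks unclustered (noise) positions."""
    return DBSCAN(eps=eps, min_samples=min_pts).fit_predict(positions.reshape(-1, 1))
```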
Thus, one can conclude that the RNA structure of sequence B reacts to the temperature change differently than the structure of sequence A because it possesses a specific set of mutations, as opposed to just N nonspecific mutations. The RNAtips server performs a statistical analysis calculating a P-value for every sequence-specific cluster by performing a one-sided binomial test. For sequence-specific clusters occurring in the first sequence but not in the second one, the null hypothesis (H0) is that the probability to observe this cluster is <95%. Consequently, a small P-value shows that the cluster is unlikely to disappear in the second sequence by chance. For sequence-specific clusters occurring in the second sequence but not in the first one, the null hypothesis (H0) is that the probability to observe this cluster amongst the mutants generated in silico is ≥5%. Therefore, a small P-value shows that the cluster is unlikely to appear in the second sequence by chance. Input data The input for RNAtips consists of either one or two RNA sequences of the same length that should be provided in FASTA format (the header can be omitted). The sequences can either be uploaded as text files (each file may contain only one sequence), or the sequences may be directly pasted into an input field. The sequences may contain the characters A, C, G, U and T (for further computations, all thymidines will be replaced with uracils automatically). The maximal length of the sequences is limited to 9999 nt. To see an example of possible input sequences, the user can click on the 'sample' link on the Start page. Influenza strain A/Leningrad/134/57 and its cold-adapted temperature-sensitive mutant A/Leningrad/134/47/57 are used as sample sequences. Additionally, a user has to specify two temperatures t1 and t2 (in °C) to define the temperature range (t1 : t2) for which the RNA structural perturbation should be calculated (the default range is 32–39 °C). t2 is the temperature for which the actual cluster identification will be performed. The minimal allowed temperature is 0 °C, and the maximal allowed temperature is 99 °C. Furthermore, the maximal allowed temperature difference (i.e. t2 − t1 + 1) is restricted to 20 °C (e.g. 30–49 °C). If a user inputs two sequences, two options are available. The default option is to perform a statistical analysis of the sequence-specific clusters identified in each of the two sequences and to test whether these sequence-specific clusters result from the particular set of mutations distinguishing the sequences. However, this analysis takes some time and may be unnecessary for the particular user. In this case, the user can choose the checkbox for the 'Don't Create a Mutant Dataset' option (this option is only relevant for two input sequences). An advanced user can deviate from this default setup and input their parameters of choice. As described in the 'Method Summary' section, the statistical threshold for identifying significantly changing positions is 3.0 standard deviations. However, in a custom calculation, a user may also choose a threshold level other than 3.0. The next two advanced parameters, e and MinPts, are both parameters of the clustering algorithm DBSCAN. As the first step, the algorithm randomly selects one significantly changing position. MinPts specifies the minimal number of significantly changing positions in a cluster. e specifies the distance from the chosen nucleotide.
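The one-sided binomial test described above can be reproduced with standard tools; the sketch below uses scipy and toy counts (k overlapping mutants out of n generated), which are assumptions for illustration only.

```python
from scipy.stats import binomtest

# k: mutants whose clusters overlap the sequence-specific cluster; n: mutants generated (toy numbers).
k, n = 2, 1000

# Cluster present in sequence B but absent from A: H0 is that the overlap probability is >= 5%,
# so the one-sided alternative is 'less'; a small P-value means the cluster is unlikely to arise by chance.
res = binomtest(k, n, p=0.05, alternative="less")
print(res.pvalue, res.proportion_ci(confidence_level=0.95))

# For a cluster present in A but absent from B, test against 0.95 with alternative='greater' instead.
```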
If the number of significant positions specified by MinPts is located within distance e, the sequence segment is then considered part of a cluster. The default values for e and MinPts are 11 and 5, respectively. Finally, a user can select the checkbox options 'Don't allow GU pairs at the end of helices' and/or 'Don't allow GU pairs'. These selections instruct the server whether GU pairs should be considered when the probabilities of nucleotides to be in a double-stranded conformation at any given temperature are calculated. These two checkboxes are converted into the -noCloseGU and -noGU parameters of RNAfold during the calculations of the probabilities of nucleotides to be coupled. Server output At the top of the results page from RNAtips, the HTML output provides a colored visual representation of the identified temperature-sensitive positions (Figure 1). The left column contains values for each temperature within the temperature range (t1 : t2). The right column presents the input sequence, with those nucleotides that are the most temperature-sensitive at this temperature marked in either blue or orange. The header line presents the FASTA header of the sequence(s). Position numbers are indicated under the header line. In the case of two input sequences, a line between the results for both sequences indicates matching positions with '|' and mismatches (mutations) with '-'. Additionally, significant positions that are specific to only one of the two sequences are displayed in orange. Positions that significantly change their probabilities to be paired in both sequences are displayed in blue. If only one sequence is used as an input, then all positions demonstrating the highest potency to change their likelihood of forming W-C bonds are displayed in orange. In addition, the HTML output shows the temperature initiating a perturbation of the RNA structure. All tables and figures presenting more detailed results are shown in the lower part of the page and described in the following paragraphs. The first table displays general information on the identified significant positions and clusters. For every input sequence, it has the following fields: 'Sequence' (the ID of the input sequence); '#significant positions/total length' (the number of significantly changing positions and the total length of the input sequence); 'signif. pos. < 0/signif. pos. > 0' [the numbers of significantly changing positions that decrease (or increase) their probability to be paired with temperature elevation]; 'Number of clusters' (the total number of identified clusters of significantly changing positions); 'Avg. cluster density' (the average density of significantly changing positions in the identified clusters); and 'Avg. cluster length' (the average length of the identified clusters). The probability difference values are calculated by subtracting the value at the lowest temperature from the value at the highest temperature (p39 °C − p32 °C for the previous example). Cluster density is calculated as the number of significantly changing positions in a cluster divided by the total length of the cluster. If a sequence contains 1000 nt and the temperature changes from 32 to 39 °C, there are 7000 values reflecting how much each nucleotide would change its probability to form W-C pairs when the temperature increases from 32 to 33 °C, from 32 to 34 °C, . . ., from 32 to 39 °C.
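The cluster length and density reported in this table follow directly from the cluster memberships; a minimal sketch, assuming positions and DBSCAN labels as produced earlier, is given below (the function name is hypothetical).

```python
def cluster_stats(positions, labels):
    """Start, end, length and density of each cluster of significant positions."""
    stats = {}
    for lab in sorted(set(labels) - {-1}):           # -1 marks unclustered positions
        members = sorted(int(p) for p, l in zip(positions, labels) if l == lab)
        length = members[-1] - members[0] + 1
        stats[lab] = {"start": members[0], "end": members[-1],
                      "length": length, "density": len(members) / length}
    return stats
```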
The output for this example would contain a histogram over all these 7000 data points (Figure 2); for the mRNA of the nucleoprotein (NP) of influenza strain A/Leningrad/134/57, this histogram collects the differences in the probability of nucleotides to be in a double-stranded conformation upon temperature change between 32 and 39 °C, where the probability values at 32 °C were subtracted from the probability values for every temperature from 33 to 39 °C and all the differences were combined into one data set. These histograms are used to identify the most temperature-sensitive positions (by default, further than 3 standard deviations away from the mean value). Overall, a histogram contains (t2 − t1) × (sequence length) values. For every input sequence, one histogram is presented on the output page. Thus, if two closely related sequences were used to compare their temperature sensitivity, the output would contain two histograms. The exact location (start and end positions) of the identified clusters (if any) is shown in the following table. The accompanying output figure shows the relationship between the length and density of the clusters (Figure 3). In this figure, each point represents one cluster. The cluster density is plotted versus the cluster length. Several clusters can have the same properties, and in such a case the corresponding points will overlap. Therefore, the total number of apparent points can be different from the total number of clusters. Such tables and figures are presented for every sequence in which clusters of the most temperature-sensitive positions were identified. Otherwise, the web server directly indicates that no clusters were identified for a particular sequence. For every input sequence, the following figure shows the density of the most temperature-sensitive positions over the whole RNA sequence, together with the localization of clusters and the localization of nucleotide substitutions (if any) (Figure 4). The upper part of this figure is created by moving a sliding window of size 2*e+1 over the corresponding sequence and determining the density of significantly changing positions within it. The lower part shows the localization of clusters and mutation sites on the sequence. As described earlier in the text, if two homologous RNA sequences constitute an input, one of the sequences may possess clusters of temperature-sensitive nucleotides which are not present in the other RNA molecule (i.e. clusters that can be found for the given DBSCAN parameters in one RNA and do not overlap with any clusters from the other RNA). The appearance of these sequence-specific clusters may be a specific consequence of the particular nucleotide substitutions differentiating the RNAs. Alternatively, the clusters could result from the same number of non-specific mutations. Results of the statistical analysis presented in the last table (if conducted) demonstrate whether a sequence-specific temperature-sensitive cluster observed in one RNA but not in another is due to the specific nucleotide substitutions taking place in the sequences. In other words, these data demonstrate whether such a specific difference between the two RNAs can be achieved by introducing the same number of random mutations. The server generates a data set of in silico mutants for the first RNA as described in the 'Method Summary' section. Some of these in silico mutants may possess temperature-sensitive clusters which are not present in the original RNA sequence.
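The sliding-window density used for the upper panel of this figure can be sketched as a simple moving average over an indicator array; the function below is a hypothetical illustration with the default window of 2*e+1 = 23 positions.

```python
import numpy as np

def window_density(significant, seq_len, eps=11):
    """Fraction of significant positions inside a sliding window of size 2*eps + 1."""
    indicator = np.zeros(seq_len)
    indicator[np.asarray(significant, dtype=int)] = 1.0
    window = 2 * eps + 1
    return np.convolve(indicator, np.ones(window) / window, mode="same")
```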
The table shows the positions of sequence-specific clusters observed in the RNA sequence, the frequency with which each sequence-specific cluster overlaps (by at least one position) with a cluster in the computer-generated mutants, and the P-value and 95% confidence interval calculated from the binomial test of whether each sequence-specific cluster could be the result of a random mutation set introduced into the original RNA. All figures and additional information can be downloaded by RNAtips users. The results page enables a user to download a zip-file of all sequences of the in silico mutants (if generated). Results of every job will be stored on the server for at least 3 days. Every submitted job receives a unique URL, and a user can browse the results during this period. Implementation The RNAtips web server has a user-friendly interface and runs under the Linux operating system. The server's back-end, including the core part of the computations as well as the implementation of the DBSCAN algorithm, is written in Python. Statistical tests and generation of plots are implemented in the R programming language. Calculation of the probabilities of nucleotides to be paired in a double-stranded conformation is performed by using the RNAfold tool of the ViennaRNA package. The front-end part of the web server is implemented in HTML with dynamic parts written in JavaScript. A MySQL database is used to store the input parameters and results of the computations. The server contains a help page with a detailed explanation of its functionality. DISCUSSION Before this presentation of the RNAtips web server, researchers did not have a simple and feasible way to evaluate the effect of temperature change on secondary RNA structure. RNAtips is based on the analysis proposed and described by Chursov et al. (2). The name RNAtips stands for 'temperature-induced perturbation of structure'. This server can be used to analyze the localization of temperature-induced changes in the secondary structures of RNA and to compare such changes between two sequences of the same length. There are at least three advantages of using the RNAtips web server instead of simply calculating the probabilities of nucleotides to be paired at two different temperatures and then comparing those probabilities. First, RNAtips deciphers those nucleotides within the RNA sequence which change the most in their probability to form W-C bonds in response to a given temperature change. The web server demonstrates clusters of these positions within a sequence, which constitute the most temperature-sensitive structural regions. The second major benefit of RNAtips is the tool it provides to compare whether the RNA structures of two closely related sequences would react (dis)similarly to a temperature change. If two RNA molecules possess different clusters of temperature-sensitive positions, their RNA structures react to the temperature change differently. Furthermore, if two RNA sequences differ in some nucleotide substitutions, RNAtips can be used to analyze whether the difference in temperature-sensitive clusters is specific to these particular nucleotide substitutions or whether it was likely to be caused by a similar number of non-specific nucleotide substitutions. Finally, the top of the RNAtips results page is an HTML output that presents the temperature initiating a perturbation of secondary structure in a particular temperature-sensitive region. To the best of our knowledge, no other server provides these options.
The RNAtips web server can be applied to a broad spectrum of research topics, such as drug development, molecular diagnostics and disease prognosis, evolutionary mechanisms, ecology, investigation of climate change effects and many more. In addition, we are currently preparing a downloadable version of the source code for local use.
5,227.6
2013-06-12T00:00:00.000
[ "Biology", "Computer Science", "Environmental Science" ]
Molecular Engineering of Metalloporphyrins for High‐Performance Energy Storage: Central Metal Matters Abstract Porphyrin derivatives represent an emerging class of redox‐active materials for sustainable electrochemical energy storage. However, their structure–performance relationship is poorly understood, which confines their rational design and thus limits access to their full potential. To gain such understanding, we here focus on the role of the metal ion within porphyrin molecules. The A2B2‐type porphyrin 5,15‐bis(ethynyl)‐10,20‐diphenylporphyrin and its first‐row transition metal complexes from Co to Zn are used as models to investigate the relationships between structure and electrochemical performance. It turned out that the choice of central metal atom has a profound influence on the practical voltage window and discharge capacity. The results of DFT calculations suggest that the choice of central metal atom triggers the degree of planarity of the porphyrin. Single crystal diffraction studies illustrate the consequences on the intramolecular rearrangement and packing of metalloporphyrins. Besides the direct effect of the metal choice on the undesired solubility, efficient packing and crystallinity are found to dictate the rate capability and the ion diffusion along with the porosity. Such findings open up a vast space of compositions and morphologies to accelerate the practical application of resource‐friendly cathode materials to satisfy the rapidly increasing need for efficient electrical energy storage. Synthesis: The synthesis of A2B2-porphyrins is straightforward and consists of three synthesis steps (Scheme 1). In step one, we synthesize the appropriate meso-dipyrromethane from pyrrole and an aldehyde with the intended substituent with the electron-withdrawing group. In step two, the ring-closing reaction takes place, flowing the Macdonald condensation. The condensation takes place between the meso-dipyrromethane and (trimethylsilyl)-propiolaldehyde. The metalation of the free-base porphyrin is unproblematic -conditions depend on the used metal. Synthesis of 5-phenylpyrromethane (1). [2] A mixture of pyrrole (140 mL, 2 mol) and benzaldehyde (10.2 mL, 0.1 mol) was bubbled 15 min with Argon. The reaction mixture was cooled with an ice bath and trifluoroacetic acid (0.78 mL, 0.01 mol) was added dropwise. After that, the reaction mixture was extracted 3x with ethyl acetate. The combined organic phase was extracted with water and dried over sodium sulphate. After column chromatography (SiO2, hexane:ethyl acetate, 2:1) , a yellow solid was obtained with 9% (2.0 g) yield. 1 (2). [3] A mixture of ( General procedure for deprotection of TMS-group. [4] The porphyrin (0.05 mmol) was dissolved in 20 mL dry THF and 1 mL of 1M solution of TBAF in THF was added. The mixture was stirred overnight under an argon atmosphere. The reaction was quenched by adding 50 mL of water. THF was removed under reduced pressure. The precipitate was filtrated and dried overnight at 100 °C and 2.0•10 -2 mbar. Supporting Figures and Tables Matrix-assisted laser desorption and ionization time-of-flight mass spectrometry (MALDI ToF MS) is frequently applied to analyze macrocycles and their metal complexes. All MALDI mass spectra were obtained by solvent-based sample preparation methods. About 0.1 mg of the analyte was dissolved or suspended in 2 ml of MeOH. A small amount (0.1-2.5 µL) of the solution was put on the stainless steel substrate and dried in air. 
The MALDI-ToF mass spectra of the newly synthesized 4, 5, 9, and 10 showed the expected molecular-ion signals and confirmed the intended structures. The molecular ion was the most abundant high-mass ion, with a distinct isotopic distribution in all cases. The relative abundances of the isotopic ions are in good agreement with the simulated spectra, as reported in Figures S3–S6, where all spectral results and calculated values are summarised. In all cases except NiDEPP (10), mass spectra were detected in the positive ion mode. In addition, acidic compounds (such as alkynes) can also be detected as singly negatively charged ions in negative-ion MALDI-ToF mass spectra. For small ions like Ni2+, insertion into porphyrins suffers from the fact that Ni2+ is too small to fit perfectly into the square-planar cavity formed by the four pyrrole nitrogen atoms (ionic radii, Figure S1). NiII porphyrins show a rich conformational behavior: the (dz2)2 electronic configuration and the small ionic radius (0.69 Å) of NiII favor relatively short equilibrium Ni–N bond distances. This results in nonplanar ruffled Ni-porphyrin conformations, [7] in which individual pyrrole rings are twisted about the Ni–N axes and significant alternating displacements of the Cm sites above and below the mean molecular plane take place [8-10] (Figure 2b-e). Figure S14. The rate capability of the DEPP electrode with an increase in the charge-discharge rate from 100 mA g-1 to 10 A g-1 and then a decrease to 500 mA g-1 (a) and selected voltage profiles (b). Figure S15. The rate capability of the CoDEPP electrode with an increase in the charge-discharge rate from 100 mA g-1 to 10 A g-1 and then a decrease to 500 mA g-1. Figure S16. The rate capability of the NiDEPP electrode with an increase in the charge-discharge rate from 100 mA g-1 to 10 A g-1 and then a decrease to 500 mA g-1. Figure S17. The rate capability of the CuDEPP electrode with an increase in the charge-discharge rate from 100 mA g-1 to 10 A g-1 and then a decrease to 500 mA g-1. Figure S18. The rate capability of the ZnDEPP electrode with an increase in the charge-discharge rate from 100 mA g-1 to 10 A g-1 and then a decrease to 500 mA g-1. Table S4: Crystal data and structure refinement.
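The agreement between measured and simulated isotopic distributions mentioned above can be reproduced with a short convolution of natural isotope abundances. A minimal sketch in Python: the molecular formula in the example is a hypothetical placeholder (the actual DEPP compositions are in the SI), the abundance values are rounded, and multi-isotope metals such as Ni would need their own entries in the table.

```python
def convolve(p, q):
    """Convolve two isotope patterns given as lists of (mass offset, abundance)."""
    out = {}
    for m1, a1 in p:
        for m2, a2 in q:
            out[m1 + m2] = out.get(m1 + m2, 0.0) + a1 * a2
    return sorted(out.items())

# Approximate natural abundances, offsets relative to the lightest isotope
ISOTOPES = {
    "C": [(0, 0.9893), (1, 0.0107)],      # 12C / 13C
    "H": [(0, 0.99988), (1, 0.00012)],    # 1H / 2H
    "N": [(0, 0.99636), (1, 0.00364)],    # 14N / 15N
}

def isotope_pattern(formula):
    """Nominal-mass isotopologue abundances for a formula dict, e.g. {'C': 46, 'H': 28, 'N': 4},
    normalized to the base peak and truncated below 1%."""
    pattern = [(0, 1.0)]
    for element, count in formula.items():
        for _ in range(count):
            pattern = convolve(pattern, ISOTOPES[element])
    base = max(a for _, a in pattern)
    return [(m, round(a / base, 3)) for m, a in pattern if a / base > 0.01]

# Hypothetical free-base porphyrin composition, for illustration only
print(isotope_pattern({"C": 46, "H": 28, "N": 4}))
```

Comparing such a simulated envelope against the measured M, M+1, M+2 intensities is essentially the check reported in Figures S3–S6.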
1,290.8
2022-11-29T00:00:00.000
[ "Chemistry", "Engineering", "Materials Science" ]
MUON SPIN RELAXATION IN THE HEAVY FERMION SYSTEM UPt3 We report muon spin rotation/relaxation (µSR) measurements of the heavy fermion superconductor UPt3 in external fields H ∥ c. We find that the muon Knight shift is unchanged in the superconducting state, consistent with odd-parity pairing (such as p wave). The transverse field relaxation is observed to be strongly field dependent, decreasing with increasing field. Below Tc the increase is barely detectable in an applied field of 4 kG ∥ c. On the basis of the high field measurements, we estimate the low temperature penetration depth to be λ(T→0) > 11 000 Å. There is a growing body of experimental evidence that shows that the heavy fermion system UPt3 (Ref. 1) is a non-s-wave superconductor. Neutron scattering (Ref. 2) and heat capacity measurements detect strong spin fluctuations in the superconducting state, thought to favor anisotropic pairing (such as p or d wave). In addition, ultrasound velocity (Ref. 3) and heat capacity (Ref. 4) measurements have detected possible phase boundaries within the superconducting state. The existence of several superconducting phases has spurred the development of theories to identify the various states. In this vein, several authors (Ref. 5) have suggested that UPt3 possesses a multicomponent superconducting order parameter transforming according to a nonidentity representation of the hexagonal D6h group. There is still great uncertainty about the properties of UPt3: even the parity of the superconducting pair state has not been unambiguously determined. Many properties of the superconducting state can reflect the underlying symmetry of the pairing. Among these are the spin susceptibility χ and the magnetic field penetration depth λ. In an even parity (such as s wave) superconductor, the electrons are paired in states with opposite spin. The combined susceptibility of such a pair is zero; the measured spin susceptibility reflects that of the normal state electrons, approaching zero as the temperature approaches zero [following the Yosida function Y(T)]. In odd parity superconductors, different susceptibilities are possible, depending on the pair wavefunction, and can be markedly different from the even parity case. For example, the triplet ABM and BW states of 3He have susceptibilities χABM/χn = 1 and χBW/χn = 2/3 + (1/3)Y(T), respectively, where χn is the normal state susceptibility. The magnetic field penetration depth λ describes the screening effects of the superconducting electrons. Its temperature dependence can also provide information about the pairing symmetry (Ref. 6). If there are nodes in the superconducting gap, characteristic of higher-l pairing, thermal pair breaking will give rise to a power law temperature dependence in λ. By comparison, s-wave superconductors have no nodes in the gap, and as a result, the penetration depth shows little temperature dependence for T < Tc/3. Muon spin relaxation measurements are useful for determining both λ(T) and χ(T) simultaneously (Ref. 7). In a time differential µSR experiment, 100% spin polarized positive muons are injected individually into a specimen, where the muon spins precess in the local magnetic field.
The µ+ decays (lifetime τµ = 2.2 µs), emitting a positron preferentially along the instantaneous muon spin direction. A histogram of positrons detected versus the time interval after implantation will exhibit the lifetime exponential decay superimposed on the muon spin polarization function. The "asymmetry," which is the ratio of the difference and the sum of spectra from two opposing counters, is directly proportional to the muon polarization. Typically, in transverse field, the polarization function is given by P(t) = exp(−σ²t²/2) cos(2πνt + φ), where the frequency is set by the local field, ν = (γµ/2π)Bloc, and the relaxation rate σ reflects the inhomogeneity in the local field. The local field can be different from the applied field due to a muon-conduction electron hyperfine interaction. The measured fractional shift in the muon precession frequency from this interaction is the sum of the muon Knight shift (Kµ), the Lorentz and demagnetizing shifts, and a diamagnetic shift in the superconducting state. Comparing measurements of the muon Knight shift with those of the dc susceptibility (both with the field applied along the c axis), we see a similar temperature dependence; plotting Kµ vs χ in Fig. 1, we obtain a linear relationship. Since the susceptibility in the basal plane displays a different temperature dependence (Ref. 1), we see that the muon Knight shift reflects the susceptibility for fields along the c axis. The susceptibility χ is largely due to the spin susceptibility χs; extrapolating to χs = 0, we find Kµ(χs = 0) = +0.13%. The slope of the Kµ vs χ curve gives us a hyperfine field of about −4.2 kG/µB. This is substantially larger (and of opposite sign) than reported in previous measurements of polycrystalline UPt3 (Ref. 8), which is not surprising in view of the strong anisotropy of χ in UPt3. Upon cooling through Tc ≈ 0.45 K, we see that there is no discernible change in Kµ. The measured shift, shown in Fig. 2(b), remains about −0.3%, of which about −0.12% comes from Lorentz and demagnetizing shifts (which, like the Knight shift, are proportional to χ). Diamagnetic contributions to the shift [4πχd(1 − n)] due to superconductivity are negligible since n is near 1 in our geometry and χd is small in high fields. We therefore conclude that the spin susceptibility is unchanged below Tc. This is in agreement with 195Pt-NMR (Ref. 9) and induced moment form factor measurements (Ref. 10). In addition to measuring the frequency of the precession signal, we have simultaneously determined the relaxation rate σ(T) for several fields (H ∥ c) up to 3.9 kG. The inhomogeneity in the local fields arises from the vortex lattice (Refs. 11-13). Plotting the field dependence of the increase in the relaxation rate below Tc [inset of Fig. 2(a)], we see that there is a reasonably smooth decrease in σ with increasing applied field. We expect that σ should be field independent over a large range of fields between Hc1 and Hc2 (Refs. 11, 12) in order to extract λ. If the measured inhomogeneity is field dependent, it generally implies that the measured relaxation does not accurately reflect the penetration depth. In this case, the value of 11 000 Å can only serve as a lower bound for the penetration depth, which may in fact be much longer. There are several possible sources of increased broadening in low fields that could account for the enhanced relaxation. One of these is flux pinning, acting to prevent formation of a uniform flux lattice.
Zero field cooled measurements in 3.9 kG show greatly enhanced relaxation, characteristic of strong flux pinning. Other possible sources of low field broadening below Tc include proximity to Hc1 and shape-dependent inhomogeneities in the demagnetizing factor (Ref. 14). Ultrasound measurements (Ref. 3) have detected an anomaly in UPt3 around H = 12 kG (for H ∥ c). It has been suggested that this anomaly indicates a phase boundary between different superconducting states. There is a possibility that our field-dependent relaxation may be related to this anomaly. However, we are prevented from assessing the feasibility of such an effect by a lack of theoretical understanding of the superconducting states of UPt3. Nevertheless, we note that all of these µSR measurements lie in the London limit, where we do not expect significant field dependence in the relaxation rate. In conclusion, we find that the muon Knight shift is unchanged in the superconducting state of UPt3, supporting the idea of odd-parity pairing. Although it is possible for spin-orbit scattering to reduce changes in the Knight shift (Ref. 15), we would argue that the long mean free path (ℓ ~ 1000 Å in UPt3, Ref. 16) suggests that scattering is not important here. We estimate that the low temperature penetration depth is in excess of 11 000 Å, roughly consistent with the estimate from a flux confinement measurement [λ(0) ≈ 19 000 Å] (Ref. 17). Since the penetration depth λ ∝ (m*/ns)^(1/2) and the effective mass is large [e.g., cyclotron effective mass mc* = 25-90 me (Ref. 16)], we expect λ to be rather large. The change in relaxation rate below Tc is so small in high fields that it is not possible to discuss its temperature dependence in terms of different possible gap node structures.
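The transverse-field analysis above is compact enough to express in a few lines. A minimal sketch in Python of the Gaussian-damped precession model and the fractional frequency shift extracted from it; the field, shift, and relaxation values in the example are illustrative stand-ins, not the paper's fitted numbers.

```python
import numpy as np

GAMMA_MU_OVER_2PI = 135.54e6  # muon gyromagnetic ratio / 2pi, Hz per tesla

def tf_polarization(t, b_loc, sigma, phi=0.0):
    """Transverse-field muon polarization with Gaussian relaxation:
    P(t) = exp(-sigma^2 t^2 / 2) * cos(2*pi*nu*t + phi), nu = (gamma_mu/2pi) * B_loc."""
    nu = GAMMA_MU_OVER_2PI * b_loc
    return np.exp(-0.5 * (sigma * t) ** 2) * np.cos(2.0 * np.pi * nu * t + phi)

def frequency_shift(b_applied, b_loc):
    """Fractional precession-frequency shift; experimentally this is the sum of the
    Knight, Lorentz, demagnetizing, and (below Tc) diamagnetic contributions."""
    return (b_loc - b_applied) / b_applied

# Example: 3.9 kG = 0.39 T applied field with a -0.3% total shift,
# relaxation rate sigma = 0.2 us^-1 (all values illustrative)
t = np.linspace(0.0, 10e-6, 1001)
p = tf_polarization(t, b_loc=0.39 * (1 - 0.003), sigma=0.2e6)
print(frequency_shift(0.39, 0.39 * (1 - 0.003)))  # -> -0.003
```

Fitting this form to the measured asymmetry yields ν (hence the shift) and σ (hence the local-field inhomogeneity) simultaneously, which is the dual read-out the text describes.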
1,976.8
1991-11-15T00:00:00.000
[ "Physics" ]
Achieving near-infrared-light-mediated switchable friction regulation on MXene-based double network hydrogels MXene possesses great potential for enriching the functionalities of hydrogels due to its unique metallic conductivity, high aspect ratio, near-infrared light (NIR light) responsiveness, and wide tunability; however, the poor compatibility of MXene with hydrogels limits further applications. In this work, we report a uniformly dispersed MXene-functionalized poly-N-isopropylacrylamide (PNIPAM)/poly-2-acrylamido-2-methyl-1-propanesulfonic acid (PAMPS) double network hydrogel (M-DN hydrogel) that can achieve switchable friction regulation by using NIR light. The dispersity of MXene in the hydrogels was significantly improved by incorporating the chitosan (CS) polymer. This M-DN hydrogel showed a very low coefficient of friction (COF) at 25 °C due to the presence of a hydration layer on the hydrogel surface. Upon illumination with NIR light, the M-DN hydrogel, owing to its good photothermal effect, rapidly raised its temperature above the lower critical solution temperature (LCST), which led to an obvious increase of the surface COF owing to the destruction of the hydration layer. In addition, the M-DN friction-control hydrogel showed good recyclability and controllability by switching the NIR light on and off. This work highlights the construction of functional MXene hydrogels for intelligent lubrication, which provides insight for interface sensing, controlled transmission, and flexible robotic arms. Introduction In modern society, with increasing environmental complexity, the research and development of smart, stimuli-responsive materials is on the agenda. Stimuli-responsive interface materials can rapidly achieve reversible physical or chemical property transformations when stimulated by temperature, pH value, magnetic field, light, and so on [1-5]. These materials are mainly based on stimuli-responsive polymers [1] and phase-change materials [6]. On the basis of the original response, adding nanoparticles to enrich the versatility of materials has become the main research direction [1,7,8]. Meanwhile, compared with passive stimulation, active regulation is more controllable. Poly-N-isopropylacrylamide (PNIPAM)-based hydrogels have become representative stimuli-responsive hydrogels. Below the lower critical solution temperature (LCST), the PNIPAM molecular chains stretch; hydrogen bonds are formed, which causes the appearance of a hydrated layer. In contrast, above the LCST, the PNIPAM molecular chains shrink; the hydrogen bonds are broken, which causes the disappearance of the hydrated layer [9]. Taking advantage of this property, PNIPAM has been widely used in many situations, such as drug delivery [10], temperature sensors [11], smart actuators [12], friction regulation [13-15], and so on. Among them, the application of friction regulation has gradually attracted attention. For example, Zhu et al. [16] applied PNIPAM as a coating on the surface of polydimethylsiloxane (PDMS) to achieve switching between hydrophilic lubrication and antibacterial properties. To achieve active regulation, incorporating materials that generate heat in response to external stimuli inside PNIPAM hydrogels becomes particularly important. Chen et al. [13] fixed Fe3O4 inside PNIPAM microgels and used the near-infrared (NIR) light response to actively control the COF under water lubrication.
MXene is a kind of two-dimensional lamellar material composed of transition metal carbides, nitrides, or carbonitrides with unique metallic conductivity, high aspect ratio, near-infrared photoresponsivity, and widely tunable properties [17]. In addition, Ti3C2Tx is one of the most widely used MXene materials, where Tx represents different surface termination groups (e.g., OH, O, and F) [17-19]. Because of its excellent properties, MXene is widely used to realize the functionalization of hydrogels. For example, the excellent electrical conductivity and two-dimensional sheet structure are conducive to the preparation of motion sensors, the electrical conductivity and NIR-light stimuli-responsiveness benefit the preparation of programmable stimuli-responsive hydrogels, and the electromagnetic shielding ability helps prepare electromagnetic shielding materials [12,20,21]. However, applications of MXenes for switchable friction regulation remain scarce. In previous reports, MXene easily exhibits agglomeration behavior. This is because various interactions between the MXene surface groups and the polar groups of various molecules can form in the hydrogel prepolymerization solution [22,23]. It is particularly important to make MXene disperse uniformly in the hydrogel, which can increase the utilization of MXene and prevent the hydrogel from bending due to uneven distribution. In this work, we utilized hydrogen bonding between CS and MXene nanosheets, which greatly reduced the possibility of contact between the surface groups of the MXene nanosheets and the various molecules in the prepolymerization solution. Moreover, the higher viscosity of the CS solution slowed the settling rate of the dispersed nanosheets. The treated CS-MXene was then combined with the traditional double-network hydrogel [24], in which 2-acrylamido-2-methyl-1-propanesulfonic acid (AMPS) was used as the first-layer network and NIPAM was used as the second-layer network. M-DN hydrogels that can realize the regulation of the COF at the interface by NIR light were thus prepared. The large amount of free-flowing water inside the M-DN hydrogel provides the possibility for hydration lubrication. During the friction process, the free-flowing water was affected by the amide groups on PNIPAM and formed a hydrated layer, which can reduce the COF between the interfaces. When irradiated with NIR light, the MXene nanosheets reacted rapidly and generated a large amount of heat. As the temperature increased, PNIPAM underwent a phase transition. The hydrogen bonds between PNIPAM chains and water molecules were broken. Thus, the hydrated layer was destroyed, and the COF was increased at the interface [25-27]. Through this work, MXene (Ti3C2Tx) nanosheets were treated through hydrogen bonding interactions between CS and MXene, which caused MXene to be coated with chitosan and weakened the interaction between AMPS and MXene. In this way, a dispersion method for MXene nanosheets in hydrogels was proposed. Thus, friction regulation of the hydrogel surface through the photothermal effect of the MXene nanosheets was realized, which further expands the application of MXene hydrogels. This friction-tunable hydrogel has great potential in the fields of friction-interface sensing, intelligent manipulators, and controlled transportation.
Preparation of MXene nanosheets MXene nanosheets were prepared in two main steps. First, Ti3AlC2 (MAX) was etched to obtain few-layered Ti3C2Tx MXene flakes. 3.2 g of LiF, 10 mL of pure water, and 30 mL of concentrated hydrochloric acid were added to a 50 mL polytetrafluoroethylene beaker. The mixed solution was placed in a water bath with a stirrer at 40 °C. Subsequently, 2 g of MAX was added into the beaker in several portions. The reaction lasted for 24 h. Then, the solution was centrifuged repeatedly (3,500 rpm, 5 min) with pure water until the liquid in the centrifuge tube was no longer translucent. The suspension containing few-layered MXene flakes was collected. The collected suspension was centrifuged (10,000 rpm, 10 min) to remove water. Thus, few-layered MXene flakes could be obtained after freeze-drying of the sediment. Second, 200 mg of freeze-dried MXene was added to a 100 mL jacketed reactor with 50 mL of pure water. The above dispersion was sonicated for 2 h at 10 °C at 30% power using a probe sonicator. The treated solution was directly lyophilized to obtain dry MXene nanosheets. Preparation of homogeneously dispersed M-SN hydrogels 2 mg (4, 6, and 8 mg) of freeze-dried MXene nanosheets was dispersed into 5 mL of pure water by water-bath ultrasonication. Then, 0.1 g of CS was added to the solution. Two drops of glacial acetic acid were added to help dissolve the CS. The above solution was stirred at high speed for 1 h. The resulting solution was labelled as solution A. Next, 4 g of 2-acrylamido-2-methylpropanesulfonic acid (AMPS), 90 mg of MBAA crosslinker and 10 mg of 2-oxoglutaric acid initiator were added to 5 mL of pure water to obtain homogeneous solution B. Solutions A and B were uniformly mixed, and the air bubbles in the mixed solution were removed by water-bath ultrasonication. Then the mixture was photoinitiated under UV irradiation for 30 min to form a covalently cross-linked hydrogel with MXene intercalation. Finally, the prepared hydrogel was immersed in pure water for 24 h to obtain the M-SN hydrogel. Preparation of homogeneously dispersed M-DN hydrogels 4 g of NIPAM, 5.4 mg of MBAA crosslinker, and 10 mg of 2-oxoglutaric acid initiator were added to 20 mL of pure water to obtain a homogeneous PNIPAM prepolymerization solution. Then, the M-SN hydrogel was placed in a petri dish and fully soaked in the prepolymerization solution at 20 °C for 48 h. The treated SN hydrogels were removed. The upper and lower surfaces of the hydrogels were fully covered by glass plates. Double network hydrogels with uniformly dispersed MXene were obtained by UV light initiation at 20 °C for 30 min. Finally, the M-DN hydrogels were immersed in pure water for 24 h. Characterization An X-ray diffractometer (Bruker D8 X-ray, Germany) was used for X-ray diffraction (XRD) measurements.
Transmission electron microscopy (TEM) (FEI Talos F200X, USA) was used to characterize the morphology of the MXene nanosheets. The porous morphology of lyophilised hydrogel sections was obtained by scanning electron microscopy (SEM) (FEI Helios G4 CX). Characterization of the CS-MXene interactions was performed using a TENSOR II Fourier transform infrared (FTIR) spectrometer (Bruker, Germany). The size of the MXene nanosheets was measured using a particle size analyser (Malvern Instruments, UK). X-ray photoelectron spectroscopy (XPS) (PHI 5000 VersaProbe III) with an Al Kα X-ray source was used for elemental analysis. For the photothermal performance study, M-DN hydrogels were placed in a petri dish with an appropriate amount of pure water. Then, they were irradiated with an NIR laser (BST808-5-F, Xi'an Best Laser Optronics Co., Ltd.) at a wavelength of 808 nm. The irradiation diameter and distance were 1 mm and 5 cm, respectively. The temperature was measured by infrared thermography (FLIR E8-XT, USA). Mechanical performance tests The tensile strength at break of the M-SN hydrogel and M-DN hydrogel was tested using a universal mechanical testing machine (INSTRON 5982). For the test at 50 °C, a layer of silicone oil was applied to the surface of the hydrogel to prevent water loss. A constant temperature of 50 °C in the chamber was maintained for 5 min before the tensile test. The samples were cut into an I-beam shape, with a width of 4 mm and a length of 10 mm. The stretching speed was 50 mm/min. Rheology testing The G' and G'' moduli of the M-SN hydrogels and M-DN hydrogels were tested with frequency and temperature as variables using a rotational rheometer (HAAKE MARS III, USA). Tests were carried out on circular hydrogels with a diameter of 25 mm. Frequency sweeps were carried out from an angular frequency of 100 rad/s down to 0.1 rad/s. For the temperature sweep, the sample was kept at a constant temperature of 25 °C for 5 min before being heated to 50 °C at a rate of 2.0 °C/min. The load was maintained at 0.1 N and the angular frequency at 10 rad/s. Tribological test The tribological properties of the M-DN hydrogels were tested using a ball-on-disk reciprocating friction tester (UMT-3, Bruker, Germany). In all tests, the lower surface was wiped dry and fixed in the sink using double-sided tape to prevent slippage during rubbing. After fixing the M-DN hydrogel in the water bath, the entire surface of the sample was kept flat. Deionized water was added to make the water surface parallel to the upper surface of the sample. An amount of water was dripped between the upper friction pair and the sample to maintain the hydrated layer throughout the friction test when the NIR light was switched off. The upper friction pair was a glass ball 5 mm in diameter, chosen to maximize the transmission of NIR light. The reciprocating stroke was 5.0 mm. The COF was derived in software by dividing the frictional force by the normal load. Synthesis and characterization of M-DN hydrogels Figure 1 shows the dispersion principle of MXene (Ti3C2Tx) and the preparation process of the M-DN hydrogel. There are a large number of groups (such as -OH, -O, and -F) on the surface of the MXene sheet, as shown in Fig. 1(a).
These groups can form strong hydrogen bonds with the molecular chains in the hydrogel network. The MXene surface groups interact easily with the polar groups of various molecules in the hydrogel prepolymerization solution [22,23], which results in the agglomeration of MXene sheets and prevents the MXene sheets from being uniformly dispersed in the hydrogel. In addition, this phenomenon was more apparent as the concentration of MXene sheets increased. To solve this problem, a small amount of CS was introduced to promote the dispersion of MXene, as shown in Fig. 1(b). CS played a dual role in the MXene dispersion problem. First, the strong hydrogen bonds between the hydroxyl as well as amino groups of CS and the hydroxyl groups on the MXene surface can weaken the interaction between the MXene surface groups and the molecules of the prepolymerization solution. Second, CS triggered the transformation of the solution into a sol state because it is a short-chain polymer, which could slow down the sedimentation rate of MXene. On this basis, a method for friction-controlled M-DN hydrogels was devised [24], which is shown in Fig. 1(c). The mass of this hydrogel before and after lyophilization changed from 0.3251 to 0.0512 g, corresponding to a water content of up to 84.3 wt%. As shown in Fig. 1(d), suitable initiators were tested to verify the effect of CS on the MXene dispersion. A thermal initiator (potassium persulfate, KPS) and a photoinitiator (2-oxoglutaric acid) were added to MXene nanosheet dispersions of the same concentration. The results showed that KPS caused rapid agglomeration of the MXene nanosheets, while 2-oxoglutaric acid did not. Therefore, it was more suitable to use a photoinitiator to prepare the hydrogels. In addition, the monomer also had an effect on the MXene nanosheets. As shown in Fig. S1(a) in the Electronic Supplementary Material (ESM), the surface of the etched Ti3C2Tx contained a large number of hydroxyl groups that could interact with water molecules, resulting in excellent dispersion. However, agglomeration rapidly occurred after a certain amount of AMPS was dissolved. This was attributed to phase separation caused by the interaction between ionized AMPS and MXene. It was interesting to note that the solution in which the MXene nanosheets and AMPS were commingled gradually gelled with time (Fig. S1(b) in the ESM). This effect was enhanced with increasing concentrations of MXene nanosheets. The phenomenon also poses difficulties for the preparation of MXene hydrogels. The modification of MXene with chitosan significantly addressed these problems. When equal concentrations of MXene and CS-MXene were added to the AMPS prepolymerization solution, the unmodified MXene agglomerated within 10 s, while CS-MXene still had good dispersibility after 24 h. The two solutions were further initiated to form hydrogels. As shown in Fig. 1(e), CS-MXene was dispersed uniformly, and the whole material was not bent. This high dispersity gave the M-DN hydrogels a larger light-receiving specific surface area under NIR irradiation, and the high transparency allowed the irradiated regions to release heat fully. To uniformly disperse MXene inside the hydrogel, further reduction of the MXene size was considered based on the previously reported Ti3AlC2 (MAX) etching method [17,18]. A probe sonicator was used to obtain smaller MXene nanosheets (Fig. S2 in the ESM).
XRD was performed to compare MAX with MXene. Aluminium (Al) was etched away completely, and the (002) peak of the MAX phase shifted from 9.5° to 7.0°. Compared with the previously reported (002) peak shift of multilayer MXene sheets (9.5° to 9.0°), the shift amplitude here is larger due to the greater distance between the layers of the sheets [18]. This indicates that few-layer or single-layer MXene nanosheets were obtained (Fig. 2(a)). The morphology of the MXene nanosheets was characterized by TEM. It was observed that the MXene nanosheets exhibited a nanoscale few-layer state, consistent with the particle size measured in Fig. 2(b). Energy dispersive spectrometry (EDS) analysis showed that a large number of oxygen (O) and fluorine (F) groups were distributed on the MXene nanosheet surface, which demonstrated that the probe sonication approach did not destroy the various groups on the MXene surface (Fig. S3 in the ESM). The composition and changes in the elements of MXene were investigated by X-ray photoelectron spectroscopy (XPS). The full spectrum showed distinct peaks of O and F on the surface of the MXene nanosheets. The high-resolution XPS spectra of Ti 2p, fitted with Ti 2p3/2 and Ti 2p1/2 components, were consistent with previous reports [28,29], which confirmed that MXene nanosheets were successfully prepared (Figs. 2(c) and 2(d)). After modification by CS, the elements on the CS-MXene surface were analysed by XPS. It was evident that the peaks of titanium (Ti) and F did not appear in the full spectra. Instead, the nitrogen (N) peak of CS appeared, indicating that CS was encapsulated on the MXene surface (Fig. 2(c)). The binding mode between CS and MXene was identified by FTIR spectra. As shown in Fig. 2(e), the shift in the -OH characteristic peak from 3,434 cm-1 (red line) to 3,424 cm-1 (blue line) suggests the formation of strong hydrogen bonding interactions between CS and MXene [21]. Comparing the CS-modified MXene sheet to the unmodified MXene after lyophilization, CS did not affect the overall black color. This means that CS had essentially no effect on the NIR-light response of the MXene. In addition, the CS-modified MXene evidently became more compact and difficult to crush due to the excellent fixation effect of CS on MXene, which made it difficult to peel (Fig. 2(f)). After mixing CS-MXene with the prepolymerization solution, the hydrogel was prepared. Large pores of several hundred microns inside the single network hydrogel and double network hydrogel were observed by SEM. The disordered microporous structure of the single network hydrogel contrasts sharply with the uniform regular pore structure of the double network hydrogel, as shown in Figs. 2(g)-(i) and 2(g)-(ii). This regular pore structure was more conducive to improving the hydrogel strength and forming more hydration layers. The EDS analysis showed that the basic carbon (C) and O elemental distributions were consistent with the pores. In addition, the Ti element did not show large-scale agglomeration, confirming that the modified MXene was more uniformly dispersed. Characterization of the mechanical properties of M-DN hydrogels The PNIPAM component in the M-DN hydrogel acted as both a ductile component and a molecular-chain network. At 50 °C, the molecular chains contracted, resulting in poor ductility. Therefore, the stress and strain of the M-DN hydrogel were reduced but still remained at 0.15 MPa (Fig. 3(a)) [30].
The compressive strengths of the M-SN and M-DN hydrogels were tested to reveal the compression conditions during the friction process. The results showed that the M-DN hydrogel had a strong compression resistance of 1.0 MPa, which far exceeded the strength of the M-SN hydrogel and was sufficient to guarantee its strength for use at low loads (Fig. 3(b)). To further investigate the strength differences between the single and double network hydrogels, rheological tests were carried out on the M-SN hydrogels and M-DN hydrogels. The energy storage modulus (G') and loss modulus (G'') of the double network hydrogels were both an order of magnitude higher than those of the single network, exhibiting greater energy dissipation and usable strength (Fig. 3(c)). To measure the phase change temperature of the M-DN hydrogels, a temperature scan of their mechanical properties was carried out using a rheometer. The sudden rise in G'' at approximately 40 °C confirmed that PNIPAM underwent a phase change and molecular chain contraction. It transformed from the hydrophilic state to the hydrophobic state at this point, which provided a reference temperature inflection point for the subsequent friction regulation (Fig. 3(d)). In addition, the hydrogel's resistance to both tensile and compressive processes was accompanied by covalent bond breakage, which was a major challenge for the versatile applications of the hydrogel. Ten loading-unloading tensile tests at 30% strain were carried out on M-DN hydrogels at room temperature. The stresses and strains remained essentially unchanged, indicating good resilience with little breakage of covalent bonds under these tensile strain conditions (Fig. S5(a) in the ESM). Then, loading-unloading compression tests were carried out at different strains (15%, 18%, and 20%). It was evident that the compression strength decreased from 0.29 to 0.2 MPa (approximately 69.0% of the original value) over 10 cycles of the 15% strain test. As the strain increased, more covalent bonds were broken. The magnitude of this change increased with strain, but the compressive strength remained high (Figs. S5(b)-S5(d) in the ESM), which endowed the M-DN hydrogel with sufficient strength to prevent breakage in friction applications. Near-infrared photoresponse properties of M-DN hydrogels The MXene nanosheets in the M-DN hydrogel conferred unique NIR photoresponsiveness to the hydrogel. As shown in Fig. 4(a), the hydrogel was placed in a petri dish with an appropriate amount of water added. It was found that the response temperature showed a gradient increase for the different concentrations of MXene nanosheets doped into the hydrogel. M-DN hydrogels were imaged before and after the NIR-light response. It could be visually observed that the M-DN hydrogel turned whitish from a completely transparent state, and water was expelled around it. This phenomenon occurred because the PNIPAM molecular chains became hydrophobic and contracted, destroying the surface as well as the internal hydration layer of the hydrogel (Fig. 4(b)). The effects of the MXene nanosheet concentration and NIR light intensity on the M-DN hydrogels were further evaluated.
As shown in Fig. 4(c), the temperatures of M-DN hydrogels with five different concentrations of MXene nanosheets under NIR light irradiation were tested. The results showed that both the heating rate and the final temperature increased with the concentration of MXene nanosheets. Then, the effects of different irradiation powers on the M-DN hydrogel (0.8 mg/mL) were explored. Figure 4(d) shows that both the heating rate and the final temperature increased with power, even reaching over 80 °C at 2.0 W/cm2. Combined with the previous rheological tests, it was concluded that 40 °C was the phase transition temperature for this system. Thus, an irradiation power of 1 W/cm2 was selected to modulate the 0.8 mg/mL hydrogel to achieve a fast response and a low upper-limit temperature. Under the above conditions, the 0.8 mg/mL M-DN hydrogel was tested for cycling of the NIR-light-regulated temperature around the 40 °C boundary. The M-DN hydrogel could quickly rise above that temperature under NIR light stimulation, while it quickly cooled to below 40 °C with the help of the surrounding water when the NIR light was switched off. The behavior remained repeatable over three cycles (Fig. 4(e)), which demonstrated a stable and rapid NIR-light-stimulated phase transition. Friction-modulated properties of M-DN hydrogels To investigate the effect of NIR light on the friction modulation of M-DN hydrogels, the COF of the hydrogels was obtained with a universal mechanical tester (UMT-3) in reciprocating mode. The glass ball and the hydrogel were used as the upper and the lower friction pair, respectively. As shown in Fig. S6(a) in the ESM, the downward-pressed volume of the glass sphere was difficult to calculate accurately. The double network hydrogel was stiffer than conventional hydrogels. To test the contact area, dye was applied to the glass spheres. The size of the contact area between the glass sphere and the hydrogel was tested indirectly by pressing the glass sphere down onto paper on the surface of the hydrogel. As shown in Fig. S6(b) in the ESM, the contact area was approximately 3.14 mm2 and was fully covered by the NIR light. As shown in Fig. S6(c) in the ESM, the NIR light during the experiment illuminated a circle with a diameter of 1 cm, which was sufficient to cover the entire 5 mm reciprocating stroke of the test area. As shown in Fig. 5(a), without NIR light irradiation, PNIPAM was hydrophilic. The hydrated layer on the surface of the hydrogel was the key to achieving low friction. When the NIR light was turned on, MXene rapidly responded and generated a large amount of heat, which raised the temperature of the hydrogel above the LCST. At this point, PNIPAM underwent a phase transition into the hydrophobic state. Consequently, the surface hydration layer was destroyed rapidly. At the same time, the internal hydration layer also decreased. These effects diminished the friction-reducing capacity and thus increased the COF [27,31,32]. Initial tests of the loads and frequencies required for hydrogel testing were carried out prior to modulation. The results showed that the COF continued to increase as the load increased. Even though the double network hydrogel was still a flexible material, the friction pair dug into the hydrogel under higher loads, increasing the resistance during friction. The friction pairs did not make complete contact under lower loads. Therefore, 1 N was chosen as the load for all subsequent tests. Friction was unstable at both high and low frequencies. Thus, the tests were carried out at 1 Hz (Fig. S7 in the ESM). As shown in Fig. 5(b), the power of the NIR light was adjusted after the COF had been stable for 300 s. The COF of the hydrogel changed from ~0.02 to ~0.15 with increasing power. The rate of this change increased with power, which was consistent with the previously measured NIR-light response rate.
In Fig. 5(c), the response rates of hydrogels with different MXene nanosheet concentrations under NIR light (1 W/cm2) are compared. It took approximately 100 s to reach the highest COF for the M-DN hydrogel containing 0.8 mg/mL MXene nanosheets. It was evident that the COF of the hydrogel changed more dramatically under NIR light irradiation as the MXene nanosheet concentration increased. Three successive modulated cycle tests were performed on 0.8 mg/mL M-DN hydrogels under NIR light with a power of 1 W/cm2. Repeated switching of the NIR light stimulated the MXene nanosheet response and enabled the M-DN hydrogel to transition quickly between low and high COF. The low and high COF values always remained approximately 0.02 and 0.135, respectively. This is because the uniformly dispersed MXene nanosheets enhanced their utilization, thus prompting a rapid increase in temperature. The surface hydration layer and a certain amount of the internal hydration layer could be quickly destroyed by the phase transition of PNIPAM. When the system returned to room temperature after the water loss, PAMPS, with its strong water absorption capacity, together with the recovered hydrophilicity of PNIPAM, rapidly absorbed pure water from the tank and restored the hydration layer. As a result, the COF returned to its original value (Fig. 5(d)). To further verify the modulation stability of the M-DN hydrogels under the 1 N loading condition, three samples were subjected to ten modulation cycles. As shown in Fig. 5(e), the mean COF remained at approximately 0.02 for the low state and 0.135 for the high state. This indicated that the M-DN hydrogel modulation was stable under the 1 N loading condition and confirmed the potential of M-DN hydrogels as controlled actuators, flexible robotic arms, and soft gripping materials [33]. Conclusions In summary, we achieved the construction of a well-dispersed MXene double-network hydrogel by incorporating the chitosan (CS) polymer; the hydrogel responded rapidly to near-infrared light (NIR light) with good recyclability and controllability. Poly-N-isopropylacrylamide (PNIPAM) polymer was selected as the second layer of the MXene double-network hydrogel, which could increase both the hydrogel strength and the sensitivity to external stimuli. The thermo-responsive behavior of the PNIPAM polymer endowed the MXene-functionalized PNIPAM/PAMPS double network hydrogel (M-DN hydrogel) with rapid transitions between the low friction state (μ ~ 0.021) and the high friction state (μ ~ 0.135) by forming and disrupting the hydration layer below and above the lower critical solution temperature (LCST), respectively. These MXene-based hydrogels show great potential in friction modulation and intelligent lubrication. In addition, the high strength of the M-DN hydrogel enables potential applications in interfacial sensing, controlled actuation, and flexible robotic arms. Fig. 1 Dispersion improvement of MXene and the preparation of M-DN hydrogels. (a) Chemical structure of MXene nanosheets; (b) schematic diagram of the hydrogen bonding interaction between CS and MXene nanosheets; (c) schematic diagram of the preparation of M-DN hydrogels; (d) comparison of the dispersibility of MXene before and after improvement; and (e) difference between the CS-MXene@PAMPS single network hydrogel (M-SN hydrogel) and the MXene@PAMPS hydrogel.
Fig. 3 Characterization of the mechanical properties of M-DN hydrogels. (a) Results of tensile strength at break of the M-SN hydrogel and M-DN hydrogel at 25 and 50 °C, respectively; (b) comparison of the compressive strength of M-SN hydrogels with M-DN hydrogels; (c) comparison of the energy storage modulus and loss modulus of the M-SN hydrogel and M-DN hydrogel; and (d) variation in the energy storage modulus and loss modulus of the M-DN hydrogel from 25 to 50 °C. Fig. 4 NIR photoresponse properties of M-DN hydrogels. (a) Infrared thermal images of M-DN hydrogels with different MXene nanosheet concentrations and different NIR light irradiation times; (b) schematic diagram of the M-DN hydrogel response and optical images before and after the response; (c) temperature curves of M-DN hydrogels with different MXene nanosheet concentrations as a function of irradiation time at an NIR light irradiation power of 1 W/cm2; (d) temperature versus time curves for 0.8 mg/mL M-DN hydrogels irradiated with different powers of NIR light; and (e) temperature cycling profile of the 0.8 mg/mL M-DN hydrogel under NIR light irradiation at a power of 1.0 W/cm2.
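The friction read-out summarised above reduces to a simple computation: the COF is the friction force divided by the normal load, averaged over steady reciprocating cycles. A minimal sketch in Python with synthetic force traces standing in for the tester's export (the noise levels and array layout here are assumptions, not the UMT-3's actual output format):

```python
import numpy as np

def cof_trace(friction_force_n, normal_load_n=1.0):
    """Instantaneous coefficient of friction: COF = |F_friction| / F_normal."""
    return np.abs(friction_force_n) / normal_load_n

def mean_cof(friction_force_n, normal_load_n=1.0):
    """Cycle-averaged COF over a steady reciprocating segment."""
    return float(np.mean(cof_trace(friction_force_n, normal_load_n)))

# Synthetic example at a 1 N load: hydration-lubricated state (~0.02)
# versus the NIR-on, dehydrated state (~0.135)
rng = np.random.default_rng(0)
f_low = 0.02 * 1.0 + 0.002 * rng.standard_normal(1000)
f_high = 0.135 * 1.0 + 0.010 * rng.standard_normal(1000)
print(mean_cof(f_low), mean_cof(f_high))
```

Comparing the two averages over repeated NIR on/off segments is exactly the cycling stability check reported in Figs. 5(d) and 5(e).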
6,596.8
2023-03-13T00:00:00.000
[ "Materials Science" ]
Correction to: Community detection and unveiling of hierarchy in networks: a density-based clustering approach An amendment to this paper has been published and can be accessed via the original article. Following the publication of the original article (Felfli et al. 2019), multiple errors were identified in the sections and Figure 1. The changes have been highlighted in bold typeface. Abstract: We experimentally evaluate a statistical mechanics approach, the Correlation Density Rank, that uses a normalized Gaussian function which captures the impact of a node within its neighborhood and leads to a density ranking of nodes by treating the distance between nodes as a penalty. The technique uses a hill-climbing procedure to determine the density attractors and identify the unique parent (leader) of each member as well as the group leader. The method is exhaustively tested using synthetic networks generated by the LFR benchmarking algorithm for network sizes between 500 and 30,000 nodes and mixing parameters between 0.1 and 0.9. Introduction: The approach is based on previous work [25,26], which evaluated a small synthetic network. In the present paper, we use larger networks generated by the LFR benchmark in an attempt to establish the validity and utility of the approach and identify its limitations. The approach assumes a Gaussian density distribution which is constructed so as to unveil the relative importance (influence) of the various nodes (members) of the network, allowing for the identification of the immediate leader of every member and hence the ranking of all members, including the emergence of the group leader. Section 4 concludes the paper, evaluates the performance of the algorithm and addresses its limitations. Approach: The distance matrix is mapped into a Gaussian kernel matrix [26], a nonlinear function of the Euclidean distance (a radial basis function (RBF) kernel, commonly used in Support Vector Machine classification). The density function, first introduced in [25], treats the distance between nodes as a penalty and captures the impact of a node within its neighborhood. The normalized Gaussian influence function takes the form introduced in [25]. To illustrate this outcome, we show in Figure 1 the results of the clustering method for LFR-benchmark graphs with N = 250 and mixing parameter μ = 0.1, 0.3 and 0.5. Results: The dependence of the average degree on the network size and, in the case of scale-free networks, on the degree distribution exponent is quite loose [27]. The benchmarking procedure allowed us to assess the quality of the present formulation. Work on extending the methodology so that overlapping communities are taken into account is in progress. Discussion and Conclusions: The benchmarking process allowed us to assess the performance of the approach. References: [26] Z. Bahrami Bidoni and R. George, "Network service quality rank: a network selection algorithm for heterogeneous wireless networks," ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS), pp. 239-240, IEEE. [27] G. K. Orman, V. Labatut and H. Cherifi, "Towards Realistic Artificial Benchmark for Community Detection Algorithms Evaluation," International Journal of Web Based Communities 9(3), 349-370, 2013. [28] P. Meyer et al., "Network topology and parameter estimation: from experimental design methods to gene regulatory network kinetics using a community-based approach," BMC Systems Biology 2014, 8:13.
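The density-ranking and hill-climbing steps described in this notice can be sketched compactly. The exact normalization and kernel bandwidth of the Correlation Density Rank are not reproduced here, so the row-normalized RBF kernel and the σ value below are assumptions for illustration:

```python
import numpy as np

def gaussian_influence(dist, sigma=1.0):
    """RBF kernel on pairwise distances: closer nodes exert more influence,
    so distance acts as a penalty. Rows are normalized to sum to 1 (assumed form)."""
    k = np.exp(-dist ** 2 / (2.0 * sigma ** 2))
    return k / k.sum(axis=1, keepdims=True)

def density_rank(dist, sigma=1.0):
    """Density of each node = total influence it receives from all nodes."""
    return gaussian_influence(dist, sigma).sum(axis=0)

def hill_climb_parents(dist, density):
    """Hill climbing: each node's parent (immediate leader) is its nearest
    neighbor of strictly higher density; density attractors (local maxima,
    i.e., group leaders) remain their own parents."""
    n = len(density)
    parents = np.arange(n)
    for i in range(n):
        higher = np.where(density > density[i])[0]
        if higher.size:
            parents[i] = higher[np.argmin(dist[i, higher])]
    return parents

# Toy example: 8 points in the plane stand in for a node embedding
rng = np.random.default_rng(1)
pts = rng.standard_normal((8, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
rho = density_rank(dist, sigma=0.8)
print(hill_climb_parents(dist, rho))  # follow parent links upward to find leaders
```

Following the parent links from every node yields the hierarchy, and the fixed points of that map are the density attractors that define the communities.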
743.4
2020-07-30T00:00:00.000
[ "Computer Science", "Mathematics" ]
Assessment of the Effectiveness of a Computerised Decision-Support Tool for Health Professionals for the Prevention and Treatment of Childhood Obesity. Results from a Randomised Controlled Trial We examined the effectiveness of a computerised decision-support tool (DST), designed for paediatric healthcare professionals, as a means to tackle childhood obesity. A randomised controlled trial was conducted with 65 families of 6-12-year-old overweight or obese children. Paediatricians, paediatric endocrinologists and a dietitian in two children's hospitals implemented the intervention. The intervention group (IG) received personalised meal plans and lifestyle optimisation recommendations via the DST, while families in the control group (CG) received general recommendations. After three months of intervention, the IG had a significant change in dietary fibre and sucrose intake of 4.1 and -4.6 g/day, respectively. In addition, the IG significantly reduced consumption of sweets (i.e., chocolates and cakes) and salty snacks (i.e., potato chips) by -0.1 and -0.3 portions/day, respectively. Furthermore, the CG had a significant increase of body weight and waist circumference of 1.4 kg and 2.1 cm, respectively, while Body Mass Index (BMI) decreased only in the IG, by -0.4 kg/m2. However, the aforementioned findings did not differ significantly between study groups. In conclusion, these findings indicate the potential of the DST to support paediatric healthcare professionals in improving the effectiveness of care in modifying obesity-related behaviours. Further research is needed to confirm these findings. Introduction A wealth of epidemiological data documents the high prevalence of obesity, an "epidemic" that represents a huge public health burden for many countries. Besides the increased risk for chronic diseases, obesity is also related to nutrient insufficiencies, a paradox that has been characterised as the "double burden of malnutrition" [1]. This "double burden" paradox can be explained by the existence of a chronic, low-grade inflammation state that is produced and sustained in obese children [2], leading to low blood concentrations of essential micronutrients, such as iron [3] and vitamin D [4]. Considering the important roles of these micronutrients in several cellular, metabolic and physiological processes, their long-term insufficiency in obese individuals may become detrimental to children's optimal growth and development. Due to the huge dimensions and detrimental effects of obesity and related complications, these conditions have been the major focus of public health research over the past decade. However, existing tools, programmes and strategies to counteract the "obesity epidemic" have experienced only limited success [5]. This is mainly due to the inadequate understanding of the complex mosaic of mechanistic pathways leading to obesity. In this regard, excess body weight is not only the product of a positive energy balance, but also of the interaction of a plethora of other etiological factors, including environmental ones that exert their effects from very early life stages, such as the prenatal period and the first 5 years of life. By acting "in utero" (e.g., maternal obesity, smoking during pregnancy, etc.) or during infancy (infant formula feeding, growth velocity, etc.), perinatal factors can cause permanent endocrine adaptations, usually expressed as increased hunger, adipogenesis and consequently obesity at later life stages [6,7].
Another important reason for the limited or only short-term effectiveness of weight management programs is usually their delayed implementation in already obese children or in adulthood, when the energy-balance-related behaviours (EBRBs) and consequently the obesity phenotype are already established [8]. As such, the implementation of intervention initiatives as early in life as possible, when EBRBs and their determinants are still flexible, is promising for the prevention of obesity and related cardiometabolic complications [9]. Health professionals (i.e., general practitioners, family doctors, paediatricians, dietitians, nutritionists) have a key role among health experts in prospectively and frequently monitoring children [10,11]. Furthermore, this key role places them in a central position with regard to childhood obesity prevention and treatment, since they are also the ones guiding parents in providing the appropriate healthcare to their children. However, on many occasions these professionals require additional and appropriate support to conduct a thorough assessment and provide tailor-made diet and lifestyle optimisation advice to families with children in need of weight management [12,13]. As such, the objective of this study was to examine the effectiveness of a computerised decision-support tool (DST), developed to assist paediatricians and paediatric endocrinologists in delivering personalised nutrition and lifestyle optimisation advice to children and their families, as a means of childhood obesity management. Development of the Decision Support Tool The development of the computerised DST is based on decision-tree algorithms (Supplementary Figure S1 provides an example of these algorithms), which include five different levels, namely the "assessment of children's current weight status" (level 1), the "assessment of the likelihood for the future manifestation of obesity in normal-weight children" (level 2), the "evaluation of the most appropriate body weight management goal" (level 3), the "estimation of children's dietary energy and macronutrients intake needs" (level 4) and the delivery of "personalised diet and lifestyle optimisation advice" (level 5). The first level of the decision-tree algorithms ("assessment of children's current weight status") is based on the measurement of body weight in all age groups from infancy to adolescence and of the recumbent length in infants and children until the age of 2 years or standing height in all children and adolescents after the age of 2 years. The international Body Mass Index (BMI)-for-age growth curves and the relevant reference values proposed by the WHO are further used to finalise the assessment of children's weight status [14] and categorise them into "underweight" (BMI-for-age < 5th percentile), "normal weight" (5th percentile ≤ BMI-for-age < 85th percentile), "overweight" (85th percentile ≤ BMI-for-age < 95th percentile) and "obese" (BMI-for-age ≥ 95th percentile). The second level of the decision-tree algorithms ("assessment of the likelihood for the future manifestation of obesity in normal-weight children") is important because even if a child's current body weight is normal, this does not exclude the likelihood of the future manifestation of obesity, especially in children who are subject to the combined effect of obesity risk factors.
In an attempt to examine the likelihood of the future occurrence of obesity in normal-weight children due to the combined effect of individual obesity risk factors, including socio-demographic and perinatal ones, the CORE (Childhood Obesity Risk Evaluation) index [15] was included as another component of the DST. More specifically, the CORE index represents a simple, easy-to-use and valid score [16], which provides an estimation of the future likelihood of obesity manifestation as early as the age of 6 months. This estimation is achieved through the combined use and scoring of easily collected data on specific perinatal risk factors, such as maternal pre-pregnancy weight status, maternal smoking during pregnancy and the infant's weight gain during the first 6 months of life, as well as simple socio-demographic indices, namely the child's gender and the mother's educational level. In the third level ("evaluation of the most appropriate body weight management goal"), the decision-tree algorithms use the recommendations of the American Pediatric Association as a basis for the prevention and treatment of child and adolescent overweight and obesity [17]. More specifically, data collected on children's age and current weight status, as well as on the presence of obesity-related comorbidities (i.e., hyperglycaemia, insulin resistance, dyslipidaemia, hypertension) in children and of obesity in one or both parents, are combined to inform each one of the following weight management pathways: (i) body weight maintenance, which aims at the progressive reduction of BMI due to the increase in height stemming from children's growth, or (ii) body weight loss, whenever this is deemed appropriate, such as in cases where comorbidities and/or parental obesity co-exist with childhood obesity. Following the evaluation of the most appropriate weight management goal, the fourth level of the decision-tree algorithms ("estimation of children's dietary energy and macronutrients intake needs") is necessary to facilitate weight maintenance or weight loss as well as children's growth. The mathematical formulas provided by the Institute of Medicine (IOM) for infants, children and adolescents [18] were used to assess estimated energy requirements (EER). After the estimation of dietary energy intake requirements, the DST calculates the percent distribution of energy into macronutrients, within the Acceptable Macronutrient Distribution Ranges (AMDRs) proposed by the IOM for carbohydrates, fat and protein for infants, children and adolescents [18]. In the fifth level ("personalised diet and lifestyle optimisation advice"), the decision-tree algorithms analyse all the aforementioned data and deliver a report providing the assessment of the examined child, as well as body weight, diet and lifestyle recommendations that support the decisions of health professionals. The report includes (a) the assessment of the child's current weight status and the need for body weight maintenance or loss, (b) the assessment of the likelihood for the future manifestation of obesity in normal-weight children, (c) the child's total dietary energy requirements based on the anticipated body weight management (i.e., weight maintenance or loss) target, (d) the child's dietary needs in carbohydrates, total fat and protein, (e) personalised meal plans, as well as (f) diet and lifestyle optimisation recommendations, tailored to the specific needs and weight management goals set for each child.
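Levels 1 and 4 translate directly into code. A minimal sketch in Python, assuming a BMI-for-age percentile is already available from the WHO reference tables (the percentile lookup itself is omitted here); the EER equation shown is the commonly cited IOM equation for boys aged 9-18 years, and its constants and physical-activity coefficient are reproduced from memory, so they should be verified against the IOM report [18]:

```python
def weight_status(bmi_percentile: float) -> str:
    """Level 1: categorise weight status from the WHO BMI-for-age percentile,
    using the cut-offs stated in the text (5th / 85th / 95th percentiles)."""
    if bmi_percentile < 5:
        return "underweight"
    if bmi_percentile < 85:
        return "normal weight"
    if bmi_percentile < 95:
        return "overweight"
    return "obese"

def eer_boys_9_to_18(age_y: float, weight_kg: float, height_m: float,
                     pa_coeff: float = 1.13) -> float:
    """Level 4: estimated energy requirement (kcal/day) for boys aged 9-18 y,
    per the IOM equation (constants assumed, verify against the source);
    pa_coeff encodes the physical activity level (1.13 = 'low active')."""
    return 88.5 - 61.9 * age_y + pa_coeff * (26.7 * weight_kg + 903.0 * height_m) + 25.0

print(weight_status(91.0))                          # -> overweight
print(round(eer_boys_9_to_18(10.0, 40.0, 1.42)))    # illustrative inputs only
```

Separate IOM equations exist for girls and for younger children, so a full level-4 implementation would dispatch on age and gender before applying the corresponding formula.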
The recommendations include practical advice to the family on how (i) to achieve an energy- and nutrient-balanced diet, via an increase in the consumption of foods that are rich sources of dietary fibre and complex carbohydrates and a reduction in the consumption of foods that have a high content of simple sugars, total and saturated dietary fat, cholesterol and sodium, (ii) to become more physically active, (iii) to reduce sedentary activities and (iv) to improve children's sleep patterns [19]. Operational Components of the DST The DST comprises two operational components, namely the data entry and the data processing component. Regarding data entry, paediatric healthcare professionals collect information on the child's gender and birth date and conduct anthropometric measurements of body weight and recumbent length or standing height (depending on the child's age). Healthcare professionals also collect perinatal, socio-demographic and parental data, as well as some additional information on characteristics related to the child. In terms of perinatal factors, data are collected on maternal pre-pregnancy body weight (in kg) and maternal smoking habits during pregnancy, while the child's health record is used to copy information with regard to the child's weight (in kg) at birth and at six months of age. Regarding socio-demographic and parental data, information is collected on the self-reported mother's educational level (in years of education), and on the measured mother's and father's body weight (in kg) and height (in cm). Furthermore, healthcare professionals use a set of validated questions [20] to collect appropriate data that allow them to categorise the child's physical activity level into light (<4 METs), moderate (4-7 METs) or vigorous (>7 METs). Lastly, information on the presence of obesity-related comorbidity indices, such as insulin resistance, dyslipidaemias and hypertension, is also collected, either based on the child's physical examination or based on biochemical or clinical indices from the child's medical record that is available to the paediatric healthcare professionals. As far as data processing is concerned, all data are uploaded to the DST, which processes them and extracts a report with the child's assessment and the personalised diet and lifestyle optimisation recommendations. More specifically, the DST uses the birth and examination dates to calculate the child's age (in months and years); it then calculates the child's BMI (in kg/m2) and consequently estimates the child's weight status, through categorisation into underweight, normal-weight, overweight or obese. In normal-weight children, the DST also calculates the CORE index score, based on which children with a higher (i.e., CORE index score ≥ 4) likelihood of obesity manifestation in childhood or adolescence are identified [16]. In addition, the DST calculates the estimated dietary energy requirement (in kcal per day) for the child, so as to achieve the desired body weight management (i.e., weight maintenance or loss) goal, while relevant calculations are also made with regard to dietary protein, carbohydrate and fat needs (in grams per day). Furthermore, the DST processes the data uploaded for the parents, calculating parental BMI (in kg/m2) and categorising parents as non-obese or obese (i.e., BMI > 30 kg/m2).
Finally, the DST proposes diet and lifestyle optimisation advice recommendations for the child and/or the entire family (Supplementary Table S1 provides examples of the recommendations), as well as personalised weekly meal plans adjusted to the estimated energy requirements calculated for each child (Supplementary Table S2 provides examples of the meal plans). Personalised Lifestyle Optimisations Recommendations and Weekly Meal Plans The DST follows five steps dictated by the decision tree algorithms (Supplementary Figure S1 provides the relevant steps) to propose personalised lifestyle optimisation recommendations and weekly meal plans. In step 1, children are categorised based on their BMI into normal-weight, overweight or obese, while in step 2 the CORE index score is calculated for normal-weight children. In normal-weight children with a lower likelihood for the future manifestation of obesity, the DST proposes diet and physical activity recommendations, which support the maintenance of normal body weight and growth (recommendation 1). In step 3, the DST focuses on normal-weight children with a higher likelihood for the future obesity manifestation and evaluates the co-existence of clinical disorders (i.e., hyperglycaemia, insulin resistance, dyslipidaemia and/or hypertension). In normal-weight children with no clinical disorders and with non-obese parents, the DST advises health professionals to provide recommendation 1 (i.e., similar to step 2 above). In normal-weight children with no clinical disorders but with at least one obese parent, the DST advises health professionals to provide specialised recommendations, aiming to improve diet and physical activity habits for the entire family (recommendation 2). In normal-weight children with at least one clinical disorder but with non-obese parents, the DST provides recommendations, aiming at maintaining the child's normal body weight, but also delivering practical advice that supports the consumption of foods rich in dietary fibre and complex carbohydrates, but simultaneously the reduction in the consumption of foods high in simple sugars, total and saturated fat, dietary cholesterol and sodium (recommendation 3). Finally, in normal-weight children with at least one clinical disorder and with at least one obese parent, the DST provides recommendations targeting the entire family and aiming to improve physical activity and dietary habits for all family members (recommendation 4). The DST also proposes a periodic re-evaluation every 6 months for high-risk normal-weight children with at least one clinical disorder and/or at least one obese parent and every 12 months for children with no clinical disorders and/or non-obese parents. In step 4 the DST focuses on overweight children. In overweight children with no clinical disorders and with non-obese parents, the DST advises health professionals to provide recommendation 1, but also an isocaloric weekly meal plan, aiming to maintain the child's body weight (meal plan 1) and consequently to progressively decrease its BMI (as the child grows and height increases), ideally below the 85th percentile. In overweight children with no clinical disorders and at least one obese parent, the DST provides recommendation 2, that targets the entire family, as well as the isocaloric meal plan 1, which aims for the maintenance of the child's body weight. 
In overweight children with at least one clinical disorder and with non-obese parents, the DST proposes recommendation 3, as well as an isocaloric meal plan (meal plan 2), aiming for the maintenance of the child's body weight via the consumption of foods rich in dietary fibre and complex carbohydrates, but also with a lower content of simple sugars, total and saturated fat, dietary cholesterol and sodium, compared to meal plan 1. Finally, in overweight children with at least one clinical disorder and with at least one obese parent, the DST advises health professionals to provide recommendation 4 to the entire family, as well as meal plan 2. The DST also suggests a periodic re-evaluation every 3 months for overweight children with at least one clinical disorder and/or at least one obese parent and every 6 months for children with no clinical disorders and/or non-obese parents. If the re-evaluation shows no reduction of BMI below the 85th percentile, the DST follows the same process described under Step 4. If the re-evaluation shows a reduction of BMI below the 85th percentile, the DST follows the process described under Step 3. In step 5, the DST focuses on obese children. In the case of 2-5-year-old obese children, the DST follows exactly the same approach dictated by Step 4 for overweight children. The main differentiation occurs in 6-15-year-old obese children to whom mild weight loss is also prescribed. In this regard, in 6-15-year-old obese children with no clinical disorders and at least one obese parent, the DST targets the family and proposes recommendation 2 and a hypocaloric meal plan (meal plan 3). In 6-15-year-old obese children with at least one clinical disorder and non-obese parents, the DST proposes recommendation 3, as well as a hypocaloric meal plan (meal plan 4), via the consumption of foods rich in dietary fibre and complex carbohydrates, but also the decrease in the consumption of foods rich in simple sugars, total and saturated fat, dietary cholesterol and sodium. Finally, in 6-15-year-old obese children with at least one clinical disorder and with at least one obese parent, the DST targets the family and proposes recommendation 4 and a hypocaloric meal plan 4. The DST also proposes a periodic re-evaluation every 3 months for obese children with at least one clinical disorder and/or at least one obese parent and every 6 months for children with no clinical disorders and/or non-obese parents. If the re-evaluation shows no reduction of BMI below the 95th percentile, the DST follows the same approach described under Step 4 or Step 5, depending the child's age (i.e., 2-5 or 6-15 years old). If the re-evaluation shows a reduction of BMI below the 95th percentile, but BMI remains higher than the 85th percentile, the DST follows the pathway dictated by Step 4. If the re-evaluation shows a reduction of BMI below the 85th percentile, the DST proposes the process described under Step 3. Table 1 summarises the target population and the behavioural change goals and lifestyle optimisation advice provided by each level of recommendations through the DST. Randomised Controlled Trial to Assess the Effectiveness of the Computerised DST The effectiveness of the DST was assessed through a pilot randomised controlled intervention trial (RCT). The RCT was initiated on May 2018 and was conducted in the Endocrinology Department of the "P. and A. 
Kyriakou" Children's Hospital and in the Division of Endocrinology, Metabolism, and Diabetes of the "Aghia Sophia" Children's Hospital in Athens, Greece. Before the study initiation, a statistical power calculation indicated that a total sample size of 64 children (50% females) would be adequate to observe a mean BMI difference of 1.5 kg/m 2 between the two study groups (statistical power of 80% and level of statistical significance at 5%). Taking into account an attrition rate of 20%, a screening conducted in the premises of the aforementioned settings managed to recruit a total sample of 80 children, who were identified as eligible to be included in the RCT. The main eligibility criteria for inclusion in the RCT were children aged 6-12 years old, as well as overweight or obese status (i.e., BMI-for-age ≥ 85th percentile). Signed informed consent forms were obtained from all parents of eligible children, before their participation to the study. The study was conducted in accordance with the rules of the Declaration of Helsinki of 1975, revised in 2013 and the protocol was approved by the Bioethics Committee of Harokopio University, Athens (approval no.: 61/30-3-2018). Finally, the RCT was registered to clinicaltrials.gov (NCT03819673). Study Groups The 80 overweight or obese children that were eligible to participate in the RCT, were randomly and equally allocated to two study groups. Those children that were randomly allocated to the intervention group (IG), were examined by paediatricians (i.e., general paediatricians and paediatric endocrinologists) and a dietitian, who were all trained in the use of the DST. A manual of operation with detailed instructions on the use of the DST was prepared and distributed to medical practitioners prior to the commencement of the study. The dietitian also assisted the paediatricians to assess children's weight status, to set appropriate weight management goals and to provide personalised meal plans and/or recommendations to children and their families. In contrast, those families whose children were randomly allocated to the control group (CG), were provided with general recommendations of diet and physical activity and follow-up appointments were made for weight checks. The effectiveness of the intervention was evaluated through the collection of data at baseline and at a follow-up examination after 3 months. Data Collection: Parental Socio-Demographic and Anthropometric Characteristics Data on specific socio-demographic characteristics were collected from parents (most preferably from the mother) during the scheduled face-to-face interviews. All interviews were conducted by the paediatricians or the dietitian with the use of a standardized questionnaire. The socio-demographic data collected by parents included father's and mother's age, educational level (years of education) and occupation. In addition, parents also reported or had their body weight and height measured, from which BMI was calculated and used to categorise each parent based on their weight status. Dietary Intake Dietary intake data were obtained by the dietitian with the use of a 24-h recall of one typical day in terms of children's dietary intake and with a short food frequency questionnaire (FFQ), via interviews conducted with parents of children younger than 10 years of age or directly with children older than 10 years old. 
According to the data recorded from the 24-h recall, all study participants were asked to describe the type and amount of foods and beverages consumed during the previous day, provided that it was a typical day according to the participant's perception. To improve the accuracy of food description, standard household measures (cups, tablespoons, etc.) and food models were used to define amounts. At the end of each interview, the dietitian reviewed the collected data with the respondent in order to clarify entries, servings and possibly forgotten foods. Dietary intake data were analysed using the Nutritionist V diet analysis software (version 2.1, 1999, First Databank, San Bruno, CA, USA), which was modified to include traditional Greek dishes and recipes [18]. Furthermore, the database was updated with nutritional information on processed foods provided by independent research institutes, food companies and fast-food chains. In addition, a short, valid semi-quantitative FFQ [21] was used to collect data on children's dietary intake of foods representing all main food groups (i.e., fruits, vegetables, grains, dairy and protein foods). The FFQ included questions evaluating the consumption frequency of foods during the previous 3 months, with frequencies ranging from less than 1 portion per month to more than 4 portions per day.

Perinatal Data
Regarding perinatal data, mothers were asked to recall information on their pre-pregnancy body weight and their smoking practices during pregnancy. Additionally, mothers were asked to report their child's body weight and recumbent length at birth and at 6 months of age, as recorded in their child's health record.

Physical Activity Levels
Organised and leisure-time physical activities were assessed using a standardized questionnaire that was also used and validated in the multicentre Feel4Diabetes study, conducted in six European countries, including Greece [20]. Respondents reported the type, time (in minutes) and frequency (in times per week) spent by children on organised and/or leisure-time physical activities.

Anthropometric Data
Body weight was measured to the nearest 0.1 kg using a digital weight scale (Seca Alpha, Model 770, Hamburg, Germany). Subjects were weighed without shoes and in minimal clothing. Height was measured to the nearest 0.1 cm using a commercial stadiometer with subjects not wearing shoes, their shoulders in a relaxed position, their arms hanging freely and their head aligned according to the Frankfort plane. Weight and height were converted to BMI using Quetelet's equation (weight (kg)/height² (m²)), while the international BMI-for-age growth curves and the relevant reference values proposed by the WHO [14] were used to calculate BMI z-scores. Waist circumference (WC) was also measured to the nearest 0.1 cm with the use of a non-elastic tape and with the child standing, at the end of a gentle expiration. The measuring tape was placed around the trunk, at the level of the umbilicus, midway between the lower rib margin and the iliac crest.

Statistical Analysis
The normality of the distribution of continuous variables was assessed using the Kolmogorov-Smirnov test. Normally distributed continuous variables were expressed as mean values (+/− standard error of the mean: SEM) and categorical variables were reported as frequencies (%). 
Associations between continuous and categorical variables were examined using Student's t-test for normally distributed variables or the non-parametric Mann-Whitney test for variables that remained skewed even after logarithmic transformation. The associations between categorical variables were assessed using the chi-square (χ²) test. Repeated-measures ANOVA was used to evaluate the significance of the differences among study groups at baseline and at the 3-month follow-up (treatment effect), the significance of the change from baseline to follow-up observed within each group (time effect) and the treatment × time interaction effect. The between-group factor was the study group (i.e., IG compared to CG) and the within-group factor was the time point of measurement. Adjustments were also made for potential confounding factors. All reported p-values were based on two-sided tests. The level of statistical significance in all analyses was set at p < 0.05. SPSS version 24.0 (SPSS Inc., Chicago, IL, USA) was used for all statistical analyses.

Results
From the initial total sample of 80 children randomly allocated to the two study groups, 15 children (5 from the IG and 10 from the CG) could not be re-examined at follow-up. Figure 1 provides the flow diagram of the study according to the CONSORT guidelines. The attrition resulted in a total sample of 65 children (35 in the IG and 30 in the CG) with full data at baseline and follow-up. The descriptive characteristics of these children and their parents at baseline are summarised as mean (+/−SEM) or as percentages in Table 2. Regarding demographic indices, the mean age of children participating in the study was 9.7 (0.2) years, while the mean age of fathers and mothers was 46.1 (0.3) and 41.2 (0.3) years, respectively. Furthermore, 24.6% of mothers had <9 years of education, which is the compulsory education level in Greece, while 42.6% had a higher education of >12 years. Regarding behavioural indices, the mean dietary energy intake recorded for children was 1535.6 (81.3) kcal per day, with the percentage of energy coming, in descending order, from carbohydrates (47.4%), fat (35.4%) and protein (18.5%), while the mean daily time spent by children on physical activity was 21.6 (2.3) min. As far as perinatal indices were concerned, the mean birth weight and recumbent length of children were 3.2 (0.1) kg and 50.7 (0.4) cm, respectively, while mean maternal pre-pregnancy BMI was 24.9 (0.4) kg/m², with 15.5% of mothers being obese before conception. Regarding anthropometric indices, children's mean body weight, height, BMI and WC were 51.9 (1.9) kg, 142.4 (1.4) cm, 25.1 (0.5) kg/m² and 79.9 (1.5) cm, respectively, with 60.7% of children being obese. In addition, the mean BMI of fathers was 28.6 (0.4) kg/m², with 27.6% of them being obese, while the mean BMI of mothers was 27.3 (0.4) kg/m², with 31.6% of them being obese. Regarding differences between study groups, the mean BMI of mothers of children in the CG was higher than that of mothers of children in the IG (28.9 (1.2) vs. 26.0 (0.8) kg/m²; p = 0.045). No other statistically significant differences were observed between study groups. 
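As a rough illustration of the group-by-time analysis described in the Statistical Analysis subsection above, the sketch below fits a mixed-effects model with a random intercept per child and inspects the group × time interaction. This is one common way to analyse such repeated measures in Python; it is not the authors' SPSS repeated-measures ANOVA procedure, and the file and column names are hypothetical.

```python
# Sketch of a treatment x time analysis on repeated measures (not the authors'
# exact procedure). Expects long-format data: one row per child and time point.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bmi_long.csv")   # hypothetical columns: child_id, group, time, bmi

# Random intercept per child; the C(group):C(time) interaction term plays the
# role of the "treatment x time" effect described in the text.
model = smf.mixedlm("bmi ~ C(group) * C(time)", data=df, groups=df["child_id"])
result = model.fit()
print(result.summary())
```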
The mean (SEM) values at baseline and at the follow-up examination, as well as the mean (95% CI) changes from baseline to follow-up, for both study groups with regards to children's dietary intake of energy, macro- and micro-nutrients, are presented in Table 3. Regarding dietary energy intake, no significant differences were observed between groups in the changes from baseline to follow-up, despite the decrease observed in the IG and the increase in the CG. As far as macronutrient intake was concerned, the increase observed in the IG for dietary fibre intake (4.1, 95% CI: 1.4 to 6.8) was higher than the non-significant change recorded in the CG (p = 0.005). In addition, sucrose intake decreased significantly only in the IG (−4.6, 95% CI: −8.8 to −0.3), although no significant differences were observed between study groups. Regarding micronutrient intake, significant increases were observed in the IG for iron (2.6, 95% CI: 0.2 to 5.0), zinc (1.7, 95% CI: 0.1 to 3.3) and magnesium intake (36.6, 95% CI: 9.4 to 63.8). In the case of magnesium, the significant increase observed in the IG was also higher than the change observed in the CG (p = 0.011). Lastly, a significant decrease was observed for vitamin C intake in the CG (−28.4, 95% CI: −53.6 to −3.1), although no group difference was found with regards to the changes from baseline to follow-up. No other significant changes within groups or differences between study groups were observed in the dietary intake of the remaining macro- and micro-nutrients, despite the fact that some of the changes were more favourable in the IG than in the CG (e.g., for calcium, potassium, sodium, vitamin A and vitamin D). Table 4 depicts the changes in the consumption of specific food items and the relevant differences between the two study groups. More specifically, children in the IG had a higher mean consumption of cereals at follow-up than children in the CG (0.78 (0.11) vs. 0.43 (0.12), p = 0.041). In addition, the consumption of yogurt decreased significantly only in the CG (−0.23, 95% CI: −0.42 to −0.50), while the consumption of chocolates (−0.32, 95% CI: −0.52 to −0.11), cakes (−0.13, 95% CI: −0.23 to −0.02) and chips (−0.08, 95% CI: −0.13 to −0.03) decreased significantly only in the IG. The changes observed for the consumption of yogurt (p = 0.005) and chocolates (p = 0.025) were significantly different between the two study groups. The changes from baseline to follow-up, as well as the differences between study groups with regards to anthropometric indices, are presented in Table 5. Body weight and WC increased significantly only in the CG, by 1.4 kg (95% CI 0.3 to 2.6) and 2.1 cm (95% CI 0.7 to 3.5), respectively; height increased significantly in both study groups, by 2.0 cm (95% CI 1.5 to 2.5) in the IG and by 1.6 cm (95% CI 1.0 to 2.1) in the CG; while BMI and BMI z-score decreased significantly only in the IG, by 0.4 kg/m² (95% CI −0.9 to −0.1) and 0.2 standard deviations (−0.3 to 0.05), respectively. Nevertheless, these changes did not differ significantly between the two study groups.

Discussion
The current randomised controlled trial showed that a computerised DST, designed to assist paediatric healthcare professionals in providing personalised nutrition and lifestyle optimisation recommendations to overweight or obese children and their parents, can result in favourable changes to certain dietary intake and anthropometric indices in the children that received the intervention. 
The findings of this study support the growing, although still limited, body of evidence regarding the effectiveness of computerised or eHealth DSTs used in primary care settings for improving clinicians' performance on childhood obesity management outcomes [22,23]. Health professionals have the potential to influence large numbers of patients. Up to date there has been little evidence on how clinical practice can be enhanced in order to assist children (and their parents) in achieving appropriate to their weight status and sustainable weight management. The role of new technology, through the development of appropriate computerised or e-Health tools, seems to be the way forward. Although there are currently several computerised or e-Health tools designed to promote personalised advice on weight management in children, the vast majority of those do not involve health professionals in the implementation process [24]. Even in the case of e-Health tools that are targeting health professionals, in most of the occasions their usability has been described as difficult [22,24]. As such, in the HopSCOTCH Shared-Care Obesity Trial in Australia, the general practitioners (GPs) that used the relevant e-Health tool to deliver the personalised intervention to children and their parents, characterised implementation as challenging and usability of the tool as poor, mainly due to technical reasons, such as out-dated hardware, software installation difficulties and poor internet connections [22]. Despite the scarcity of tools supporting paediatric healthcare professionals on children's weight management, Taveras et al. [23,25] developed a computerised tool very similar to the DST developed in the current study. The effectiveness of this tool was examined in the "Study of Technology to Accelerate Research" (STAR), which was a three-arm, cluster-randomised controlled trial that was implemented in 14 paediatric offices in Massachusetts and on 800, 6 to 12-year-old, obese children [25]. After 12 months of intervention, the STAR trial reported a lower increase in BMI in children randomised in the study group that received the personalised advice via the use of the DST by paediatric healthcare professionals compared to the control group that received the usual care offered in the participating paediatric offices (mean adjusted BMI change difference: −0.51 kg/m 2 ; 95% CI −0.91 to −0.11) [23]. The aforementioned results of the STAR study agree with the findings of our study, which -although they included a smaller sample size of 65 children and had a shorter duration of 3 months-reported a mean adjusted BMI change difference of −0.6 kg/m 2 in the IG, compared to the CG. Similarly to the STAR trial, the effect of the intervention implemented in the current study on BMI also exceeded the mean adjusted change difference observed in other primary-care intervention trials, such as the "Live, Eat and Play" (LEAP) study (mean adjusted BMI change difference: −0.20 after 9 months) [26], the LEAP-2 study (mean adjusted BMI change difference: −0.11 after 12 months) [27] and the "Shared-Care Obesity Trial in Children" (HopSCOTCH) study (mean adjusted BMI change difference: (−0.10 after 12 months) [28]. In addition to BMI, the significant increase in waist circumference observed only in the CG is another indication of the effectiveness of the current RCT in controlling children's central body fat deposition more effectively than in the CG. 
The mean adjusted difference of −1.5 cm observed in this study, in the changes of WC between the IG and the CG, is similar to the relevant difference of −1.7 cm, observed in the HopSCOTCH study. However, considering that the HopSCOTCH study was also conducted with a greater sample size (i.e., 107 children) and had a longer duration (i.e., 12 months), this probably highlights the promising potential of the tools that were developed and tested in this study, with regards to the effective management of childhood obesity. The changes observed in the IG on BMI and WC, could be partly a reflection of the relevant favourable dietary changes recorded for the IG, compared to the CG. In this regard, the higher increase in dietary fibre intake in the IG than the CG and the significant decrease of dietary sucrose intake only in the IG are probably indicative of the effectiveness of the intervention in increasing the consumption of high-fibre foods that promote satiety and at the same time in decreasing the consumption of foods with a high sugar and, thus, high energy content. The aforementioned changes were also evidenced by the higher consumption of cereals at follow-up in the IG than the CG, as well as the significant decrease in the consumption of chocolates and cakes only in the IG. The above, in conjunction with the decrease in the consumption of chips in the IG, could possibly provide a basis that supports a lower dietary energy intake and consequently the favourable anthropometric changes observed for children in the IG. In line with the findings of the present study, the HopSCOTCH study also reported a higher diet quality score (reflected by the higher consumption of fruit, vegetables and water and by the lower consumption of fatty/sugary foods and non-diet sweet drinks) among 3-10-year-old obese children that received dietary and lifestyle optimisation advice for their weight management through a web-based software [28]. The fact that the HopSCOTCH study reported no significant differences between groups in the change of children's physical activity levels from baseline to follow-up, indicates that any favourable changes observed in this study on the examined anthropometric indices are mainly attributed to the improvement of dietary habits in the intervention compared to the control treatment arm. To some extent, the same also applies in our study, as physical activity levels did not differentiate between the IG and the CG (data not shown). Obesity in children has been strongly linked to important micronutrient insufficiencies, which is usually the outcome of a chronic, low-grade inflammation induced by the elevated levels of visceral adipose tissue [2]. As such, the DST was designed to assist children that received the personalised advice to achieve, not only a better management of their body weight, but also a higher intake of several essential micronutrients. This was evidenced by the significant increases in the dietary intakes of iron, magnesium and zinc observed only in the IG, which can correct potential obesity-related insufficiencies [3] and can subsequently support children's growth, motor and cognitive function [29][30][31]. In addition, since hypertension is another common comorbidity of obesity in children [32], the dietary recommendations provided to children (particularly to those diagnosed with elevated blood pressure) and their parents via the DST, were also aiming to reduce the use of table salt, as well as the consumption of foods that are rich sources of salt in the diet. 
The significant decrease in dietary sodium intake observed in the present study only in the IG provides evidence that this additional aim of the intervention was partially achieved. Our study has both strengths and limitations. The main strength was its randomised controlled design resulting in a homogeneity of children's characteristics at baseline in both treatment arms. Another strength was the use of the DST to guide clinicians on effectively managing children's elevated body weight, by accurately assessing their nutritional status and needs and by providing appropriate dietary and lifestyle optimisation advice to children and their families, encouraging family self-management of behavioural changes. As evidenced by the current and the STAR study [23], intervention approaches that involve self-guided behavioural changes by families may be better suited to sustain the intensity required for effective behavioural change than those that primarily rely on healthcare professionals to deliver the main bulk of the intervention [27]. In this context, the meal plans delivered by the health professionals to the families in the present study were only a guide for healthier eating and not a prescriptive pathway that was compulsory for the children and their families to follow. The emphasis was given mainly to the recommendations and how families can adopt and embed as many of these suggestions as possible to their daily life. Regarding additional strengths, according to qualitative feedback collected from the clinicians that used the DST, the paediatricians reported that the tool was quite easy to use (it runs with Microsoft Excel and/or Access) and represented a well-structured and quick procedure that helped them provide tailored advice to children and families. As far as limitations are concerned, although the study initially recruited 80 children, only 65 were examined at follow-up, resulting in a drop-out rate of approximately 19%. Nevertheless, the fact that only 5 out of 15 study participants that dropped out were originally allocated to the IG is an indication that the intervention was better accepted, increasing retention rates in the IG children and their families, compared to the CG that received only generic advice. Conclusions The current study showed that a computerised DST, designed to support paediatric healthcare professionals in the delivery of personalised diet and lifestyle optimisation advice to overweight or obese children and their families, resulted in improvement of the children's dietary intake and BMI. These changes are indicative of the dynamics of the tool in supporting clinicians to improve the effectiveness of care. Interventions of longer duration and larger sample sizes are needed to confirm the findings of our study and to demonstrate their long-term sustainability.
9,385.6
2019-03-01T00:00:00.000
[ "Medicine", "Computer Science" ]
A Reliable and Efficient Time Synchronization Protocol for Heterogeneous Wireless Sensor Networks

L-SYNC is a synchronization protocol for Wireless Sensor Networks which is based on larger-degree clustering, providing efficiency in homogeneous topologies. In L-SYNC, the effectiveness of the routing algorithm for the synchronization precision of two remote nodes was considered. Clustering in L-SYNC is based on larger-degree techniques. These techniques reduce cluster overlapping, resulting in the routing algorithm requiring fewer hops to move from one cluster to another remote cluster. Even though L-SYNC offers higher precision compared to other algorithms, it does not support heterogeneous topologies and its synchronization algorithm can be influenced by unreliable data. In this paper, we present the L-SYNCng (L-SYNC next generation) protocol, which works in heterogeneous topologies. Our proposed protocol is scalable in unreliable and noisy environments. Simulation results illustrate that L-SYNCng has better precision in synchronization and better scalability.

Introduction
In recent years, wireless sensor networks have been used in a wide range of applications including the oil industry, medical services and military services. They can be used in such environments to collect data on the movements of objects, to measure the speed and flow direction of oil spills, or to control and track goods in a warehouse. Clustering can be used to implement wireless sensor networks and has advantages such as extending the lifetime of the network, decreasing energy consumption, reducing routing overhead, and simplifying route-path calculation. Selecting less overlapped clusters in wireless sensor networks results in better performance for high-level network functions such as routing, query processing, data aggregation and broadcasting [1]. In the past few years, several algorithms have been suggested for the time synchronization of sensor networks. In this paper, we propose a time synchronization protocol for heterogeneous and homogeneous sensor network topologies. As we use the convex hull synchronization algorithm between sensors, the result is better efficiency in unreliable, noisy environments.

The rest of the paper is organized as follows: in Section 2, we investigate earlier work and several clustering methods. A comparison between convex hull and regression techniques is presented in Section 3. In Section 4, the proposed protocol is discussed. Experimental results are given in Section 5. The last section concludes this paper by outlining future work.

Related Work
In this section, we mention some recent synchronization algorithms and present an overview of common clustering methods. Generally, time synchronization protocols are categorized into two main techniques:
1) Synthetic: In this technique, time estimations are made several times to get the local time of a node and eventually generate a function for each node. The more data, the more precise the approximation.
2) Non-synthetic: In this technique, a single sample of time estimation (less overhead) is used as the foundation of synchronization. Essentially, this technique is faster but less precise than the synthetic technique.
Table 1 shows these techniques and the related algorithms [2]. 
In the following, we explain prevalent time synchronization protocols that use the techniques mentioned above:
- PCTS: This protocol uses an ID-based method (passive) and considers node clustering [7]. The cluster head periodically gathers the local clocks of its cluster members and computes the average. Afterward, the cluster head broadcasts the average time.
- CHTS: In this protocol, nodes can change their radio domain [8]. Some of the nodes are high-performance nodes and others are low-performance nodes. This protocol also uses an ID-based method for clustering the nodes. Cluster heads are selected among the high-performance nodes. In this protocol, the cluster heads are first synchronized with the reference node using a pair-wise technique, and then each cluster head announces the time to all its cluster members.
- SLTP: This protocol uses an ID-based method for node clustering [11]. The cluster head sends its local time to the cluster members at specified time intervals. Using the linear regression method, each cluster member calculates the offset and the velocity of change of its clock with respect to the cluster head clock. The SLTP method is the same as RBS in precision; however, for wide areas and long-lived clusters, SLTP operates more efficiently.

Three criteria are used to select the cluster head (CH):
1) ID-based method: This method assigns a unique ID to each node. One strategy is to select the node with the lowest ID as the cluster head.
2) Degree-based method: In this method (the degree of a node is the number of its neighbors), nodes with higher degrees can be selected as cluster heads [13]. These methods attempt to minimize the number of cluster heads, minimizing cluster overlap. Fewer clusters and less overlap decrease the channel competition between clusters and also improve the algorithm's efficiency [1].
3) Weight-based method: In this method, several parameters may be considered for CH selection. These parameters include remaining energy, degree, dynamicity, and average distance to neighbors [10,14].

Convex Hull vs. Linear Regression
When a message is exchanged between a pair of nodes, the receiving and sending times are not reliably comparable because the clocks of the two nodes are not synchronized. By the principle of causality, the reception time must be after the sending time. This constraint is used to compute the clock drift between two nodes.

Two proposed synchronization algorithms are Linear Regression and Convex Hull [15]. Both algorithms try to estimate a linear conversion function between the clocks of a pair of nodes. The drift and offset of the two clocks are extracted from the linear function. In a two-dimensional space, based on the timestamps of nodes A and B, the Linear Regression algorithm tries to find a fitted line among the points. Each point impacts the position of the fitted line. In the synchronization process, network latency and related problems between two nodes cause erratic, delayed time values. Ideally, these points should not influence the fitted line and they should be ignored in the calculation to increase synchronization accuracy. The Convex Hull algorithm considers the minimum sent timestamps and the maximum received timestamps. It finds the area that has minimum latency and ignores far points. Hence the estimated line is more accurate than with Linear Regression. 
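To make the causality constraint above concrete, the following sketch bounds the clock offset between two nodes from a set of two-way message exchanges. It is a deliberately simplified reading of the idea, assuming a constant offset and no drift; the convex hull technique used later in the paper bounds a whole line (drift and offset) instead of a single value. All names are illustrative.

```python
# Bounding the clock offset of node B relative to node A from the causality
# constraint "a message is received after it is sent". Simplified sketch:
# constant offset, no drift.

def offset_bounds(a_to_b: list[tuple[float, float]],
                  b_to_a: list[tuple[float, float]]) -> tuple[float, float]:
    """a_to_b: (send time on A's clock, receive time on B's clock) pairs.
    b_to_a: (send time on B's clock, receive time on A's clock) pairs.
    Returns (lower, upper) bounds on offset = clock_B - clock_A."""
    # A -> B: t_recv_B = t_send_A + offset + delay, delay > 0  =>  offset < t_recv_B - t_send_A
    upper = min(recv - send for send, recv in a_to_b)
    # B -> A: t_recv_A = t_send_B - offset + delay, delay > 0  =>  offset > t_send_B - t_recv_A
    lower = max(send - recv for send, recv in b_to_a)
    return lower, upper
```

Any value inside the returned interval is consistent with every observed exchange; the interval midpoint is a common point estimate.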
Proposed Method
In our proposed method, the synchronization is performed between cluster members and the cluster head. It is not necessary for cluster members to exchange and analyze synchronization data. However, each pair of nodes (within the same cluster or even in two different clusters) may be synchronized if needed. The synchronization is not affected by the fact that nodes may differ in strength, ability and radio domain. Each node is able to change its role from cluster head to cluster member and vice versa. The proposed algorithm does not change the clock time of the nodes; instead, the clock offset and clock skew of each node are calculated with respect to the cluster head clock. To compare synchronization accuracy, the local clocks of nodes in different clusters are compared. The proposed algorithm proceeds in two phases: configuration and synchronization. We explain these phases in the next sections. Figure 1 illustrates the pseudo-code of the L-SYNCng protocol.

Configuration Phase
As mentioned, the SLTP protocol uses a passive clustering method for homogeneous and heterogeneous topologies [11]. L-SYNC is only efficient in homogeneous environments. In this work, we have applied strong clustering methods such as DCA (weight-based) and ACE (degree-based) to our model (L-SYNCng) in order to address the L-SYNC shortcomings in heterogeneous distributions. After investigating the mentioned clustering methods, we have retained the method providing the better results. We explain the ACE and DCA clustering methods in the following.

The ACE algorithm [13] is based on adjacency degree. This algorithm results in highly uniform clustering and achieves an efficient, nearly hexagonal cluster topology. Using self-organizing characteristics within clusters, the algorithm creates well-separated clusters. Indeed, results show that clusters overlap less than with other algorithms. ACE has two steps: spawning new clusters and migrating existing clusters. To prevent collisions, each node chooses a random interval. The algorithm is iterative. When a node's turn comes, it starts processing to determine its role. At the beginning, each node is in the unclustered condition, so each node starts to calculate its number of loyal followers (denoted l). A loyal follower is a neighbor which is a member of at most one cluster. In the initiation phase, this number is equal to the number of unclustered neighbors of the node. As clustering proceeds, each node, for example node A, knows the time elapsed since the beginning of the protocol (denoted t). Afterward, node A computes the cluster spawning threshold function f_min(t); if its number of loyal followers l is larger than f_min(t), node A can spawn a new cluster. Each node executes the protocol for at least a time CI, where C is the desired average number of iterations per node and I is the expected length of each iteration. The f_min(t) function is an inverse exponential. At the beginning, it is approximately equal to the average number of neighbors in the graph. The equation is as follows:

f_min(t) = (e^(−k1·t/(CI)) − k2) · d

In this equation, t is the elapsed time from the start of the protocol, CI is the duration of the protocol, and d represents the average number of neighbors in the network. This average is computed in a pre-processing step. The constants k1 and k2 determine the shape of the exponential function. 
The algorithm designers empirically selected k1 = 2.3 and k2 = 0.1 to obtain a good compromise between clustering quality and execution time. Using these values, f_min starts at 0.9d and decreases to zero at the last iteration. This ensures that each remaining unclustered node chooses itself as a cluster head at the end of the protocol. When a node is already a cluster head, in the following iterations it checks whether neighbor nodes are better candidates as cluster head. The node polls all its neighbors to find the best candidate for being cluster head. Therefore, it sends a POLL message to all neighbors. The best candidate is the one with the largest potential number of loyal followers in its neighbor set. This means that each node which receives a POLL message starts counting the neighbors that are unclustered or are only part of the cluster headed by node A. By counting loyal followers and excluding nodes that are in two or more overlapping clusters, the best candidate node generally provides the least overlap with other clusters. If the best candidate to become cluster head is node A itself, then A does nothing. If the best candidate is another node (B), A migrates to the new cluster head B. A performs this migration by propagating a PROMOTE message to node B. When receiving the PROMOTE message, B propagates a RECRUIT message to form a cluster with A's cluster ID.

The Distributed Clustering Algorithm (DCA) [16] is a weight-based algorithm. In this approach, a node decides to become a CH or to join a cluster depending on information from its neighbors that are at one-hop distance. This technique is essentially iterative.

In iterative clustering techniques, a series of nodes wait for a specific event, and other nodes decide their own role (for instance, to become a CH or not). In DCA, before deciding, a node waits until all its neighbors with a higher weight have made their decision and have either become CHs or joined existing clusters. Nodes that have the largest weight among their one-hop neighbors will be selected as CHs. A problematic issue in most iterative approaches is that the convergence speed depends on the network diameter (the path that includes the largest number of hops). In a two-dimensional field with n distributed nodes, the DCA algorithm needs O(√n) iterations, i.e., a number of iterations on the order of the network diameter, to finalize the solution. Generally, probabilistic approaches for clustering ensure rapid convergence and provide desirable features such as balanced cluster sizes. This approach lets each node activate independently to decide its role within a clustered network, while keeping the message overhead low [13].

In the following, we explain how the mentioned clustering methods are used in L-SYNCng under heterogeneous topologies. Figures 2-3 illustrate the execution of the ACE and DCA algorithms for 100 nodes distributed randomly (the path between nodes 0 and 99 is considered). As Figure 2 illustrates, after execution of the ACE algorithm, the clusters overlap little. In this execution, 13 nodes are selected as cluster heads. Once clustering has been done, the routing algorithm was executed to find a route from node 99 to node 0. After execution of the DCA algorithm, as Figure 3 shows, 14 nodes are selected as cluster heads. Once clustering has been done, the routing algorithm was executed to find a route from node 99 to node 0. 
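For concreteness, a minimal sketch of the ACE spawning test is given below, using the threshold function f_min(t) and the constants k1 = 2.3 and k2 = 0.1 quoted above. The function and parameter names are illustrative and not taken from the ACE implementation.

```python
# Sketch of the ACE spawning test using the threshold function given above:
# f_min(t) = (e^(-k1*t/(C*I)) - k2) * d, with k1 = 2.3 and k2 = 0.1.
import math

K1, K2 = 2.3, 0.1

def f_min(t: float, protocol_duration: float, avg_degree: float) -> float:
    """Cluster-spawning threshold: starts near 0.9*d and decays to 0 at t = C*I."""
    return (math.exp(-K1 * t / protocol_duration) - K2) * avg_degree

def should_spawn(loyal_followers: int, t: float,
                 protocol_duration: float, avg_degree: float) -> bool:
    """An unclustered node spawns a new cluster when its loyal-follower count
    exceeds the current threshold."""
    return loyal_followers > f_min(t, protocol_duration, avg_degree)
```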
To compare the times of nodes 0 and 99, 8 hops are required, where the conversion of timestamp information based on Equation (2) takes place. The results of executing the two methods, passing through the hops from node 99 to node 0, are shown in Table 3. As the results of the DCA algorithm illustrate, the number of hops remains the same. As this algorithm needs fewer time conversions than ACE, better accuracy can be achieved.

Based on the above comparison, we can conclude that DCA is a better clustering algorithm for heterogeneous topologies. Therefore, this algorithm is used by L-SYNCng in heterogeneous topologies.

Synchronization Phase
In this phase, each cluster head starts by broadcasting a synchronization packet including its identity number and local time. Each cluster member receives this packet and sends an acknowledgment to the cluster head. The cluster head waits to receive all ACK messages, as shown in Figure 4. Thus four timestamps (t_cs, t_mr, t_ms, t_cr) are generated for each ACK. If a cluster member replies immediately to the cluster head, t_mr would be equal to t_ms. A cluster member can delay as long as it wants. The precision will decrease if the delay between t_cs and t_cr increases. The cluster head has t_cs, t_ms and t_cr. To determine t_mr it is enough to know the minimum delay between t_mr and t_ms. The minimum delay can be estimated by the cluster head for each cluster member at the end of broadcasting, based on the history of each cluster member's behavior [17].

The cluster head will then broadcast the next synchronization packet. After sending m packets, the CH will derive an equation of the form Y = aX + b, where a and b are specific to each cluster member. Eventually, the cluster head sends the corresponding a and b to each cluster member.

As mentioned, Figure 5 shows that the convex hull technique can be used to derive lower and upper bounds on the local time of a remote node. In this case, each cluster head can generate a two-dimensional graph for each cluster member after sending m packets. The x-axis and y-axis are the local times of the cluster member and the cluster head, respectively. Thus, each cluster member has two clouds formed by lower-bound and upper-bound samples. Unlike linear regression, which tends to average all the individual samples, this technique ignores average values and accounts for the samples with minimal or maximal error [2,14].

Table 4 shows the number of messages used in the synchronization phase for SLTP, L-SYNC and L-SYNCng, where m, c and n indicate the number of synchronization packets, the number of cluster heads and the number of cluster members, respectively. It is obvious that the linear regression used in SLTP and L-SYNC requires fewer messages. Another solution that can be discussed is for the cluster members to start sending the synchronization packets. Figure 6 shows this solution. With this solution, each cluster member sends m synchronization packets to the cluster head and then receives m ACK messages. However, the number of messages in this method is 2*m*n and it has more overhead than the first solution, although it has better scalability.

Between clusters, there are some nodes that receive timing packets from more than one cluster head; these are called gateways. Synchronization can be performed periodically at specific time intervals. However, if a node needs to be synchronized within these time intervals, it can broadcast a message requesting synchronization. 
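A minimal sketch of deriving the per-member relation Y = aX + b from m timestamp samples is shown below. The least-squares fit corresponds to the linear-regression option; the second function keeps only the lowest-latency samples before fitting, which is in the spirit of the convex-hull idea of ignoring delayed outliers but is not the exact hull construction from the paper. All names are illustrative.

```python
# Sketch of fitting the per-member linear relation from m timestamp samples.
import numpy as np

def fit_linear(x_ch: np.ndarray, y_cm: np.ndarray) -> tuple[float, float]:
    """Ordinary least-squares fit of member time (Y) against cluster-head time (X)."""
    a, b = np.polyfit(x_ch, y_cm, deg=1)
    return a, b

def fit_low_latency(x_ch, y_cm, round_trip_delay, keep_fraction=0.5):
    """Fit only the samples with the smallest measured round-trip delay,
    discarding the delayed outliers that would bias a plain regression."""
    order = np.argsort(round_trip_delay)
    k = max(2, int(len(order) * keep_fraction))
    idx = order[:k]
    return fit_linear(np.asarray(x_ch)[idx], np.asarray(y_cm)[idx])
```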
Any cluster head that receives this message will start to send its local time and ID. In the example shown in Figure 7, after executing the algorithm in cluster heads CH1 and CH2, the nodes belonging to each cluster head will receive their a and b. If two nodes such as CM1 and CM2, which are members of the same cluster, want to communicate with each other, it is sufficient to send their a and b parameters to each other, and using the following equation they can convert their clocks. With the member clocks expressed relative to the cluster head clock as t_CM1 = a1·t_CH + b1 and t_CM2 = a2·t_CH + b2, the clock of CM1 is converted into the clock of CM2 as:

t_CM2 = (a2/a1)·(t_CM1 − b1) + b2    (2)

In a case where two nodes are not members of the same cluster, they can also be synchronized. For instance, if nodes CM1 and CM3 want to be synchronized, it is enough to calculate the a and b parameters, using Equation (2), along an appropriate route between nodes CM1 and CM3. We have proposed a routing algorithm between these nodes. Afterward, a clock conversion is done at each hop in the route path. Since the conversion error of each hop is added to the total error, the synchronization error increases with the number of hops. Consequently, clustering based on a larger number of neighbors helps to shorten the route between two nodes, in order to perform fewer conversions and reduce the synchronization error.

Results
To assess our synchronization protocol, we used the NS 2.31 simulator under the Linux operating system. We describe the simulation setup and configuration in detail in Subsection 5.1. Afterward, we selected SLTP [2] and L-SYNC [12] to compare our simulation results based on accuracy in terms of time and number of hops. We discuss our comparison in detail in Subsection 5.2.

Simulation Setup
To evaluate the proposed algorithm, it is required to consider several clocks, one for each node. To simulate several nodes' clocks on one system, a real-time system was used, so that for each node's clock one specific drift and one specific offset were selected. Drift and offset are determined by a random function between two identified values, max_drift and max_offset, such that the nodes' clocks are computed with the following equation:

node_time = current_time · drift + offset    (3)

In our simulations, the worst cases for drift and offset have been considered. Simulation parameters are shown in Table 5. When a node wants to be synchronized with another node, a packet is broadcast to form an optimized route between the two nodes. On the return route, the required conversions are done. In each execution turn, different routes between source and target are selected; each experiment is iterated 10 times and the results are averaged. Our simulation is done for two different topologies, heterogeneous and homogeneous. In the homogeneous topology we use 100 nodes in 1000*1000 square meters, where each node has a 100-meter range, in a regular layout; in the heterogeneous topology the configuration is similar except that the node coordinates are drawn uniformly at random.

Simulation Results and Comparison
We simulated our work in two environments: noisy and noiseless. Each environment was tested with two topologies: homogeneous and heterogeneous. Figure 8 depicts the average error versus simulation time in the noiseless homogeneous environment. It shows that as time passes the average error of L-SYNCng increases less than that of the others. Figure 9 depicts our second simulation, in the noiseless heterogeneous topology. It shows that as time passes the average error of L-SYNCng increases gradually, while for the other protocols it increases more rapidly. 
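The following sketch implements the conversion in Equation (2) under the same assumed convention (member time as a linear function of the cluster-head time) and chains one conversion per hop along a route, which is why the error accumulates with the number of hops. Function and parameter names are illustrative.

```python
# Sketch of the clock conversion in Equation (2), assuming t_CM = a * t_CH + b.

def convert(t_cm1: float, a1: float, b1: float, a2: float, b2: float) -> float:
    """Convert CM1's local time into CM2's local time via the shared CH clock."""
    t_ch = (t_cm1 - b1) / a1          # back to cluster-head time
    return a2 * t_ch + b2             # forward to CM2's clock

def convert_along_route(t_local: float,
                        hops: list[tuple[float, float, float, float]]) -> float:
    """Apply Equation (2) hop by hop; each hop carries (a1, b1, a2, b2).
    Each application adds its own conversion error to the total."""
    for a1, b1, a2, b2 in hops:
        t_local = convert(t_local, a1, b1, a2, b2)
    return t_local
```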
We have repeated the two previous simulations in a noisy environment, where packets may arrive at the destination with considerable delay. Figures 10-11 depict our simulation results. As mentioned, the synchronization algorithm of L-SYNC is based on the linear regression technique. Linear regression can be influenced particularly by unreliable data which are far from the fitted line. Figures 12-13 summarize the general behavior of L-SYNC and L-SYNCng in each different environment.

Conclusions and Future Work
L-SYNCng is a reliable and efficient protocol for time synchronization in Wireless Sensor Networks. This protocol uses weight-based and degree-based clustering in heterogeneous and homogeneous topologies, respectively. Using these clustering methods, L-SYNCng can reduce the number of hops in the time synchronization process of two specific nodes which are in different clusters.

Moreover, L-SYNCng uses the convex hull method to calculate the clock offset and skew in each cluster. Therefore, it is capable of computing skew and offset intervals between each node and its corresponding cluster head. In other words, it can estimate the local time of remote nodes in the future and in the past. To estimate the local time of remote nodes, a routing algorithm is first used and afterward a conversion is performed at each hop. Simulation results illustrate that the convex hull method can efficiently increase the synchronization accuracy in noisy environments. As dynamic sensor networks might be indispensable in the future, we are going to apply L-SYNCng to these environments, where nodes change their positions.

Figure 2. Using ACE as the clustering method in L-SYNCng. Figure 3. Using DCA as the clustering method in L-SYNCng. Figure 4. Synchronization packets between CH and CMs; CH starts the synchronization. Figure 7. Synchronization of two nodes of two far clusters. Figure 8. Comparison of L-SYNCng, L-SYNC and SLTP error versus time in homogeneous topology and noiseless environment. Figure 9. Comparison of L-SYNCng, L-SYNC and SLTP error versus time for heterogeneous topology and noiseless environment. Figure 10. Comparison of L-SYNCng, L-SYNC and SLTP error versus time for homogeneous topology and noisy environment. Figure 11. Comparison of L-SYNCng, L-SYNC and SLTP error versus time for heterogeneous topology and noisy environment. Figure 13. L-SYNC behavior in different environments. Table 1. Time synchronization techniques. RBS: In this protocol, the non-synthesized Reference Broadcasting method is used to compute the difference between nodes' offsets [3]. According to non-synchronous clock ticks, linear regression is used such that each node determines the best fitted line from its local time and its neighbor's local time. The slope of this line is the velocity of clock changes.
4,956
2010-12-31T00:00:00.000
[ "Computer Science", "Engineering" ]
Sketch Retrieval based on Qualitative Shape Similarity Matching: Towards a Tool for Teaching Geometry to Children

An approach for a query-by-sketch system based on qualitative shape information for image retrieval in databases is proposed and evaluated. The use of qualitative methods for shape description allows the gathering of semantic information from the sketches. The qualitative description and recognition of sketches are evaluated in order to verify that it is possible to use the proposed qualitative method for the development of a learning application for children.

Introduction
A sketch is a freehand drawing which is commonly employed to represent the essentials of an idea. Sketches are used every day in design, architecture, arts and software engineering, and also in non-technical situations such as providing orientation instructions in a city. Previous work was successful in using sketches as spatial abstractions to represent maps in geographic information [11] and in robot navigation [14]. In the literature, query-by-sketch approaches based on qualitative representations are popular for retrieval in geographic databases. Using a sketch containing spatial relations as a means of querying a geographic database was first proposed by Egenhofer [4]. Ferguson et al. [9] developed a sketch interface for military course-of-action diagrams, which supported queries using spatial relationships. Fogliaroni et al. [10] proposed several approaches to reduce the relation space and enable qualitative spatial queries in spatial databases to support query-by-sketch. Al-Salman et al. [1] developed an intuitive sketching tool for users to contribute and query information in disaster scenarios via their mobile devices. Since the schematic nature of sketches makes qualitative representation methods fit naturally, some approaches appear in the literature which use qualitative techniques to describe the shape of a sketch [16,15]. Kuijpers et al. [16] developed an algorithm for polyline (and polygon) similarity calculus based on the double-cross (DC) [12] orientation model, which was applied to query-by-sketch polyline databases and to the classification of terrain features. Gottfried [15] obtained sketches of images containing objects and river maps and used qualitative relations of orientation between the line segments of the contour of a shape, inspired also by the DC model, to calculate a similarity measure between the images. Thus, qualitative approaches have proved their effectiveness for image query-by-sketch retrieval. In this paper, a novel approach for query-by-sketch based on qualitative shape information for image retrieval in databases is proposed and evaluated. The proposed qualitative shape similarity model is not based on the DC orientation model, as the previous approaches are, but on the qualitative features of shape given by Falomir et al. [7], which have shown good performance in fields such as mosaic assembling [8] and icon retrieval [19]. A crucial benefit of using a qualitative approach to process the sketches is the possibility of gathering semantically rich information from the sketches, which can be exploited in useful ways. In order to describe and recognize a sketch, a similarity measure is presented which is used to compare a sketch against a drawing in the database and which is also able to detect the differences between the compared shapes. 
Moreover, it is worth noting that the qualitative descriptions presented can be translated into natural language, and a narrative description can be provided to an end-user for reading, or for listening to by means of a speech synthesizer program [6]. These advantages allow the use of qualitative shape description techniques for the implementation of a learning system to support the teaching of geometric shapes to children. The proposed approach may be implemented on an Android system and then be used on a tablet, embedded in an application used to teach children how to draw a geometric shape. The sketch made by a child can be qualitatively described and compared with the ones already described in a database, and the differences can be described using natural language in order to explain the differences between the target geometric shape and the sketch. In this context, the main aim of this paper is to present the techniques that make such an application feasible. The steps of the approach are summarized in Fig. 1. First, the sketch is qualitatively described and its description is matched against a database of images of drawings using a similarity calculus. The resulting list of similar images is then presented to the user both graphically and using an automatically generated natural language description. The remainder of this paper is organized as follows. The qualitative model for shape description used is outlined in Section 2. Section 3 explains how to obtain natural language descriptions from QSD specifications. The shape similarity calculus is given in Section 4. The scenario, the performed tests and the results obtained are shown in Section 5. Finally, conclusions and future work are drawn.

Qualitative Shape Description (QSD)
The Qualitative Shape Description (QSD) method [7] is based on the relevant points of the boundary of a shape. For bitmap images, this boundary is obtained using standard image segmentation algorithms, and then the slope of the pixels at the object boundary is analysed. For vectorial images, the relevant points are obtained by interpreting the drawing primitives. Each of these relevant points P is described by a set of four qualitative features, namely the kind of edge connection (EC), the angle (A) or type of curvature (TC), the compared length (L) and the convexity (C). The last two are defined as follows:
- Compared Length (L) of the two edges connected by P, described as: {much shorter (msh), half length (hl), a bit shorter (absh), similar length (sl), a bit longer (abl), double length (dl), much longer (ml)};
- Convexity (C), described as: {convex, concave}.
An example of the qualitative shape description of an object composed of 6 relevant points, which connect straight lines and curves and define different angles and lengths, is given in Fig. 2.

Qualitative Object Description in Natural Language (QODNL)
According to geometric principles, objects described by qualitative features are characterized by a set of three elements: [Name, Regularity, Convexity]. Regarding objects without curves, these elements are defined as follows. Name is given by the number of relevant points of the object (triangle, quadrilateral, pentagon, hexagon, heptagon, octagon, polygon); Regularity indicates whether the object has all the same qualitative angles and all the edges of similar length (regular), or not (irregular); and Convexity indicates whether the object has a concave angle (then it is concave) or not (then it is convex). Triangular objects are further characterized as right/obtuse/acute according to the kind of angles, and as equilateral/isosceles/scalene according to the relation of length between the edges. 
Quadrilateral objects are characterized more precisely as square, rectangle or rhombus depending on the compared length between the edges and on the kind of angles. The characterization of objects with curves is defined as follows. Name takes one of the following options depending on the object's properties: curved-polygon (it has at least one curvature-point and at least one line-line point), polycurve (all the relevant points are curvature-points or curve-curve, curve-line or line-curve points), circle (a polycurve with four relevant points, two of them defined as semicircular) and ellipse (a polycurve with four relevant points, two of them defined as points of curvature). Regularity: circles and ellipses are considered regular and other objects irregular. Convexity of objects with curves is defined in the same way as for objects with straight edges. A more detailed description can be found in [5]. In order to obtain a Qualitative Object Description in Natural Language, the qualitative descriptors defined by the QSD approach are used and organized in a context-free grammar (G) built on the following parameters: -V is an alphabet of non-terminal symbols; -Σ is an alphabet of terminal symbols (qualitative labels or words), disjoint with V; -P ⊆ V × (V ∪ Σ)* is the set of production rules; -QODNL ∈ V is the initial symbol of the grammar. The grammar G(QODNL) [6], simplified here to show only the features of shape (λ denotes the empty string), is as follows: QODNL → ObjID is a Regularity Convexity Name. ObjectQSD; ObjectQSD → Its shape has M RegularEdges defining Amplitude | Its shape has M relevant points. RPsQSD | λ; RegularEdges → equal edges | curves; EC → a line to a curve | a curve to a line; 2Curves → Point M joins two curves in a C and TC angle. The language generated by the G(QODNL) grammar describes objects at two levels of detail: (1) a sentence describing the main features of an object within the image, or (2) a detailed description including both the general details of the object and also all its features of shape: angles, curvature, length, etc. An illustrative example of the more general level of detail is given in Table 1. Qualitative Shape Similarity Freksa [13] determined that two qualitative terms are conceptual neighbours if "one can be directly transformed into another by continuous deformation". Therefore, the angles acute and right are conceptual neighbours, since an extension of the angle acute causes a direct transition to the angle right. Hence, Conceptual Neighbourhood Diagrams (CNDs) can be described as graphs containing: (i) nodes that map to a set of individual relations defined on intervals, and (ii) paths connecting pairs of adjacent nodes that map to continuous transformations, which can have weights assigned to them in order to establish priorities. For each of the features in QSD, a CND is defined in Fig. 3. The dissimilarity matrices in Tables 2-5 map the pairs of nodes in each CND to the minimal path distance between them. A Similarity between Qualitative Shape Descriptions (QSDs) As explained in Section 2, the qualitative shape of an object is described by means of all its relevant points (RPs). Therefore, in order to define a similarity measure between shapes, first a similarity between relevant points must be obtained. 
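Before turning to the similarity measure, the object-level characterization and the general level of the G(QODNL) grammar can be made concrete in code. The following is a minimal Python sketch for objects without curves; the encoding of a relevant point as a feature dictionary, the square example, and all function names are our own illustrative assumptions, not part of the original system.

```python
# A relevant point is a dict of its four qualitative features; a QSD is the
# ordered list of relevant points. The square below is a toy example.
square_qsd = [
    {"EC": "line-line", "A": "right", "L": "similar length", "C": "convex"}
    for _ in range(4)  # four identical corners
]

def characterize(qsd):
    """Object-level triple [Name, Regularity, Convexity] for objects
    without curves, following the rules in the text (simplified)."""
    names = {3: "triangle", 4: "quadrilateral", 5: "pentagon",
             6: "hexagon", 7: "heptagon", 8: "octagon"}
    name = names.get(len(qsd), "polygon")
    # regular = same qualitative angle and similar-length edges everywhere
    regular = len({(p["A"], p["L"]) for p in qsd}) == 1
    convex = all(p["C"] == "convex" for p in qsd)
    return [name,
            "regular" if regular else "irregular",
            "convex" if convex else "concave"]

def describe(obj_id, qsd):
    """Realize the general QODNL sentence:
    'ObjID is a Regularity Convexity Name.'"""
    name, regularity, convexity = characterize(qsd)
    return f"{obj_id} is a {regularity} {convexity} {name}."

print(describe("Object-1", square_qsd))
# -> Object-1 is a regular convex quadrilateral.
```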
To this end, given two relevant points, denoted by RP_A and RP_B, belonging to the shapes of the objects A and B respectively, a similarity between them, denoted by SimRP(RP_A, RP_B), is defined as one minus the weighted sum of the normalized feature dissimilarities, SimRP(RP_A, RP_B) = 1 − Σ_{i∈I} w_i · ds(i)/Ds(i), where ds(i) and Ds(i) denote the dissimilarity between the relevant points and the maximum dissimilarity with respect to feature i, obtained from the corresponding dissimilarity matrix, with I = {EC, A ∨ TC, C, L}. By dividing ds(i) by Ds(i), the proportion of dissimilarity between RP_A and RP_B related to each feature is obtained, which lies between 0 and 1. Furthermore, the parameter w_i is the weight assigned to this feature, and it holds that w_EC + w_A + w_L + w_C = 1, w_A = w_TC and w_i ≥ 0 for each feature. In order to compare two shapes A and B whose QSDs have the same number of relevant points (denoted by m), the similarity between A and B, denoted by SimQSD(A, B), is calculated from (1) as follows: fixing a correspondence between the relevant points of A and B (RP_A^i ↔ RP_B^i, i = 1, …, m), the similarities between the corresponding pairs of relevant points are calculated and combined into SimQSD(A, B). In general, if the numbers of relevant points of the shapes A and B are n and m respectively, and assuming without loss of generality that n ≥ m, then there are n − m relevant points of the A shape with no corresponding points in the B shape. Let C be the set of all possible ways (combinations) of choosing n − m relevant points of A. If c ∈ C, a new shape A_c is considered, given by all the relevant points of A minus the n − m relevant points of A selected by the combination c. Then A_c and B have the same number of relevant points and their similarity can be calculated as in the previous case. Thus, the similarity between A and B is obtained as the maximum of SimQSD(A_c, B) over all c ∈ C. More details and properties of this shape similarity calculus are given in [20]. Importance of the points of a sketch Let F = {F_i}_{i∈I} be a sketch set with I an index set, and let SimQSD : F × F → R+ be the similarity between two sketches defined from the Qualitative Shape Description. Let A ∈ F be a sketch with n relevant points, {RP_A^i}_{i=1}^n. Given a fixed point RP_A^i, a new sketch A_i is considered which is the same as the A sketch but without the RP_A^i point. Removing a point can create a large difference between QSD(A) and QSD(A_i) (see Table 6). Hence, a value s_A^i ∈ [0, 1] is defined to measure the dissimilarity between QSD(A) and QSD(A_i). This value is straightforward to interpret: if s_A^i is high, that is, close to one, then QSD(A) and QSD(A_i) are very different, so the elimination of the point RP_A^i has significantly modified the A sketch, which implies that the RP_A^i point is very important in the Qualitative Shape Description of A. If s_A^i is low, that is, close to zero, then QSD(A) and QSD(A_i) are not significantly different, which implies that the RP_A^i point is not very important in the QSD of A. Once the values {s_A^i}_{i=1}^n have been obtained, a weight for the RP_A^i point of the A sketch, denoted by w_A^i, is given by normalizing these values: w_A^i = s_A^i / Σ_{j=1}^n s_A^j. The weights can be interpreted in the same way as the values s_A^i, the only difference being that Σ_{i=1}^n w_A^i = 1. An example of these weights is given in Table 6. Cognitive Saliency of Each Qualitative Feature at a Relevant Point The value ds(i)/Ds(i) in (1) can be seen as the importance of changes in each feature of shape. Hence, from the dissimilarity matrices obtained from the CNDs, the following maximums (Ds(i)) are obtained: for Convexity, 1; for Edge Connection, 2; for Angle and Type of Curvature, 4; and for Length, 6. 
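To make the calculus concrete, the following is a minimal Python sketch of the similarity described above. The display equations dropped from this extraction are assumed to take the forms reconstructed in the text; in addition, the aggregation over matched points (best mean SimRP over cyclic alignments) and the definition of s_A^i as 1 − SimQSD(A, A_i) are our own assumptions, and the toy dissimilarity function is an illustrative stand-in for the CND path distances of Tables 2-5.

```python
from itertools import combinations

# Maximum dissimilarity Ds(i) per feature, as reported from the CNDs.
DS = {"EC": 2, "A": 4, "C": 1, "L": 6}
# Feature weights; must sum to 1 (w_A = w_TC is folded into "A" here).
W = {"EC": 0.25, "A": 0.25, "C": 0.25, "L": 0.25}

# Toy stand-in for the CND minimal-path distances of Tables 2-5:
# 0 for equal labels, otherwise half the feature maximum (at least 1).
toy_dissim = {f: (lambda a, b, m=m: 0 if a == b else max(m // 2, 1))
              for f, m in DS.items()}

def sim_rp(rp_a, rp_b, dissim=toy_dissim):
    """Assumed form of Eq. (1): one minus the weighted sum of the
    normalized per-feature dissimilarities ds(i)/Ds(i)."""
    return 1.0 - sum(W[f] * dissim[f](rp_a[f], rp_b[f]) / DS[f] for f in DS)

def sim_qsd_equal(a, b, dissim=toy_dissim):
    """Same number of relevant points: best mean SimRP over cyclic
    alignments of the two descriptions (our assumed aggregation)."""
    m = len(a)
    return max(sum(sim_rp(a[i], b[(i + k) % m], dissim) for i in range(m)) / m
               for k in range(m))

def sim_qsd(a, b, dissim=toy_dissim):
    """General case: drop every combination of n - m points from the
    larger shape and keep the best similarity, as in the text."""
    if len(a) < len(b):
        a, b = b, a
    n, m = len(a), len(b)
    return max(sim_qsd_equal([p for i, p in enumerate(a) if i not in c],
                             b, dissim)
               for c in combinations(range(n), n - m))

def point_weights(a, dissim=toy_dissim):
    """Point importance: s_i = 1 - SimQSD(A, A without point i)
    (assumed definition), normalized so the weights sum to one."""
    s = [1.0 - sim_qsd(a, a[:i] + a[i + 1:], dissim) for i in range(len(a))]
    total = sum(s) or 1.0
    return [v / total for v in s]

# One-step CND change per feature costs 1, so its normalized impact is
# 1/Ds(i): C = 1.000 > EC = 0.500 > A/TC = 0.250 > L = 0.167 -- exactly
# the priority ordering discussed in the next section.
print({f: round(1 / d, 3) for f, d in DS.items()})
```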
As the value assigned to each change is 1, this means that a change in each feature has a different importance in equation (1), and the following priorities among features are obtained: Convexity > Edge Connection > Angle/Type of Curvature > Compared Length. These priorities can be justified as being suitable for comparing shapes intuitively [7,5]. In Fig. 4 five shapes are shown (S1, S2, S3, S4 and S5) that exemplify these priorities. Convexity (C) is the feature that has the greatest priority because, when it changes, not only the boundary of the object changes, but also its interior (i.e. compare shapes S1 and S2, in which only the convexity of relevant point 2 changes). The Edge Connection (EC) is the second most important feature because it differentiates between curves and straight lines, which is also an important difference. For example, if we compare shapes S1 and S3, in which only the EC of relevant point 2 changes, we will see that they are more similar than S1 and S2, and than S2 and S3, in which both the EC and the C of point 2 are different. The next most important feature is the Angle or Type of Curvature, because it characterises the shape of an object in a more significant way than the lengths of the edges, which usually depend on the angle they define. If we compare S3 and S4, the most perceptible difference is that the Angle of point 2 is different, but the compared length between relevant points 3-4 and 4-0 is also different in both shapes, and this is less perceptible. Finally, note that it is also true that the more similar the number of relevant points between shapes, the higher the similarity, since S1-S4 are more similar to each other than any of them are to S5, which has one relevant point less. Detecting the Differences in Shape by Correspondences of Relevant Points The developed method, apart from calculating the similarity between two objects A and B with a different number of vertices, also finds the correspondence of as many equivalent vertices of both shapes as possible. Therefore this method is able to detect the differences between the two shapes [7]. An example where the presented approach detects the 'extra' relevant points of a shape intuitively is given in Fig. 5. Given the shapes Bone-1 and Bone-7, which have a similar shape, the calculation of the SimQSD provides the following results: the SimQSD between the shapes is 0.88. A high similarity is obtained since Bone-7 is exactly the same as Bone-1 with a bend in it. Image Retrieval using Query-by-Sketch In order to show the feasibility of the presented techniques, a prototype query-by-sketch tool has been implemented which: -Provides a user interface for simple sketching, which is suitable for future implementation in touch-based devices; -Uses the qualitative description and similarity approach presented to search a database for images similar to the sketch; -In addition to the resulting list of ranked images, presents an automatically obtained natural language description of each one, as well as of the sketch itself. The user interface of the application is shown in Fig. 6. The drawing area, where a user can draw an image by clicking and moving the mouse, is placed in the left panel of the interface. The tool provides facilities for loading and saving sketches in a variety of formats. The sketch made by the user is stored as an XML file in vector format (SVG). Clicking the button Compare and describe shows in the right panel the automatically obtained textual description of the sketch, and the images in the database in descending order of similarity to the sketch. 
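The ranking step itself can be sketched directly on top of the helpers defined earlier; the function name and the one-entry database below are hypothetical, for illustration only.

```python
def query_by_sketch(sketch_qsd, database, top_k=3):
    """Rank database drawings by SimQSD to the sketch (descending) and
    pair each hit with its natural-language description."""
    scored = sorted(((sim_qsd(sketch_qsd, qsd), name, qsd)
                     for name, qsd in database.items()),
                    key=lambda t: t[0], reverse=True)
    return [(name, round(score, 2), describe(name, qsd))
            for score, name, qsd in scored[:top_k]]

# Hypothetical one-entry database reusing the square example above.
print(query_by_sketch(square_qsd, {"Square-1": square_qsd}))
```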
Beside each image in the result list, the similarity between the sketch and the image appears, as well as its automatically obtained description in natural language. In order to test the effectiveness of the query-by-sketch approach, a database of 90 bitmap images is used, some of which are shown in Fig. 7. As a simplification, in this experiment the curves in the sketch are approximated to straight lines. The approach works equally well in two potentially problematic cases: open shapes, as shown in Fig. 8, and shapes whose segments are sloppily joined, as shown in Fig. 9. The experiments show that the application finds the most similar shape with a success rate of 90%. The algorithm also gets false positives, in the sense that the first image classified as most similar to the sketch is in fact not the most similar one; this is particularly the case if a figure has a strong semantics associated with it. However, in every test the most semantically similar image has always been positioned among the first three results, which is encouraging. Conclusions and Future Work The qualitative shape description scheme and the similarity calculus presented are promising approaches for calculating the similarity between a sketch and an image database. They serve, therefore, as a proof-of-concept for the idea of using a high-level qualitative representation as the basis for a learning application for children. Specifically, the aim is to develop a tablet application for teaching geometric shapes to children. However, as explained in the introduction, being able to describe and compare a sketch cognitively is only the beginning. To be able to fully address the issues involved in creating this learning application, further research is required. Our plans for future work include: -The extension of the presented application in order to be able to work with curves and not only straight lines. -The enhancement of the natural language generator in several ways; we are focusing on (i) enriching the generated language to be more suitable for a children-oriented application and (ii) supporting the description of differences between shapes. -The introduction of the concept of point importance into the similarity approach, and the evaluation of its effects. -The definition of an appropriate approach for teaching geometric shapes and its implementation in a tablet application. -Testing the results with real users.
4,634
2015-01-01T00:00:00.000
[ "Computer Science", "Education", "Mathematics" ]
Altered Treg Infiltration after Discoidin Domain Receptor 1 (DDR1) Inhibition and Knockout Promotes Tumor Growth in Lung Adenocarcinoma Simple Summary In Europe, seven out of eight patients with lung cancer die within 5 years after diagnosis. DDR1, a tyrosine kinase receptor, has emerged as a potential new therapeutic target for non-small cell lung cancer given its association with poor prognosis among affected patients. This study investigates the impact of DDR1 on tumor burden and immune cell infiltration into the tumor microenvironment. We found that pharmacological inhibition and knockout of DDR1 increased the tumor burden in an immunocompetent mouse model of lung adenocarcinoma. The absence of DDR1 reduced CD8+ cytotoxic T-cell infiltration but increased CD4+ helper and regulatory T-cell infiltration. Regulatory T cells, which promote tumorigenesis by suppressing the immune system, were also more common among The Cancer Genome Atlas (TCGA) lung adenocarcinoma patients with low DDR1 expression. These findings suggest that therapeutic inhibition of DDR1, in certain circumstances, might even have negative effects, although further studies are needed to confirm these findings. Abstract Lung cancer is the leading cause of cancer-related death worldwide. Discoidin domain receptor 1 (DDR1), a tyrosine kinase receptor, has been associated with poor prognosis in patients with non-small cell lung cancer (NSCLC). However, its role in tumorigenesis remains poorly understood. This work aimed to explore the impact of DDR1 expression on immune cell infiltration in lung adenocarcinoma. Pharmacological inhibition and knockout of DDR1 were used in an immunocompetent mouse model of KRAS/p53-driven lung adenocarcinoma (LUAD). Tumor cells were engrafted subcutaneously, after which tumors were harvested for investigation of immune cell composition via flow cytometry. The Cancer Genome Atlas (TCGA) cohort was used to perform gene expression analysis of 509 patients with LUAD. Pharmacological inhibition and knockout of DDR1 increased the tumor burden, with DDR1-knockout tumors showing a decrease in CD8+ cytotoxic T cells and an increase in CD4+ helper T cells and regulatory T cells. TCGA analysis revealed that low-DDR1-expressing tumors showed higher FoxP3 (regulatory T-cell marker) expression than high-DDR1-expressing tumors. Our study showed that under certain conditions, the inhibition of DDR1, a potential therapeutic target in cancer treatment, might have negative effects, such as inducing a pro-tumorigenic tumor microenvironment. As such, further investigations are necessary. Introduction With about 2.2 million new cases worldwide in 2020, lung cancer was the second most commonly diagnosed cancer, surpassed only by breast cancer [1]. In Europe, lung cancer has a 5-year survival rate of only 12.6% [2], suggesting that lung cancer patients have yet to benefit fully from the advances in cancer diagnostics and treatment. 
The tumor microenvironment (TME) contains not only cancer cells but also several other infiltrated non-malignant cells, like stromal cells or immune cells, which all play a certain role in tumorigenesis. In non-small cell lung cancer (NSCLC), CD45+ leukocytes account for over 50% of all viable cells present within the tumor [3]. However, these leukocytes have been found to exhibit not only anti-tumorigenic but also pro-tumorigenic phenotypes. To evade the immune system, tumors create an immunosuppressive microenvironment by attracting immunosuppressive immune cells like regulatory T cells (Tregs), which can impact treatment outcomes, especially immunotherapy. Tregs, which are usually known for maintaining immune homeostasis by suppressing the immune system's self-reactive responses, are the main tumor-promoting CD4+ helper T-cell subpopulation [4] and are associated with poor clinical prognosis in patients with NSCLC [5]. Discoidin domain receptor 1 (DDR1), a tyrosine kinase receptor, has been found to function as a sensor for the extracellular matrix (ECM) by regulating several cell functions, such as migration, adhesion, proliferation, cytokine secretion, and ECM homeostasis/remodeling. DDR1 is mostly expressed in epithelial cells, and its phosphorylation via collagen binding leads to the activation of various signaling pathways such as MAPK, integrin or Notch (reviewed in [6][7][8]). Nonetheless, several studies have suggested that DDR1 also exhibits collagen-independent activities [9]. In T cells, DDR1 expression has also been reported [10,11]; however, this expression is much lower compared to other cell types. Studies show that DDR1 is overexpressed in several malignancies, such as lung [12], breast [13], brain [14], and gynecological cancers [15]. Over the last couple of years, several studies have shown an association between DDR1 and poor prognosis in patients with NSCLC [5,16], highlighting the potential of DDR1 as a new therapeutic target. Recently, multiple studies on breast [17][18][19] and colorectal cancers [20] have provided initial evidence that DDR1 could affect T-cell infiltration into the TME, mostly due to its interaction with collagen. To date, however, no link between DDR1 and T-cell infiltration in NSCLC has been established. To investigate the role of DDR1 in T-cell abundance in NSCLC, we used an immunocompetent mouse model of KRAS/p53-mutated lung adenocarcinoma (LUAD). Pharmacological inhibition and knockout of DDR1 increased the tumor burden and altered the T-cell composition. Increased CD4+ T-cell and Treg infiltration and decreased CD8+ T-cell infiltration into DDR1-knockout tumors were observed. The Cancer Genome Atlas (TCGA) analysis of LUAD patients revealed that DDR1-low samples showed higher Forkhead box protein 3 (FoxP3, a Treg marker) expression than DDR1-high samples. DDR1-Knockout KP Cell Lines Using the CRISPR/Cas9 Lentivirus System DDR1 oligos (Supplementary Table S1) (Eurofins, Louisville, KY, USA) were first cloned into lentiCRISPR v2 (#52961; Addgene, Watertown, MA, USA) according to the "Target Guide Sequence Cloning Protocol" [21,22]. The plasmids were then transformed into Stbl3 bacteria (ThermoFisher Scientific, Waltham, MA, USA) by heat shock for 90 s at 42 °C. Bacteria were then cultured on agar plates with ampicillin (100 µg/mL; Sigma-Aldrich, Saint Louis, MO, USA) overnight at 37 °C. 
The next day, single clones were picked and cultured in 5 mL of LB medium (Sigma-Aldrich, Saint Louis, MO, USA) with 100 µg/mL ampicillin at 37 °C for 8 h with shaking. Thereafter, 1 mL of bacterial preculture was inoculated into 10 mL of LB medium with 100 µg/mL ampicillin and incubated overnight at 37 °C with shaking. On the next day, plasmids were isolated using the QIAGEN Plasmid Plus Mini Kit (Qiagen, Venlo, The Netherlands) and sequenced to confirm correct ligation (Eurofins, Louisville, KY, USA). Lentivirus was produced using Lipofectamine™ 3000 (ThermoFisher Scientific, Waltham, MA, USA) in HEK293T cells according to the manufacturer's instructions. The CRISPR plasmids described earlier were transfected together with the packaging and envelope plasmids psPAX2 (#12260; Addgene, Watertown, MA, USA) and pMD2.G (#12259; Addgene, Watertown, MA, USA), respectively. HEK293T cells were incubated for 4 h with the DNA-lipid complex before the medium was changed. After 48 and 72 h, the supernatant containing the lentivirus was collected. KP cells were then seeded into a 6-well plate and incubated with the lentivirus for 48 h. Afterwards, cells were recovered for 48 h in DMEM + 10% FBS + 1% PS, followed by selection with 20 µg/mL puromycin (ThermoFisher Scientific, Waltham, MA, USA). Single-cell clones were then selected and grown by seeding 1 cell per well into a 96-well plate, with each well containing 200 µL of medium. After expansion, knockout clones were validated via Western blotting and the quantitative polymerase chain reaction (qPCR). Murine Tumor Models Age-matched male C57BL/6J mice, purchased from Charles River, Germany, were used for in vivo experiments. Experiments were approved by the Austrian Federal Ministry of Science and Research (BMBWF-66.010/0041-V/3b/2018). For the pharmacological inhibition of DDR1, 0.5 × 10^6 KP cells were injected, and mice were treated daily with either the 7rh inhibitor (Sigma-Aldrich, Saint Louis, MO, USA) or the vehicle intraperitoneally at a dosage of 8 mg/kg/day. After 14 days, mice were sacrificed, and tumors were harvested. For DDR1 knockout experiments, 0.325 × 10^6 KP DDR1-knockout (KO2 and KO6) or control (endogenous DDR1) cells were subcutaneously injected into the flank of mice. After 19 days, mice were sacrificed, and tumors were harvested. Tumors were weighed and measured for size. Tumor volume was calculated using the following formula: v = length × width × height × π/6. Tumor pieces were frozen for protein and RNA analysis or were used to create single-cell suspensions for flow cytometry. Tumor Single-Cell Suspension Tumors were minced into small pieces and transferred into 0.5-1 mL of digestion medium (RPMI + 40 U/mL DNase I and 150 U/mL collagenase type 1 (both from Worthington Biochemical Corporation, Lakewood, NJ, USA)). Digestion was performed for a total of 25 min on a thermoshaker (37 °C with shaking). After 10 min, tissues were triturated with a pipette. Homogenization was performed via passage through a 16-gauge needle followed by a 40 µm cell strainer (Greiner, Kremsmünster, Austria) and by washing with SB (staining buffer: phosphate-buffered saline (PBS) + 2% FBS). After washing with PBS, the cells were resuspended and counted on an EVE automated cell counter (NanoEntek, Seoul, Republic of Korea), and 2 × 10^6 cells were used for flow cytometry staining. 
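Returning to the tumor measurements above: the volume formula is straightforward to reproduce. A minimal sketch, assuming the three caliper measurements are in millimetres:

```python
import math

def tumor_volume(length_mm, width_mm, height_mm):
    """v = length x width x height x pi / 6, as used in this study."""
    return length_mm * width_mm * height_mm * math.pi / 6

print(round(tumor_volume(10, 8, 6), 1))  # -> 251.3 (mm^3)
```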
RNA Extraction and qPCR RNA extraction was performed using the RNeasy Mini Kit (Qiagen, Venlo, The Netherlands) according to the manufacturer's protocol. Tumor pieces were first homogenized using a Precellys homogenizer (VWR, Radnor, PA, USA) and 1.4 mm ceramic beads (VWR, Radnor, PA, USA). Reverse transcription was performed using the High-Capacity cDNA Reverse Transcription Kit (ThermoFisher Scientific, Waltham, MA, USA) according to the manufacturer's protocol on a Thermal Cycler (BioRad, Hercules, CA, USA). Real-time PCR was performed using TaqMan Gene Expression Master Mix (ThermoFisher Scientific, Waltham, MA, USA) according to the manufacturer's protocol and probes for DDR1 (Mm01273496_m1) and GAPDH (Mm99999915_g1) (ThermoFisher Scientific, Waltham, MA, USA) on a CFX Connect Real-Time System (BioRad, Hercules, CA, USA). BrdU Proliferation Assay Cells were seeded in a 6-well plate and grown for 24 h. Thereafter, 10 µM of BrdU was added and incubated for 1 h at 37 °C. Cells were detached and stained with a BrdU-FITC antibody as described in the manufacturer's protocol (FITC BrdU Flow Kit, BD Bioscience, Franklin Lakes, NJ, USA). Analysis of TCGA DDR1 Expression Data Gene expression and mutation data were obtained from the GDC TCGA cohort, downloaded via the Xena Browser (https://xenabrowser.net/, accessed on 7 February 2022). The gene expression data (HTSeq-FPKM-UQ) comprised a cohort of 877 clinical LUAD samples, with 597 primary tumor samples from 509 patients. Subsequently, one sample per patient was randomly selected, and these samples were categorized based on DDR1 expression (high vs. low). The selected genes were then compared between these groups using an unpaired Student's t-test. Furthermore, patients were grouped based on their mutation status for selected genes, and DDR1 expression was compared using an unpaired Student's t-test. The analysis was performed using R software (v4.1.2). Statistical Analysis Statistical analysis was performed using GraphPad Prism 9 (GraphPad Software, La Jolla, CA, USA). Data were tested for Gaussian distribution using the Shapiro-Wilk test of normality. Significant outliers were identified using the GraphPad outlier calculator (Grubbs' test) and excluded from statistical analysis. For parametric data, Student's t-test with Welch correction was performed on data from two groups. Ordinary one-way ANOVA with Dunnett's multiple comparison test was used to analyze data from three different groups. For non-parametric data, the Mann-Whitney test or the Kruskal-Wallis test with Dunn's multiple comparisons test was performed. Results are presented as mean + SD. A p value of <0.05 indicated statistical significance. 
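The TCGA comparison described above was performed in R (v4.1.2); an equivalent minimal sketch in Python (pandas/SciPy) is given below. The column names (patient_id, DDR1, and the gene of interest) are hypothetical, as is the exact de-duplication step.

```python
import pandas as pd
from scipy import stats

def compare_by_ddr1(df: pd.DataFrame, gene: str = "FOXP3",
                    lo_q: float = 0.50, hi_q: float = 0.50, seed: int = 0):
    """Keep one random primary-tumor sample per patient, split into
    DDR1-low/-high groups by quantile cutoffs (0.50/0.50 or 0.25/0.75,
    matching the paper's groupings), and compare `gene` expression
    between groups with an unpaired t-test."""
    df = df.sample(frac=1, random_state=seed).drop_duplicates("patient_id")
    lo = df[df["DDR1"] < df["DDR1"].quantile(lo_q)]
    hi = df[df["DDR1"] >= df["DDR1"].quantile(hi_q)]
    t, p = stats.ttest_ind(lo[gene], hi[gene])
    return {"t": t, "p": p, "n_low": len(lo), "n_high": len(hi)}
```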
Inhibition and Knockout of DDR1 Drives Tumor Growth in Mice Recent studies on in vivo models of breast and colorectal cancers have shown that DDR1 affects immune cell composition and infiltration into the TME [17][18][19][20]. To date, however, it remains unknown whether DDR1 also impacts T-cell infiltration in NSCLC. To address this, we used an in vivo immunocompetent model of LUAD and pharmacologically inhibited DDR1 (Figure 1). The TCGA lung cancer patient dataset revealed an upregulation of DDR1 in patients with a KRAS or EGFR mutation, but not in those with STK11/LKB1 or TP53 mutations (Supplementary Figure S1). Due to the absence of an available EGFR-mutant mouse model, we chose to utilize a KRAS/p53-driven model. Accordingly, male C57BL/6J mice were subcutaneously injected with the KP cell line (isolated from a LUAD derived from a Kras^LSL-G12D Trp53^Fl/Fl mouse with a C57BL/6 background [23,24]) and were treated with 8 mg/kg of the DDR1 inhibitor 7rh or the vehicle via daily intraperitoneal injections (Figure 1A). After 13 days, inhibitor-treated mice surprisingly showed increased ex vivo tumor volume (Figure 1B) and weight (Figure 1C). To further investigate the results from the pharmacological inhibition of DDR1, we next created DDR1-knockout KP cell lines using the CRISPR/Cas9 lentivirus system. Different guide RNA sequences (Supplementary Table S1) were used to create KO cell lines. Knockout was validated using Western blotting (Supplementary Figure S2A,B) and qPCR (Supplementary Figure S2C). We also assessed whether the knockout altered proliferation in vitro using a bromodeoxyuridine (BrdU) flow cytometry assay. However, the different cell lines showed no difference in proliferation in vitro compared to the parental KP cell line (Supplementary Figure S2D). 
Based on the knockout validation, we injected KO2, KO6, and control (ctrl, endogenous DDR1) cells subcutaneously into the flank of male C57BL/6J mice (Figure 2A). After 19 days, DDR1-knockout tumors showed a significant increase in ex vivo volume (Figure 2B) and weight (Figure 2C) compared to the control, which supports the inhibitor data. To test whether the DDR1 knockout is stable in vivo and whether DDR1 is mainly expressed by the cancer cells, Western blotting and qPCR were performed on homogenized frozen tumor tissues. GAPDH was used as the control housekeeping gene in both protein and RNA analyses. DDR1-knockout tumors showed decreased DDR1 expression in vivo at the protein (Figure 2D,E) and RNA levels (Figure 2F). DDR1 Knockout Leads to a Pro-Tumorigenic T-Cell Profile In Vivo To investigate whether DDR1 knockout altered immune cell infiltration into the TME, we created tumor single-cell suspensions and performed staining for a lymphoid flow cytometry panel (Supplementary Table S2). The representative gating strategy is shown in Figure 3A. DDR1-knockout tumors showed no changes in live cells (% of singlets) or infiltrated leukocytes (% CD45+ of live) (Figure 3B). Furthermore, no changes were observed in total T- or B-cell infiltration (Figure 4A). Regarding T-cell (CD3+) composition, however, differences in CD4+ helper T-cell and CD8+ cytotoxic T-cell abundance were observed among tumors. Both KO2 and KO6 tumors showed an increase in CD4+ helper T cells but a decrease in CD8+ T cells compared to the control (Figure 4B). Moreover, no changes were observed in the median fluorescence intensity of the inhibitory checkpoint receptor PD-1, which indicates T-cell activation status (Figure 4C), or in the effector, memory, or naive T-cell distribution of CD4+ helper T cells or CD8+ cytotoxic T cells (Figure 4D). Within the CD4+ helper T-cell population, we found that Treg infiltration was increased in the DDR1-knockout tumors (Figure 5A). FoxP3, a Treg transcription factor, was elevated in CD4+ helper T cells, CD3+ T cells, CD45+ leukocytes, and live cells. An investigation of the Treg-to-CD8 ratio showed a significant increase in the ratio among DDR1-knockout tumors (Figure 5B). Low DDR1 Expression in LUAD Patients Shows Higher FoxP3+ Treg Expression Given that knockout tumors showed an increase in Tregs, we next investigated a gene expression patient dataset of LUAD samples from TCGA. A cohort of 877 clinical samples with 597 primary tumor samples from 509 patients was analyzed. Samples were grouped according to DDR1 expression (low, <50% vs. high, ≥50% or low, <25% vs. 
high, ≥75%) and compared with respect to FoxP3, a nuclear transcription factor present in Tregs, which represent the main tumor-promoting CD4+ T-cell population. Notably, the DDR1-low group showed higher FoxP3 expression than the DDR1-high group (Figure 5C,D). Given that we employed a KRAS/p53-driven mouse model, we also investigated FoxP3 expression in patients with a KRAS/p53 double mutation (Supplementary Figure S5), revealing a significant increase in the <50% vs. ≥50% comparison. However, it is essential to note that the sample numbers are relatively low. Discussion Over the last couple of years, T-cell-targeted immunotherapy has gained more and more attention in cancer treatment research. The infiltration of T cells into the TME is a crucial step in obtaining an efficient immune response against cancer cells and is consequently important for the success of cancer therapy. The tyrosine kinase receptor DDR1, which regulates various cell functions such as migration, adhesion, proliferation, cytokine secretion, and ECM homeostasis/remodeling (reviewed in [6,7]), has been discussed as a possible new target for cancer treatment, given that it is not only overexpressed in lung cancer [12] but also associated with poor prognosis in patients with NSCLC [5,16]. Ambrogio et al. showed that the combined inhibition of DDR1 and Notch signaling decreased tumor growth in a mouse model of KRAS-driven LUAD [25]. Moreover, DDR1 inhibition enhanced the in vivo chemosensitivity of KRAS-mutant LUAD [26]. Unfortunately, no study has yet investigated whether DDR1 affects immune cell invasion into the TME of NSCLC. However, initial evidence from recent studies on breast [17][18][19] and colorectal cancers [20] has shown that DDR1 could play a role in T-cell infiltration. In fact, a study of breast cancer patients by Sun et al. revealed a negative correlation between DDR1 expression and CD8+ T cells in breast cancer [17]. Another study showed that targeting DDR1 with a humanized monoclonal antibody reversed immune exclusion by increasing T-cell infiltration and significantly increased antitumor efficacy in an immunocompetent mouse model of breast cancer [18]. A further breast cancer study recently revealed that collagen-induced DDR1 upregulated CXCL5, which promoted the formation of neutrophil extracellular traps (NETs) and enhanced Treg infiltration, thereby facilitating the growth and metastasis of breast cancer [19]. Duan et al., who studied colorectal cancer, demonstrated that DDR1 promoted tumor growth in vivo by inhibiting IL-18 synthesis, leading to decreased infiltration of CD4+ and CD8+ T cells [20]. The lack of studies investigating the potential involvement of DDR1 in immune cell infiltration in NSCLC sparked our interest in this matter. 
Surprisingly, the current study showed that in an immunocompetent mouse model of KRAS/p53 LUAD, disturbing DDR1 increased the tumor burden. Both pharmacological inhibition (i.e., through the DDR1 inhibitor 7rh) and genetic knockout of DDR1 promoted an increase in tumor volume in the KP mouse model. Our analysis of the infiltrated immune cells found no changes in leukocyte, general T-cell, or B-cell abundance; however, it did show differences in the presence of CD8+ and CD4+ T-cell subsets. In the DDR1-knockout tumors, we observed a decrease in CD8+ cytotoxic T cells and an increase in CD4+ helper T cells and Tregs. The decrease in CD8+ cytotoxic T cells, which play a vital role in directly attacking and eliminating cancer cells, suggests that the absence of DDR1 might impact the recruitment or proliferation of CD8+ cytotoxic T cells, leading to reduced tumor cell killing. The changes in Tregs with DDR1 knockout were also observed in the TCGA gene expression dataset of lung adenocarcinoma patients. We observed that the expression of FoxP3, a marker for Tregs, was increased in tumors with low DDR1 expression and decreased in those with high DDR1 expression. The increase in Tregs could have a pro-tumorigenic effect on tumor growth, given that their presence can limit the effectiveness of antitumor immunity and contribute to immune evasion. These changes in T-cell subsets might hint at a disturbed balance between immune-activating and immunosuppressive T cells, leading to a pro-tumorigenic TME in this immunocompetent model. Overall, our study highlights the potential for DDR1 to play an important role in modulating the composition and function of T cells within the TME of NSCLC. Conclusions The findings presented herein show that DDR1 might exert some anti-tumorigenic effects in immunogenic lung cancer models. However, further investigations are needed to assess the potential of DDR1 as a therapeutic target, given that its inhibition might induce Treg infiltration and/or differentiation, causing immunosuppression in hot tumors. Figure 1. Pharmacological inhibition of DDR1 drives tumor growth in an in vivo model of lung adenocarcinoma. (A) Experimental protocol of the in vivo syngeneic mouse model. KP lung adenocarcinoma cells were injected subcutaneously (s.c.) into the flank of C57BL/6J wild-type mice on day 0. Furthermore, mice were treated with 8 mg/kg of the DDR1 inhibitor 7rh or the vehicle via daily intraperitoneal (i.p.) injections. On day 13, mice were sacrificed, and tumors were harvested. (B,C) Ex vivo measured tumor volume (B) and weight (C) (n = 8 to 9). * p < 0.05 using Student's t-test. Data are presented as mean + SD. Figure 2. DDR1 knockout increases tumor growth in vivo. (A) Schematic representation of the experimental procedure performed on the in vivo syngeneic mouse model. C57BL/6J wild-type mice were injected subcutaneously (s.c.) with KP DDR1-knockout (KO2 and KO6) and control (ctrl) cells. After 19 days, mice were sacrificed, and tumors were harvested. Three independent experiments were performed. (B,C) Ex vivo tumor volume (B) and weight (C) were measured at the end of the experiment (n = 28-30). (D,E) Western blotting showing the DDR1 expression of lysed tumor tissues. GAPDH was used as loading control. Original Western blots and intensity ratios are shown in Supplementary Figure S3. (F) DDR1 RNA levels of lysed tumor tissues. Samples were normalized to GAPDH. *** p < 0.0005, **** p < 0.0001 using one-way ANOVA. Data shown are mean + SD. Figure 5. Low DDR1 expression increases regulatory T-cell abundance in vivo and in TCGA human lung adenocarcinoma data. (A,B) Flow cytometry analysis of tumor single-cell suspensions pooled from two independent experiments (n = 14-15). The gating strategy is shown in Figure 3. (A) In vivo Treg abundance shown as a percentage of CD4+, CD3+, CD45+, and live cells. (B) Treg/CD8 ratio in tumors in vivo. * p < 0.05, ** p < 0.005, *** p < 0.0005 using one-way ANOVA. Data are presented as mean + SD. Supplementary Materials: Supplementary Table S2: Antibodies used for flow cytometry staining of tumor single-cell suspensions; Supplementary Figure S1: DDR1 expression in lung adenocarcinoma (LUAD) patients with different mutations (MUT) compared to wildtype (WT). DDR1 expression is shown as FPKM-UQ (fragments per kilobase of transcript per million mapped reads, upper quartile) using TCGA (The Cancer Genome Atlas) data; Supplementary Figure S2: Validation of DDR1-knockout cell lines. (A) The Western blot shows the DDR1 expression of knockout KP cell lines in vitro. Ctrl was used as the control, whereas KO2 and KO6 were used as knockout cell lines. GAPDH was used as a loading control. Original Western blots and intensity ratios are shown in Supplementary Figure S6. (B) Western blots were normalized to ctrl within each experiment (n = 3). (C) In vitro DDR1 RNA levels in each cell line. (D) In vitro proliferation of DDR1-knockout cell lines using bromodeoxyuridine (BrdU) flow cytometry analysis. * p < 0.05, **** p < 0.0001 using one-way ANOVA. Data shown are mean + SD; Supplementary Figure S3: Intensity ratios and Western blots for Figure 2D,E. (A) Values indicate the intensity ratio of DDR1/GAPDH. Ctrl samples on Gels 1 and 2 are identical. Ctrl values in Figure 2E are the mean of each ctrl of both gels.
5,996.2
2023-12-01T00:00:00.000
[ "Medicine", "Biology" ]
Ocular tentacle malformation in Deroceras reticulatum (Mollusca: Gastropoda: Agriolimacidae) Malformations in animals have long been known. In gastropods, natural and induced malformations have been reported in different organ systems and in the ocular tentacles, mainly linked to cases of parasitism and exposure to pollutants (molluscicides and chemicals). In this study we present a malformation not previously documented in the ocular tentacles of the slug Deroceras reticulatum that could be due to the action of pesticides. This malformation in D. reticulatum is the first to be reported for South America in nursery gardens. Key-Words. Anomalies; Nursery gardens; Pesticides; Slugs. INTRODUCTION Malformations in animals have been cited in several cases, particularly in gastropod molluscs, mainly linked to cases of parasitism and exposure to pollutants (molluscicides and chemicals). These malformations can affect the reproductive system, mantle, foot or ocular tentacles (Simroth, 1905; Boettger, 1956; Wiktorowa, 1962; Jackiewicz et al., 1998; Barroso et al., 2000; Lahbib et al., 2008; Sawasdee & Kohler, 2009; El Ayari et al., 2018). The abnormal variations developed in the tentacles of gastropods are remarkable, and occur under both natural and experimental conditions. Among the natural anomalies observed are: a single tentacle in a central cephalic position (Wächtler, 1929; Chetail, 1958; Jackiewicz, 1969), two tentacles arising from the same base (Techow, 1910; Hofmann, 1912), partially fused tentacles located in the center of the head (Römer, 1903) or bifurcated tentacles (Jackiewicz, 1969; Jackiewicz et al., 1998). Other anomalies of the tentacles have been induced experimentally by amputation of the tentacle (Techow, 1910; Hofmann, 1912). Despite the existence of several anomalies all over the globe, in South America, more precisely in Argentina, references to the presence of malformations in terrestrial molluscs are scarce and mostly associated with studies on molluscicides (Clemente et al., 2008), in which lethality and reproductive failure were demonstrated under laboratory conditions as consequences of these agents. In the framework of identifying pathways of dispersion of invasive slugs in commercial nursery gardens in Buenos Aires province (Argentina), we present the first report of an ocular tentacle malformation for Deroceras reticulatum (Müller, 1774). MATERIAL AND METHODS Sampling was carried out during the month of November 2016 in four commercial nursery gardens located in the district of San Pedro (33°41'29"S, 59°40'36"W), Buenos Aires province, Argentina. The molluscs were obtained by manual collection in natural areas, in plantations and in "pots". The molluscs collected were photographed alive in their environment, then relaxed in menthol solution for one day, later preserved in alcohol, and deposited in the Malacological Collection at the La Plata Museum of the La Plata National University, Buenos Aires Province, Argentina (MLP-Ma). Determination of the specimens was based on Barker (1999). RESULTS AND DISCUSSION The sampling yielded 77 molluscs: 35 are native slugs of the family Veronicellidae and 42 are exotic slugs of the family Agriolimacidae. Among the exotic slugs, 41 belong to Deroceras laeve and 1 to Deroceras reticulatum (MLP-Ma 14550). 
Deroceras reticulatum is believed to have originated in Europe (Wiktor, 1996); it currently has a worldwide distribution and is a pest of some plant species as well as a vector of various parasitic organisms harmful to both humans and other animals (Berg, 1997). This specimen of D. reticulatum (total length 24 mm, Fig. 1) has a malformation in which the ocular tentacles are fused along their entire length, giving the appearance of a single ocular tentacle. The fusion takes place only between the body walls, whereas the two ocular nerve cords run separately within the same tentacular structure, so that the ocular sensors are situated side by side (Fig. 1). The malformation observed in D. reticulatum from San Pedro is not of the most common type, since amputations, bifurcations or partial fusions of the tentacles are usually detected, as mentioned above (Römer, 1903; Wächtler, 1929; Chetail, 1958; Jackiewicz, 1969; El Ayari et al., 2018). Although only one individual with this malformation was registered, it is important to document its occurrence, which is the first to be reported in South America. The specimen was found inside a commercial nursery, this being the first record of a malformation in this type of anthropogenic environment, which could indicate that the malformation was related to pesticides used to keep crops and ornamental plants free of pests. We will continue to sample these sites to evaluate whether more specimens with malformations are found. ACKNOWLEDGMENTS This study was financially supported by project N727 of the Facultad de Ciencias Naturales y Museo (UNLP). We especially thank the proprietors of the commercial nursery gardens for facilitating access to them.
1,034.6
2019-08-15T00:00:00.000
[ "Biology" ]
Design Method of Intelligent Touchpoint: Intelligent Auto-Loading Cargo Transport Vehicle for Automobile Passenger Transportation The integration of passenger and cargo transportation has become a new profit development point in the transformation of automotive passenger transport services, and cargo vehicles are a major service touchpoint for the integrated passenger and cargo transport system. This study presents a set of intelligent service touchpoint design principles covering intelligent perception, connection, analysis, decision-making, and execution, to innovate the integrated intelligent service system of automobile passenger and cargo transportation and intelligent self-loading and unloading cargo transport. Based on the field data of cargo transport services at five automobile passenger stations in China, the problems are analyzed and summarized from the aspects of service facilities, content, price, and process, ultimately forming an intelligent service system integrating automobile passenger and cargo transportation. Next, the proposed system is analyzed and compared with five existing intelligent consignment products, and the target tasks and requirements of cargo transport vehicles are extracted. The system realizes the usability evaluation of the functional prototype through the combination of open-source hardware, the Arduino platform, and a mechanical transmission structure, and verifies the validity of the design principles of intelligent service touchpoints. Introduction Service touchpoints arise from the interactions between service providers and service recipients and are important in shaping the user experience in the service process; they include the Physical TouchPoint, Digital TouchPoint, and Personal TouchPoint [1]. The Physical TouchPoint is the tangible, physical, and touchable touchpoint between service providers and service recipients, such as a product or the spatial environment [2]. The Digital TouchPoint comes from the interaction between smartphone applications or PC webs and a digital system. The Personal TouchPoint arises from the direct or indirect interaction between people, such as information consultation with waiters. From the perspective of the intelligent development trend, service touchpoints that integrate smoothly with intelligent technology can undoubtedly improve overall service efficiency and the user experience [3]. Unlike traditional physical, digital, and personal touchpoints, intelligent touchpoints can assist environmental perception, identification, information reception, and behavioral decision-making. Currently, research on intelligent service touchpoints focuses primarily on locating specific intelligent service touchpoints in the service process using the methods of service design; few people are interested in the intelligent service interaction ontology of intelligent service touchpoints, and the majority are primarily interested in discovering the basic functions of intelligent cargo transport vehicles [4]. In recent years, intelligent technologies such as intelligent hardware, robots, and 3D visual perception have assisted in the research and development of intelligent logistics and the integration of passenger and cargo transportation, while also assisting in the development of intelligent cargo transport vehicles [5]. 
The design and development of intelligent cargo transport vehicles has become a means of increasing cargo transport service efficiency and improving the customer experience. As a result, intelligent vehicles have a large market opportunity. Various manufacturers are pouring into the diverse new sectors of intelligent cargo sorting and intelligent handling of different sorts of goods, using the current technology and features of intelligent cargo transport vehicles [6]. In this context, the intelligent cargo transport vehicle is the most important service touchpoint in the transformation towards an integrated intelligent system of passenger and cargo transportation in the automobile passenger station, and its design, research, and development are of great importance in solving the problem of the last kilometer between the goods and the automobile storage [7]. This study presents an integrated intelligent service system of automobile passenger and cargo transportation, as well as the new function of intelligent self-loading and unloading cargo transport vehicles in automobile passenger stations, to fill the gap in cross-platform transportation within intelligent transportation, which may provide valuable guidance for other designers. The design elements and principles of intelligent service touchpoints are verified through an Arduino prototype. Considering users' needs for the same intelligent service touchpoints in different scenarios and products, the results can be applied to the operation and research of other relevant service touchpoints. The rest of the manuscript is organized as follows: Section 2 is about related works. Section 3 is about materials and methods and provides a detailed description of the proposed method. Section 4 is about results, and Section 5 provides a discussion of the obtained results. Finally, the conclusion is presented in Section 6. Related Works The service touchpoint, also known as a customer contact point (customer touchpoint, CTP), is one of the three pillars of service design research. Touchpoints are the individual contacts or interactions between an organization and a person, which occur in interactions with places, people, products, or marketing initiatives. Intelligent touchpoints are mainly the products, facilities, or intelligent systems between service providers and service recipients [8]. Various models have been proposed to explore the requirements and design of intelligent products, services, or systems. For instance, Wang and He [9] proposed a design method model for smart home product touchpoints based on user behavior. Miao et al. [10] proposed a design strategy for the service touchpoints of an intelligent home product portfolio based on scenarios. The author in Ref. [11] proposed that intelligent products should have the ability to record data and information, learn, and think. Zheng [12] presented a hexahedral component model of an intelligent product-service system. Only a few have begun to explore intelligent service touchpoints directly: Wang et al. [13] directly apply customer touchpoints to the development of intelligent product touchpoints in service design. As a medium for users to interact directly with the enterprise, service touchpoints play an important role in the direct perception and evaluation of the whole service. 
With the continuous development of intelligent transportation and intelligent transportation systems, and the increasing discussion of the service mode of integrated passenger and cargo transport, scholars have discussed the operation organization and the last mile of the integrated passenger and cargo transport system from the point of view of all modes of transport (air, sea, land) [14][15][16]. From the point of view of logistics management, some scholars have discussed an integrated warehousing management system based on RFID software and hardware, a temperature monitoring and tracking system for frozen goods, an intelligent logistics management model based on RFID technology and a geographic fence algorithm, and the iTape intelligent tape for cargo damage and theft monitoring [17,18]. The research on self-loading and unloading cargo mainly focuses on lifting mechanisms, hydraulic systems, synchronous control systems, and cargo frames, and its application fields are bulk container logistics, waste transportation, and field transportation. Lee et al. [19] put forward the process of logistics equipment system research. Iwan and Stanissimaw [20] introduced the availability of cellular automata as a traffic simulation tool for urban freight transport systems to analyze the efficiency of this measure and its potential impact on the urban environment. Rahman and Nielsen [21] developed a method for scheduling automatic transport vehicles to ensure the integrated operation of multiple automatic transport vehicles in production and container station environments. Shen et al. [22] presented the framework of an intelligent logistics system for parallel loading and unloading. Although researchers and enterprises have made some explorations with different emphases on intelligent self-loading and unloading cargo transport vehicles, in the context of automobile passenger stations few researchers and enterprises study and explore the intelligent links between automobile warehouses, goods, and intelligent self-loading and unloading cargo transport vehicles. As a result, greater research into the design and development of intelligent self-loading and unloading freight transport in the context of an automotive passenger station is required. With the rise of the integration of passenger and cargo transportation, various advanced technologies such as artificial intelligence (AI) and digital twinning are being integrated into the design and research of transportation and logistics, and the intelligence of automobile passenger transport service touchpoints is becoming a trend [23,24]. However, there is little research on intelligent self-loading and unloading cargo transportation for automobile passenger transport. The research on the integration of passenger and cargo transportation and on intelligent logistics transportation offers some inspiration for the research and design of automobile intelligent self-loading and unloading cargo transportation service touchpoints. Traditional bus stations gain commercial benefits by transporting passengers, luggage, and a small amount of private bulk cargo [25]. With the change in technology and demand, without a transformation towards the intelligent integration of passenger and cargo transportation, profits will continue to decline and bus stations may eventually be banned. Traditional intelligent transportation tools are used in warehouse logistics sorting and cargo handling, mainly to complete single-plane vertical lifting [26,27]. 
However, in the service situation of the automobile passenger station, such tools cannot transport goods to the automobile storage area, so cross-platform transportation tools are needed. Handling technology is changing from labor-intensive to technology-intensive, but most solutions are developed for specific areas, not for the unique environment of bus passenger stations [28][29][30]. Therefore, combined with the analysis results of the field investigation, this study puts forward an intelligent service of automobile passenger and cargo integrated transportation and a kind of service touchpoint of intelligent self-loading and unloading transport, to solve the problems of low efficiency and high labor cost in the field of loading and unloading. Field Data Collection of Automobile Passenger Station. In this study, the data are collected from a field survey of Beijing's large-scale automobile passenger stations, aiming to capture the current situation of cargo service at automobile passenger stations. Information on automobile passenger stations in Beijing was gathered from the Internet, and appropriate stations were chosen according to cargo transportation content, station level, station type, and station configuration to determine the final list. Next, in-depth field research was conducted one by one, and five automobile passenger stations were investigated, including Ba Wangfen automobile passenger station (PS1), Sihui long-distance passenger station (PS2), Liuliqiao long-distance passenger station (PS3), Zhao Hongkou long-distance passenger station (PS4), and Muxiyuan long-distance passenger station (PS5). This study aims to complete two tasks. Firstly, an element analysis model of service touchpoints is established using service content, service facilities, service cost, and service process to understand the attitude and status of automobile passenger stations towards cargo transportation service and to put forward the design principles of intelligent service touchpoints. Next, an intelligent cargo transportation service system is built using the design principles and cargo demand. During the field investigation, the service observation method and personal experience method were used to record the service process and service experience of cargo transportation in an automobile passenger station, and typical scenes and factors affecting cargo transportation were obtained. The core content of the field research tried to answer the following questions: (i) Does the existing automobile passenger station provide integrated passenger and cargo transport? (ii) If so, what specific types of goods are accepted for transportation? (iii) What are the charges for transportation services? (iv) What is the service flow of existing goods transportation? (v) How is the experience of the service touchpoints in the service process? Element Analysis Model of Intelligent Service Touchpoint. To understand the service status of cargo transportation in automobile passenger stations, the service provision, service facilities, and service process elements used in service design are recorded, adding the price factors affecting decision-making; the demand and touchpoints of intelligent cargo transportation are explored to find the most appropriate solution and, finally, to identify the core service touchpoint.
The data of the five stations were collected, and the basic information was entered and sorted into service content, service facilities, service cost, and service process according to on-site photos and on-site records of each station. The basic information of the five stations is shown in Table 1. Next, duplicate data were eliminated according to the sorted documents to determine the cargo transportation status of each automobile passenger station. Finally, the data were entered into the service status matrix list. Data Qualitative Analysis. Based on grounded theory and the qualitative analysis of service design theory, an analysis model of service facilities, service content, service cost, and service process was constructed, analyzed, and recorded for the five stations one by one (as shown in Table 2), and the existing problems of each station were analyzed according to the analysis records (as shown in Table 3). Through the above demand analysis, it is found that there are four common problems in the existing consignment service system of automobile passenger transport: first, there are many privately ordered shipments because there is no standardized and complete service system as part of the automobile passenger station's services; second, there is no whole-process service system, only defined process stages without corresponding products and equipment as support; third, there is no standardized service space to implement services, and there is consignment space without pick-up or storage points; fourth, there is no platform for professional operation services and brand promotion. Design Guidelines. Finally, by integrating the above sections, the design framework for intelligent service touchpoints is obtained: D1: IntelliSense. Using sensing technology, real-time information and data such as the spatial environment, path lines, running status, and target recognition can be obtained. D2: Intelliconnection. In the intelligent service system, different intelligent touchpoints are connected and interact with each other, for example through the transmission of information and data. D3: Intellianalysis. Through the intelligent service system, a comprehensive analysis of the task requirements, spatial environment, and running status of the different intelligent service touchpoints is conducted. D4: Intelligent decision-making. Through the intelligent analysis of the data and information of the intelligent service system, the optimal service scheme is output, such as the best route, process, and resource allocation. D5: Intelligent execution. According to the optimal decision-making scheme, the intelligent service touchpoint completes the automatic and intelligent process and task execution. According to the field research on automobile passenger stations and the comprehensive analysis of similar cargo transport service tools, the demand for integrated services of bus passenger and cargo transportation is derived, and the solution of an intelligent cargo transportation service system is designed. The combination of cargo transport baskets and intelligent cargo transportation tools solves the most important problem, i.e., the problem of the last kilometer between goods and automobile storage. Design of Intelligent Transportation System. According to the above analysis results and the suggestions of the design principles, an integrated passenger and cargo transport intelligent service system is constructed with shared storage space for automobile passenger transport, as shown in Figure 1.
The basic idea is to provide free cargo transportation capacity on automobiles without affecting the existing normal passenger transportation, so as to exploit the potential of the current transportation service of the automobile passenger station. The purpose of this system is to fully share and utilize idle space resources. The overall process of this system is as follows. (1) The users take the goods to the automobile passenger station. (2) The user goes to the intelligent service cargo transportation machine to fill in the mailing information and deliver the goods. (3) The intelligent transport machine passes the goods through the security check and sorting machine, sorts them by target location, and places the goods into the target transport basket. (4) When the goods in the transportation basket reach a certain weight, the volume reaches a certain amount, or it is 10 minutes before the car's departure time, the intelligent cargo transport vehicle picks up the consignment luggage transport basket. (5) The intelligent transport vehicle travels along the path of the target vehicle and transports the basket to the car storage area for loading and unloading. (6) The intelligent cargo transport vehicle then exits. (7) When the car reaches the destination, the intelligent cargo transport vehicle of the destination station transports the cargo transport basket to the cargo storage area again, waiting for the users to pick up the goods. In practice, this means that, in addition to logistics enterprises, companies and individuals in other industries can also use the system to deliver their goods (as shown in Figure 1). Table 1 records the basic information of the stations; for example, PS1, the Bawangfen long-distance bus station, is a passenger joint venture jointly established by the Spanish Alsa Group and the Beijing Transportation Company, and it has its own logistics express center (subordinate to ALSA); PS2, the Sihui long-distance bus station, has passenger routes covering 9 provinces and one city, including Jilin, Liaoning, Hebei, Anhui, Henan, Inner Mongolia, and Tianjin, serving the three northeastern provinces, the Tangshan area of Hebei province, the Tianjin area, and parts of the south; it is one of the integrated passenger transport hubs combining inter-provincial passenger transport, public transport, rental, social vehicles, subways, etc., the first to achieve a seamless in-station connection mode, and it provides a package consignment service. Design of Cargo Transport Basket. The cargo transport basket is developed to organize the cargo and facilitate transportation by the intelligent transport vehicles; its design is divided into four steps: automobile information data collection, baggage compartment size determination, transport basket size determination, and conceptual design of transport baskets. The recorded service cost depends on the shipping weight, distance, and item category: the bigger the order, the more favorable the price. The recorded service process is: inbound check-in/telephone reservation/on-site service, then weighing, then filling out the order (with detailed information specification) and paying the fee, then issuing, then loading. The recorded problems include: 1. informal self-communication between waiter and driver rather than a standardized and complete service system, so services are random; 2. poor service attitude and random service prices, with each waiter holding a different opinion. PS3: 1. the passenger coach unloads the goods at the coach parking place, causing the goods to be placed in disorder, and the consignment service personnel have to use manual trailers to move consignments one by one to the pick-up place; 2.
there is no special storage area, so if goods are not taken immediately they are left in the outdoor venue; 3. customers bring their own trailers to take goods away from the pick-up place; 4. private order consignment, random operation, no complete specifications. PS4: 1. the package consignment point is located next to the ticket office, but there is no storage place; 2. passengers bring their own cargo trailers, and no consignment tools are provided; 3. only a consignment shipping service is offered, with no receiving storage area. PS5: 1. only a consignment shipping service is offered, with no receiving storage area; 2. the passenger coach unloads the goods at the coach parking place, causing the goods to be placed in disorder, and the consignment service personnel have to use manual trailers to move consignments one by one to the pick-up place. Step 2: Luggage compartment size determination. Then, because of the wide variety of vehicle types in the market, it was found that 41-50-seat models are the main transport vehicles, so a typical Yutong ZK6908H1Y model was selected as the reference sample; it belongs to the mid-to-high class, with length 8995 mm, width 2550 mm, height 3450 mm, vehicle weight 10200 kg, and seat capacity of 24-41 people (as shown in Figure 2). Then, the baggage compartment of this model was measured in the field, and it was found to be divided into two cabins; the length of a single compartment is 1250 mm, the width is 2550 mm, and the height is 1000 mm. Then, according to the measured data, the baggage compartment volume was calculated, giving a maximum volume of about 6,375,000,000 mm³ (length × width × height over the two compartments). To ensure the feasibility of the car baggage compartment sharing service, it is necessary to verify that there is still idle space, so that the service can run without interruption even when the automobile is full. Taking each passenger carrying at most one 28-inch suitcase (460 mm × 280 mm × 680 mm) as the benchmark, the free space resources can be derived by dividing the volume of the baggage compartment by the volume of the largest suitcase: the maximum number of 28-inch suitcases in the baggage compartment is 72 pieces (6,375,000,000 mm³ / 87,584,000 mm³). Subtracting the maximum of 41 passengers, each with one piece of luggage, there is still free space equivalent to 31 pieces of 28-inch luggage, which means that the luggage compartment has free space to provide cargo transportation service (see the sketch below). Step 3: Determine the size of the transport baskets. To reduce the number of consignment trips of the intelligent cargo transport, the two-compartment luggage space was divided into 4 storage areas, and 4 transport boxes were set according to the volume of each storage area; the boxes are divided into two categories, namely consignment luggage transport boxes and consignment item boxes, distinguished by color, and the numbers of the two types can be intelligently allocated according to the quantities of luggage and consignment items (1:3, 2:2, 3:1). Through the above measurements, the length, width, and height of each of the four storage areas are 1250 mm, 1275 mm, and 1000 mm, which are used to find the maximum size of the transport basket.
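To make the idle-space reasoning in Step 2 easy to reproduce, the volumetric estimate can be written out explicitly. The short sketch below uses only the dimensions quoted in the text (two cabins of 1250 × 2550 × 1000 mm, a 28-inch suitcase of 460 × 280 × 680 mm, and 41 passengers with one case each); the variable names are illustrative and not from the original prototype.

```python
# Idle-capacity estimate for the shared luggage compartment (values quoted in the text).
CABIN_L, CABIN_W, CABIN_H = 1250, 2550, 1000        # mm, single cabin
N_CABINS = 2
SUITCASE_L, SUITCASE_W, SUITCASE_H = 460, 280, 680  # mm, 28-inch suitcase
MAX_PASSENGERS = 41

compartment_volume = CABIN_L * CABIN_W * CABIN_H * N_CABINS   # 6,375,000,000 mm^3
suitcase_volume = SUITCASE_L * SUITCASE_W * SUITCASE_H        # 87,584,000 mm^3

max_cases = compartment_volume // suitcase_volume             # 72 cases by volume
free_cases = max_cases - MAX_PASSENGERS                       # 31 cases of spare capacity

print(f"compartment volume: {compartment_volume:,} mm^3")
print(f"max 28-inch cases by volume: {max_cases}")
print(f"spare capacity with a full bus: {free_cases} cases "
      f"({free_cases / max_cases:.0%} of the compartment)")
```

This is a purely volumetric upper bound; real packing geometry will reduce the usable spare capacity somewhat.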
In this study, owing to the intelligent (unmanned) operation, the psychological correction used in conventional ergonomic sizing is replaced by an error correction, and each transport box dimension is calculated as a functional correction plus or minus an error correction. For the length: the maximum number of suitcases that the transport box length can accommodate = luggage compartment length / luggage box length (taking the largest integer), i.e., 1250 mm / 280 mm ≈ 4.5, so the largest integer is 4 and the functional correction of the transport box length is 4 × 280 mm = 1120 mm; taking 80 mm (the isolation frame between the two cabins plus tolerance) as the error correction, the corrected transport box length = functional correction ± error correction = 1040 mm-1200 mm. Next, the width of the transport box is calculated: the maximum number of suitcases that the transport box width can accommodate = luggage compartment width / luggage width (taking the largest integer), i.e., 1275 mm / 460 mm ≈ 2.8, so the largest integer is 2 and the width is 2 × 460 mm = 920 mm; since one additional suitcase can be placed on its side, its length of 280 mm is added, giving a functional correction of 920 mm + 280 mm = 1200 mm; the error correction is 50 mm (the luggage compartment is of the through type, so the actual error correction for the two transport boxes is 50 mm × 2 = 100 mm), and the corrected transport box width = functional correction ± error correction = 1200 mm-1250 mm. For the height: the maximum number of suitcases that the transport box height can accommodate = luggage compartment height / luggage box height (taking the largest integer), i.e., 1000 mm / 680 mm ≈ 1.4, so the largest integer is 1 and the height is 1 × 680 mm = 680 mm; a side-placed piece of luggage can also be accommodated, so its 280 mm is added, giving a functional correction of 680 mm + 280 mm = 960 mm; with an error correction of 30 mm, the corrected transport box height = functional correction ± error correction = 930 mm-990 mm. The transport box length, width, and height are therefore 1040 mm-1200 mm, 1200 mm-1250 mm, and 930 mm-990 mm, respectively (a computational sketch of this sizing reasoning is given below). Step 4: Conceptual design of transport baskets. According to the above reasoning of the transport basket size, the length, width, and height are 1040 mm-1200 mm, 1200 mm-1250 mm, and 930 mm-990 mm. In addition, following the earlier demand analysis of the transport basket, it must allow the lifting platform of the intelligent transport vehicle to be inserted, raised, and lowered. Based on this, the modeling scheme of the transport basket was designed, as shown in Figure 3. Intelligent Transport Vehicle Design. Through the previous analysis, it can be seen that the design of the intelligent self-loading and unloading cargo transport vehicle is divided into four parts: data collection and analysis of similar products, balance calculation, conceptual design, and process design.
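The dimension-correction rule used above (size = functional correction ± error correction) can be captured in a small helper. The following sketch uses the correction amounts quoted in the text; the function name is illustrative, and it is a reconstruction of the reasoning rather than the authors' own tool.

```python
def box_dimension(compartment_mm, case_mm, extra_mm=0, error_mm=0):
    """Functional correction +/- error correction, as used for the transport box.

    compartment_mm: inner dimension of the storage area
    case_mm:        the matching suitcase dimension
    extra_mm:       allowance for an additional side-placed case (0 if none)
    error_mm:       error correction (isolation frame, tolerances, ...)
    Returns (low, high) in mm.
    """
    n_cases = compartment_mm // case_mm          # largest whole number of cases
    functional = n_cases * case_mm + extra_mm    # functional correction amount
    return functional - error_mm, functional + error_mm

# Values from the text: storage area 1250 x 1275 x 1000 mm, 28-inch case 460 x 280 x 680 mm.
length = box_dimension(1250, 280, extra_mm=0, error_mm=80)    # (1040, 1200)
width  = box_dimension(1275, 460, extra_mm=280, error_mm=50)  # (1150, 1250); the paper keeps 1200-1250 mm
height = box_dimension(1000, 680, extra_mm=280, error_mm=30)  # (930, 990)
print(length, width, height)
```

Treating the error correction symmetrically reproduces the quoted length and height ranges exactly; for the width the paper keeps only the upper part of the interval (1200 mm-1250 mm), presumably because the storage area itself is only 1275 mm wide.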
Step 1: Data collection and analysis of similar intelligent cargo-carrier products. The cargo type, operation mode, route identification, and cargo acquisition of five existing intelligent transportation tools were analyzed (see Table 4). It is found that the existing means of cargo transport are mainly used in warehouse logistics sorting and cargo handling. In terms of operation mode, existing products mainly perform single-plane vertical lifting, with a small amount of grasping-based cross-platform transportation, and the handling of heavy objects is still immature; in terms of route identification, the main approaches are point-based, optical, image-based, inertial magneto-optic, and magnetic-tape taught-coordinate path guidance. Based on the previous analysis and the analysis of similar products, for the intelligent service system, the intelligent cargo transport of the automobile passenger station needs to support service behaviors such as cargo transportation, cargo loading and unloading, route identification, and fast cross-platform transfer. Step 2: Calculation of the balance of the intelligent cargo vehicle. To determine whether the intelligent self-loading cargo transport vehicle can remain balanced while carrying the goods to complete the service, it is necessary to calculate the vehicle's self-weight, maximum cargo weight, motor power, rechargeable battery capacity, and charging time. Suppose the self-weight of the vehicle is M, the self-weight of the mechanical arm carrying the luggage is M1, and the weight of the other parts is M2, which includes the battery, motor, counterweight, housing, and other structural parts of the vehicle, so that M = M1 + M2. To save cost and achieve a lightweight design, M2 should be as small as possible while satisfying the function. The design should satisfy three requirements. Stability: the center of gravity will change when the vehicle is moving and handling goods, so the horizontal projection of the center of gravity should remain within the footprint of the car body to prevent the vehicle from overturning. Battery duration: the battery of the vehicle should supply the energy consumed by continuous operation for a certain period. Efficiency: the working time of the vehicle should maintain a good ratio to its charging time, to allow reasonable scheduling of operation time. To address the problem of self-weight and balance stability of the vehicle, suppose the weight of the luggage carried by the vehicle is m. According to the "Automobile Passenger Transport Rules of the Ministry of Transport of the People's Republic of China", the overall weight of the luggage carried by each passenger cannot exceed 40 kg, and the weight of a single package must not exceed 30 kg. Under the conditions set above, taking one passenger's luggage as a reference, the maximum value of m is set to 30 kg. Under full load, the distance between the center of gravity of the vehicle and the center of gravity of the load is l1, and the distance between the center of gravity of the vehicle and the edge of the bottom surface is l. To meet the stability requirement, the restoring moment of the vehicle about the edge of the bottom surface must exceed the overturning moment produced by the load; a sketch of this check is given below. Step 3: Conceptual design of an intelligent cargo carrier.
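The stability requirement in Step 2 amounts to a moment balance about the tipping edge of the chassis. Because the inequality itself did not survive in this copy of the text, the condition coded below, M·l ≥ safety · m·(l1 − l), is an assumed reading of that moment balance rather than the authors' exact formula, and every numeric value except m = 30 kg is a placeholder.

```python
def is_stable(M_vehicle_kg, m_load_kg, l_edge_m, l1_load_m, safety=1.5):
    """Tip-over check about the edge of the bottom surface (assumed form, not the original equation).

    M_vehicle_kg: vehicle self-weight M
    m_load_kg:    carried load m (at most 30 kg per the transport rules cited)
    l_edge_m:     distance l from the vehicle's centre of gravity to the tipping edge
    l1_load_m:    distance l1 from the vehicle's centre of gravity to the load's centre of gravity
    """
    overturning_arm = max(l1_load_m - l_edge_m, 0.0)   # how far the load CoG sits beyond the support edge
    return M_vehicle_kg * l_edge_m >= safety * m_load_kg * overturning_arm

# Placeholder numbers: a 90 kg vehicle, load CoG 0.45 m out, tipping edge 0.30 m out.
print(is_stable(M_vehicle_kg=90, m_load_kg=30, l_edge_m=0.30, l1_load_m=0.45))
```

The gravitational acceleration cancels on both sides, so only the masses and lever arms matter; the safety factor is an extra margin that a real design would size against the battery-duration and efficiency requirements listed above.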
According to the previous demand analysis, it is concluded that the intelligent self-loading cargo transport vehicle needs to be able to load a transport box with a length, width, and height of 1040 mm-1200 mm, 1200 mm-1250 mm, and 930 mm-990 mm, respectively, and must automatically load and unload the transport box between the passenger transport station and the luggage compartment of the automobile, with the functions of tracking, loading, unloading, information receiving and identification, and infrared shielding; these requirements define the form and structural design of the intelligent self-loading cargo transport vehicle. After comprehensive consideration of structure and function, it was finally determined that the intelligent self-loading cargo transport vehicle has three functional areas: information identification, battery placement, and cargo lifting. The first part is the front-end recognition area, covering infrared obstacle avoidance and path recognition; the second part, in the middle, houses the battery; the third part, at the back end, provides auxiliary lifting, stable counterweight, and auxiliary positioning, to complete service provision and intelligent consignment. According to the three functional partitions obtained from the requirements analysis, a functional scheme comparison was performed on the final scheme, as shown in Figure 4. Step 4: Process design of the intelligent cargo vehicle. When the departure time arrives or the transport basket reaches its maximum weight, the basket sends a signal to the intelligent cargo transport vehicle; after receiving the signal, the vehicle moves to the corresponding position and transports the basket along the path to the corresponding passenger luggage compartment; by identifying the height, the vehicle lifts the transport basket and uses the transport rack to slide the basket into the luggage compartment; then the transport rack descends and the vehicle backs out, which completes the task (a sketch of this sequence is given below). The intelligent self-loading cargo transportation system uses tractor turbine-vortex technology to realize the lifting of the handling platform and innovates in the lifting structure. The new lifting structure enables the chassis of the intelligent self-loading cargo vehicle to partially penetrate under the luggage compartment of the passenger automobile, so that the transport platform carrying the transport basket can be inserted directly into the luggage compartment; after the platform is slightly lowered, the intelligent self-loading cargo vehicle backs up, so that the platform is withdrawn from under the transport box, leaving the transport box in the luggage compartment of the bus (as shown in Figure 5).
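The consignment workflow just described is essentially a fixed sequence of states. The list below is a compact, language-agnostic rendering of those steps; the state names are illustrative labels for the actions in the text, not identifiers from the prototype firmware.

```python
# Ordered states of one consignment run, as described in the process design above.
LOADING_SEQUENCE = [
    "wait_for_signal",         # basket reports maximum weight reached or departure time
    "drive_to_basket",         # move to the basket's position
    "follow_path_to_bus",      # track the planned/taped path to the target coach
    "align_with_compartment",  # identify the luggage-compartment height
    "lift_basket",             # raise the platform carrying the basket
    "insert_platform",         # chassis slides partly under the compartment
    "lower_platform",          # drop slightly so the basket rests in the compartment
    "back_out",                # reverse so the platform withdraws from under the box
    "return_to_station",       # task complete
]

def run(sequence):
    for step in sequence:
        print(f"-> {step}")

run(LOADING_SEQUENCE)
```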
Evaluation. Arduino is an open-source software and hardware electronic prototyping design and development platform [28]. Its application field mainly focuses on the prototype test design of intelligent electronic products, including software and hardware [29]. The software part refers to the integrated development environment (IDE), and the hardware part comprises various types of Arduino programmable control circuit boards for circuit connection. Sensors suitable for this study can be selected for the intelligent hardware interaction design of the intelligent self-loading and unloading trolley. Their convenient, flexible, and easy-to-use operation characteristics have made Arduino platforms global leaders in the fields of maker prototype development, design prototype experiments, and electronic product research [30]. The Arduino open-source hardware was combined with the mechanical transmission structure to complete the intelligent self-loading and unloading cargo transportation trolley experiment; tracking sensing technology is then used to identify the transportation paths of the whole service system, including the checked-baggage transport path, the mailing logistics path of mailed items, and the path along which transportation baskets carry luggage to the pick-up point. The prototype uses stepper-motor drive technology and gear-driven sliders; based on a fixed slide-rod auxiliary mechanism, the transportation baskets can be loaded and unloaded at the consignment center, baggage can be transported at the mailing station, and the baskets can finally be unloaded from the luggage compartment at the pick-up rack. Experimental Method and Material. The experiment was performed on a flat wooden board with a size of 841 mm × 1189 mm in the model laboratory of the Beijing University of Technology. The model of the cargo loading and unloading platform was proportionally scaled down according to the height of the chassis and luggage compartment of the passenger vehicle. Black tape was stuck on the wooden board to mark the experiment path, and the path is S-shaped since real applications contain both straight and curved segments. Although high-precision sensing would enable right-angle turns, right-angle turns were avoided because of the sensitivity of the sensors in this experiment. The experiment used a three-way tracking probe for Arduino to identify the path, and a stepper motor with a self-made flat lifting platform to complete the loading and unloading simulation, as shown in Figure 6. Secondary Tasks. This experiment needs to complete two further sub-experiments, namely the path recognition experiment and the loading and unloading lifting experiment, as shown in Figure 7. There are three cases in the path recognition task: the first is straight path recognition; the second is curved path recognition; the last is recognition of the return path after reaching the destination coordinates. The loading and unloading lifting experiment includes two cases: the first is determining target coordinate A, then descending and loading; the second is determining target coordinate B, then descending and unloading, exiting the route, and returning (Figure 8). This experiment uses the three infrared lamps of the three-way tracking probe to identify the black line: the car turns left when the left probe recognizes the black line, goes straight when the middle probe recognizes the black line, turns right when the right probe recognizes the black line, and stops when all three probes recognize the black line at the same time. Next, when the three-way tracking probe stops upon all three lamps identifying the black line, the stepping motor is started and rotates counter-clockwise to raise the platform; similarly, when the probe stops at the identified destination, the stepping motor is shut down, rotates clockwise to descend, and the vehicle then exits. A sketch of this decision logic follows.
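The three-probe tracking rule maps directly onto a small decision function. The sketch below is a language-agnostic rendering of that logic, not the actual Arduino sketch used in the experiment: pin handling, motor drivers, and stepper timing are omitted, the function names are illustrative, and the behaviour when no probe sees the line is an assumption.

```python
def drive_command(left_on_line, mid_on_line, right_on_line):
    """Map the three infrared probes onto a drive command, as described in the text."""
    if left_on_line and mid_on_line and right_on_line:
        return "stop"          # all three probes on the black line: destination reached
    if left_on_line:
        return "turn_left"
    if right_on_line:
        return "turn_right"
    if mid_on_line:
        return "forward"
    return "forward"           # no line seen: keep going (simplest fallback assumption)

def lift_command(at_stop, already_lifted):
    """Stepper behaviour at a stop: raise on the first stop, lower and exit at the destination."""
    if not at_stop:
        return "hold"
    return "rotate_ccw_up" if not already_lifted else "rotate_cw_down_and_exit"

# Example: only the middle probe sees the line, so the trolley keeps driving forward.
print(drive_command(False, True, False))
```

Prioritising the outer probes over the middle one reproduces the behaviour on curves, where two probes can see the line at once.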
Path Recognition Experiment Performance. According to the requirements of the path recognition experimental design, a total of four iterations of the path recognition line are shown in Figure 9, covering the return of the smart freight car on straight, curved, and single-line paths. In the first path experiment, the black line recognized by the middle probe satisfies straight-line path recognition. In the second path experiment, a straight line plus a curve is used: the middle probe realizes forward recognition, and the left and right probes identify the turning path. The third experiment uses a straight line plus a curve plus a single-line path return: the three-way tracking probe completes straight travel, direction determination, and stopping; by suspending the left-front motor, turning is achieved through the inertia produced by the rotation of the front-right, rear-right, and rear-left motors, and, combined with real-time identification by the three-way tracking probe to determine the direction, the single-line path return experiment is achieved. Path experiment four is a circular single-line path return experiment based on the single determination task performed in experiment three. The comparative analysis of the four rounds of experiments found that experiments 3 and 4 fulfil the overall objective of the experiment, while experiments 1 and 2 are mainly aimed at completing a single task, as shown in Table 5. Loading and Unloading Lifting Experiment Performance. The service prototype was updated according to the problems found with the actual prototype (as shown in Figure 10). The main updated part is the transport lifting platform, which changes the gear-and-rack drive into a ruler-type drive. The reason for the change is that the gear and the rack could not mesh completely, resulting in drive jams and an inability to carry out level lifting, as shown in Figure 11. Discussion. This research demonstrates the feasibility of an intelligent service system for automobile passenger and cargo integrated transportation, as well as the feasibility of an intelligent cargo self-loading and unloading transport vehicle validated with open-source hardware. Compared with other studies, the transportation scenario of automobile passenger and cargo integrated transportation is expanded and developed from the single-platform transportation of intelligent cargo carriers to cross-platform transportation of cargo. It was estimated that the baggage compartment can accommodate up to 72 28-inch suitcases; subtracting the 41 suitcases carried at full passenger capacity ((72 − 41)/72 ≈ 43%), the utilization rate of idle space is increased by at least 43%. At the same time, supporting tools were added to our field research to enhance our analysis and understanding of services, which helped us discuss future trends in smart service touchpoints. This study is based on the service touchpoints of service design, applied to the consignment service of the automobile passenger station, where service content, service facilities, service cost, and service process were chosen to construct our sub-service analysis model.
According to the vertical comprehensive analysis of the four dimensions, an intelligent cargo transport service system for automobile passenger and cargo integrated transportation is constructed, and the intelligent self-loading cargo transport vehicle is designed to solve the problem of the last kilometer between the goods and the automobile storage warehouse. In terms of theory and method, based on an artificial intelligence system, this study presented a product size calculation method of dimension correction = functional correction ± error correction. The analysis of the field research data and information with auxiliary tools helps record the service status of real scenarios and ensures the authenticity of the data. On this basis, the problems and demand transformation of cargo transportation are analyzed to derive the five design principles and suggestions for intelligent service touchpoints. The biggest limitation of this study is that only the Yutong ZK6908H1Y model is selected as the data reference sample in the design stage; although there are various models in the market, not all models are used as data references. In this case, the adaptability and flexibility of the intelligent vehicle to different vehicle sizes is inevitably limited. Regarding the prospects of the intelligent service of automobile passenger and cargo integrated transportation, this study proposed the intelligent cargo transportation service system and designed and verified the functional prototype of the cargo transport vehicle. However, for the system, there are still many design opportunities for service touchpoints, which can enrich the intelligent services of the system. For intelligent cargo transport tools, adaptability to different vehicle models can be explored in the future. In this work, all service touchpoints are assumed to exist independently; however, with the progress of fifth-generation technology and the continuous development of digital twinning and deep learning, the automobile passenger and cargo transportation service will become more intelligent and humanized. Conclusion. With the revolution of automotive passenger transport services, the integration of passenger and cargo transportation has become a new profit growth point, and cargo vehicles are a significant service touchpoint of the integrated passenger and cargo transport system. This study explored the intelligent service system of automobile passenger and cargo integrated transportation and the intelligent self-loading and unloading transportation tool. The cargo design process, pain points, and demand for cargo transportation are analyzed, and the characteristics of the solution for cargo transportation are summarized. The present status of cargo transportation is examined from the aspects of service content, service facilities, service cost, and service process, and the design principles of intelligent cargo transport service touchpoints are presented. Although this study provides feasible recommendations and can be used for commercial purposes, there are still limitations. First, this study selected Beijing stations as the only sample, and the spatial environment of each bus passenger station is different. Secondly, only one vehicle model is selected as the sample data for reasoning. Therefore, it is essential to construct a vehicle model sample database and improve the adaptability of the vehicle to different models. Highlights.
(1) An intelligent cargo transport service system for automobile passenger and cargo integrated transportation is constructed. (2) The intelligent self-loading cargo transport vehicle is designed to solve the problem of the last kilometer between the goods and the automobile storage warehouse. (3) The transportation scenario of automobile passenger and cargo integrated transportation is expanded and developed from the single-platform transportation of intelligent cargo carriers to cross-platform transportation of cargo. (4) The five design principles and suggestions for intelligent service touchpoints are presented. (5) A product size calculation method based on dimension correction is presented within an artificial intelligence system. Data Availability. The data on vehicle dimensions, weight, and seating capacity of the Yutong ZK6908H1Y used to support the findings of this study can be obtained from the Yutong website (https://www.yutong.com/products/ZK6908H_ky.shtml). Conflicts of Interest. The authors declare no conflict of interest.
9,521.4
2022-07-12T00:00:00.000
[ "Business", "Computer Science" ]
Structured velocity field in the inner envelope of B335: ALMA observations of rare CO isotopologues Studying Class 0 objects is very important, as it allows the characterization of dynamical processes at the onset of the star formation process and the determination of the physical mechanisms responsible for the outcome of the collapse. Observations of dense gas tracers allow the characterization of key kinematics of the gas directly involved in the star-formation process, such as infall, outflow or rotation. This work aims at investigating the molecular line velocity profiles of the Class 0 protostellar object B335 and attempts to put constraints on the infall motions happening in the circumstellar gas of the object. Observations of C$^{17}$O (1-0), C$^{18}$O (1-0) and $^{12}$CO (2-1) transitions are presented and the spectral profiles are analyzed at envelope radii between 100 and 860 au. C$^{17}$O emission presents a double-peaked line profile distributed in a complex velocity field. Both peaks present an offset of 0.2 to 1 km s$^{-1}$ from the systemic velocity of the source in the probed area. The optical depth of the C$^{17}$O emission has been estimated and found to be less than 1, suggesting that the two velocity peaks trace two distinct velocity components of the gas in the inner envelope. After discarding possible motions that could produce the complex velocity pattern, such as rotation and outflow, it is concluded that infall is producing the velocity field. Because inside-out symmetric collapse cannot explain the observed profiles, it is suggested that they are produced by non-isotropic accretion from the envelope onto the central source along the outflow cavity walls. Introduction Low-mass stars are known to form in dense molecular gas clouds. Class 0 objects represent the first stage of the star formation process, when most of the mass is still contained in the envelope surrounding the protostar (André et al. 1993; André 1995). Models of protostellar collapse (Shu et al. 1987) suggest that it is during this phase that the circumstellar gas is transported to the central object through accretion processes. During this stage, angular momentum needs to be removed from the envelope and stored in the central object or dissipated through viscous processes to allow the formation of the star. Moreover, the accretion mode, the rate at which it happens, and the duration of possible accretion episodes during this phase will determine the final stellar mass (André 1995; Basu & Jones 2004; Bate & Bonnell 2005; Myers 2012). Therefore, studying this phase is crucial, as it allows one to understand the kinematics and dynamics of the gas at the onset of collapse, and to determine how these affect the outcome of the star formation process. The gas making up most of the protostellar envelope is typically probed using molecular gas line profiles, which trace the gas kinematics in the dense envelope and are used to measure gas motions such as rotation or infall. Observations of the molecular line emission from embedded protostars have been suggested to trace widespread infall signatures in the inner ∼ 2000 au of some protostellar envelopes (Zhou et al. 1993; Rawlings 1996; Di Francesco et al. 2001; Mottram et al. 2013). Most of those studies rely on the detection and interpretation of the infall spectral signature known as the blue asymmetry or inverse P-Cygni profile.
In the center of a dense protostellar core, where gas densities become larger as the collapse proceeds (> 10 5 cm −3 , André 1995), the large optical depth of some molecular line emission produces a self-absorbed line profile, with the dip centered on the emission at systemic velocity, where most of the circumstellar gas emits. Because the core is collapsing under the effect of gravity, line emission from dense gas tracers will present the typical blue-asymmetry line profile. Most works focused on modeling these line profiles to put constraints on protostellar infall models rely on the assumption of a symmetrically collapsing cloud, which produces such a double-peaked profile in optically thick emission, to extract key information such as the central protostellar mass, infall velocities and mass accretion rates (Zhou et al. 1993, Di Francesco et al. 2001, Evans et al. 2005, Evans et al. 2015). However, blue asymmetries in line profiles are not unique to infall motions. Complex gas kinematics (Maureira et al. 2017), such as asymmetric collapse (Tokuda et al. 2014), accretion streamers (e.g. Segura-Cox et al. 2020) or outflow-entrained gas, can produce separated velocity components on the same line-of-sight which are observed as double-peaked line profiles not caused by optical thickness. The isolated Bok globule B335, which contains an embedded Class 0 protostar (Keene et al. 1983), is located at a distance of 164.5 pc (Watson 2020) and has been the prototypical object to test symmetrical-collapse infall models, since blue asymmetries were first detected in molecular emission of the source at core scales (Zhou et al. 1993; Choi et al. 1995; Evans et al. 2005). Double-peaked line profiles have also been observed with interferometric observations of the molecular emission from the inner envelope (Chandler & Sargent 1993; Saito et al. 1999; Yen et al. 2010; Kurono et al. 2013; Evans et al. 2015). Infall models assuming optically thick line emission have been used to compute mass infall rates (Yen et al. 2010, Evans et al. 2015, Yen et al. 2015), obtaining values affected by large uncertainties that range from 10 −7 M⊙ yr −1 to ∼3×10 −6 M⊙ yr −1 at radii of 100-2000 au, and infall velocities from 1.5 km s −1 to ≈0.8 km s −1 at radii of ∼ 100 au. New models based on continuum emission and using the revised distance of 164.5 pc have determined an infall rate from the envelope to the disk of 6.2×10 −6 M⊙ yr −1 (Evans et al., in prep.). For the estimated age of 4×10 4 yr, this implies a total mass at the center (star + disk) of 0.26 M⊙. The physical cause of these double-peaked line profiles has been questioned, however. For example, Kurono et al. (2013) pointed out that, despite the expectation that the H 13 CO + emission should be optically thin, the inverse P-Cygni profile and the position-velocity diagram they observe can be reproduced with models of moderately optically thick infalling gas. B335 is associated with an east-west outflow, prominently detected in 12 CO with an inclination of 10 • with respect to the plane of the sky and an opening angle of 45 • (Hirano et al. 1988, Hirano et al. 1992, Yen et al. 2010). The eastern lobe is slightly oriented towards the near side (Stutz et al. 2008), producing blueshifted emission on the eastern side and redshifted emission on the western side. While the core has been found to be slowly rotating at large scales (> 2500 au) (Frerking et al. 1987, Saito et al. 1999, Yen et al. 2010, Yen et al.
2011), no clear rotation was found at smaller radii (< 1000 au) and no kinematic signature of a disk was reported down to ∼ 10 au (Yen et al. 2015, Yen et al. 2018). Recent observations of the hourglass-shaped magnetic field at small scales have suggested that B335 is an excellent candidate for magnetically regulated collapse (Maury et al. 2018). In this work, observations of the molecular lines C 17 O (1-0) and C 18 O (1-0), which trace the dense circumstellar gas of the inner envelope of B335, are presented along with the 12 CO (2-1) line emission, tracing the outflow cavity. The molecular line profiles are analyzed and interpreted, giving new constraints on the gas kinematics close to the protostar. Observations and data reduction Observations of the Class 0 protostellar object B335 were carried out with the ALMA interferometer during the cycle 4 observation period from October 2016 to September 2017, as part of the project 2016.1.01552.S (PI A. Maury). Throughout this work, it is assumed that the centroid position of B335 is at α = 19:37:00.9 and δ = +07:34:09.6 in J2000 coordinates, corresponding to the peak of the dust continuum obtained from high-resolution maps (Maury et al. 2018). All lines were targeted using a combination of ALMA configurations: C 17 O (1-0) and C 18 O (1-0) were targeted using two configurations, C40-2 and C40-5, and 12 CO (2-1) was targeted using C40-1 and C40-4. Technical details of the observations are given in Table A.1. A preliminary analysis of the data was done with the product images delivered by ALMA to check whether emission was detected and to check the shape of the line profiles. The C 17 O emission was only detected in the most compact configuration (C40-2), therefore only this configuration has been used to produce the C 17 O and C 18 O maps, while 12 CO was detected in both configurations so a combination of the two data sets has been used to produce the maps. Calibration of the raw data was done using the standard script for cycle-4 ALMA data with the Common Astronomy Software Applications (CASA) version 5.6.1-8. The continuum emission was self-calibrated with CASA. Line emission was calibrated using the self-calibrated model derived from the continuum data when possible. Final images were generated from the calibrated visibilities using the tCLEAN algorithm within CASA, using Briggs weighting with the robust parameter set to 2 for C 17 O and C 18 O and to 1 for 12 CO. After imaging, the 12 CO maps were smoothed to reach the same angular resolution as C 17 O and C 18 O. The resulting map characteristics are given in Table 1. Figure 1 shows the moment 0 map of the C 17 O (1-0) emission in red contours. The mean radius of the 3σ emission is 860 au, indicating that it traces the dense envelope. The top image shows in color scale the dust continuum map observed at 110 GHz, for emission over 3σ. The bottom image shows in color scale the moment 0 map of the 12 CO (2-1) emission. The 12 CO emission probes the outflowing gas; it is thus confirmed that the C 17 O emission traces the envelope and is not affected by the outflow. Results and analysis In order to understand the dynamics of the gas that is being probed, C 17 O spectra were extracted every 0.5 pixel of the emission cube, producing the spectral map shown in Fig. 2. The line profile patterns show two distinct velocity components with a dip centered around the systemic velocity (8.3 km s −1 ).
Their respective intensities vary depending on the direction of the offset from the continuum peak, with the blue component being more intense in the eastern part of the core, while the red component dominates in the western part. This behavior holds for all the detected hyperfine components. A clear broadening of the line can be observed near the dust continuum emission peak and in the north-eastern region of the core. The former can be due to natural thermal broadening of the two velocity components as the temperature rises in the center of the object, while the latter might be the consequence of the overlapping of the two components due to other dynamical processes. (Fig. 1 caption: contours show emission at −3, 3, 5, 10 and 30 σ, where σ is 10.0 mJy/beam. Top: intensity shows the dust continuum emission map at 110 GHz for emission over 3σ, where σ is 8.56×10 −2 mJy/beam. Bottom: intensity shows the 12 CO (2-1) moment 0 emission integrated over the velocity range 1.4-16.2 km s −1 , for emission over 3σ, where σ is 2016.9 mJy/beam. The two arrows show the direction of the E-W outflow.) The possibility of the dip being caused by interferometric filtering is discarded since the recovered emission size is of the order of the Largest Recoverable Scale (see Table 1). Moreover, because C 17 O is a rare isotopologue, it is not expected to be abundant at the largest scales, and therefore no emission should be filtered out at the systemic velocity. Figure 3 shows the velocity channel maps of the C 17 O (1-0) emission, covering a velocity range from 7.7 to 9.1 km/s and only showing the main hyperfine component (Fig. B.1 shows a range covering from 5.0 to 9.6 km/s, showing the two observed hyperfine components). It can be seen that the blue- and redshifted emission are confined to the east and west regions, respectively, suggesting that the two components are probing gas with different dynamics. Three independent methods have been used to investigate the origin of the C 17 O spectral profiles and whether the two distinct peaks could be produced by optically thick line emission. In the following sections, the analysis of these methods and the modeling of the velocity field in the B335 inner envelope are presented. Line opacity estimation The maximum opacity at the center of the source has been estimated from the H 2 column density, assuming a standard C 17 O abundance, using Eq. 1 (Jansen 1995), where the spectroscopic parameters of the transition are taken from Gordon et al. (2017) and Müller et al. (2005), c is the speed of light, ν is the frequency of this transition, N 0 H 2 = 3.1 × 10 22 cm −2 is the peak column density of H 2 in B335 at a radius of 3600 au (Launhardt et al. 2013), [C 17 O] = 5 × 10 −8 is the C 17 O abundance relative to H 2 (Thomas & Fuller 2008) and ∆V ≈ 1 km s −1 is the average observed linewidth for the two components together. Using these values the obtained opacity is τ 0 = 0.77, typical of an optically thin line. Intensity ratio Because of their similar mass and molecular structure, C 17 O and C 18 O should probe gas under similar physical conditions. The [C 17 O]/[C 18 O] isotope ratio does not appear to be affected by fractionation and, if the emission from both C 17 O and C 18 O is associated with dense gas shielded from external ultraviolet radiation, selective photo-dissociation probably does not affect the relative abundances (van Dishoeck & Black 1988).
Thus, the only difference in the emission from these two isotopologues should result from opacity effects, because C 18 O is a factor of 3.6-3.9 more abundant than C 17 O (Penzias 1981, Jørgensen et al. 2002). The ratio of integrated intensities is much less sensitive to linewidth effects than the ratio of peak intensities, therefore we use the former to rule out any abundance effect on the C 17 O emission. We produced beam-matched maps for C 17 O and C 18 O to compare the gas at similar scales in both isotopologues. The obtained synthesized beams are given in Table 1. Intensities are integrated over the two velocity ranges of 4.8-6.2 km s −1 and 7.4-9.4 km s −1 for C 17 O, hence taking into account the two observed hyperfine components, and over 7.4-9.4 km s −1 for C 18 O. The fact that both lines emit in the same range of velocities and are spatially coincident indicates that they are probing the same reservoir of circumstellar gas. The integrated intensity ratio was computed at each pixel as the ratio of the C 17 O to the C 18 O integrated intensities. Figure 4 shows the obtained integrated intensity ratio map, with values ranging from 0.15 to 0.50 and a mean value centered at 0.28. We note that the intensity ratio is quite homogeneous over most of the extent of the emission, but gets larger in the northern and south-western regions. This is attributed to a line broadening of the C 17 O in those regions when compared to the C 18 O emission (see the C 18 O spectral map, Fig. C.1). The origin of this broadening is unknown, but we attribute it to complex dynamical processes that might be taking place in these regions. The expected intensity ratio for optically thin emission is the inverse of the abundance ratio quoted above, i.e. ∼0.26-0.28. Our observations are therefore in general agreement with the expected ratio if both transitions are optically thin. No increase of the intensity ratio is seen in the observations towards the center of the source, where the opacity is expected to be higher, further supporting this hypothesis. Note that this is also in agreement with the conclusions reached from the analysis of single-dish observations of the C 18 O (2 − 1) and C 17 O (2 − 1) lines (τ C 18 O (2−1) ∼ 0.8, Evans et al. 2005).
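This consistency argument reduces to a one-line calculation: for optically thin emission, the integrated-intensity ratio should be close to the inverse of the C 18 O/C 17 O abundance ratio. The following minimal sketch uses only the numbers quoted in the text and is an illustration, not part of the original reduction.

```python
# Expected C17O/C18O integrated-intensity ratio for optically thin emission,
# using the abundance ratio range quoted in the text (C18O/C17O = 3.6-3.9).
abundance_ratio = (3.6, 3.9)
expected_ratio = tuple(1.0 / r for r in abundance_ratio)   # ~0.26-0.28
observed_mean = 0.28                                       # mean of the ratio map (Fig. 4)

print(f"expected ratio range: {expected_ratio[1]:.2f}-{expected_ratio[0]:.2f}")
print(f"observed mean ratio:  {observed_mean:.2f}")
```

Any significant C 17 O opacity would push the observed ratio above this range, which is not seen, in particular towards the continuum peak.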
The mean average, velocity for the two components are 8.1 and 8.6 km s −1 respectively. The velocity dispersion maps (middle images of Fig. 5) show a mean velocity dispersion for the two components of 0.7 and 0.5 km s −1 , which get broader, up to 0.9∼1, closer to the center of the object where the two components overlap (see central spectra in Fig. 2). The opacity maps (bottom images of Fig. 5) show that the opacity is generally less than 1, and only goes up to 3 in some specific pixels. These larger values also have a very large error associated, of the order of the value itself, so they are not significant and are not shown in the plots. Figure 6 shows the histograms for the opacity values for both fitted components. The average opacity has been estimated from the HfS modeling and is found to be τ bs = 0.464 and τ rs = 0.474, for the blue-and red-shifted components respectively, which is in concordance with the upper limit estimated before. Linewidths and kinetic temperature The main radius probed with the C 17 O emission was computed from the 3σ contour and found to be 860 au. A region enclosing radii from 100 (about half of the FWHM beam) to 860 au has been chosen to analyze the gas kinematics. The kinetic temperature of the gas has been estimated from the formula for dust temperature in an optically thin regime assuming only central heating by the B335 protostar derived by Shirley et al. (2011). The underlying assumption is that dust and gas are expected to be in thermal equilibrium, being coupled via collisions at the densities probed here (> 10 5 cm −3 ). We use Eq. 2 (Evans et al. 2015) which we adapted to the new distance of 164.5 pc. The gas kinetic temperature was computed for the two radii probed, with values in range of T k (100 au) = 46 K and T k (860 au) = 20 K. The observed linewidths obtained in the previous section have been compared with the expected thermal linewidth, given by: where T k is the gas kinetic temperature, m is the molecular mass (29.01 amu for C 17 O) and k b is the Boltzmann constant. The expected thermal linewidth for C 17 O has been computed for the temperatures at the two different radii: ∆v th (100 au) = 0.27 km s −1 and ∆v th (860 au) = 0.17 km s −1 . The observed linewidths are larger than the thermal ones for both velocity components. This indicates that the observed linewidth is the result of the thermal component plus a non-thermal contribution (e.g. turbulence and large-scale motions like infall and outflow, v 2 obs = v 2 th + v 2 non−th ). The non-thermal contribution of the line has been computed for both velocity components and the results are shown in Table 2. The non-thermal component at the inner and outer radius are indistinguishable because of the limited spectral resolution (0.2 km s −1 ). The sound speed, c s , is in the 0.2-0.3 km s −1 range for temperatures between 20 and 46 K and γ ∼ 7/5. This means that the non-thermal contribution to the linewidth is supersonic. Simulations have shown that in star forming cores systematic large-scale motions (such as infall) can contribute significantly (∼50%) to the non-thermal component of the linewidth (Guerrero-Gamboa & Vázquez-Semadeni 2020). We can do a rough estimation of the contribution from infall, by measuring the infall velocity difference from two different radii, ∆σ = , G is the gravitational constant and M B335 (r) is the mass enclosed at the considered radius r. 
Possible origins of the observed gas motions Our ALMA observations of the C 17 O (1-0) emission suggest optically thin emission at all scales probed by the observations (100-860 au). Overall, the spatial extent of the C 17 O emission is similar to that of the dust continuum emission, but C 17 O is less peaked and decreases more smoothly with decreasing density outwards: this suggests that the gas traced with C 17 O is not mostly related to the outflow cavity. The C 17 O emission maximum is not coincident with the dust continuum peak position, which might suggest slight abundance variations of C 17 O at high densities close to the protostar. Nevertheless, the prominence of the double-peaked velocity pattern does not correlate with the intensity of the dust continuum emission, showing that those profiles are not due to red-shifted absorption against a strong continuum and/or C 17 O source. These two velocity components thus trace distinct gas motions. A simple isotropic inside-out envelope collapse cannot easily reproduce the gas motions we observe in the B335 envelope. In this section, we discuss various hypotheses for the physical origin of the observed gas motions. Despite being isolated, B335 is embedded in an extended molecular gas cloud of density ∼ 10 3 cm −3 (Frerking et al. 1987). However, C 17 O is a rare isotopologue which is mostly confined to a high-density central region, and its low abundance at large scales would prevent observing such a tenuous layer, suggesting there is no missing flux coming from large-scale C 17 O emission. To confirm that this is the case, we estimated the missing flux from the C 18 O (1-0) ALMA observations by comparing them with the 45 m Nobeyama data of the same transition presented in Saito et al. (1999). Our C 18 O map was smoothed to match the beam size of the Nobeyama telescope, which at the frequency of this transition (109.782 GHz) is 16". We obtained the spectrum in a region of one beam size around the center of B335 and transformed the flux density to brightness temperature using the Rayleigh-Jeans law. We obtained a peak temperature of T MB = 0.79 ± 0.08 K and an integrated intensity of ∫T MB dv = 0.53 K km/s. This means that our ALMA observations are recovering around 14 % of the total flux detected with single-dish data.
However, because C¹⁷O is expected to be much more compact than C¹⁸O, we expect the missing flux to be much less for the former. Frerking et al. (1987) presented single-dish data of the C¹⁷O (1-0) transition and concluded that all their emission is coming from the center of the source, in a region smaller than the beam of their telescope (1.6′ for the C¹⁷O (1-0) transition). This extension is much smaller than the one observed for C¹⁸O (1-0) detected in both works, which is about 4′. This is consistent with the fact that C¹⁷O is much less abundant than C¹⁸O, especially at large scales, and that it is mainly tracing the core and not the envelope. Therefore, we expect the missing flux of C¹⁷O in our ALMA data to be much less, and to recover at least twice the recovered fraction of C¹⁸O, i.e. around 30%. We also note that while our observations might be missing flux, this should not be enough to produce the deep dip in our data, and it cannot explain the structured velocity pattern we observe in the spectral maps, since the missing flux would be at the systemic velocity and would not be able to completely absorb only one of the two components at different offsets. A possible cause for observing blueshifted and redshifted gas motions in protostellar envelopes could be organized core rotation. Our observations do not support this hypothesis, as they do not show a clear velocity gradient in the equatorial plane, where rotation motions would mostly contribute to the observed velocity field. Instead, both redshifted and blueshifted velocities are observed in both the northern and southern regions (see Fig. 4). While rotation motions have only been detected at larger envelope radii in B335 (> 2500 au, Saito et al. 1999; Yen et al. 2011), we stress that the conclusion regarding the absence of small-scale rotation (e.g. Yen et al. 2010) should be further investigated using the new insights on gas motions in the envelope that our observations have uncovered. B335 has a well-studied outflow, with its axis close to the plane of the sky and with a well-defined X-shaped biconical morphology in ¹²CO (Bjerkeli et al. 2019). Although some contamination by gas from the outflow cannot be completely ruled out, we present here arguments supporting the hypothesis that our C¹⁷O maps can be used to study the envelope gas kinematics. C¹⁷O is a rare isotopologue which is known to trace dense envelope gas and is not expected to be detected in the more tenuous outflow cavities. The morphology of the C¹⁷O emission is very different from the one observed in typical outflow cavity tracers, such as C₂H (Murillo et al. 2018) or ¹²CO (see bottom image in Fig. 1 and Bjerkeli et al. 2019). Moreover, no spectral signature of outflow is observed, such as the large wings observed in ¹²CO (Bjerkeli et al. 2019), and the maximum velocity shift from the rest velocity remains quite small (±1 km s⁻¹). Therefore, the kinematic pattern observed in our C¹⁷O maps cannot be produced by outflow alone, and it does provide strong evidence of distinct velocity contributions from the gas in the inner region of the B335 protostellar envelope. The C¹⁷O velocity maps in Fig. 5 show that the largest gas velocities are found ∼1″ from the center along the two northern outflow cavity walls, tracing gas at reverse velocities with respect to the outflow velocities.
Considering the 10° inclination of the system, the close correspondence of the spatial distribution of the C¹⁷O emission with that of the dust and other typical dense gas tracers, and the fact that the linewidths of the two velocity components are in general agreement with the expected linewidths from infall motions, the most likely hypothesis is that these high-velocity (±1 km s⁻¹) features trace accreting gas flowing along the outflow cavity walls onto the central protostar. The peak velocities tentatively increase towards the central protostellar object for the features along the eastern outflow cavity walls, but no clear velocity gradient could be resolved in the current observations: additional observations with better spatial resolution may allow this hypothesis to be tested further. Finally, we note that the strongly redshifted emission at the North-East was already detected in the ALMA C¹⁸O observations reported by Yen et al. (2015) (see Fig. 2 in their work). Dust continuum observations with ALMA at various millimeter and sub-millimeter wavelengths all show a striking excess of dust emission associated with the outflow cavity walls. While this could be a temperature effect due to increased heating of these walls by the central protostar, it could also be a true density increase in compact features easily picked up by interferometric observations. Magnetized models of protostellar formation (for a review see Zhao et al. 2020) suggest cavity walls could be preferential sites to develop accretion streamers, as observed in the non-ideal magneto-hydrodynamic (MHD) models of protostellar accretion and outflow launching (Machida 2014, or Figures 8 and 9 in Tomida et al. 2012). Indeed, these are locations where the poloidal magnetic field is mostly parallel to the inflow direction and therefore would exert less magnetic braking on material infalling along the walls. This hypothesis is also in agreement with the dust polarization observations of magnetic field lines in B335 (the redshifted gas feature we observe along the north-eastern cavity wall is associated with highly organized B-field lines aligned with the tentative gas flow), and with the scenario of magnetically-regulated infall proposed in Maury et al. (2018). We note that the observed non-thermal components of the linewidths are found to be supersonic. If the gas motions we detect indeed trace localized accretion motions, the accretion itself could be supersonic. While the development of supersonic filamentary accretion features was reported in numerical models of protostellar formation (Padoan et al. 2005; Banerjee et al. 2006; Kuffmeier et al. 2019), and observations suggested supersonic infall is occurring in a few protostellar envelopes at larger scales (> 1000 au, Tobin et al. 2010; Mottram et al. 2013), this is the first time such anisotropic supersonic infall motions are tentatively reported in the B335 inner envelope. Impact on the characterization of protostellar mass accretion rates In the following, we briefly discuss the implications of our work if the localized accretion features detected in B335 are common while remaining mostly unresolved in many observations of accreting protostars. Self-similar solutions for analytical models of the collapse of an isothermal sphere, including only thermal pressure and gravity, predict typical mass accretion rates of the order of ∼10⁻⁴ M⊙ yr⁻¹ (Larson 1969; Penston 1969; Shu 1977). Turbulent models and MHD numerical models have produced slightly lower mass accretion rates, ∼10⁻⁶–10⁻⁵ M⊙ yr⁻¹.
Episodic accretion with highly variable rates (from Ṁ ∼ 10⁻⁵ M⊙ yr⁻¹ down to Ṁ < 10⁻⁶ M⊙ yr⁻¹) is often observed in both hydro and MHD numerical models of protostellar formation, in the accretion of envelope material onto the disk and the protostar itself (Lee et al. 2021), and of disk material onto the central growing protostar (Dunham & Vorobyov 2012; Vorobyov & Basu 2015). Robust observational estimates of protostellar accretion rates are crucial to distinguish between models, but also to shed light on several open questions on star formation, since they are key quantities for our interpretation of the protostellar luminosities and of the typical duration of the main protostellar accretion phase (Evans et al. 2009; Maury et al. 2011). Indeed, observations may have revealed a discrepancy between the observed protostellar bolometric luminosities and the protostellar accretion rates: this is the so-called 'luminosity problem' (Kenyon et al. 1990). Protostellar accretion rates derived from molecular line emission, and more particularly from the modeling of inverse p-Cygni profiles with analytical infall models, should produce luminosities 10-100 times larger than the typically observed bolometric luminosities (for a review, see Dunham et al. 2014). Observations of the molecular line profiles in B335 (Evans et al. 2005, 2015) have been used to fit models of protostellar infall suggesting Ṁ ∼ 6.2 × 10⁻⁶ M⊙ yr⁻¹ (assuming an effective sound speed of 0.3 km/s, Evans et al. in prep.), although arguably these estimates are associated with large error bars. The observed bolometric luminosity of B335 lies an order of magnitude below the accretion luminosity L_acc that such accretion rates should produce (Evans et al. 2015): despite being a prototype for protostellar infall models, B335 also suffers from the luminosity problem. If the 'true' protostellar mass accretion rate stems from localized collapsing gas at small scales, potentially affected by unresolved multiple velocity components, the true linewidths associated with the infalling gas feeding the growth of the protostar would be quadratically smaller than the ones measured at larger scales, where these components would be entangled (or where the individual velocity components are interpreted as being part of a single velocity component with a central dip due to optical thickness). Smaller intrinsic linewidths of the infalling gas at small radii may result in a smaller effective sound speed and hence a lower mass accretion rate derived from the analytical infall models, since Ṁ ∝ c_s³/G. It is therefore possible that the observed bolometric luminosity of B335 is, ultimately, compatible with its accretion luminosity L_acc. In the revised B335 scenario we propose here, the mass accretion rate onto the protostar could be dictated by localized supersonic infall rather than by the large-scale infall rate of the envelope: this may open a window to partially solve the 'luminosity problem', although episodic vigorous accretion would probably still remain necessary to explain the relatively short statistical lifetimes of the Class 0 phase. Future observations should be used to carry out a more detailed characterization of whether a significant fraction of the final stellar mass could be fed to the central object through highly localized anisotropic infall.
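A back-of-the-envelope illustration of the Ṁ ∝ c_s³/G sensitivity discussed above (a sketch, not the authors' calculation): the snippet evaluates the classical Shu (1977) rate Ṁ ≈ 0.975 c_s³/G for effective sound speeds of 0.2 and 0.3 km/s; the 0.975 prefactor is the standard Shu value and is not quoted in the text.

```python
G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30       # solar mass [kg]
YEAR = 3.156e7         # one year [s]

def shu_accretion_rate(c_s_kms):
    """Mass accretion rate Mdot ~ 0.975 c_s^3 / G (Shu 1977), in M_sun per year."""
    c_s = c_s_kms * 1e3
    return 0.975 * c_s**3 / G * YEAR / M_SUN

for c_s in (0.2, 0.3):
    print(f"c_s = {c_s} km/s -> Mdot ~ {shu_accretion_rate(c_s):.1e} Msun/yr")
# ~1.9e-06 Msun/yr for 0.2 km/s and ~6.3e-06 Msun/yr for 0.3 km/s (close to the
# 6.2e-06 value cited above), illustrating the cubic sensitivity to the sound speed
```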
Recently, Pineda et al. (2020) reported the detection of an 'accretion streamer' connecting the dense core to disk scales, and found a streamer infall rate of ∼10⁻⁶ M⊙ yr⁻¹, of the same order of magnitude as the global mass accretion rate inferred from molecular line observations in B335. Hence, it is possible that many previous studies have failed to grasp the full complexity of the gas motions making up the accretion onto the central protostars. It is the high angular and spectral resolution, along with the great sensitivity, of the presented ALMA observations that allowed the two distinct velocity components to be detected in the line emission profiles in B335. More observations of optically thin emission at the same small scales and spectral resolution in different protostars are needed in order to determine if these localized gas motions are common. Moreover, more refined protostellar infall models will have to be developed in the future, to take these new small scales into account and to include more complex geometries with, e.g., asymmetric structures and preferential accretion along outflow cavities. Summary ALMA observations of the C¹⁷O emission tracing gas kinematics in the B335 envelope have been presented in this work. It is shown that the line emission exhibits widespread double-peaked profiles. From the analysis, the following conclusions have been obtained:
- Derivations of the line opacity have shown that the emission of the line is optically thin and therefore the observed double-peaked profiles cannot be produced by self-absorption. Therefore, inverse p-Cygni profiles coming from a symmetrical inside-out collapse cannot fully explain the observed complex velocity field.
- After discarding filtering of large-scale emission or other types of motions, such as rotation or outflow, it is determined that only distinct gas motions contributing to the same line-of-sight could explain the observed line profile pattern.
- Linewidth analysis has determined that the two velocity components are compatible with infall motions, and could be due to localized infall in preferential directions.
The main hypothesis presented is that the collapse of the envelope onto the protostar is occurring along the equatorial plane but also along the outflow cavity walls, where the magnetic field topology is more favorable. More observations at similar scales and spectral resolution are needed to determine if these double-peaked profiles are common in protostellar objects at similar evolutionary stages. Moreover, further modeling of the B335 envelope with more complex collapse models, such as anisotropic collapse, is needed to determine the exact physical origin of the observed velocity field.
[Figure caption] C¹⁷O (1-0) emission in the inner 900 au, centered on the dust continuum emission peak. The whole map is 5.5" × 5.5" and each pixel corresponds to 0.5" (∼82 au). The green spectrum identifies the spectrum at the peak of the continuum emission and the blue line indicates the systemic velocity (8.3 km s⁻¹).
8,857.8
2021-07-05T00:00:00.000
[ "Physics" ]
Physico-chemical investigation of a polyherbal formulation - Vidangatandulaadi choorna Vidangatandulaadi Choorna is a polyherbal formulation consisting of seven ingredients. Trivrith (Operculina turpethum (Linn.) Silva Manso) is the chief ingredient responsible for the purgative action of the formulation. This yoga is mentioned in Kalpasthana of Ashtangahridaya, intended for virechana (purgation). It is useful in Kapha-vatha disorders. Even though many kinds of research have been done to identify the physicochemical constituents of the individual drugs in the formulation, no studies were done to identify the physicochemical properties of the formulation itself. This analysis helps in understanding the mechanism of the different pharmacological actions of the formulation. Hence, a physico-chemical study of Vidangatandulaadi Choorna, along with high-performance thin-layer chromatography (HPTLC) fingerprinting, is done to fix the standards. All the drugs included in the formulation were identified by a botanist, and the formulation was prepared according to the standards mentioned for the preparation of Choorna in the Ayurvedic Pharmacopoeia of India. The formulation is rarely encountered, but it has shown significant action in dyslipidemia in folklore practices. As there are no standards mentioned for this formulation, the results observed in the present study may be considered suitable. The data obtained from the physicochemical investigation, the high-performance thin-layer chromatography profile and ICP-MS (Inductively Coupled Plasma Mass Spectrometry) could be used as the standards for the present formulation under study. Keywords: High-performance thin-layer chromatography, Inductively Coupled Plasma - Mass Spectrometry, Physico-chemical, Vidangatandulaadi Choorna. INTRODUCTION Vidangatandulaadi Choorna is one of the clinically significant formulations used in the management of dyslipidemia. It is mentioned in Ashtangahridaya, Kalpasthana as a Nityavirechaka (mild laxative) (Ashtangahridaya of Vagbhata, 2002).
The said medicine is used as a traditional remedy in managing dyslipidemia. No studies have been published related to Vidangatandulaadi Choorna exploring its physico-chemical properties and pharmacological effects. So an attempt is made to understand the physico-chemical properties of the drug along with its HPTLC profile to fix the standards of the drug, which may lay a future scope for further studies related to this drug. The results of the study can be used to explain the therapeutic benefits of the medicinal formulation. MATERIALS AND METHODS All the drugs were procured from Amrita Life, the manufacturing unit under Amrita School of Ayurveda, Vallikavu, Kollam. The authenticity of the drugs was confirmed by a botanist and experts in the Department of Dravyaguna, Amrita School of Ayurveda, Vallikavu, Kollam. The Choorna was prepared based on the API. The ingredients of the Choorna are mentioned in Table 1. Physico-chemical parameters such as loss on drying, water-soluble extractive, alcohol-soluble extractive, pH of a 10% solution, total ash and acid-insoluble ash were determined as per API standard guidelines (Department Of Ayush, 2007b). HPTLC fingerprinting was done. ICP-MS was used to estimate the heavy metal contents in the prepared drug. Determination of pH Procedure - Ten grams of total ash were dissolved in 100 ml of demineralised water. The pH of this 10% solution was measured with a digital pH meter. Determination of Water-soluble extractive Procedure - Five grams of powdered drug were placed in a round-bottom flask and mixed with 100 ml of chloroform water. The mixture was kept for 24 h with occasional shaking. After that, it was filtered and the filtrate collected in a tared clean beaker. The residue was weighed after evaporating to dryness. Determination of Alcohol-soluble extractive Procedure - Five grams of powdered drug were placed in a round-bottom flask and mixed with 100 ml of alcohol. The mixture was kept for 24 hours with occasional shaking. It was filtered and the filtrate collected in a tared clean beaker. The filtrate was evaporated to complete dryness, and then the remnant was weighed. Determination of Total ash Procedure - Two grams of air-dried drug were accurately weighed and placed in a tared crucible. The crucible was heated gently in an incinerator, and then the drug was incinerated to ash until it was free from any organic matter. The crucible was kept in a desiccator, cooled and weighed with its contents. The percentage of ash with respect to the air-dried drug was calculated. Determination of Acid-insoluble ash Procedure - Total ash was prepared from two grams of dried drug. The whole total ash was dissolved in 25 ml of dilute HCl and boiled for 5 minutes. The insoluble portion was filtered through ashless filter paper and washed with demineralised water. The dried filter paper with its contents was incinerated in a tared silica crucible. The crucible with the incinerated contents was weighed, and the percentage of acid-insoluble ash was calculated. ICP-MS (Inductively Coupled Plasma - Mass Spectrometry) Procedure - About 200-500 mg of the sample was accurately weighed and transferred into a cleaned microwave digestion system (MDS) tube. An adequate amount of concentrated HNO3, concentrated HCl and a few drops of H2O2 were added to the MDS tube. The MDS tube was then kept in the microwave digestion system for complete digestion of the solid sample to liquid for one hour at a temperature of 180 °C. The resultant liquid sample was carefully transferred to a 50 ml standard flask and diluted to 50 ml.
The diluted sample was directly aspirated into the ICP-MS instrument, and the result was obtained (Wilschefski and Baxter, 2019). Observation on Physicochemical parameters of Vidangatandulaadi Choorna The analysis of the physicochemical parameters of Vidangatandulaadi Choorna revealed the following observations. It appears as a dark brown powder. The loss on drying is estimated at 3% w/w. The pH of a 10% solution of the content is 5.23. The water-soluble extractive is 28.23% w/w, the alcohol-soluble extractive is 31.58% w/w, the ash value is 5.80% w/w and the acid-insoluble ash is below the detection limit. Observation on HPTLC analysis of Vidangatandulaadi Choorna The peak display (densitogram) of the Vidangatandulaadi Choorna sample at 254 nm is shown in Figure 1. The peak display (densitogram) of the sample at 366 nm is shown in Figure 2, with the corresponding peak data given in Table 3. Observations on ICP-MS (Inductively Coupled Plasma - Mass Spectrometer) The presence of heavy metals observed by the Inductively Coupled Plasma - Mass Spectrometer is given as follows. Lead accounts for the maximum value, reaching about 1.36 mg/kg, and Mercury about 0.65 mg/kg. Arsenic is present at 0.18 mg/kg and Cadmium at 0.12 mg/kg. Physico-Chemical study The degradation time of the plant material indicates the quantity of moisture content in it. Powdered plant material degrades quickly due to the growth of microbes and fungi if the moisture content is high. The loss on drying was only 3% for the present sample, which ensures a reasonable shelf life. The presence of minerals and silica in the plant material was indicated by the total ash value, which was obtained as 5.80%. The amount of acid-insoluble siliceous matter present in the plant was under the detectable limit. The 28.23% w/w water-soluble extractive value indicated that Vidangatandulaadi Choorna contains a substantial proportion of water-soluble constituents. HPTLC Analysis The HPTLC fingerprinting profile of Vidangatandulaadi Choorna was developed in a Toluene: Ethyl acetate: Formic acid: Methanol (7:5:1:0.5) solvent system. This was taken as the standard densitogram of Vidangatandulaadi Choorna. The densitogram showed ten peaks at 254 nm and eight peaks at 366 nm. Each peak represents a chemical entity. About four spots are seen repeated at both 254 and 366 nm. There is scope for further analysis to find out the chemical compounds represented by the peaks. To find out the chemical nature represented by the peaks, TLC-MS may be utilised in the present solvent system, Toluene: Ethyl acetate: Formic acid: Methanol (7:5:1:0.5). At 366 nm there are fluorescent spots with green, blue, red and pink colours. Three distinct brown spots are present; these may indicate compounds with unsaturation. ICP-MS (Inductively Coupled Plasma - Mass Spectrometer) The procedure estimates the amount of heavy metals present in the given sample. As per the WHO permissible limits for heavy metals, Mercury is 1 ppm, Lead is 10 ppm, Cadmium is 0.3 ppm, and Arsenic is 3 ppm (Department Of Ayush, 2007c). In our study, all these values were found to be within the permissible limits, which supports the quality of the sample. CONCLUSION Vidangatandulaadi Choorna was studied for understanding its physicochemical parameters and HPTLC profile. As there are no standards mentioned for this formulation, the physicochemical data and HPTLC profile evolved from the present study could be used as standardisation parameters for the formulation.
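As a small numerical recap (not an additional analysis), the sketch below restates the comparison of the reported heavy-metal contents with the permissible limits cited above, treating mg/kg as equivalent to ppm.

```python
# Reported heavy-metal contents (mg/kg ~= ppm) versus the permissible limits cited above
measured = {"Lead": 1.36, "Mercury": 0.65, "Arsenic": 0.18, "Cadmium": 0.12}
limits   = {"Lead": 10.0, "Mercury": 1.0,  "Arsenic": 3.0,  "Cadmium": 0.3}

for metal, value in measured.items():
    status = "within limit" if value <= limits[metal] else "EXCEEDS limit"
    print(f"{metal}: {value} ppm (limit {limits[metal]} ppm) -> {status}")
# all four metals fall below the cited permissible limits
```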
2,172.8
2020-12-21T00:00:00.000
[ "Chemistry", "Medicine", "Materials Science" ]
Evolution in Time of Radiation Defects Induced by Negative Pions and Muons in Crystals with a Diamond Structure Evolution in time of radiation defects induced by negatively-charged pions and muons in crystals with diamond structures is considered. Negative pions and muons are captured by the nucleus and ionize an appropriate host atom, forming a positively-charged radiation defect in a lattice. As a result of an evolution in time, this radiation defect transforms into the acceptor center. An analysis of the full evolution process is considered for the first time. Formation of this acceptor center can be divided into three stages. At the first stage, the radiation defect interacts with a radiation trace and captures electrons. The radiation defect is neutralized completely in Si and Ge for a short time t ≤ 10−11 s, but in diamond, the complete neutralization time is very large t ≥ 10−6 s. At the second stage, broken chemical bonds of the radiation defect are restored. In Si and Ge, this process takes place for the neutral radiation defect, but in diamond, it goes for a positively-charged state. The characteristic time of this stage is t < 10−8 s for Si and Ge and t < 10−11 s for diamond. After the chemical bonds’ restoration, the positively-charged, but chemically-bound radiation defect in diamond is quickly neutralized because of the electron density redistribution. The neutralization process is characterized by the lattice relaxation time. At the third stage, a neutral chemically-bound radiation defect captures an additional electron to saturate all chemical bonds and forms an ionized acceptor center. The existence of a sufficiently big electric dipolar moment leads to the electron capture. Qualitative estimates for the time of this process were obtained for diamond, silicon and germanium crystals. It was sown that this time is the shortest for diamond (≤10−8 s) and the longest for silicon (≤10−7 s) Introduction Radiation defects in diamond and silicon are examined actively because these semiconductors are widely used as detectors and some other devices in high energy physics.The main problem of these investigations is connected with their radiation hardness (see e.g., some recent works [1][2][3][4][5][6]).Radiation defects induced by protons, neutrons and heavier particles with kinetic energies E ≥ 100 MeV are studied in most works.After slowing down to kinetic energies E less than ionization energies of host atoms in crystals, these impinging particles stop in the lattice, damaging it.If the recoil energy is higher than the lattice binding energy, a host atom will be displaced from its site.Numerical modeling of these processes is carried out, e.g., in [3,5].In [7], many types of these radiation defects in diamond are well described.The second problem is implantation of ions in crystals for preparing necessary impurity atoms.Ion implantation is a commonly-used method for modifying properties of materials in the field of microelectronics.The application for diamond is represented, e.g., in [8]. For a number of reasons, radiation defects induced by light negatively-charged particles used to be out of interest in high energy physics experiments.First, these particles are secondary particles, as a rule, and, second, they cannot inflict many lattice damages compared to protons.Nevertheless, the interaction of these particles with crystals can be very important for many different applications of electronic devices. 
We will consider in this article radiation defects induced by light negatively-charged particles like pions (π-mesons) and muons in crystals with a diamond structure. These particles do not destroy the crystal structure, like heavy particles do, but can create specific defects in the lattice. Indeed, they are captured by a nucleus, creating an impurity atom, and thus can change the electronic properties of a crystal. Negative pions and muons are relatively long-lived particles: the lifetimes of the charged pion and muon are τ_π± ≈ 2.6 × 10⁻⁸ s and τ_µ ≈ 2.2 × 10⁻⁶ s, respectively. Pions are born, as usual, when high-energy protons are stopped in a target, and muons are born after the decay of pions: π± → µ± + ν_µ, where ν_µ is a muon antineutrino for the negative muon and a neutrino for the positive one. This picture can be observed in cosmic rays. Negatively-charged pions and muons are stopped in a medium very effectively because of the capture by nuclei. The capture mechanism differs for pions and muons, but the result manifests itself in the same way in the electronic properties, because they finally form the same acceptor impurity. Consider the capture of negative pions by stable nuclei of the main semiconductors: C, Si and Ge. In diamond, we have only one stable isotope, C12, and the capture of a negative pion gives rise to the boron acceptor: C12 + π⁻ → B11 + n, where n is a decay neutron. The boron nucleus spin is I = 3/2. In silicon, the dominant stable isotope captures a negative pion analogously, Si28 + π⁻ → Al27 + n. Processes in germanium are more complicated, because it has only two stable isotopes, with mass numbers 70 and 72 (21.2% and 22% in nature, respectively [9]), which can capture a negative pion and decay to an appropriate stable gallium isotope. Therefore, we have: Ge70 + π⁻ → Ga69 + n and Ge72 + π⁻ → Ga71 + n. Both gallium isotopes possess a spin equal to 3/2. All the isotopes B11, Al27, Ga69 and Ga71 are stable, and the capture of negative pions irreversibly changes the concentration of the main acceptor impurities in semiconductors. The capture process of a negatively-charged muon in crystals strongly differs from the negative pion capture process. Consider this difference in more detail. Positively- and negatively-charged muons (µ⁺ and µ⁻) are widely used for research of condensed matter in many different areas, for the simulation of the behavior of hydrogen-like light-element impurities and of chemical processes with atomic hydrogen (see e.g., [10]). The application of muons for materials investigation has become possible due to the well-developed µSR technique, based on the possibility of monitoring the muon magnetic moment in the sample. Negatively- and positively-charged muons (µ∓) are unstable leptons with spin 1/2.
The negatively-charged muon (µ⁻) decays according to the scheme µ⁻ → e⁻ + ν_µ + ν̄_e, where ν_µ and ν̄_e are the muonic neutrino and the electronic antineutrino, respectively. The escape probability of a decay electron depends on the angle between the electron momentum direction and the average muon spin s, which gives the possibility to study the local fields of a target. A muon has a relatively long lifetime of τ_µ ≈ 2.2 × 10⁻⁶ s. The large lifetime allows investigating with high precision processes with a characteristic time t < 10⁻⁵ s, which makes the µSR technique suitable for material property studies, well comparable with the possibilities of the widely-applied methods of NMR and ESR. The behavior of µ⁺ and µ⁻ in a medium is radically different. From the chemical point of view, the positively-charged muon is a light-element impurity modeling a light hydrogen isotope. The negatively-charged muon cascades into the ground 1s-state, forming a muonic atom (µ-atom). The mass of a muon equals 207 times the mass of an electron, and therefore its binding energy with an atomic nucleus is 207 times larger than that of the electron. After a muon capture, much energy is released, leading to a high ionization of the target atom due to the emission of Auger electrons. Further, the Auger electrons of the target are captured by the positively-charged radiation-induced defect. Due to the high muon mass, the negative muon screens the nuclear charge Z, which effectively becomes Z − 1. After defect neutralization, a replacement impurity is formed, or a muonic atom, similar to an isotope of the atom with nuclear charge Z − 1. This fact has been well known since the initial stage of muon research (see e.g., [11]) and gave rise to the foundation of the muon method of materials research (µSR). A systematic study of the formation of impurities with a nuclear charge equal to Z − 1 in condensed matter was carried out at the early stages of µSR research [12][13][14]. The muonic atom formed inside a semiconductor lattice models an acceptor center. For example, in diamond (Z = 6), the negative muon, as a result of capture by a nucleus, forms a pseudo-boron, or muonic boron, which can be designated as µB. In Si (Z = 14) and Ge (Z = 32), the negative muon is captured by a nucleus forming the pseudo-aluminum µAl with a nuclear charge equal to Z = 13 and the pseudo-gallium µGa with a nuclear charge equal to Z = 31, respectively. These chemical elements are the main acceptor impurities in silicon and germanium semiconductors. Therefore, a radiation defect induced by a negative muon is unstable and disappears after its decay. Nevertheless, this kind of defect is very interesting because it provides the possibility to study the evolution in time of the processes considered above. The study of acceptor center properties using µ⁻ was suggested in [15]. The possibility to extract valuable information about the hyperfine structure of acceptor centers and their interactions with the lattice in different semiconductors with the help of negative muons was shown in the works [16][17][18][19]. Recently, µ⁻SR research of synthetic diamond crystals was carried out in [20][21][22][23]. We will examine the evolution in time of radiation defects induced by negative muons in the following, keeping in mind that the results are the same for negative pions as well.
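To give a feel for the energy scale behind the Auger cascade described above, a rough sketch of the hydrogen-like 1s binding energy implied by the 207-times statement is given below; reduced-mass, relativistic and finite-nuclear-size corrections are neglected, so the printed numbers are order-of-magnitude estimates and are not taken from the paper.

```python
RYDBERG_EV = 13.6        # hydrogen 1s binding energy for an electron [eV]
MUON_TO_ELECTRON = 207   # muon-to-electron mass ratio used in the text

def muonic_1s_binding_kev(z):
    """Rough hydrogen-like 1s binding energy of a negative muon on a nucleus of charge Z."""
    return MUON_TO_ELECTRON * RYDBERG_EV * z**2 / 1e3   # in keV

for element, z in [("C", 6), ("Si", 14), ("Ge", 32)]:
    print(f"{element} (Z = {z}): E_1s ~ {muonic_1s_binding_kev(z):.0f} keV")
# ~100 keV for C, ~550 keV for Si, ~2900 keV for Ge -- far above typical atomic
# ionization energies, which is why the cascade to the 1s state ejects Auger electrons
```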
The total process of this kind of radiation defect formation can be separated into two principally different stages.At the first stage, a center with a large positive charge appears after the stopping of a negative muon or pion.This center interacts with electrons of a trace, created by the charged particle, when it is decelerated in a crystal.As a result of this interaction, the positively-charged center partially compensates its charge or becomes neutralized if it is possible.At the second stage, the center with a compensated charge restores broken chemical bonds with a lattice and then forms an acceptor center.Now, we will outline briefly the main results of the first stage and show the difference between diamond and other diamond structure crystals following [24].After that, we will consider the second stage in more detail. Interaction with the Trace When a negative pion is captured by a nucleus or a negative muon is captured to the K-shell of the muonic atom, substantial energy (E ≥ 1 keV) is released.A totally ionized positively-charged center and Auger electron ionization environment appear, creating secondary electrons.This process takes short a time, t ∼ 10 −14 s.After the ionization, free electrons lose their energy; for a while, it will be of the order of the forbidden gap energy.This is a diffusion process, when ionized impurities and host atoms are neutralized, and it takes respectively a long time t ≤ 10 −10 s.All of these findings were obtained as a result of a numerical modeling of a neutralization process of muonic atoms in a kinetic approximation for diamond and silicon crystals with different concentrations of impurities [24]. A different situation is observed in diamond and silicon already at this stage.The capture of a negative muon on a silicon nucleus creates the number of free charge carriers approximately at two orders more with respect to diamond.This result is connected with a difference in the number of Auger electrons, in ionization energies of impurities and a forbidden energy band for these two crystals. Numerical calculations have shown that the recombination frequency of electrons with positively-charged ions in diamond reaches approximately 10 7 s −1 only at a respectively short interval of time 10 −10 s.In silicon, the recombination frequency reaches approximately 10 11 s −1 for the same interval of time.As a result, all ionized impurities in silicon including a muonic atom µ Al are neutralized for a very short time t ≈ 10 −11 s.The probability of the neutralization of a muonic atom µ B is less than 10 −3 for the interval t ≈ 10 −10 s.The recombination frequency in diamond sharply falls for t > 10 −10 s, and the neutralization time in this process becomes more than both a muon lifetime and a characteristic time for chemical bonds' restoration. Thus numerical modeling has shown that a positively-charged radiation defect, created by a negative muon in silicon and germanium, must be quickly neutralized before chemical bonds with the lattice can be restored.In diamond, we observe other behavior.Namely, the radiation defect must restore chemical bonds with the lattice to be positively charged.Therefore, we need to consider different initial states of the radiation defect at the second stage for diamond and other crystals with the diamond structure. 
The second stage of the radiation defect formation consists of a few steps that lead to an acceptor center formation.A neutral defect with restored chemical bonds is not an acceptor center yet, because there exist unsaturated chemical bonds.Therefore, we need to consider at least three steps of an acceptor center formation: (1) restoration of broken chemical bonds and neutralization of the radiation defect; (2) capture of a missing electron and saturation of chemical bonds of a neutral radiation defect in the lattice; formation of an ionized acceptor center; (3) formation of an acceptor center in the ground state. The first two steps are discussed in this article. Electron States of a Neutral Radiation Defect in Si and Ge The muonic impurity atom is in an exited state just after formation because its chemical bonds with host atoms are broken.In accordance with the standard idea of quantum chemistry, only electrons with the same principal quantum number can create a chemical bond if they were on an unfilled energy level of the atom.In this case, they form hybridized states.For lattices with diamond structure electron states, ns and np are represented with equal probability, where n = 2, 3 and 4 relate to C, Si and Ge, respectively.Hybridized states are formed in atomic time, but chemical bonds' formation is determined by exchange interactions that are weaker than Coulomb interactions, which form the appropriate atomic configuration. When a chemical bond is formed, a significant energy (of the order of some eV) can be emitted.In gasses and liquids, this excess energy can be transferred to the third body.This kind of energy transfer in crystal must be connected with a phonon emission.One-phonon emission with the energy of ≥1 eV in covalent crystal is impossible.Therefore, a transfer of this energy value could be realized in the case of a multi-phonon process.This kind of process have a very small probability.Thus, a radiation transition with a photon emission seems to us more preferable with respect of the other processes. In this section, we consider this process for a neutralized radiation defect in Si and Ge when three electrons are in hybridized states [25].Consider an impurity atom with the nuclear charge Z − 1, which is formed as a result of a neutralization process in an atomic time and has an atomic configuration where electrons at the external shell are in the "mixed", but not in the ground, state: (1) We assume that the electron configuration with the principal quantum number less then n is completely occupied, and the state of such electrons is described by the unperturbed wave function of the free atom.The initial state of the radiation defect in a silicon lattice is sketched in Figure 1a.The mixed state (1) forms a chemical bond, if it possesses by maximal spin S = 3/2, and a spacial part of its wave function one may represent in the view of equiprobable superposition of three Slater's determinants: where: Here, M = m + m , and m, m = 0, ±1. 
A spacial part of the wave function of the defect in a final state can be represented as a superposition of three hybridized states forming the chemical bond with host atoms of the lattice: Here, summation is carried out over all permutations P of the valence electrons of the impurity and: The unit vectors n a are directed from the impurity to nearest neighbors (along the directions of the chemical bonds).The one-particle functions ψ n a (r) are the hybridized states with directed bonds, and they can be written in a form (see e.g., [26,27]): and satisfy normalization conditions: From this condition, we can obtain relations for the coefficients in the superposition ( 6): Electron States of a Positively-Charged Radiation Defect in Diamond A positively-charged radiation defect in diamond has the effective nuclear charge Z = 5.Its atomic configuration contains only two electrons in an external (unfilled) electron shell, and they are in the "mixed" state [28]: We suppose also that the electronic configuration for the principle quantum number n = 1 is completely filled, and electronic states of the external atomic shell are described by unperturbed wave functions of a free atom.The initial state of the radiation defect in a diamond lattice is shown schematically in Figure 2a. The mixed state ( 9) forms a chemical bond with the nearest host atoms of the lattice.We express a wave function of the initial state (9) in distinguish from the state (1) in the form of superposition with all possible spin states: where: ψ 2s (r) is the wave function of the 2s-state.We assume that all p-states with different projections have equal probabilities: S = 0, 1 are the values of the total electron spin; |S, M S is the appropriate spin-state vector.Spin states with different projections are considered as having equal probabilities; so coefficients in the superposition (10) satisfy the following condition: The space part of the defect wave function in the final state must correspond to the determined value of the total electron spin S, and this can be represented in the form of the superposition of hybridized states providing a chemical bond with the lattice host atoms: where summation is carried out over all possible directions of the chemical bond of impurity valence electrons with the nearest neighbors of the lattice, The unit vectors n a are directed from the impurity to the neighbor atoms (along the direction of chemical bonds) like in (5).One-particle wave functions ψ n a (r) of hybridized states with directed bonds are determined by Equations ( 6)- (8). The final state of the charged impurity is described by a function similar to the superposition (10) where the wave functions with a determined spin must be replaced by Expression (13). Formation of the Neutral Center ( µ A A 4 ) 0 in Si and Ge A lifetime of exited states (1) and ( 10) is determined by a rate of a radiation transition in a bond state and can be calculated by using Fermi's "golden rule": The interaction operator is: where A(r) is the vector-potential of the free radiation field.Let us consider now only the term for one electron with a = 1 in the operator (14) to simplify the following calculations.In this case, matrix elements of the perturbation operator could be represented by the expression: (16) and: where the indexes are a, b, c = 1, 2, 3, 4. 
If we direct the axis z||n 1 , then the other three p-states in hybridized states (6) turn out as a result of the rotation of the state ψ n1,0 (r) in the state with a rotation moment projection equal to zero on the axes n b .In this case, we get the opportunity to calculate easy integrals incoming in Expressions ( 16) and ( 17): where R(θ a , ϕ a ) is the rotation operator, and the matrix elements (19) are determined by the second column of the rotation matrix: We put in Equation ( 18) the obvious expressions of the matrix elements of the rotation operator (see e.g., [29]). Without reduction of the generality of the calculations, we consider a matrix element only for the zprojection of a momentum operator.Therefore, we have: where: For clarity, we show some intermediate calculations.The total matrix element in Expression ( 14) consists of 72 different items corresponding to different matrix elements between states of the superpositions (2) and ( 4).However, it is enough to calculate only four of them.We give them below. The rest of the three matrix elements are determined by the other possible sets of n a for the electrons with coordinates r 1 , r 2 and r 3 : It is easy to see that permutations of electrons in the superposition (5) do not change expressions for the matrix elements ( 24)- (27).Therefore, the number of permutations in the state (5) with similar expressions reduces the total number of items by six-times. We get a very cumbersome expression for the arbitrary values of the parameters α a , β a , θ a and ϕ a .However, it is necessary to take into account that the system under consideration has a symmetry at less C 3v .In this case, the result could be essentially simplified.We examine the simplest case at first, when the system has a tetrahedral symmetry and all parameters in the hybridized states ( 6) are equal to each other: If the vector n 4 lies in the xz plane, the angles θ a , ϕ a are equal: In the case of symmetrical structure, we get the following expressions for the matrix elements: Adding up Expressions ( 30)-( 32), we get: The matrix elements for the state with M = −1 are calculated by a similar way: For the state with M = 0: After substitution of the values o Parameters ( 28) and ( 29) and adding up Expressions ( 33)-( 35), we have: For the calculation of the integral I sp , we take the appropriate wave functions of the hydrogen-like atom with an effective nuclear charge equal to Z.In accordance with Slater [30], an effective charge is determined as Z = Z − σ, where Z is the real nuclear charge and σ is a screening constant. In a silicon crystal, an aluminum µ-atom µ Al is formed.It has the principle quantum number n = 3, and appropriate calculations for µ Al give the following results: where a 0 is the Bohr radius. In a germanium lattice, a gallium µ-atom µ Ga with the principle quantum number n = 4 must be formed.The unknown value of the matrix element for µ Ga is equal to: In the calculation of a transition probability per unit time, we will take into account that at least three electrons participate in the matrix element of the operator (19), and the number of spin states in Determinant (3) is 2S + 1 = 4: Here, ω i f appropriates the transition frequency of a neutral radiation defect from the energy level of the corresponding free atom state on the energy level corresponding to a hybridized state in a lattice. 
After integration over the wavevector of photons and averaging over all angles, we get: where α = 1/137 is the fine structure constant.We have substituted here the effective charge Z Al ≈ 3.5 and Z Ga ≈ 5.0 in accordance with Slater [30].16) and ( 17) must be modified for diamond in accordance with Equations ( 11) and ( 13) as follows: Formation of Neutral Center ( and: To derive a common expression now, we consider that the Z-axis does not coincide with any of the bond directions n a .Therefore, as for ( 18) and ( 19), we have: Without the reduction of the generality of the calculations, we consider a matrix element only for the zprojection of a momentum operator. sp , (45) where sp is determined by Equation (23) for n = 2.The interaction operator (15) conserves the total spin, and we need to calculate only matrix elements for the superpositions (10) and ( 12) between states with equal total spins.For states with a total electron spin S = 1, the matrix elements of the interaction operator in Expression ( 14) are equal to zero because of the symmetry of the two-particle states (11) and (13).Therefore, contrary to calculations performed for neutral defects in Si and Ge, it is necessary to calculate matrix elements only for singlet states.Accordingly, a radiation transition in dipole approximation for triplet-states (with a maximum spin value of the ( µ B) + defect) is forbidden.Note that singlet spin states constitute only 1/4 part of all spin states under the assumption of an equal probability of a population of all spin states.This fact leads to the reduction of a total probability transition by nine-times with respect to the probability transition between triplet states. Taking into account a local symmetry C 2v of the ( µ B C 4 ) + cluster, we direct axes as shown in Figure 2b and introduce the following designations: After summation over all bonds, one obtains the expression: We can determine superposition coefficients in (6).Taking into account the relations (8) as Therefore, we have: Here, the condition 2α 2 1 + 2α 2 3 = 1 and, respectively, cos(n 1 ,n 3 ) = cos θ 1 cos θ 3 were used.Finally, the matrix element (47) can be written in the form: For the appropriate wave functions of the hydrogen-like atom with an effective nuclear charge equal to Z, the integral sp is equal to: An effective charge for a boron atom is equal to Z B ≈ 2.6 [30]. In the calculation of the probability of a transition per unit time, we shall take into account that two electrons participate in the matrix element of the operator (15).Carrying out integration over a wavevector of photons and averaging over all angles, we obtain: Here, ω sp corresponds to a transition frequency of the charged radiation defect from the energy level of the free ion ( µ B) + to the energy level of the hybridized (bound) state in the lattice ( µ B C 4 ) + . 
In accordance with the matrix element (49) and the result of the matrix element (50) calculation, a configuration factor f (θ 1 , θ 3 ) is equal to: A transition frequency ω sp and angles for a configuration factor f (θ 1 , θ 3 ) were calculated numerically in [28] by the quantum-chemical methods.The crystalline chemical environment of clusters in Figure 2 has been taken into account by the procedure [31] based on a passivation of unnatural valences on a border of the cluster by hydrogen atoms.The variation of the geometrical position of the H* atoms, if it is possible, ensures the stoichiometry of the charge distribution on the carbon atoms of the model C 5 H* 12 fragment (Figure 2a, where the B atom is substituted by the central C atom). The initial state of the radiation ( µ B) + defect in a diamond lattice is modeled by the tetrahedral structure of Figure 2a where the central carbon atom is substituted by the B atom, and the nonequilibrium length of four B-C bonds coincides with the equilibrium length of 1.523 Å for C-C bonds found by us earlier after geometry optimization of the central structural C 5 H* 12 (T d ) fragment.As a result of a structural relaxation, the defect transits into the final hybridized state |ψ Cr described by the lowest in energy triplet structure of the [BC 4 H* 12 ] + (T, C 2v ) cluster (Figure 2b).The energy of such a transition is calculated from the difference between the total energies of levels |ψ in and |ψ Cr to be equal to 1.17 eV. The angles in the cluster are equal to θ 1 ≈ 65 • , θ 3 ≈ 128 • .Calculations of a spin density at the center of the cluster give the value of the superposition parameter A 0 = √ 7/4.The effective charge for 2s and 2p states of the boron atom is Z = 2.6.Substituting the calculated values into Formulas (51) and ( 52), we obtain a numerical estimate of the radiation transition rate of the impurity center ( µ B) + into a hybridized state: The obtained value confirms the validity of the assumption on the kinetics of a charged radiation defect ( µ B) + thermalization reported in [24].The hybridization rate ( 53) is two orders of magnitude less than the rate of a non-hybridized charged center formation. The hybridized charged center ( µ B) + quickly, during characteristic lattice times, transfers into a neutral state.Therefore, the neutralization time of a charged defect formed by a negative muon in a diamond lattice is determined by the value of ( 53), and at least two order higher than that for silicon and germanium (40). Formation of an Ionized Acceptor Center In this section, we will consider the process of an electron capture on the neutral radiation defect with totally restored chemical bonds and the formation of an acceptor center in the ionized state. According to our cluster calculations, the neutral [( µ B)C 4 ] 0 defect has C 3v symmetry and creates the substantial electric dipole moment, which is equal to 1.08D, in a diamond lattice.Here, D is Debye, the unit of an electric dipole moment in the atomic system of units (D= 10 −18 CGSE).The dipole moment is directed along the symmetry axis.Therefore, we can suppose that any neutral cluster of a type [( µ A) A 4 ] 0 in crystals with a diamond structure possesses an electric dipole moment of the order of 1D. 
This electric dipole moment gives rise to an interaction necessary to capture a lattice electron and the form of an ionized acceptor center.The neutral center µ A has unsaturated chemical bonds because a crystal lattice turns out to be deformed.This deformation is a reason to change a phonon spectrum and the local phonon mode appearance.Chemical bounds are saturated after the missing electron capture.The new cluster is an ionized acceptor center and possesses a local crystal symmetry.When the ionized acceptor center is formed, an appropriate phonon of the local mode is radiated, and crystal deformations disappear.Therefore, the problem is very similar to the problem of the thermalization of molecular ions in molecular crystals and cryocrystals of noble atoms (see e.g., [32][33][34]).An exact solution of the problem taking into account a crystal symmetry is scarcely possible.However, qualitative estimations can be obtained in some approximations [35,36]. Effective Hamiltonian and Interaction Operator Detailed calculations of an ionized acceptor center formation in diamond, silicon and germanium were carried out in [35,36], and here, we will present the main results.The electric dipole moment creates a scalar potential, and the interaction energy with lattice electrons is U = eϕ = e(dr)/(εr 3 ), where ε is a dielectric penetration.Therefore, taking into account the displacement u (r → r + u) and neglecting the changing of the denominator of the potential, we can write an electron-phonon interaction operator: e(d û) The operator of radial displacements can be determined in the approach of an isotropic elastic media.For this reason, we need to study vibrations of a sphere at the center of which is placed an electric dipolar moment d.This dipolar moment creates an electric induction D, and a displacement vector u satisfies the following equation: where c is the longitudinal sound velocity, κ is the dielectric susceptibility and ε = 1 + 4πκ, ρ is the density of a media. To solve Equation (55), we will take into account only radial vibrations of a deformed crystal lattice with respect to the center of the sphere.The center of this sphere coincides with our radiation defect.Therefore, we can introduce as usual for problems with central symmetry a radial displacement χ = ur.In this case, we have the more simple equation for χ: where χ is the second derivative on the radial variable r.Making the one-dimensional Fourier transformation for the function χ(r, t): we obtain a dispersion relation in a long wavelength approximation: where R 1 is the radius of the first coordination sphere.More detailed calculations are represented in Appendix A. The numerical estimates for the frequency Ω in diamond, silicon and germanium crystals are presented in Table 1 (the parameters oft the crystals were taken from [37,38]).Since a dipole moment d is considered as an unknown parameter, the numerical estimates are presented in units of Debye. Table 1.Physical parameters of the C, Si and Ge crystals and the estimate of the frequency Ω. Crystal Density ρ, g cm −3 R 1 , 10 Now, we can construct an effective Hamiltonian describing the radial vibrations of a lattice and determine the operator of radial displacements û in Operator (54).Consider radial vibrations in a sphere with the radius R D with boundary conditions u(R D ) = 0. 
Keeping a dependence on time t, we need to use discrete Fourier amplitudes χ n instead of Equation (57): where k n = πn/R D .Therefore, χ n and ρ χn are generalized coordinates and the momentum of the system under consideration, respectively.The effective Hamiltonian for a system of independent oscillators can be written as: Here, and the dimensionless generalized momentum and coordinate are defined as usual: while the units of the generalized momentum and coordinate are equal to: Introducing, as usual, the and creation operators: we obtain the operator of radial displacements of a lattice, which must be substituted in Equation (54), in the form: It is easy to see that the interaction operator (54) has a strong singularity at r → 0, which leads to the divergence of matrix elements.However, Operator (54) is obtained in a dipolar approximation; its expression is valid for respectively large values of the radius vector and is not applicable at r → 0. In this case, as usual (see e.g., [39]), the potential is taken as constant at r < R 0 , where R 0 is a certain characteristic distance, i.e., V e−ph ≈ ed/(εr 3 ) cos θ û, for r > R 0 , edr/(εR 4 0 ) cos θ û, for r < R 0 . (65) In our case, we can take R 0 ≥ r 0 , where r 0 is the length of chemical bonds in the lattice. Electron Capture Rate The probability of electron capture per unit time can be calculated with the well-known Fermi golden rule: (66) We will consider a case of low temperature, and the initial state |i = |k |0 ph corresponds to the free electron in the valence band with the wave-vector k and the absence of exited radial phonons. The final state is determined by the captured electron to the |ns or |np hybridized state of the cluster and by the excitation of the radial phonon with the wavevector k ph : Here, n = 2, 3 and 4 for C, Si and Ge, respectively.Equation (66) takes into account that all electrons having wavevectors in the interval from k to k + dk can be captured by the cluster. The volume element of the final states in the case of one-dimensional motion is equal to: where do is the element of the solid angle into which a phonon is emitted.The energies of the initial and final states are: m * is the effective mass of the electron.To obtain numerical results, the ionization energy of the cluster (acceptor) is taken for h .Integration of (66) over k gives: where a vector of the final state | f e takes into account electron states only, and according to Equation (65), Here: For the further analysis, it is convenient to introduce the dimensionless parameters and variables: where n = 2, 3 and 4 for C, Si and Ge, respectively.The estimated characteristic parameters for considered crystals are summarized in Table 2. Table 2. Characteristic parameters estimated for C, Si and Ge. Crystal r 0 , 10 −8 cm Z r 0 ω 0 , 10 14 c −1 k 0,max , 10 7 cm −1 k 0,max r 0 k ph,max , 10 All considered crystals are anisotropic, and the sound velocity c in the dispersion relation (58) depends on the direction of a phonon propagation.To take this into account, we will use, as usual, the average value of the longitudinal velocity of sound c , neglecting the effects of anisotropy. 
We define a characteristic frequency ω₀ and the corresponding dimensionless frequencies ω̃ = ω_ph/ω₀, ϵ̃ = ϵ/ω₀ and Ω̃ = Ω/ω₀. Taking into account the dispersion relation (58) of the radial phonons, we can write the electron capture rate in the form of Equation (72); the dimensionless matrix element A is determined by Function (68).

Equation (72) shows that the electron can be captured both into the s- and into the p-state of the hybridized state of the cluster. As was shown in [36], the capture rate to the p-state, w_p, is approximately two orders of magnitude smaller than the capture rate to the s-state, w_s. Therefore, in the following we consider only electron capture to the s-state. In this case, the matrix element (74) is given by Expression (75), where I₁(ω̃) and I₂(ω̃) are the real and imaginary parts of the matrix element, respectively. Finally, the capture rate (72) takes the form (76).

The obtained Formula (76) determines the formation rate of the ionized acceptor center through the capture of an electron of the medium onto the neutral radiation defect induced by a negative muon or pion in crystals. The analytical expressions depend on well-known characteristic parameters of the medium, except for two parameters of the cluster, namely the electric dipole moment d and the parameter R₀ in Equation (65). The dependence of the integral in Equation (75) on d is weak, so one can assume that w_capt ∝ d². Unfortunately, the dependence of the results on the parameter R₀ is more critical, because the matrix elements depend exponentially on the lower integration limit. Nevertheless, the parameter R₀ cannot be smaller than the length of the chemical bond in the lattice. An upper limit can also be set: R₀ ≲ R₁, the radius of the first coordination sphere.

Numerical calculations were performed for several different values of the uncertain parameters d and R₀; the results are summarized in Table 3. In the considered range of the uncertain parameters R₀ and d, the spread of the estimated capture rates is about two orders of magnitude, which may seem unsatisfactory at first. However, this is not surprising, because both the dipole approximation and the dispersion relation (58) for the radial phonons are quite rough at r ~ R₁. Nevertheless, the considered interaction mechanism is fairly well justified and describes the process of electron capture on the neutral lattice defect with the formation of the ionized state of the acceptor center.

Discussion

We have considered the total process of acceptor center formation in crystals with the diamond structure, where the center appears as a radiation defect induced by negative pions or muons. It was shown that the evolution of this kind of radiation defect can be divided into two physically different stages. At the first stage, the negatively charged particle is stopped in the crystal and captured by a nucleus with charge Z (in the case of π⁻) or into the K-shell of a muonic atom (in the case of µ⁻). Both negative pions and muons thus create a host nucleus with an effective charge Z − 1. This strongly charged center interacts with track electrons and captures them. This stage of radiation defect neutralization exists because of the Coulomb interaction. Numerical calculations show that this stage of neutralization differs strongly between diamond and the other crystals: the radiation defect is completely neutralized in Si and Ge within a relatively short time τₙ ≤ 10⁻¹¹ s, whereas in diamond complete neutralization can take a long time, τₙ > 10⁻⁶ s.
The second stage of the evolution of this radiation defect is connected with the restoration of chemical bonds with the lattice as the first step and the formation of an appropriate acceptor center as the final step. In Si and Ge, chemical bonds are restored for neutral radiation defects, while in C they can be formed for a singly charged center. The formation of a chemically bound radiation defect is accompanied by the emission of a sufficiently large energy; therefore, the process of chemical bond formation can be described with the help of a radiative transition. Our estimates gave rather long times for this step: τ_hybr ≈ 2.6 × 10⁻⁹ s for Si and τ_hybr ≈ 2.0 × 10⁻⁹ s for Ge. The charged radiation defect in C forms chemical bonds very quickly: τ_hybr ≈ 0.6 × 10⁻¹¹ s. This time is many orders of magnitude shorter than τₙ; therefore, the radiation defect in diamond is neutralized in the chemically bound state. This neutralization time cannot be estimated strictly, but we can suppose that it is determined by the characteristic electronic times in the lattice and must be of the order of 10⁻¹⁰ s. We conclude that the first step, the formation of a chemically bound neutral radiation defect, is approximately two orders of magnitude shorter in diamond than in silicon and germanium.

The second step finishes the formation of the acceptor center in the ionized state. This step is similar in all of the above-mentioned crystals, but it is very difficult to obtain good quantitative results for it. The chemically bound neutral radiation defect is a cluster with an unsaturated chemical bond, which can be saturated if an electron of the valence band of the crystal is captured by the cluster. What kind of interaction makes the capture of the missing electron possible? The neutral cluster possesses a relatively large electric dipole moment, which interacts with the lattice electrons. This interaction can be qualitatively described in the dipole approximation. Our analytical and numerical calculations show that:

0.5 × 10⁻¹⁰ s ≤ τ_capt ≤ 2 × 10⁻⁸ s for diamond,
1.4 × 10⁻⁸ s ≤ τ_capt ≤ 2 × 10⁻⁷ s for silicon,
0.5 × 10⁻⁹ s ≤ τ_capt ≤ 1.7 × 10⁻⁸ s for germanium.

The total time for the formation of the acceptor center in the ionized state, as a result of a radiation defect induced by negative pions or muons, is the sum of the times of all steps and is determined by the longest of them. The final step is slower than the first two, and we can conclude that the formation time of the ionized acceptor center is the shortest for diamond (≤2 × 10⁻⁸ s) and the longest for silicon (≤2 × 10⁻⁷ s). These values are comparable with the characteristic times in semiconductor devices.

The ionized acceptor center is neutralized through the mechanism of Coulomb capture of a hole from the valence band. This process is well studied in many articles (see, e.g., [39-41]), and we will not concern ourselves with it here.

Conclusions

The obtained results will be useful both for µSR experiments and for research on various radiation defects in semiconductors. The considered approach can be applied to crystals with the sphalerite-type structure (AᴵᴵᴵBⱽ semiconductors such as GaAs and InSb, as well as related compounds such as CdS), which are widely used in electronic devices. Unfortunately, these crystals are more complicated to analyze because of the large variety of possible impurity centers. In addition, the model of chemical bonds must be modified for some calculations.
Appendix A

In the long-wavelength limit, the main contribution to the integral comes from values k ≲ r⁻¹. Correspondingly, in the accepted approximation, the integral on the right-hand side can be simplified, after which the simple dispersion relation (58) is obtained.

Figure 1. Radiation defect µAl state in a silicon lattice: (a) all bonds with host atoms are broken in the initial state, and the defect state is determined by the "mixed" function 3s3p²; (b) three electrons of the impurity form chemical bonds with host atoms in the final state, and an unsaturated (broken) bond is equiprobable for the four nearest neighbors of the cluster (AlSi₄)⁰.

Figure 2. Structure of the radiation (µB)⁺ defect in a diamond lattice: (a) the (µBC₄)⁺ cluster has T_d symmetry in the initial triplet state; (b) it has a lower C₂ᵥ symmetry, with angles θ₁ = 65° and θ₃ = 128°, in the final triplet state. The internuclear distances are given in Å. The spin densities on atoms are marked in bold type.

Table 3. Estimate of the formation rate of ionized acceptor centers (µA)⁻ in the C, Si and Ge crystals.
10,316.6
2017-06-14T00:00:00.000
[ "Physics" ]
Analysis of Monthly Rates of Return in April on the Example of Selected World Stock Exchange Indices

The article presents a study of the efficiency of 22 selected stock indices with the use of rates of return in the month of April. The portfolio replicating the stock index was bought at the closing prices of the last session in March and sold at the closing prices of the last session in April. The presence of market inefficiency is demonstrated for the following indices: All-Ord, AMEX, BUX, CAC40, DAX, DJIA, DJTA, DJUA, EOE, FTSE100, SMI and SP500; for the indices B-Share, Bovespa, Buenos, Hang-Seng, MEX-IPC, Nasdaq, Nikkei, Russel, TSE and WIG, the obtained monthly rates of return were statistically equal to zero. In the last part of the article, the correlation coefficients of the April rates of return of the analyzed indices are surveyed.

© Copyright Institute of Economic Research. Date of submission: March 1, 2015; date of acceptance: September 28, 2015.

* Contact:<EMAIL_ADDRESS>Warsaw School of Economics, al. Niepodległosci 162, 02-594 Warsaw, Poland.

Introduction

The efficient market hypothesis (EMH), the core of the influential paper by Fama, has been a cornerstone of financial economics for many decades (Fama, 1970, pp. 383-417). Although current definitions differ from the one developed by Fama, market efficiency prevents systematically outperforming the market, usually understood as earning above-average risk-adjusted returns. The problem of financial market efficiency, especially for equity markets, has been discussed in numerous academic papers, which has produced a sizable set of publications examining this issue. Therefore, only selected views on market efficiency presented in the scientific literature are included in this paper. In many empirical works dedicated to the time-series analysis of rates of return and stock prices, statistically significant effects of two types have been found: calendar effects and effects associated with company size. These effects are called "anomalies", because their existence testifies against market efficiency (Simson, 1988, pp. 124-156; Jajuga & Jajuga, 2006, pp. 147-149).

Among the most common calendar anomalies observed on financial markets are (Nowakowski & Borowski, 2005, pp. 322-329):

− Day-of-the-week effect — daily average rates of return registered on the stock market differ across the days of the week. One of the first works dedicated to this effect was developed by Hirsh (Hirsh, 1987, pp. 98-124). He examined the behavior of the S&P 500 index in the period from June 1952 to June 1985, showing that the Monday close was lower than the preceding Friday close in 57% of cases. For the other days of the week, the index close of a given session was higher than the close of the previous session (Tuesday/Monday in 43% of observations, Wednesday/Tuesday in 55.6%, Thursday/Wednesday in 52.6%, Friday/Thursday in 58%). The day-of-the-week effect was documented on the US market (Jaffie et al., 1989, pp. 641-650; French, 1980, pp. 55-69; Lakonishok & Maberly, 1990, pp. 231-243), as well as on other markets (Kato et al., 1990; Connolly, 1989, pp. 133-169).

− Monthly effect — a portfolio replicating a given stock index achieves different returns in different months. This effect was observed for the first time by Keim (1983, pp. 13-32),
who noted that the average rate of return on stocks with small capitalization is the highest in January. Rozeff and Kinney, applying an equally-weighted equity index, found that for the period 1904-1974 on the US market the average return per month was about 0.5%, whereas for January it was 3.5% (Rozeff & Kinney, 1976, pp. 379-402). However, Lakonishok and Smidt, using a sample for the period 1897-1986 for the Dow Jones Industrial Average (DJIA), found no January effect (Lakonishok & Smidt, 1988, pp. 403-425). Bernstein, considering the behavior of the US equity market in the period from 1940 to 1989, discovered interdependencies between the rates of return in the analyzed months (Bernstein, 1991, pp. 25-45). More recent research, e.g. by Gu and by Schwert, showed that in the last two decades of the twentieth century the month-of-the-year effect was much weaker (Gu, 2003, pp. 18-28; Schwert, 2002, pp. 937-972). This fact would suggest that the discovery and dissemination of the monthly effect in the world financial literature contributed to an increase in market efficiency.

− Other seasonal effects — for example, the "within-the-month effect" (positive rates of return occur only in the first half of the month; Ariel, 1987, pp. 161-174; Kim & Park, 1994, pp. 145-157) or the holiday effect (a tendency for prices to increase before trading breaks caused by holidays; Ariel, 1987, pp. 161-174; Ariel, 1990, pp. 1611-1626).

Among the monthly effects, the January effect and the St. Nicholas rally (also called "the end-of-the-year effect") can be distinguished. The most popular monthly effect is the "January effect", i.e. the tendency to observe higher average returns of stock market indices in the first month of the year. In turn, the effect described in the scientific literature as the "St. Nicholas rally" — the second half of December — is characterized by the highest rates of return among the 24 half-months. Both of these effects were analyzed in a number of papers (Choudhry, 2001, pp. 1-11; Fountas & Konstantinos, 2002, pp. 291-299). In turn, the tendency to register strongly positive rates of return on the stock market in April is called in the scientific literature the "April effect". This effect was particularly strong on the British equity market (Rozeff & Kinney, 1976, pp. 379-402; Corhay et al., 1988). Gultekin and Gultekin, analyzing the data for 1959-1979, proved a strong seasonal pattern in returns on the British market. Although January was the best single month, the period from December to April consisted of months which on average produced positive returns. The compound return calculated for the December-April period was higher than the compound return for the whole year, because the other seven months generated a negative return (Gultekin & Gultekin, 1983, pp. 469-481). According to Bernstein, the rates of return in the month of April are strongly positive (Bernstein, 1996, pp. 76-77). However, some authors have questioned the existence of this effect on selected markets — e.g. Raj and Thurston proved that neither the January nor the April effect was observed on the New Zealand Stock Exchange (Raj & Thurston, 1994, pp. 81-83). Hasan and Raj, as well as Li and Liu, came to the same conclusion (Hasan & Raj, 2001, pp. 100-105; Li & Liu, 2010, pp. 9-14).
On the other hand, Raj and Kumari proved that on the Bombay Stock Exchange and the National Stock Exchange in India the January effect was not observed, but the rates of return in April were significantly higher than in the other nine months of the year. In their opinion, the main factor responsible for the occurrence of the April effect was heavy selling of equities on the stock exchange in March, driven by the end of the tax year in India, which falls in March (Raj & Kumari, 2006, pp. 235-246). A similar explanation for the existence of the April effect on other stock exchanges is provided, among others, by Reinganum and Shapiro (Reinganum & Shapiro, 1987, pp. 281-295), and for the New Zealand Stock Exchange by Hasan and Raj (Hasan & Raj, 2001, pp. 100-105). According to these authors, the introduction of the income tax on capital gains in April 1965 in the UK, together with the change of the beginning of the fiscal year from January 1 to April 1, was the main incentive responsible for the April effect on the British equity market. The occurrence of the April effect on the UK market has also been proven in other papers (Clare et al., 1995, pp. 398-409).

In Poland, research on market efficiency and the occurrence of seasonal effects was conducted by several authors (Buczek, 2005, pp. 51-55; Szyszka, 2007, pp. 141-146; Czekaj et al., 2001, pp. 85-96). The studies of these authors indicate a high level of weak-form efficiency of the Polish equity market. It can be stated that, apart from the initial period (until 1994), the prices of shares listed on the Warsaw Stock Exchange satisfy the assumptions of the weak form of market efficiency. Buczek confirmed the existence of anomalies associated with the January effect (Buczek, 2005, pp. 51-55). The author proved that in the period 1999-2004 the average rate of return of the WIG index in January was equal to 6.2%, while the average rate of return calculated for all other months stayed at the level of 0.8%.

The aim of this paper is to examine the prevalence of the April effect, with the use of the closing prices of the last sessions in March and April, on selected financial markets represented by the following indices: All-Ord, AMEX, B-Shares, Bovespa, Buenos, BUX, CAC40, DAX, DJIA, DJTA, DJUA, EOE, FTSE100, Hang Seng, MEX-IPC, Nasdaq, Nikkei, Russel, SMI, S&P500, TSE and WIG.

Research Methodology

When analyzing the occurrence of seasonality effects, rates of return are calculated with the use of close prices of two consecutive sessions (Gultekin & Gultekin, 1983, pp. 469-481). In the case of the monthly effects, the rate of return is calculated from the close value of the analyzed stock index on the last session in March (I(t−1)) and its close value on the last session in April (I(t)):

r(t) = (I(t) − I(t−1)) / I(t−1) × 100%.

Due to the fact that the analyzed indices have different starting dates of publication, and taking into account the contents of the database provided by the Bank Ochrony Srodowiska (BOS) Brokerage House, the analysis of the seasonality effects is conducted over different time intervals for the individual indices.

For each analyzed index available in the database provided by the BOS Brokerage House, the monthly rate of return in April is calculated. Then the null hypothesis is tested, formulated as follows: the average monthly rate of return in April for each of the 22 analyzed indices is equal to zero (at the significance level α = 5%).
Rejection of the null hypothesis would allow accepting the alternative hypothesis that the average monthly rate of return in April for a particular stock index is statistically different from zero. This would prove the occurrence of the April effect, based on the closing prices of the last sessions in March and April, for the given index.

The occurrence of the April effect, based on the closing prices of the last sessions in March and April, permits achieving excess returns in the long term (which can be exploited in practice) for the analyzed stock index and is evidence of a calendar anomaly, testifying against the theory of the efficiency of financial markets. The outcome may be regarded as a part of the ongoing discussion on the hypothesis of financial market efficiency introduced by Fama (Fama, 1970, pp. 383-417).

For the research, the following 22 indices were selected:
1. All-Ordinaries — the index of the Stock Exchange in Sydney,
2. AMEX — the American Stock Exchange index,
3. B-shares — the stock market index in Shanghai,
The analyzed group of equity indices consists of two groups:
a) equity indices of developed countries (e.g. DJIA, TSE 300, S&P 500),
b) equity indices of emerging markets (e.g. WIG, BUX, MEX-IPC, Bovespa, Buenos).
The choice of indices follows from the availability of data for multiple time intervals offered by the Bank Ochrony Srodowiska Brokerage House. In turn, the index abbreviations are in accordance with the abbreviations generally applied in information services.

The starting dates concerning the availability of the values of the analyzed indices are presented in Table 1. For the CAC40 index, the covered period in which the seasonality effect was examined extends from April 1995 to April 2014, which is equivalent to 20 monthly rates of return. The longest available time series, covering more than 40 years, enabled the calculation of 43 April monthly returns for the Nasdaq index and 45 monthly rates of return for the following indices: S&P500, Nikkei and DJIA. Transaction costs were not included in the analysis.

(* WIG: 30.04.1992. Due to the fact that the first session on the Warsaw Stock Exchange was held on 16.04.1991, the first April monthly return for the WIG index was calculated in 1992. Given that at that time sessions on the Warsaw Stock Exchange were conducted only once a week, and that only rates of return from the second half of April 1991 could be taken into account, it appears unjustified to calculate the monthly return for April 1991. Source: own calculation.)

In the last part of the investigation, the correlation matrix between the April returns of the different stock market indices is calculated, which allows assessing the degree of interdependence between the markets represented by the equity indices.

Analysis of Results

The results for the analyzed stock indices are summarized in Table 2 and Table 3. In the cases where the number of monthly rates of return was lower than 30, Student's t-distribution was applied; otherwise, the normal distribution was used.
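A minimal Python sketch of the test procedure described above follows: the April return for each year is computed from the index closes of the last sessions in March and April, and H0 (mean April return equal to zero) is tested at α = 5%. The close prices used here are toy placeholders, not data from the study.

```python
import numpy as np
from scipy import stats

def april_returns(march_closes, april_closes):
    """Simple returns (in %) between the last March and last April closes."""
    i_prev, i_cur = np.asarray(march_closes, float), np.asarray(april_closes, float)
    return (i_cur - i_prev) / i_prev * 100.0

def test_april_effect(returns, alpha=0.05):
    """One-sample test of H0: mean = 0. For n < 30 the paper uses Student's
    t-distribution; for larger samples the normal approximation is nearly
    identical, so scipy's t-test is used throughout this sketch."""
    t_stat, p_value = stats.ttest_1samp(returns, popmean=0.0)
    return {"mean_%": float(np.mean(returns)),
            "t": float(t_stat), "p": float(p_value),
            "reject_H0": bool(p_value < alpha)}

r = april_returns([2100, 2310, 2255], [2180, 2290, 2350])  # hypothetical closes
print(test_april_effect(r))
```

Rejection of H0 in this scheme corresponds to the "False" entries in the "Null hypothesis verification" line of Tables 2 and 3.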
The nominal rate of return in Table 2 and Table 3 was calculated by multiplying the average monthly return for April by a factor of 12. If there were no reasons to reject the null hypothesis, the word "Truth" is used in the line entitled "Null hypothesis verification" of Table 2 and Table 3; when the null hypothesis was rejected in favor of the alternative hypothesis, the word "False" is used.

The average April returns were statistically different from zero for 12 out of the 22 analyzed indices, i.e. in 54.54% of the analyzed cases. April returns statistically different from zero were observed for the following indices: All-Ord, AMEX, BUX, CAC40, DAX, DJIA, DJTA, DJUA, EOE, FTSE100, SMI and SP500. For all the remaining indices, i.e. B-Share, Bovespa, Buenos, Hang-Seng, MEX-IPC, Nasdaq, Nikkei, Russel, TSE and WIG, there were no reasons to reject the null hypothesis, which means that the average rate of return was equal to zero at the 95% confidence level. It is worth noting that the group of indices for which the April average rates of return were statistically different from zero is dominated by the equity indices of developed countries: All-Ord, AMEX, CAC40, DAX, DJIA, DJTA, DJUA, EOE, FTSE100, SMI and SP500. The Budapest Stock Exchange index (BUX), as the only representative of the group of countries with a lower level of financial market development compared to the USA or the UK, was included in the group of indices for which the null hypothesis was rejected.

The group of indices for which the average monthly return in April is equal to zero (at α = 5%) is dominated by the stock market indices of countries with a less developed financial market (B-Share, Bovespa, Buenos, MEX-IPC and WIG), although the representation of the stock market indices of countries with a higher level of financial market development is also substantial (Hang-Seng, Nikkei, Russel, TSE). This allows drawing the conclusion that the April effect was observed more frequently in developed countries than in developing (emerging) markets.

In all analyzed cases, the monthly average returns in April were positive, reaching the highest value of 4.20% for the BUX index, which was higher by 0.05 percentage points than the return calculated for the WIG index. It should be noted, however, that the null hypothesis was rejected for the BUX index, while there were no reasons to reject it for the WIG index. (Notes to Tables 2 and 3 — null hypothesis verification: "Truth" — there is no reason to reject the null hypothesis; "False" — the null hypothesis is rejected in favor of the alternative hypothesis. Source: own calculation.)

The lowest average monthly return in April, equal to 0.69%, was registered for the Canadian index TSE (in this case there were no reasons to reject the null hypothesis). The biggest error in estimating the average April return, equal to 3.03%, was observed for the WIG index, while the lowest was noted for MEX-IPC (0.58%). The analysis of index volatility in the month of April leads to the conclusion that the highest standard deviation was calculated for the WIG index (14.51%) and the lowest for DJUA (3.16%). In the case of the range of variation, the highest value was obtained for the WIG index (0.6606) and the smallest for DJUA (0.1168).
The percentages of positive returns registered in the month of April, ordered by decreasing value, are presented in Table 4. For AMEX and DJUA, positive returns were observed in 80% of all cases, for BUX and DAX in 75%, and for the following four indices — CAC40, DJTA, EOE and SMI — in 75%. On the British market, positive returns were registered in 68.18% of cases (FTSE100), and in Warsaw in 60.87% (WIG). For the TSE index, positive April returns were calculated in 52% of the analyzed cases; for the same index, the percentage of negative April returns amounted to 48% and was the highest.

Let us consider the following investment strategy (the Portfolio Replicating Strategy in April). A long position in an investment portfolio replicating the specific stock index is opened on the last session in March at closing prices and is closed on the last session in April, also at closing prices. During the remaining months of the year, the financial resources are held on an interest-free deposit. The cumulative returns in the period 1995-2014 (i.e. over 20 years) for each of the analyzed stock market indices are shown in Figure 1 (except for those for which the number of monthly returns was lower than 20, i.e. the B-Shares, Buenos and Russel indices).

The highest rate of return achievable with this type of strategy was registered for the WIG index and equaled 123%. The second highest return amounted to 120% and was observed for the other stock exchange originating from Central and Eastern Europe, i.e. the Hungarian exchange, represented by the BUX index. The lowest rate of return recorded for the Portfolio Replicating Strategy in April, at the level of 26%, was calculated for the TSE index (Figure 2). Source: own calculation.

The correlation coefficients of the April monthly returns for all analyzed indices are given in Table 5. Negative correlation coefficients were recorded in only 6 out of 231 cases. The most negative (−0.20) was computed for the pair BUX and DJUA, while for the pair WIG and DJUA the coefficient amounted to −0.15. In the other cases, the correlation coefficient was higher than −0.10. It is worth noting that in 4 cases (involving DJUA, WIG, MEX-IPC and Nikkei) the correlation coefficient was slightly negative. The highest value of the correlation coefficient of April monthly returns was observed for the pair AMEX and DJIA (0.95), and the second highest also for two American stock indices, DJIA and S&P500 (0.92). The total number of correlation coefficients greater than 0.6 was 105, which represents 45.45% of all calculated correlation coefficients (see Table 6).

Of practical significance for Polish investors may be the high correlation coefficients of the April monthly returns calculated for the WIG index and the following indices of foreign stock exchanges: Bovespa (0.77), BUX (0.66), MEX-IPC (0.65) and Russel (0.72). In turn, the correlation coefficient observed for WIG and FTSE100, as well as for the pair WIG and S&P500, was close to zero, and in both cases equal to 0.09.
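Before turning to possible explanations, a compact sketch shows the two computations used above: compounding the April returns of the Portfolio Replicating Strategy and building the correlation matrix. The DataFrame below is a hypothetical stand-in for the returns compiled from the BOS Brokerage House database.

```python
import pandas as pd

def cumulative_strategy_return(april_pct: pd.Series) -> float:
    """Compound the April returns over the sample; capital earns nothing in the
    remaining months (interest-free deposit), so only April matters."""
    growth = (1.0 + april_pct / 100.0).prod()
    return (growth - 1.0) * 100.0

# Toy subsample: rows are years, columns are indices, values are April returns in %.
april_ret = pd.DataFrame(
    {"WIG": [6.1, -2.0, 9.3], "BUX": [5.0, 1.2, 7.4], "TSE": [0.4, 1.1, -0.6]},
    index=[1995, 1996, 1997],
)

print(april_ret.apply(cumulative_strategy_return))  # per-index cumulative return, %
print(april_ret.corr())                             # pairwise correlations (Table 5 analogue)
```

Applied to the full 1995-2014 sample, the first computation yields the cumulative-return ranking shown in Figures 1 and 2, and the second one the coefficients summarized in Tables 5 and 6.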
The relatively high correlation coefficient calculated in the analyzed period for the returns of the pair WIG and BUX (approx. 0.66) points clearly to cash inflows from foreign investment funds into the stock exchanges in Poland and Hungary. The stock exchanges of both countries belong to the emerging markets segment, and many funds making portfolio investments treat these two exchanges as a single investment area. One of the explanations for the occurrence of the monthly effect may be the publication of significant macroeconomic information from global markets, as well as information regarding listed companies, at the turn of the month. This view is expressed by Penman and by Connolly (Penman, 1987; Connolly, 1991). According to these two authors, the largest amount of information concerning listed companies appears precisely on weekends and at the turn of the month. Similar conclusions are presented by Thaler, as well as by Dyl and Maberly, who explain the existence of the end-of-the-week effect and of the monthly effect by listed companies deferring the release of significant market information to weekends and to the turn of the month (Thaler, 1987; Dyl & Maberly, 1988). For monthly returns, however, this explanation is far less important than in the analysis of the weekend effect. The most reliable explanation of the April effect is the process of rebuilding investment portfolios by investors in those countries in which the fiscal year ends on March 31: poorly performing shares are sold in March, and investors buy back shares (frequently the same ones) in April, pushing up prices. Foreign investors from countries where a tax year other than the calendar year is in force, by opening new long positions, contribute in this way to the creation of the April effect on selected markets.

Finally, it should be noted that the existence of positive returns on certain days of the week or in certain months, and negative returns in others, is a characteristic feature of any financial market and reflects its inefficiency. This type of approach can be found in the work of French, who does not specify the causes of the negative returns on the US market, considering them to be characteristic of the American market and evidence of its inefficiency (French, 1980, pp. 55-69). A similar thesis was presented by Rogalski (Rogalski, 1984, pp. 835-837).

Conclusions

Assuming the acquisition of an index-replicating portfolio on the last session in March and the liquidation of this position on the last session in April (in both cases at close prices), the calculations conducted in this paper proved the existence of returns statistically different from zero for the following stock market indices: All-Ord, AMEX, BUX, CAC40, DAX, DJIA, DJTA, DJUA, EOE, FTSE100, SMI and SP500. Returns statistically equal to zero were observed for the following indices: B-Share, Bovespa, Buenos, Hang-Seng, MEX-IPC, Nasdaq, Nikkei, Russel, TSE and WIG.

The obtained results proved the existence of the April effect on financial markets, thus confirming previous conclusions reached by other researchers (Rozeff & Kinney, 1976, pp. 379-402; Corhay et al., 1988, pp. 120-135; Clare et al., 1995, pp. 398-409; Gultekin & Gultekin, 1983, pp. 469-481; Bernstein, 1996, pp. 76-77).
This remark concerns mainly the British market, represented in the survey by the FTSE100 index, for which the existence of the April effect was proved. The same effect occurred in the analyzed period on such Anglo-Saxon exchanges as the American (represented in the research by the AMEX, DJIA, DJTA and DJUA indices) and the Australian (All-Ord index), but the April effect was not found for other Anglo-Saxon indices, namely Nasdaq and Russel (both USA) and TSE (Canada).

Research regarding capital market efficiency should be continued in the future, and its outcomes compared with the results obtained by other researchers.

Table 2. Basic statistical data obtained for April returns for the first 11 of the 22 analyzed indices.
Figure 1. The cumulative returns in the investment horizon of 20 years (1995-2014) for the analyzed stock indices with the use of the Portfolio Replicating Strategy in April.
Figure 2. The value of the unit portfolio for the following 3 indices: WIG, BUX and TSE in the period 1995-2014 with the use of the Portfolio Replicating Strategy in April.
Table 1. The starting date and the number of monthly returns for each index.
Table 3. Basic statistical data obtained for April returns for the remaining 11 of the 22 analyzed indices.
Table 4. The number and percentage of positive and negative returns in the month of April for the analyzed stock indices, sorted in descending order by the percentage of positive returns. Source: own calculation.
Table 5. The correlation coefficients of monthly returns in April for the analyzed indices.
Table 6. The numbers and percentages of the correlation coefficients in different ranges.
5,576.2
2015-05-01T00:00:00.000
[ "Economics" ]
Quantum Oscillations of Interacting Nanoscale Structural Inhomogeneities in a Domain Wall of Magnetic Stripe Domain

It was established that at low temperatures, quantum oscillations of a pair of interacting nanoscale structural inhomogeneities (vertical Bloch lines) occur in a domain wall of a stripe domain in a uniaxial ferromagnetic film. The effective mass of the vertical Bloch line and the conditions for this effect were determined. The effect can be used in hybrid bit + q-bit storage devices.

Background

The investigation of structural inhomogeneities in domain walls (DWs) in ferromagnetic materials is an important issue in the physics of nanoscale ferromagnetic systems. In uniaxial films, the vertical Bloch lines (VBLs) [1], or local transition zones between subdomains of a DW, are often considered. These nanoscale objects, with characteristic size ≤10² nm, are topological elements of the DW internal structure which affect the behavior of the DW in external magnetic fields and give rise to various dynamic effects (see [1,2]). Moreover, VBLs appear not only in DWs in ferromagnetic films but also in nanosized ferromagnetic stripes [3-5] and wires [6]. Similar topological structures were recently found in ferroelectric materials [7]. These examples show that the VBL can be regarded as an integral part of the process of self-organization of the order parameter in nanoscale magnetic and electrical structures.

It is worthwhile to note that a pair of VBLs with a negative topological charge in the domain wall of a magnetic stripe domain (SD) was proposed as a bit of information in solid-state data storage devices [8,9]. In this system, there are exchange and magnetostatic interactions between the vertical Bloch lines; as shown below, the resultant force causes small oscillations of the VBLs. At the same time, VBLs and Bloch points (intersections of two VBLs) exhibit macroscopic quantum properties at low temperatures (T < 1 K) [10-15]. It is natural to assume that this feature of structural inhomogeneities in a DW will be reflected both in the quantum dynamics of two interacting VBLs and in the properties of new data storage devices based on them. Indeed, by applying a magnetic field that drives the VBL pair in the domain wall of the SD into the quantum oscillation regime, we can form a q-bit from the ground state and another excited level of the VBL oscillation spectrum. Therefore, there are prerequisites for creating a hybrid bit + q-bit data storage device. It should be emphasized that this situation is possible only due to the interaction between the nanoscale inhomogeneities in the DW (VBLs in our case). Thus, the study of quantum oscillations of interacting VBLs in a domain wall of an SD is of great importance. A solution of this problem makes it possible to develop ultra-dense data storage devices with high functionality, which would combine "classical" and quantum modes of data recording.

This work deals with the quantum oscillations of a pair of interacting VBLs in a domain wall of an SD in a uniaxial ferromagnetic film whose quality factor Q (the ratio of the magnetic anisotropy energy to the magnetostatic energy) is significantly higher than 1. In the first section, an expression for the VBL effective mass in a domain wall of an SD is derived. This expression is used in the following sections for the study of VBL quantum oscillations and the conditions of their excitation.

Methods

The Effective Mass of VBL

Let us consider a VBL in a domain wall of an SD.
We will determine the effective mass m_L of the vertical Bloch line using the general formalism proposed in [16]. To do this, we need to find the gyrotropic bending of the DW due to the motion of the VBL at velocity v_L. The problem is solved in a Cartesian coordinate system with the origin at the center of the domain, the Z axis oriented along the anisotropy axis, and the X axis along the vertical Bloch line. The Lagrangian L of the system can then be written as Equation (1), where q_i is the coordinate of the normal displacement of the DW center, ψ_i is the angle between the magnetization vector M_S in the DW center and the X axis, γ is the gyromagnetic ratio, h is the film thickness, σ₀ is the surface energy of the DW, Δ is the DW width, and W_m is the magnetostatic energy of the SD due to the presence of magnetic "charges" on the film surface. To simplify the task, the domain wall of the SD without a VBL is considered to be pinned by defects and fixed (the corresponding estimate is provided below). In this case, according to the results of [17], the energy W_m can be written as Equation (2), where q_k is the Fourier transform of q, κ = a/h, a is the SD width, C = 0.5772 is Euler's constant, K₀(hk) is the Macdonald function, and Λ = Δ√Q is the characteristic size of the VBL.

The magnetization pattern in a DW with a VBL can be written as ψ = 2 arctg exp(ξ/Λ), where ξ = x − x_L and x_L is the coordinate of the VBL center. In this case, calculating the variations with account of (2) and Lagrangian (1), and solving the corresponding variational problem for v_L < ω_M Λ, we find the DW gyrotropic bending, Equation (3), with the auxiliary function defined by Equation (4). It also follows from formulas (1) and (2) that the frequency of the free oscillations of the SD is determined by f_k ω_M.

The effective mass of the VBL can be found from the following equation (see [16]), Equation (5), where F_g = 2πM_S v_L/γ is the gyrotropic force acting on the DW from the moving VBL. Using (3) and (4), one can then easily evaluate the integral in (5). An analysis of this integral for typical parameters of ferromagnetic films and SDs (γ ~ 10⁷ Oe⁻¹s⁻¹, Q ~ 10-16, Δ ~ 10⁻⁶ cm, h ~ 10⁻⁴ cm, 4πM_S ~ (10²-10³) G, and κ ~ 1) shows that its value is determined by the behavior of the function f_k, which has a minimum at k = k_c. Therefore, the function f_k can be approximated by Equation (6); the corresponding functions are plotted against Δ/h in Fig. 1 for various values of the film quality factor, and a good correspondence between the integrands is observed. Then, using (6) and the properties of the function f_k, we eventually find from (5) the expression (7) for the effective mass.

It should be noted that the obtained expression also describes the effective mass of a VBL in an isolated DW. In this case κ → ∞, and instead of the term f₀ = Δ ln(1 + κ⁻²)/πh → 0 in the expression for f(k_c), the term f = H′Δ/4πM_S appears, where H′ is the gradient magnetic field which stabilizes the DW. Then f(k_c) is expressed through the critical field f_c of the DW bending instability [18,19]. It is easy to see that in the case of intense stabilizing fields with (f − f_c)/f_c ≫ 1, expression (7) transforms into the expression for the effective mass of a VBL in a DW of a bulk ferromagnetic material [20], Equation (8). In turn, if f is close to f_c, i.e. (f − f_c)/f_c ≪ 1, then the function f_k⁻¹ has a sharp peak at k = k_c.
Therefore, integrating (5) with the integrand g_k⁻¹ near this point, we obtain an expression, Equation (9), that matches well the formula for the effective mass of a VBL in [16,19]. Note that the agreement of (8) and (9) with the known expressions for m_L confirms the correctness of the proposed approach to the determination of the VBL effective mass. Moreover, expression (7) shows that the VBL effective mass is determined by the structure of the spectrum of domain oscillations.

Let us now consider the pinning of the domain wall of the SD without a VBL by defects. The opposing DW acts on a unit area of the DW with the magnetostatic attractive force F(m1,2), causing its displacement q₂. In turn, given that the surface density of magnetic charge equals M_S, we can estimate the magnetostatic energy of interaction between the two DWs as W(m1,2) ≈ Λ²q₁q₂M_S²/a, where q₁ is determined by (3). From this we find the corresponding force; comparing it with the force F_d ~ 2M_S H_c (H_c ~ 0.1 Oe is the coercivity) that acts on the DW from defects, we obtain the condition under which the DW movement can be neglected. Obviously, this relation is consistent with the previously mentioned requirement on the velocity v_L of the vertical Bloch line (see the derivation of (3)).

Problem Solving and Discussion

Consider a pair of VBLs with the same topological charge in a domain wall of an SD. There is an exchange interaction between the spins of the VBLs, and the energy of this interaction, W_e, is determined as in [1], Equation (10), where A ~ 10⁻⁷ erg/cm is the exchange constant and θ and ψ are the polar and azimuthal distributions of the magnetization vector, respectively, which determine the internal structure of the DW and VBL.

Fig. 1. Functions Φ_k and G_k at different values of the film quality factor Q: (a) Q = 10; (b) Q = 12; (c) Q = 16.

In turn, W_e can be written in a form containing e², the effective electron density of the interacting VBL spins per unit length. Equating the above expressions, we obtain e² = 4AΔ. Considering that the effective electron density of the whole VBL equals 2e², we find the exchange energy of two VBLs separated by the distance s, Equation (11). Differentiating W_e with respect to s, we find the force of the exchange interaction of the VBLs per unit length. In addition to the repulsive force F_e, there is an attractive magnetostatic force F_m between the two VBLs, which can be expressed as in [1], Equation (12). Comparing these two forces, we obtain the well-known equation for the equilibrium distance s₀ between the vertical Bloch lines, whose solution is s₀ = √2 πΛ [21].

Considering the VBLs as quasiparticles, let us now examine small fluctuation displacements of the VBLs from the equilibrium position. Obviously, these displacements are equal in magnitude and opposite in phase. During its motion along the X axis, a VBL experiences the force F = F_e − F_m. According to (10)-(12), this force can be written as Equation (13), where E_L = 8AQ^(−1/2) is the energy of the VBL normalized per unit length, and δs/s₀ ≪ 1 is the displacement of the VBL. Obviously, for the VBL moving in the opposite direction, F > 0. VBLs approaching each other can be considered in the same manner. Thus, small displacements of the interacting VBLs from the equilibrium position give rise to the force F = F_e − F_m directed opposite to their displacements. This force has the stiffness factor k_L = −∂F/∂δs (per unit length); therefore, using Eqs.
(7), (12), and (13) and the formula ω_L = (k_L/m_L)^(1/2), we can find the frequency of the small VBL oscillations, Equation (14). Estimation of the VBL effective mass and the oscillation frequency for films with a quality factor Q = 10-16 gives values of m_L ≈ (3.8-3.3)·10⁻¹⁵ g/cm and ω_L ≈ (2.1-1.8)·10⁻¹ ω_M, respectively. As expected, these values decrease with increasing film quality factor. Indeed, according to (12), a higher Q leads to a longer equilibrium distance s₀. As a result, the VBL interaction energy and the DW gyrotropic bending decrease, which ultimately leads to a reduction of ω_L and m_L.

Consider now the possibility of quantum oscillations of a pair of VBLs. Let us compare the average energy of the vertical Bloch line, W(H,L) = 2π²Δ²M_S²(H_x)²h/(m_L ω_L²), in a uniform magnetic field H_x directed along the axis of its oscillations, with the "interstate" energy gap ΔW_L = ℏω_L (where ℏ is the Planck constant) (see [15,22]). Using (7) and (14), after some transformations we find that n ≫ 1 (i.e., quantum transitions from the ground level to the quasiclassical zone are considered), and hence W(H,L) ≫ ΔW_L, if the external field satisfies the inequality (15), where h_x = H_x/8M_S. It should be noted that small oscillations of a vertical Bloch line in a domain wall of a magnetic bubble were studied in [15]; however, the stiffness factor of those oscillations was not associated with the interaction between vertical Bloch lines but was provided by a magnetic field applied normally to the VBL. Estimation of (15) at σ₀ ~ 1 erg/cm² gives h_x ≫ 10⁻⁴, which is consistent with the requirement on the in-plane magnetic fields applied to the DW. It is worthwhile to note that the distribution w_n of ñ = W(H,L)/2ℏω_L quanta over the n discrete levels of the VBL spectrum is given by the Poisson distribution [22]: w_n = (ñⁿ/n!)e^(−ñ). As follows from the calculations, n ~ 10 is a typical quantum level excited by the magnetic field H_x.

Results and Discussion

Let us estimate the influence of the dissipative force F_r on the process of VBL quantum oscillations in a stripe domain. Since the problem is considered in the quasiclassical approximation, we can use for F_r the expression obtained by integrating the density of Thiele's dissipative force [23]. In this case F_r = 4M_Sαhv_L/(γ√Q), where α ~ 10⁻³-10⁻¹ is the decay parameter of the magnetization vector. Taking into account that the VBL velocity can be represented as v_L = ω_L A_n, where A_n = [(2n + 1)ℏ/(m_L ω_L h)]^(1/2) is the "quasiclassical" amplitude of the VBL oscillations [15], the expression for F_r can be rewritten as Equation (16). In turn, the force F_H acting on the VBL from the magnetic field can be written as in [1], Equation (17). Using expressions (16) and (17), as well as the numerical parameters of the system (see above), one can easily find the range of fields in which F_r/F_H ≪ 1, where E_n = ℏω_L(n + 1/2) is the energy of the VBL quantum oscillations. The obtained relation is evidently in line with the estimate for the field h_x (see expression (15)) that provides quantum transitions into the quasiclassical zone of the VBL spectrum. Because different ranges of the field h_x are required for activating the VBL energy levels and for displacing the VBL along the DW of the stripe domain (the latter occurs at h_x < 1), it is feasible to control both the quantum and the "classical" data-recording processes in VBL-based data storage devices.
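To make the scale of the estimates quoted above concrete, a short numerical sketch follows. It assumes ω_M = 4πγM_S, a standard definition that the text does not spell out, so it is an assumption here; the other numbers are the parameters quoted in this section.

```python
import numpy as np

HBAR = 1.054e-27      # erg*s
KB = 1.381e-16        # erg/K
GAMMA = 1.0e7         # Oe^-1 s^-1, as quoted in the text
FOUR_PI_MS = np.array([1e2, 1e3])   # G, the quoted range of 4*pi*M_S

omega_M = GAMMA * FOUR_PI_MS        # s^-1, assuming omega_M = 4*pi*gamma*M_S
omega_L = 0.2 * omega_M             # ~ (1.8-2.1)e-1 * omega_M from the text
T_cross = HBAR * omega_L / KB       # temperature at which hbar*omega_L ~ k_B*T (n = 1)

for wm, wl, t in zip(omega_M, omega_L, T_cross):
    print(f"omega_M ~ {wm:.1e} 1/s -> omega_L ~ {wl:.1e} 1/s, T ~ {t:.1e} K")
# Both crossover temperatures fall inside the sub-kelvin window T ~ (1e-3 - 1) K
# quoted below for the quantum regime.
```

This confirms that the quantum oscillation regime of the VBL pair is indeed a sub-kelvin phenomenon for the film parameters considered here.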
It is natural to expect that the quantum-mechanical behavior of vertical Bloch lines will be reflected in the dynamics of the DW gyrotropic bending. This problem was studied by us in [15,24] for small oscillations of a Bloch point and of a vertical Bloch line stabilized by a magnetic field in a domain wall of a magnetic bubble, where the quantum nature of the change of the DW gyrotropic bending was established. Obviously, this effect should also appear in the oscillations of interacting vertical Bloch lines in a DW of a stripe domain. Therefore, basing on the results of the above-mentioned works, we can write the corresponding expression (18) for the DW gyrotropic bending. An estimation gives q_n ~ 10⁻³Δ. Moreover, an analysis of the function f_k and of expressions (7), (14), and (18) shows that at h/Δ → ∞ one has k_c ~ (Δ/h)^(1/2), m_L ~ (h/Δ)^(1/2), ω_L ~ (Δ/h)^(1/4), and q_n ~ (Δ/h)^(5/8); i.e., the considered effect takes place only in magnetic films and is absent in bulk ferromagnetic materials. It is easy to see that the quantization of the DW gyrotropic bending is most pronounced in ferromagnets whose DWs have large Δ values, such as yttrium iron garnet (YIG) films, where Δ can reach up to 10⁻⁴ cm [25]. It should be noted that the above values q_n ≪ Λ indicate a negligible contribution of the transverse component to the VBL effective mass [26]; this component of m_L, of gyrotropic origin, is significant for DWs with a bending comparable to the length of the vertical Bloch line.

Using the relation ℏω_L ~ n k_B T (where k_B is the Boltzmann constant), Equation (14), and the above-mentioned numerical parameters of the film and domain, we can find the temperature T of the process: T ~ (10⁻³-1) K. These T values are in the same range as the temperatures of other quantum phenomena that occur for vertical Bloch lines and Bloch points (see [11-15,24]). Therefore, one can conclude that macroscopic quantum effects in domain systems with a complex internal structure become apparent at subhelium temperatures. This allows neglecting the contribution of exchange relaxation [27] to the magnetization dissipation processes that accompany these phenomena (see [15]).

Conclusions

The effective mass of a vertical Bloch line in a domain wall of a stripe domain in a magnetic film with a strong uniaxial magnetic anisotropy was determined. It was found that the effective mass of the vertical Bloch line is determined by the characteristics of the spectrum of domain oscillations. The energy spectrum of low-temperature quantum oscillations of two interacting vertical Bloch lines in a domain wall of a magnetic stripe domain was determined. This result can stimulate the development of a new type of hybrid memory device which combines two recording media: specified stable states of a physical memory element, and quantum levels of the energy spectrum of this element activated by an external field.
4,200
2016-10-25T00:00:00.000
[ "Physics" ]
De novo antioxidant peptide design via machine learning and DFT studies

Antioxidant peptides (AOPs) are highly valued in the food and pharmaceutical industries due to their significant role in human physiology. This study introduces a novel approach to identifying robust AOPs using a deep generative model based on sequence representation. Through filtration with a deep-learning classification model and subsequent clustering via the Butina clustering algorithm, twelve peptides (GP1-GP12) with potential antioxidant capacity were predicted. Density functional theory (DFT) calculations guided the selection of six peptides for synthesis and biological experiments. Molecular orbital representations revealed that the HOMO for these peptides is primarily localized on the indole segment, underscoring its pivotal role in antioxidant activity. All six synthesized peptides exhibited antioxidant activity in the DPPH assay, while the hydroxyl radical test showed suboptimal results. A hemolysis assay confirmed the non-hemolytic nature of the generated peptides. Additionally, an in silico investigation explored the potential inhibitory interaction between the peptides and the Keap1 protein. The analysis revealed that the ligands GP3, GP4, and GP12 induced significant structural changes in the protein, affecting its stability and flexibility. These findings highlight the capability of machine learning approaches to generate novel antioxidant peptides.

The generative model was first pre-trained on a large dataset of peptide sequences compiled by Specht et al. [18]. This choice was motivated by the similarity in the distribution of key amino acids between the antioxidant peptides and this dataset. Subsequently, we employed transfer learning to fine-tune our model specifically for generating AOPs. All data obtained from the fine-tuned model underwent classification to determine their scavenging activity. This classification relied on a model trained on the antioxidant dataset, along with assessments of toxicity from two distinct servers. After this rigorous filtering process, a final list of peptides was compiled. These peptides were further analyzed by clustering their sequences, and the centroid peptides were selected for subsequent investigation. Density functional theory (DFT) calculations were employed to assess molecular properties, including the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), and the HOMO-LUMO energy gap [19-22]. These parameters served as criteria for the antioxidant ranking of the peptides in the present study. Based on these calculated parameters, we selected a refined set of six candidate peptides for further investigation. Subsequently, by synthesizing and testing the chosen peptides, we demonstrated that non-hemolytic AOPs can be identified using the 2,2-diphenyl-1-picrylhydrazyl (DPPH), hydroxyl radical, and hemolysis assays. All the implemented steps are shown in Fig. 2.
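A toy sketch of the DFT-based ranking step follows. The HOMO/LUMO energies are hypothetical placeholder values (in eV), not results from the study, and the ranking convention (a higher HOMO and a smaller HOMO-LUMO gap taken as indicators of stronger electron donation) is a common heuristic stated here as an assumption rather than the paper's exact criterion.

```python
# Hypothetical orbital energies: peptide -> (E_HOMO, E_LUMO) in eV.
orbital_energies = {
    "GP1":  (-5.9, -1.1),
    "GP3":  (-5.4, -1.3),
    "GP4":  (-5.5, -0.9),
    "GP12": (-5.6, -1.2),
}

def gap(energies):
    """HOMO-LUMO energy gap in eV."""
    e_homo, e_lumo = energies
    return e_lumo - e_homo

# Smaller gap first, as a rough proxy for higher chemical reactivity (assumption).
for pep in sorted(orbital_energies, key=lambda p: gap(orbital_energies[p])):
    e_homo, e_lumo = orbital_energies[pep]
    print(f"{pep}: HOMO={e_homo:+.2f} eV, LUMO={e_lumo:+.2f} eV, "
          f"gap={e_lumo - e_homo:.2f} eV")
```

In the study itself these descriptors were computed with DFT and combined with the HOMO localization analysis (indole segment) before the six peptides were chosen for synthesis.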
In our pursuit of robust antioxidant peptides, we adopted a multifaceted strategy to discover whether the generated peptides could simultaneously exhibit antioxidant scavenging activity and inhibit the Keap1 protein, utilizing molecular dynamics (MD) simulations. The Keap1-Nrf2 protein-protein interaction (PPI) is pivotal in regulating Nrf2, a transcription factor that safeguards cells against oxidative stress by controlling the transcription of over 200 antioxidant response element (ARE)-containing genes. Keap1, in complex with Cullin3-RBX1, negatively regulates Nrf2 through cytosolic binding, promoting its ubiquitination and proteasomal degradation. Oxidative stress, a major contributor to various pathological conditions, underscores the importance of disrupting the Nrf2/Keap1 PPI. This disruption is considered a promising strategy to upregulate Nrf2 levels and enhance cellular protection against oxidative stress [23-25].

Dataset

To pre-train our model, the peptide dataset detailed in Ref. 18 was utilized, which comprises approximately 10,000 distinct sequences. Subsequently, for the development of the generative antioxidant models, we turned to the antioxidant peptide dataset described in Ref. 13. During the data preprocessing phase, we established a criterion limiting peptide sequences to a maximum of 15 residues. For fine-tuning the generative model, peptides classified as non-antioxidant and as chelators were deliberately excluded. Likewise, for the classification model, we specifically excluded chelator peptides to focus exclusively on evaluating the scavenging antioxidant activity of peptides (Fig. 2).

Generative model

The generative models were developed using TensorFlow [26] (Fig. 2). Leveraging the natural amino acid dataset at our disposal, characterized by a vocabulary size of 20, our workflow involved tokenizing and encoding the input sequences, followed by their input into the embedding layer. Subsequently, the embedding outputs were fed into two gated recurrent unit (GRU) layers [27]. The final layer was a dense layer with an output shape matching the vocabulary size, followed by a softmax function. During training, we employed sparse categorical cross-entropy as the loss function, closely monitoring the validation loss to select the optimal model throughout the training process. To ensure robustness, 90% of the base dataset was allocated for training, while the remaining 10% served as a validation set. The base model underwent 250 epochs of training with Adam as the optimizer [28].

In the context of creating an antioxidant generative model, we implemented a transfer learning approach. This technique involves transferring the knowledge acquired by the model from a larger dataset, containing more general information about peptide sequences and their representation, to the training of a model for a specific task with a smaller dataset. This approach led to improved performance compared to deploying an untrained model, as deep learning models typically struggle with limited datasets. Initially, we employed the same base model trained on the larger peptide dataset described above.

DPPH scavenging activity assay

A straightforward and expeditious approach for quantifying antioxidant activity involves the 2,2-diphenyl-1-picrylhydrazyl (DPPH) assay, a spectrophotometric technique [36-38]. The DPPH radical-scavenging activity was calculated as

DPPH scavenging activity (%) = (A₀ − A₁)/A₀ × 100. (1)

In Eq. (1), A₀ is the absorbance of the negative control (methanol) and A₁ is the optical absorbance of the peptide samples.
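A minimal TensorFlow/Keras sketch of the generative architecture described above follows: token embedding, two GRU layers, and a dense softmax over the amino-acid vocabulary. The embedding dimension, GRU widths, and the exact vocabulary handling (20 amino acids plus a padding token) are illustrative assumptions, since the paper does not report the layer sizes.

```python
import tensorflow as tf

VOCAB = 21            # 20 natural amino acids + 1 padding token (assumed encoding)
EMB_DIM, UNITS = 64, 256  # assumed sizes

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, EMB_DIM, mask_zero=True),
    tf.keras.layers.GRU(UNITS, return_sequences=True),
    tf.keras.layers.GRU(UNITS, return_sequences=True),
    tf.keras.layers.Dense(VOCAB, activation="softmax"),  # next-residue distribution
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss="sparse_categorical_crossentropy",  # targets are integer-encoded residues
)

# Training mirrors the paper's setup: 90/10 train/validation split, up to 250
# epochs, keeping the checkpoint with the lowest validation loss, e.g.:
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=250)
```

For the transfer-learning stage, the same model would simply be fine-tuned by continuing `fit` on the smaller antioxidant dataset, which is the usual way to reuse the weights learned on the larger corpus.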
Hydroxyl scavenging activity assay

The hydroxyl radical is one of the strongest reactive oxygen species in biological systems; it reacts with the unsaturated fatty acids of membrane phospholipids and causes cell damage [36-38]. To assess the efficacy of the peptides in countering hydroxyl radicals, we subjected them to a hydroxyl radical-scavenging activity assay, indicating their scavenging potential across a range of concentrations. The test samples, reagents, and control samples for the hydroxyl scavenging activity assay were prepared as follows: initially, peptide samples at a concentration of 10 mg/mL were dissolved in ethanol, and subsequently diluted concentrations of 2, 4, 6, and 8 mg/L were obtained from this solution. The assay reagent consists of 6 mM hydrogen peroxide and 6 mM ferrous sulfate in ethanol. A 0.5 mM solution of ascorbic acid was prepared as the positive control, and a solution without the peptide sample was used as the negative control. A colored solution was obtained by mixing 200 μL of the assay reagent and 200 μL of the peptide sample. The mixture was shaken for 10 min at room temperature, and then 200 μL of 6 mM salicylic acid was added. After 30 min, the UV-Vis absorbance was measured at 510 nm. The hydroxyl radical scavenging activity of the peptide samples was determined as

Hydroxyl radical scavenging activity (%) = (A₀ − A₁)/A₀ × 100,

where A₀ is the absorbance of the negative control (the solution without the peptide sample) and A₁ is the absorbance of the peptide samples.

Hemolysis assay

The hemolysis test was performed following the standard protocol. A 1.5 mL heparinized blood sample was obtained from a healthy volunteer in our laboratory. The red blood cells (RBCs) were collected by centrifugation of the blood sample at 3000 rpm for 15 min and then washed three times with PBS. The RBC pellet was resuspended in 10 mL of phosphate-buffered saline (PBS). Different concentrations of the peptides (2000, 1000, 500, 250, 125, 62.5, 31.25, 15.62, and 7.8 µg/mL) were added at 100 µL/well into a 96-well plate. For the positive hemolysis control, 100 µL/well of 0.1% sodium dodecyl sulfate and distilled water, and for the negative hemolysis control, 100 µL/well of PBS, were added into the appropriate wells. After that, 100 µL of the RBC suspension was added to all wells. The plate was incubated at room temperature for 4 h, and then 100 µL of the supernatant from each well was carefully transferred to a new 96-well plate. The absorbance of the solutions was measured with a UV-Vis instrument at 450 nm.
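Both scavenging assays reduce to the same arithmetic (Eq. (1) and the hydroxyl formula above): the fractional drop in absorbance relative to the negative control. A small helper illustrates the calculation; the plate readings used here are hypothetical, not data from the study.

```python
def scavenging_activity(a0: float, a1: float) -> float:
    """Radical-scavenging activity in %, from control (a0) and sample (a1) absorbance."""
    return (a0 - a1) / a0 * 100.0

# Hypothetical absorbance readings, one per peptide concentration:
a0 = 0.82                                  # negative control
for conc_mg_per_l, a1 in [(2, 0.71), (4, 0.63), (6, 0.52), (8, 0.44)]:
    print(f"{conc_mg_per_l} mg/L: {scavenging_activity(a0, a1):.1f} % scavenging")
```

In practice one would average replicate wells before applying the formula, but the dose-response trend, scavenging increasing with peptide concentration, is what the assays above are designed to reveal.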
Molecular dynamics analysis for Keap1 protein
Molecular dynamics (MD) simulations were conducted on 13 different systems. The reference system consisted of the receptor, water, and the appropriate amounts of sodium and chloride ions to achieve a salt concentration of 0.15 M; the other 12 systems each included the receptor, water, salt, and one of the peptides GP1-GP12. In each peptide system, the ligand was placed in the active site of the KEAP1 protein. To construct all the simulation systems, CHARMM-GUI was utilized [39][40][41]. The simulations were performed in the NPT ensemble using the GROMACS 5.1.5 simulation package [42][43][44]. The CHARMM36m force field 45,46 was applied to both the ligands and the receptor. Temperature (310 K) and pressure (1 bar) were maintained during the simulations. Temperature control was achieved using the Nose-Hoover thermostat 47 with a coupling time of 0.5 ps, while pressure control was accomplished by coupling the simulation cell to a Parrinello-Rahman barostat with a coupling time constant of 2 ps 48. Periodic boundary conditions were employed, and the transferable intermolecular potential 3-point (TIP3P) water model was used 49. Atom bond lengths were constrained using the LINCS algorithm 50. The equations of motion were integrated using the leap-frog algorithm with a time step of 2 fs 51. Coulomb and van der Waals interactions were cut off at 1.2 nm, and long-range electrostatic interactions were handled using the particle mesh Ewald method 52. Unfavorable atomic contacts were eliminated through steepest descent energy minimization 53. Initially, the positions of the ligands were restrained, and equilibration was performed in the NVT ensemble for 1 ns, followed by equilibration in the NPT ensemble for 9 ns. After the equilibration steps, all simulations were run for 250 ns starting from their initial conditions and atom coordinates.

Ethical approval
The authors have fully observed the ethical points in conducting the research and writing the results. All methods were carried out in accordance with relevant guidelines and regulations. All experimental protocols were approved by the Payame Noor University Research Committee. Informed consent was obtained from all subjects and/or their legal guardian(s).

Deep generative model
Using the antioxidant generative model, 50k peptide sequences were generated (Fig. 2f). In line with our predetermined criteria, the length of these peptides was restricted to a maximum of eight amino acids. To make sure that we only analyzed and worked with novel and unique peptide sequences, we removed redundant sequences, ensuring that each peptide entry in our dataset is unique. Furthermore, to eliminate any duplications, the generated sequences were checked against a pre-existing peptide database 54. Through these procedures, we successfully created a collection of nearly 30k unique and novel peptide sequences (as illustrated in Fig. 2e).
It is noteworthy that the uniqueness of this curated dataset, in which the peptide length was capped at eight amino acids, was quantified at 61.8%, reflecting the challenges posed by the imposed constraint. However, the novelty of the generated sequences was notably high, with a 97.3% novelty score. To validate that our fine-tuned model had thoroughly learned AOP representations, a comprehensive analysis of the generated peptide sequences was conducted. This analysis encompassed an examination of their average hydrophobicity, mean amino acid fraction, and standard deviation across multiple datasets, including the AOP dataset used for fine-tuning as well as the pre-trained dataset. Our findings affirm that the antioxidant model adeptly acquired the amino acid representations essential for the task at hand, evident in its ability to generate peptides closely aligned with the characteristics of the AOP dataset (Fig. 3).

Antioxidant classification model
To enhance the rigor of our approach, a fivefold classification procedure was applied, and all of the resulting models were preserved for subsequent analysis. The model achieved an average ROC-AUC score of 82.64%, an accuracy of 76.47%, a precision of 76.53%, and 52.88 for the Matthews correlation coefficient (MCC). These metrics are summarized in Table 1. To assess the activity of the generated sequences, we employed all five models, one from each fold, to predict whether a peptide possesses antioxidant activity. For a peptide to qualify, it must receive unanimous approval from all five models, with a threshold set at 0.99. In other words, a peptide is accepted only if all five models concur with 99% confidence or higher regarding its antioxidant attributes. Following this methodology, 122 peptide sequences with the desired criteria were identified (Fig. 2f).

Toxicity prediction and clustering
The remaining peptides were uploaded to the ToxinPred and ToxIBTL web servers. We intersected those results and only selected the peptides considered nontoxic by both web servers. In this step, 76 peptides remained for the next steps (Fig. 2g). In the subsequent stage, we applied clustering to the filtered sequences using the RDKit Butina module, employing a threshold of 2 and the Levenshtein distance as the distance function. From each cluster, we selected the central sequence. This workflow yielded a total of 12 peptides (Fig. 2h).
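A compact sketch of the two filtering steps described above — the unanimous 0.99-probability vote across the five fold models and the Butina clustering with a Levenshtein distance threshold of 2 — is shown below; the model objects, the featurization function, and the prediction interface are assumptions, not details taken from the paper.

```python
import numpy as np
import Levenshtein                      # python-Levenshtein package
from rdkit.ML.Cluster import Butina

def consensus_filter(peptides, models, featurize, threshold=0.99):
    """Keep peptides that every fold model calls antioxidant with probability >= threshold."""
    kept = []
    for pep in peptides:
        x = featurize(pep)  # user-supplied encoding of the sequence (assumed)
        probs = [float(np.ravel(m.predict(x))[0]) for m in models]
        if all(p >= threshold for p in probs):
            kept.append(pep)
    return kept

def cluster_centroids(peptides, dist_thresh=2):
    """Butina clustering on pairwise Levenshtein distances; one central sequence per cluster."""
    n = len(peptides)
    # Flattened lower-triangle distance list in the order expected by Butina.ClusterData.
    dists = [Levenshtein.distance(peptides[i], peptides[j])
             for i in range(n) for j in range(i)]
    clusters = Butina.ClusterData(dists, n, dist_thresh, isDistData=True)
    # RDKit places the cluster centroid first in each returned tuple.
    return [peptides[cluster[0]] for cluster in clusters]
```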
Besides, the energy gap (Eg) between the HOMO and LUMO is a pivotal indicator of a molecule's biological activity. A larger Eg signifies increased chemical stability, while a smaller Eg suggests enhanced compound polarization and notable electron transfer between donors and acceptors 56. Moreover, the E_HOMO and E_LUMO energies directly correlate with the ionization potential (IP = −E_HOMO) and electron affinity (EA = −E_LUMO) 57. A lower IP implies a greater tendency to lose electrons, signifying a stronger propensity to transition to the radical cation form. Practically, this involves removing an electron from the HOMO of the neutral antioxidant, transforming it into the radical cation form. Electron affinity values, as descriptors, express the energy involved in electron abstraction. A higher EA for an antioxidant indicates easier electron abstraction compared with other molecules. Specifically, electrons are assimilated from a free radical into the LUMO of the neutral antioxidant. Table 2, which summarizes the data for the twelve peptides (GP1-GP12) determined by the ML method, reveals noteworthy insights. In particular, GP12, with an E_HOMO of −4.88 eV (IP = 4.88 eV), emerges as a potent electron donor, indicative of heightened antioxidant activity. Peptides with lower Eg values, exemplified by GP12, are anticipated to exhibit diverse interactions with free radicals, potentially intensifying antioxidant efficacy through electron transfer. Conversely, peptides like GP1 and GP3, with higher Eg values of 3.67 and 3.68 eV, may be less reactive. Validation of our results using the BLYP exchange-correlation functional underscores their reliability. The E_HOMO order of GP1-GP12, displaying increasing values from GP6 to GP12 (GP6 = GP11 < GP7 < GP1 < GP2 < GP4 < GP3 = GP5 < GP8 < GP10 < GP9 < GP12), suggests GP9, GP10, and GP12 as potentially strong antioxidants due to their higher E_HOMO (and lower Eg). Accordingly, these three peptides, along with GP1, GP2, and GP5, were chosen for synthesis and subsequent antioxidant activity testing using the DPPH and hydroxyl tests ("Antioxidant activity" and "Hemolysis results" sections). The HOMO and LUMO mappings for the examined peptides (Fig. 4) highlight the significant influence of the indole ring, particularly in regulating antioxidant actions, through modulation of the HOMO. Notably, instances of coexistence between the LUMO and HOMO underscore the potential for diverse antioxidant mechanisms. This is particularly evident in the case of GP12, characterized by the highest HOMO, suggesting that multiple mechanisms, including electron transfer, may contribute to its antioxidant activity.

The electron-rich nature of the indole ring, highlighted by the HOMO localization, suggests a potential for effective electron donation, aligning with electron transfer mechanisms in antioxidant activity. This, coupled with the conjugated structure, enhances its electron-donating properties, contributing to the scavenging of reactive oxygen species. These findings align with the reported role of Trp in antioxidant activity, which is attributed to its indole ring and emphasizes its hydrogen-donation capacity 57. However, our HOMO-centric results prompt reconsideration, suggesting a shift toward electron transfer mechanisms. The specific role of the indole ring in electron transfer needs careful investigation for a refined understanding. This insight adds nuance to the mechanisms governing the antioxidant activities of indole-containing peptides.
Antioxidant activity
The DPPH assay is based on the reduction of the stable free radical DPPH, which accepts an electron from the antioxidant, leading to a change in the color of the solution 37. The methanolic solution of the DPPH radical has a violet color that shows maximum light absorption at 519-595 nm. After reduction, the DPPH radical is converted to DPPH2. In this case, the violet color of the solution changes to yellow (Fig. 5a), and the absorption intensity at 517 nm decreases. As shown in Fig. 5c, all six synthesized peptides exhibited concentration-dependent free radical scavenging activities. The DPPH radical scavenging activities of the synthesized peptides were 53.9-80.7% at 10 mg/mL. Peptides GP9, GP10, and GP12, which have the highest E_HOMO values, show the highest free radical scavenging activities. These peptides have almost the same antioxidant activity as ascorbic acid; moreover, in contrast to the instability of ascorbic acid, the peptide-based antioxidants are stable. In the hydroxyl radical scavenging activity assay, the degree of hydroxyl radical scavenging is measured at different concentrations of the sample (Fig. 5b) 37. The hydroxyl radical scavenging activities were in the range of 6.1-19.35% at 10 mg/mL (Fig. 5d). In comparison with ascorbic acid (93.9%), the hydroxyl radical scavenging activities of all the synthesized peptides were significantly lower.

Hemolysis results
To evaluate the lytic effect of different concentrations of the synthesized peptides on RBCs, a hemolysis assay was conducted. The levels of hemoglobin released into the supernatant fluid were compared with the positive and negative controls. As illustrated in Fig. 6a (ESI File, S2), the supernatants of all peptide concentrations were as clear as the negative control, whereas in the positive control the RBC membranes were completely damaged, such that letters written with a pen were clearly readable through the wells (yellow arrows in Fig. 6a). These letters were not visible in the negative control wells. This observation is confirmed by comparing the optical density (OD) values of the peptide samples with the positive and negative controls. The OD values for the negative and positive controls were 0.015 and 2.21, respectively, while the maximum OD value of 0.26 was obtained for RBCs treated with the highest peptide concentration (2000 µg/mL) (Fig. 6b). The red blood cells were incubated with different concentrations of GP9, GP10, GP5, GP1, GP2, and GP12; 0.1% SDS + distilled water and PBS were used as the positive and negative controls, respectively. After 4 h, the supernatants were transferred into a new 96-well plate and the absorbance was read at 450 nm. As with the negative control, the appearance of the supernatants of the wells incubated with different peptide concentrations showed that there were no hemoglobin particles in the wells; in the positive control, however, the red color of the supernatant indicated the release of hemoglobin into the supernatant fluid. The OD values for the negative and positive controls were 0.015 and 2.21, respectively, while the ODs of the different concentrations of the synthesized peptides remained in a low range, from 0.07 to 0.25.
Molecular dynamic analysis Root mean square fluctuation (RMSF) RMSF analysis provides a better understanding of the dynamic behavior of proteins and offers insights into the structure-function relationships.This analysis can assist us in guiding experimental research and play a role in protein engineering strategies and drug discovery efforts.RMSF analysis can identify residues that experience significant fluctuations upon ligand binding.Amino acids with high RMSF values are generally associated with flexible or disordered regions of the protein.These regions may be involved in vital biological functions such as binding to other molecules or structural changes 58,59 .As shown in Fig. 7, the RMSF values for all receptor amino acids were evaluated in the presence of various ligands.As can be seen, there are two distinct behaviors in the presence of different ligands (ESI File, S3).The RMSF values for receptor amino acids in the presence of ligands GP10, GP3, GP1, GP2, GP7, and GP12 undergo significant changes and demonstrate high flexibility, while they remain relatively rigid in the presence of ligands GP4, GP11, GP9, GP8, GP5, and GP6 with no significant variations in the flexibility of receptor amino acids compared to the reference system (system consisting of the receptor only). Ligand binding can induce structural changes in the protein, which affects the flexibility of amino acids.This phenomenon, known as induced fit, allows the protein to adjust its structure for optimal ligand binding 60,61 .To investigate the impact of flexibility on protein conformational changes, we focus on studying the free energy surface. Free energy surface Proteins in their natural environment exist not as single structures, but rather as a dynamic ensemble of configurations that are distributed over a range of energies and on a Free Energy Surface (FES) based on their probabilities of occurrence 62 .The FES provides insights into the structural dynamics of proteins, pathways of folding/ unfolding, binding events, and other thermodynamic properties.They can also help identify stable states or meta-stable intermediates, determine transition states, and elucidate underlying mechanisms of protein function.The information obtained from FES is valuable in various fields including drug design, protein engineering, and understanding protein folding and structural changes 63 .In this study, we explored the free energy surface (FES) using two key collective variables-gyration radius (Rg) and root mean square deviation (RMSD)-to characterize the motions and coordinates of the target protein (receptor), as illustrated in Fig. 
8 and Supplementary Information (ESI) File S2.It should be noted that a larger Rg indicates a more expanded structure, while a smaller Rg corresponds to a more compact or folded structure.Additionally, lower RMSD values indicate greater similarity between simulated structures and the reference, while higher RMSD values indicate greater structural deviation.The results obtained demonstrate that a structure with Rg and RMSD values of 1.8 nm and 0.2 nm, respectively, represents the most stable conformation of the protein (receptor) in the absence of a ligand (reference system).With the addition of ligands GP3, GP6, and GP12, the dominant observed configuration in the reference system completely disappears, and structures with different and diverse configurations with Rg equal to 1.3 nm and 1.8 nm, and RMSD equal to 0.5 nm and 0.8 nm in the presence of GP3 emerge.Therefore, the structural similarity of the protein is completely lost.However, in the presence of GP6, the most stable protein configuration is also observed with Rg equal to 1.8 nm and RMSD equal to 0.6 nm and 0.8 nm, and there have been no significant changes in Rg in the presence of this ligand.With the addition of the ligand GP12 to the protein, significant changes in Rg and RMSD are observed, such that structures with Rg equal to 1.3 nm and 1.8 nm, and RMSD equal to 0.8 nm are evident thermodynamically.It appears that the most significant structural changes in the protein are caused by the presence of these three ligands.Despite the influence of additional ligands, the original protein configuration remains discernible within the system, contributing thermodynamically to its stable configurations.The presence of the ligand GP5 induces noticeable changes in the protein's Rg, leading to a more compact conformation.The protein adopts stable configurations with an RMSD of approximately 0.8 nm.In the presence of ligands GP9 and GP8, the protein experiences changes solely in RMSD, with no significant alterations in structural compactness observed.Additionally, the presence of ligands GP4, GP10, GP1, GP2, and GP11 leads to the creation of stable configurations with partial changes in Rg and significant changes in RMSD.Finally, the least changes in protein configuration resulting from the presence of the ligand GP7 are observed, such that it is only associated with an RMSD of 0.4 nm and no noticeable change in Rg. 
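The FES discussed here can be estimated by Boltzmann inversion of the joint Rg/RMSD histogram, F = −kB·T·ln P(Rg, RMSD). The numpy sketch below is illustrative and assumes the Rg and RMSD time series have already been extracted from the trajectories (e.g., with gmx gyrate and gmx rms); it is not necessarily the authors' exact protocol.

```python
import numpy as np

KB = 0.0083144626  # Boltzmann constant in kJ/(mol*K)

def free_energy_surface(rg, rmsd, temperature=310.0, bins=50):
    """2D free-energy surface F(Rg, RMSD) = -kT ln P, shifted so the global minimum is zero."""
    hist, rg_edges, rmsd_edges = np.histogram2d(rg, rmsd, bins=bins, density=True)
    prob = hist / hist.sum()                 # relative probability of each (Rg, RMSD) bin
    with np.errstate(divide="ignore"):
        fes = -KB * temperature * np.log(prob)   # empty bins become +inf (unsampled states)
    fes -= np.nanmin(fes[np.isfinite(fes)])
    return fes, rg_edges, rmsd_edges
```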
Hydrogen bonding
Hydrogen bonds contribute to the stability and structural integrity of proteins. They form between the electronegative oxygen or nitrogen atoms of the peptide backbone and the hydrogen atoms connected to these atoms. These bonds help maintain secondary structures such as alpha helices and beta sheets. Proteins are not static molecules; they undergo structural changes to perform their functions. Hydrogen bonds are dynamic interactions that can form, break, and rearrange during these structural changes 64,65. The findings reveal a slight increase in the number of intramolecular hydrogen bonds (protein/protein), as depicted in Table 3. This emphasizes the necessity for a more comprehensive investigation into the various intramolecular hydrogen bonds within the protein and their correlation with protein structure. As seen in Table 3, the reference system exhibits 118 anti-parallel bridges for the protein. With the addition of the ligand and its interaction with the protein, significant changes occur in the number of anti-parallel bridges. Specifically, the protein shows the most pronounced changes in the presence of GP10, GP11, GP12, GP4, and GP5, resulting in a significant reduction in the number of anti-parallel bridges (Table 3). It should be noted that anti-parallel bridges are typically found in the secondary structures of proteins, especially in β-sheets.

The β-sheets consist of multiple peptide strands connected by anti-parallel bridges. Hydrogen bonds formed between adjacent strands play a crucial role in maintaining the stability and integrity of β-sheets. The hydrogen bonds formed through anti-parallel bridges contribute to the structural stability and functional properties of proteins [66][67][68][69]. Therefore, reducing their number can significantly destabilize the protein and degrade its functional properties. Consequently, a severe decrease in the hydrogen bonds formed through anti-parallel bridges can be highly significant in terms of structural and functional changes in proteins. Another significant change can be observed in the number of hydrogen bonds between O(I) → H-N(I + 2). This hydrogen bond is typically found in the regular secondary structures of proteins, such as α-helices and β-sheets. In an α-helix, the carbonyl oxygen of an amino acid residue at position 'I' forms a hydrogen bond with the amide hydrogen of the amino acid residue at position 'I + 2' (two positions ahead in the sequence). This hydrogen bond contributes to the stability and strength of the α-helix structure. In the reference system, 10 such bonds are established, and a significant increase is observed in the presence of all ligands, except for GP9. For O(I) → H-N(I + 3), the number of hydrogen bonds decreases in the presence of certain ligands (GP4, GP12, GP11), while it significantly increases in the presence of other ligands, further confirming the formation and enhanced stability of the α-helix structure in this protein. Finally, the hydrogen bonds between position 'I' and the preceding positions are not very pronounced and do not lead to drastic changes.
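Hydrogen-bond counts of the kind summarized in Table 3 can be obtained with gmx hbond or, as sketched below, with MDAnalysis; the file names, atom selections, and geometric criteria (3.5 Å, 150°) are illustrative choices, not the authors' settings.

```python
import MDAnalysis as mda
from MDAnalysis.analysis.hydrogenbonds import HydrogenBondAnalysis

u = mda.Universe("topology.tpr", "trajectory.xtc")  # placeholder file names

# Intramolecular (protein/protein) backbone hydrogen bonds over the trajectory.
hb = HydrogenBondAnalysis(
    universe=u,
    donors_sel="protein and name N",       # backbone amide nitrogens (illustrative selection)
    hydrogens_sel="protein and name H",    # amide hydrogens
    acceptors_sel="protein and name O",    # backbone carbonyl oxygens
    d_a_cutoff=3.5,                        # donor-acceptor distance cutoff (Angstrom)
    d_h_a_angle_cutoff=150.0,              # donor-hydrogen-acceptor angle cutoff (degrees)
)
hb.run()
print("average H-bonds per frame:", hb.count_by_time().mean())
```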
To comprehensively study and evaluate the thermodynamic behavior of the different antioxidant compounds binding to protein residues, we assessed the thermodynamic favorability of these interactions using free energy calculations, as presented in Table S2 70. In addition, considering that hydrogen bond formation in molecular dynamics simulations typically occurs within distances of less than 3.5 Å [71][72][73], we assessed the residues binding to each antioxidant by examining the distances between their center of mass (COM) and nearby protein residues, as detailed in Fig. S3 and Table S3 (Supporting Information File).

Conclusion
This research presents the experimental application of a machine learning model to design antioxidant peptides (AOPs) de novo. The optimized generative model was utilized to generate twelve novel AOP sequences. To identify the most promising peptides for synthesis, the generated peptides were ranked by DFT calculations based on their E_HOMO and Eg. The peptide GP12, with an E_HOMO of −4.92 eV, emerged as a formidable electron donor, suggesting heightened antioxidant properties. Peptides GP9, GP10, and GP12, with the highest E_HOMO, along with three other randomly selected peptides, were synthesized and tested for their antioxidant capacity and anti-hemolytic activity. Three (GP9, GP10, and GP12) of the six synthesized peptides showed antioxidant activity comparable to that of ascorbic acid, together with non-hemolytic properties. The RMSF and FES analyses of the protein in the presence of the computer-generated peptides GP1-GP12 showed that different sequences induce significant structural changes in the protein, affecting its stability and flexibility. Additionally, the number of hydrogen bonds, especially anti-parallel bridges and those within secondary structures, varies with ligand presence, impacting protein stability and function.

From the above results and observations based on the MD simulations and the antioxidant assays, GP12 shows the best results and should be included in further in vitro and in vivo assays for both Keap1 and antioxidant activity analysis. It can be claimed with confidence that machine learning methods, along with DFT calculations and MD analysis, are applicable to automated peptide design in a prospective setting without having to extract, purify, synthesize, and test large sets of peptides. However, as the results show, the model is not capable of generating active AOPs for the hydroxyl scavenging test. This limitation arises from the fact that the currently available dataset lacks information regarding the specific testing methods used to assess the activity of each antioxidant. Therefore, the options for creating antioxidant peptides with specific activity towards any of the available mechanisms are constrained. Despite attempts to identify each antioxidant's activities based on their reference papers, the variations in experimental methods, some of which are no longer popular in contemporary research, have led to ambiguities. Furthermore, most of the reported peptides have only one recorded activity, potentially leading to biased interpretations. To address these challenges and enhance the discovery of active AOPs, high-throughput screening of the current peptides in the dataset is necessary, utilizing specific and fixed antioxidant activity assays such as DPPH, hydroxyl, and ROS activity assays. This approach would not only enrich the dataset but also offer a comprehensive understanding of the interrelation of amino acids in the sequence.
In order to advance these methods and accelerate the discovery of AOPs using machine learning and quantum calculations, several suggestions are proposed:

1. Exploration of additional deep learning layers, such as convolutional layers, LSTM layers, simple RNN layers, and attention-based layers, to analyze the model's performance 74,75.

2. Development of alternative architectures, including autoencoders, VAEs, and GPT-based generative models [76][77][78]. Autoencoder models would provide an opportunity to develop and benchmark classification models for antioxidant activities, besides their potential as useful generative models, while a GPT's embedding layer can also be used for classification, alongside the possibility of using fine-tuned BERT-based models for the classification tasks [79][80][81][82].

3. Integration of reinforcement learning to develop AOPs with dual activity towards the KEAP1 protein and antioxidant activity. This could involve the use of a classification model for AOP prediction and a molecular docking approach for the KEAP1 protein to train the model to generate active peptides targeting both parameters. Classification or regression models for KEAP1 could also be created. However, the limited number of peptides in the dataset could create challenges in developing models that are sensitive to peptide sequences and their representation.

4. Furthermore, the exploration of unnatural amino acid-based AOPs presents a promising avenue, considering their potential to overcome the limitations of natural peptides in the human body. To enable this exploration, alternative representations such as molecular fingerprints could be considered for developing machine learning models for these peptides. However, as this chemical space is not well understood or explored, developing machine learning models to predict or generate new data points is not well suited here, since we do not have enough data to train a generative model or to be confident in a classification model that could distinguish the activity of a chosen sequence containing natural and unnatural amino acids. With future advancements and explorations on the mentioned topics, the field of AOP design is poised to take a significant step forward.

5. While our DFT calculations, which focus on HOMO and LUMO energies, have successfully identified promising antioxidant peptides, it is crucial to recognize the inherent generalization of this approach. For instance, comparing the predicted activity of the selected peptides, based on their calculated parameters, with the experimental results reveals a clear challenge to the accuracy of this general strategy. The strategy was chosen as a simple method to identify active or inactive AOPs, and it performed well in selecting the final peptides for the synthesis and experimental validation phase as judged by the DPPH assay. However, it is worth noting that GP12 was not the most active peptide in this assay. This shows that, despite the good accuracy for the DPPH assay, these calculations alone are not sufficient to predict and rank the chosen peptides perfectly. Additionally, the proposed ranking based on the calculated properties did not fully align with the final experimental ranking, and the strategy was not very reliable for the hydroxyl assay. To enhance the predictive power of our findings and ensure robust conclusions, future research endeavors should engage in a thorough examination of all potential antioxidant mechanisms.
The incorporation of DFT methods alongside transition state analysis could provide valuable insights for more efficient peptide design. In summary, while our DFT calculations offer novel insights, a comprehensive exploration of antioxidant mechanisms is indispensable for advancing the field of peptide-based antioxidants.

Figure 1. The traditional and the de novo generative method pipelines for antioxidant peptide design.

Figure 2. An overview diagram for the de novo antioxidant peptide design. (a) The base (pre-trained) generative model. (b) The fine-tuned model for predicting AOPs. (c) Classification model for predicting the antioxidant activity of the generated sequences. Five models were developed from the fivefold classification evaluation. (d) 50 thousand peptide sequences with a maximum of eight amino acids were generated from the fine-tuned model. (e) Filtering the generated peptides based on their novelty and uniqueness. (f) Filtering the remaining generated peptides by using the antioxidant classification models and intersecting the results with thresholds of 0.99 and greater based on the output probability of all five classification models. (g) Intersection of two peptide toxicity prediction web servers on the 122 remaining peptide sequences. (h) Clustering the remaining sequences with the Levenshtein distance and choosing the centroid data points of each cluster. (i) Implementing the DFT calculations on the twelve peptides and selecting six peptides based on their properties. (j) Implementing the DPPH scavenging assay. (l) Implementing the hydroxyl scavenging assay. (k) Implementing the hemolysis assay.

Figure 3. Chemical space analysis and comparison of the pre-trained, antioxidant, and generated datasets. (a) Mean amino acid fractions and their standard deviation for the three datasets that were used and generated. (b) Average hydrophobicity of the pre-trained, AOP, and generated AOP datasets.

Figure 4. The HOMO and LUMO maps for the six synthesized peptides. The red balls represent oxygen atoms; the light gray balls represent hydrogen atoms; the dark gray balls represent carbon atoms; the blue balls represent nitrogen atoms.

Figure 5. (a) Comparison of the color intensity of different samples based on the antioxidant activity in the DPPH assay. (b) Comparison of the color intensity of different samples based on the antioxidant activity in the hydroxyl radical scavenging activity assay. Ascorbic acid was used as the positive control (C+) and methanol was used as the negative control (C−). (c) The DPPH radical scavenging activity of the six synthesized peptides compared with ascorbic acid (Asc) as a positive control. (d) The hydroxyl radical scavenging activity of the synthesized peptides compared with Asc as a positive control.

Figure 6. Hemolysis activity of different concentrations of peptides in vitro. The red blood cells were incubated with different concentrations of GP1, GP2, GP5, GP9, GP10, and GP12; 0.1% SDS + distilled water and PBS were used as the positive and negative controls, respectively. After 4 h, the supernatants were transferred into a new 96-well plate and the absorbance was read at 450 nm. (a) The appearance of the supernatant in the wells shows that, as with the negative control, there were no hemoglobin particles in the wells incubated with different concentrations of peptides, whereas in the positive control the red color of the supernatant indicated the release of hemoglobin particles into the supernatant fluid. (b) The OD values for the negative and positive controls were 0.015 and 2.21, respectively, while the ODs of the different peptide concentrations were in a low range, from 0.07 to 0.25.

Figure 7. The RMSF of the receptor's residues in the different simulated systems. (a) Significant changes and high flexibility of the receptor; (b) no significant variations in the flexibility of the receptor.

Figure 8.

Preparation of the test sample, reagent, and control sample in the DPPH assay is as follows: peptide samples with a concentration

Table 1. The antioxidant peptide classification model's performance and the average results.

Table 2. The DFT-calculated E_HOMO, E_LUMO, Eg, IP, and EA values of the peptides GP1-GP12. All energy values are in eV. Values in parentheses were calculated with the BLYP method.

Table 3. Different types of hydrogen bonds (H-bonds) between residues of the KEAP1 protein.
8,283.8
2024-03-18T00:00:00.000
[ "Chemistry", "Computer Science" ]
Enhancement of TE polarized light extraction efficiency in nanoscale (AlN)m/(GaN)n (m>n) superlattice substitution for Al-rich AlGaN disorder alloy: ultra-thin GaN layer modulation

The problem of achieving high light extraction efficiency in Al-rich AlxGa1−xN is of paramount importance for the realization of AlGaN-based deep ultraviolet (DUV) optoelectronic devices. To solve this problem, we investigate the microscopic mechanism of valence band inversion and light polarization, a crucial factor for enhancing light extraction efficiency, in Al-rich AlxGa1−xN alloy using the Heyd–Scuseria–Ernzerhof hybrid functional, the local-density approximation with 1/2 occupation, and the Perdew–Burke–Ernzerhof functional, in which the spin–orbit coupling effect is included. We find that the microscopic Ga-atom distribution can effectively modulate the valence band structure of Al-rich AlxGa1−xN. Moreover, we prove that the valence band arrangement in the decreasing order of heavy hole, light hole, and crystal-field split-off hole can be realized by using a nanoscale (AlN)m/(GaN)n (m>n) superlattice (SL) substituting for the Al-rich AlxGa1−xN disorder alloy as the active layer of optoelectronic devices, owing to the ultra-thin GaN layer modulation. The valence band maximum, i.e., the heavy hole band, has px- and py-like characteristics and is highly localized in the SL structure, which leads to the desired transverse electric (TE) polarized (E⊥c) light emission with improved light extraction efficiency in the DUV spectral region. Some important band-structure parameters and electron/hole effective masses are also given. The physical origin of the valence band inversion and TE polarization in the (AlN)m/(GaN)n SL is analyzed in depth.

Introduction
AlxGa1−xN alloys have a large direct band gap, ranging from 3.4 eV for GaN to 6.2 eV for AlN, making them very useful for ultraviolet (UV) and deep ultraviolet (DUV) light emitting diodes (LEDs), laser diodes (LDs), and visible/solar-blind UV detectors with operating wavelengths down to 200 nm [1][2][3][4][5]. These optoelectronic devices can be widely used in the areas of water purification, bio-agent detection, sterilization, and medicine [6][7][8][9]. However, it is still a formidable task to pursue highly efficient DUV LEDs and LDs due to the low light extraction efficiency (∼0.1% at 230 nm) and emission power (a few tens of nW) of the Al-rich AlxGa1−xN active layer in these devices [6]. The key issue in the Al-rich AlGaN-based active region is related to its valence subband crossover (see figure 1 in reference [10]), which is different from that of the InGaN-based active region. In the InGaN-based active region, the charge separation effect is a key limitation on light extraction efficiency. To suppress the charge separation effect induced by the strong built-in electric field (of the order of MV/cm) due to spontaneous and piezoelectric polarization effects [11], several approaches (semipolar-plane growth [12], staggered quantum wells (QWs) [13,14], ternary InGaN substrates [15], etc.) have been proposed to improve the electron-hole wave function overlap and the radiative recombination rate in InGaN-based LEDs. Recently, some light scattering or redirection structures, such as reflective scattering structures [16], GaN micro-domes [17], colloidal-based microlens arrays [18], TiO2 microsphere arrays [19], and silica sphere arrays [20], have also been adopted to improve light extraction efficiency in these LED devices.
The optimized light extraction efficiency based on these novel structures can be expected to be enhanced several times compared with that of conventional planar LEDs. It has been known that light extraction efficiency has a close relationship to light polarization because it determines the light emission patterns and their propagation direction in Al-rich AlGaN-based LEDs. The intensity of light emission with transverse electric (TE) polarization, originating from the interband optical transition between the conduction and heavy hole (HH) bands, in c-oriented AlxGa1−xN decreases dramatically with increasing Al content [17,[21][22][23][24][25]. The transverse magnetic (TM) component of spontaneous emission becomes dominant in Al-rich AlxGa1−xN due to strong interband optical transition from the conduction to the crystal-field split-off hole (CH) band. Obviously, there is a critical Al content, i.e., the cross-point between the HH and CH bands, at which light switches its polarization from TE to TM mode [10]. However, the reported polarization switching values are scattered. For example, Nam et al [21] found that the emitted light alters its polarization from TE to TM mode in AlxGa1−xN epitaxial layers grown on c-plane sapphire for x > 0.25. Kawanishi et al [22] reported that the light-polarization switching occurs at x ≈ 0.36-0.41 in AlxGa1−xN multiquantum wells (MQWs) on AlN/SiC substrate. The threshold up to x ≈ 0.82 for light emission with TE polarization in AlxGa1−xN MQW LEDs with well width of ∼1.3 nm has also been observed [26,27]. The high polarization switching value in the thin QW structures has been attributed to the effect of strain and quantum confinement [26]. It is natural to raise an important question with respect to achieving TE polarization in Al-rich AlxGa1−xN alloy other than in thin QW structures. Our purpose here is to investigate the fundamental physics, explore effective ways to increase the polarization switching value, and enhance TE polarized light extraction efficiency in Al-rich AlxGa1−xN alloy. The low light extraction efficiency in Al-rich AlxGa1−xN is because of its AlN-like valence band structure and polarization property, which are different from those of GaN [28,29]. For a wurtzite (WZ) structure, the combined action of crystal-field splitting and spin-orbit coupling (SOC) leads to a three-edge structure at the top of the valence band. A negative crystal-field splitting energy Δcr exists in AlN because it has a much smaller c/a ratio and a larger u parameter than those of GaN [29]. Thus, the top three levels of the valence band in AlN are Γ7v^1 (CH), Γ9v^6 (HH), and Γ7v^6 light hole (LH), in decreasing order of energy. The superscript (subscript) stands for the corresponding irreducible representation without (with) the SOC effect. Because of the very large negative Δcr in AlN (Δcr ≈ −220 meV), the fundamental optical transition at the Γ point is from the conduction band to the top CH band. The zone-center wave function of the CH band has almost |Z〉-like character [30]. Hence the light polarization is mainly parallel to the crystal c-axis (E∥c). On the other hand, the positive crystal-field splitting energy in GaN leads to the sequence Γ9v^6 (HH), Γ7v^6 (LH), and Γ7v^1 (CH) of the top valence band [28,31], which gives rise to the fundamental optical transition between the conduction and HH bands.
The dominant light polarization thus becomes perpendicular to the c-axis (E⊥c) because the wave function of the HH band at the zone center is mainly composed of |X ± iY〉 characters [30]. As a consequence, Al-rich AlGaN-based LEDs and LDs show dominant TM polarization rather than the desired TE polarization when the Al content exceeds the cross-point between the HH and CH bands [10].

We note that nanometer-scale compositional inhomogeneity has an important influence on the luminescence efficiency of group III nitride semiconductors [32][33][34][35][36][37]. It has been well accepted that indium atoms form nanometer-scale In-rich quantum dot-like structures in InxGa1−xN alloys due to phase separation [34][35][36], which significantly enhances light emission efficiency. Recent first-principles calculations also show that several-atom In-N clusters, acting as radiative recombination centers, can strongly localize electrons at the valence band maximum (VBM) and dominate the light emission in Ga-rich InxGa1−xN alloy even though it has a high threading dislocation density (10^9 cm^−2) [38][39][40][41]. Similarly, it has been shown that the nanoscale islands or quantum dots observed in AlxGa1−xN ternary alloy can improve its internal quantum efficiency [32,33,37]. Furthermore, atomic-scale compositional superlattices (SLs) have also been observed in Al-rich AlxGa1−xN thin films grown by molecular-beam epitaxy [42][43][44][45][46]. Based on the six-band k·p formalism, Zhang et al found that the TE-polarized optical gain can be enhanced by using optimized AlGaN-δ-GaN QWs [47,48]. We thus believe that nanometer-scale compositional inhomogeneity should have a great impact on the electronic structures and optical properties of Al-rich AlxGa1−xN ternary alloy and can enhance its light extraction efficiency.

This paper is dedicated to solving the key issues of valence band inversion and light polarization in Al-rich AlxGa1−xN alloy by focusing our attention on the nanoscale compositional inhomogeneity in it. We investigate the fundamental physics related to the valence band crossover and explore possible ways to enhance the TE polarized light extraction efficiency of AlGaN-based DUV LEDs and LDs based on first-principles calculations. To guarantee the reliability of our calculations of the electronic structures and optical properties, we carefully compare our results obtained from three different schemes, i.e., the Heyd-Scuseria-Ernzerhof (HSE) hybrid functional [49,50], the local-density approximation with 1/2 occupation (LDA-1/2) [51], and the generalized gradient approximation with the Perdew-Burke-Ernzerhof (GGA-PBE) functional [52]. By simulating different Ga-atom distributions in Al-rich AlxGa1−xN ternary alloy, we find that the nanoscale (AlN)m/(GaN)n (m>n) SL with an ultra-thin GaN layer, such as one GaN monolayer, can convert the VBM from the CH to the HH band, which directly leads to TE polarized light emission in the DUV spectral region.

The rest of this paper is organized as follows. In section 2, we outline our calculation methods. The numerical results for the electronic structures and optical properties of the (AlN)m/(GaN)n (m>n) SL and the Al-rich AlxGa1−xN disorder alloy are given and discussed in section 3. The SOC effect is taken into account to obtain accurate band structures near the Γ point. Section 3.1 gives the electronic band structures and densities of states (DOSs).
In section 3.2, some important band parameters associated with the k·p Hamiltonian are given by means of direct fitting to our first-principles band structures. The electric charge density and optical properties are calculated and discussed in sections 3.3 and 3.4, respectively. The physical origin of the valence band inversion in the (AlN)m/(GaN)n SL is thoroughly analyzed in section 3.5. Finally, our main conclusions are summarized in section 4.

Calculation methods
Our first-principles calculations are based on density functional theory (DFT) and are carried out using the Vienna ab initio simulation package (VASP) code [53,54], implemented with projector augmented wave (PAW) potentials [55]. We adopt the AM05 exchange-correlation (XC) functional [56] to optimize our structural parameters because it gives more reasonable lattice constants than the traditional LDA and GGA-PBE [57,58]. The important SOC effect is taken into account in the present calculations. The active layer of DUV optoelectronic devices (LEDs and LDs) usually consists of an Al-rich AlxGa1−xN MQW with a typical thickness of 20-30 nm [27,59,60], which can be constructed from an ∼20-period WZ AlGaN supercell (∼1.5 nm) generated by 3 × 3 × 3 primitive cells (see figure 1). Two different Ga distributions are considered here. One is the nanoscale (AlN)m/(GaN)n SL, and the other is the Al0.83Ga0.17N disorder alloy (see figures 1(c) and (d)). The Al (3s^2 3p^1), Ga (3d^10 4s^2 4p^1), and N (2s^2 2p^3) electrons are treated as the valence electrons. A cutoff energy of 550 eV for the plane-wave basis set is used. The Brillouin zone integration is sampled with a 4 × 4 × 2 Γ-centered Monkhorst-Pack [61,62] k-point mesh. Convergence with respect to the plane-wave cutoff energy and k-point sampling has been carefully checked. The total energy is converged to less than 10^−5 eV during the geometry optimization. The forces acting on all atoms are less than 0.02 eV Å^−1. The lattice parameters and atomic coordinates are optimized by minimizing the total energy and the Hellmann-Feynman forces so that the strains caused by the large covalent radius of the Ga atom can be released completely in our SL structures. The optimized structures are then used to calculate the electronic structures and optical properties of the Al-rich AlxGa1−xN alloy. Note that our SL structures can be achieved by means of metalorganic vapor phase epitaxy [44], switched atomic layer metalorganic chemical vapor deposition [63], and molecular beam epitaxy [64].

Considering that the band gap in the usual LDA and GGA calculations is seriously underestimated, due to an incomplete cancellation of the artificial self-interaction and the lack of a discontinuity of the exchange-correlation potential in going from the valence to the conduction band, we adopt the LDA-1/2 method [51] and the HSE hybrid functional [49,50] to improve our calculations. Generally, the LDA-1/2 method includes the self-energy of the particle excitation and can give an accurate band gap of semiconductors at a computational cost comparable to the ordinary LDA or GGA [51]. In the LDA-1/2 scheme, the atomic self-energy potential is expressed as the difference between the all-electron potential of the atom and that of the half-ion,

V_S(r) = V_atom(r) − V_half-ion(r).  (1)

The potential V_S has a long-range Coulomb tail that has to be trimmed by means of a cutoff function,

Θ(r) = [1 − (r/CUT)^n]^3 for r ≤ CUT, and Θ(r) = 0 for r > CUT.  (2)

In equation (2), the value of CUT is chosen such that the resulting energy band gap of the crystal reaches its extremum.
The values of CUT and n are further tested by means of a comparison with the experimental band gaps of GaN and AlN. The half ionization is applied to the p-orbital of the N atom and the d-orbital of the Ga atom. We adopt CUT = 2.90 (a.u.) and n = 8 for the N atom, and 1.23 (a.u.) and 100 for the Ga atom. The band gaps calculated with these parameters are 3.51 and 6.09 eV for GaN and AlN, respectively, in good agreement with their experimental values. Hence these parameters are also used to calculate the band structures of the Al-rich AlxGa1−xN alloy.

Within the HSE scheme, the exchange-correlation functional is constructed through a weighted mixing of the PBE [52] exchange (x) and correlation (c) functionals and the Hartree-Fock (HF) exchange term, i.e.,

E_xc^HSE = α E_x^HF,SR(μ) + (1 − α) E_x^PBE,SR(μ) + E_x^PBE,LR(μ) + E_c^PBE,

where the exchange interaction is separated into short-range (SR) and long-range (LR) parts, μ is the screening parameter, and α is the exact-exchange mixing ratio. In general, the band-structure parameters and the electron and hole effective masses obtained from the HSE calculation are accurate and reliable, even though the HSE calculation is computationally demanding. Some calculations have indicated that the band gap increases if the exchange mixing ratio α increases in the HSE scheme [66,67]. We find from our HSE calculations that band gaps of 3.60 eV for GaN and 6.00 eV for AlN can be obtained by choosing α = 0.32 and μ = 0.20. These band gap values are very close to the experiments and previous calculations (see table 1). We thus adopt α = 0.32 and μ = 0.20 to calculate the electronic structures and optical properties of the Al-rich AlxGa1−xN alloy.

In our optical property calculations, we adopt a dense 8 × 8 × 4 k-point mesh and a Gaussian smearing width of 0.05 eV. The frequency-dependent dielectric tensor is determined within the random phase approximation. Using the longitudinal expression in the long-wavelength limit, the αβ (α, β = x, y, z) component of the imaginary part ε2 of the dielectric function can be expressed as [77]

ε2,αβ(ω) = (4π²e²/Ω) lim_(q→0) (1/q²) Σ_(c,v,k) 2 w_k δ(ε_ck − ε_vk − ω) ⟨u_(ck+e_α q)|u_(vk)⟩ ⟨u_(ck+e_β q)|u_(vk)⟩*,

where e_α represents the unit vector in the α direction; q denotes the wavenumber of the incident electromagnetic wave; Ω is the volume of the unit cell; c and v refer to conduction and valence band states, respectively; w_k is the k-point weight, which sums to 1; and u_ck is the cell-periodic part of the wavefunction at the k point. The real part ε1 of the dielectric function can be derived from the usual Kramers-Kronig transformation [78]. If we choose the c-axis of the WZ crystal along the z-direction of the Cartesian coordinate system, the nonzero components of the dielectric tensor correspond merely to the ordinary (εxx = εyy, E⊥c) and the extraordinary (εzz, E∥c) light polarization. Once we obtain the dielectric function, the absorption coefficient can subsequently be derived easily.

The general k·p Hamiltonian for the WZ structure is directly applied to the nanoscale (AlN)m/(GaN)n SL to get its band-structure parameters by fitting the Hamiltonian to our first-principles band structures, because the SL has the same hexagonal symmetry in the c-plane as GaN and AlN [79]. We solved the eigenvalue equation det|H(k) − E(k)I| = 0 (I is a 6 × 6 identity matrix) and fitted the eigenvalues to the band structures obtained from first-principles calculations for GaN, AlN, and the (AlN)m/(GaN)n SL. All the parameters are initialized to obtain the eigenvalues E_i(k_j) (i = 1-6 corresponding to the six eigenvalues, j = 1-3, and k = 1-7).
The conjugate gradient algorithm is then used to iterate the values of these parameters until the best fit to the first-principles bands is reached, which yields the band-structure parameters Δ1, Δ2, Δ3 and A1-A7 (see references [30,80,81] for their definitions). Considering that the A7 term has a close relationship with the SOC effect and can significantly influence the band dispersion, we adopt the block-diagonalized Hamiltonian, including the A7 term, to fit our band structures [81,82].

Electronic structures
The optimized lattice parameters for GaN, AlN, and Al0.83Ga0.17N with two different Ga distributions, i.e., the (AlN)5/(GaN)1 SL and the disorder alloy, are shown in table 2. We can see from table 2 that the results obtained from the AM05 XC functional are in excellent agreement with the experimental values. This clearly shows that the AM05 XC functional is accurate and reliable. To certify that a good electronic structure can be obtained from the lattice parameters optimized with the AM05 XC functional, we recalculate the electronic band structures of GaN and AlN (refer to figure 2) and compare them with previous theoretical and experimental results in detail. The energy-splitting process of the valence band top is further explained in figure 3. Generally, the crystal-field splitting energy without SOC can be defined as the energy separation between the Γ6v and Γ1v valence states [85] (see table 1). Our calculation gives a negative Δcr value for AlN, which is in good agreement with the previous theoretical value of −219 meV [29] and the experimental result of −220 ± 2 meV [86]. On the other hand, we obtain Δcr = 40 meV (a positive value) for GaN (see figure 2(a)), which is in accordance with the experimental value of 9-38 meV in reference [87] and the theoretical value of 34 meV in reference [88].

Figures 2(c) and (d) further show the electronic band structures of GaN and AlN near the Γ point, in which the important SOC effect is included. We can see from figure 2(c) that the calculated energy difference between the A and B (C) valence bands is 4.7 (43.1) meV for GaN, which is in good agreement with the calculated results of 6 (43) meV in reference [28]. The corresponding energy splitting is 13 (−223.5) meV for AlN (see figure 2(d)). Excellent agreement with the previously calculated values of 13 (−213) meV is confirmed again [29]. The calculated valence band width (VBW) is 7.11 eV for GaN, which is also in good agreement with the experimental value of 7.0 eV [71]. The foregoing results clearly show that the electronic structures calculated from the lattice parameters optimized with the AM05 XC functional are reliable and accurate. We thus calculate the electronic structures and optical properties of the Al-rich AlxGa1−xN alloys from the structures optimized with the AM05 XC functional.

It is well known that a high Ga concentration is required to reverse the valence band order in AlxGa1−xN ternary alloy due to the large negative crystal-field splitting energy in AlN. It is thus quite difficult to achieve valence band order inversion in Al-rich AlxGa1−xN alloy. In order to explore possible methods to reverse the top valence band order, we calculate the electronic band structures (see figure 4) of Al0.83Ga0.17N as the Ga-atom distribution is changed from the disorder alloy to the nanoscale SL structure. The calculated crystal-field splitting energy is 208.6 meV for the (AlN)5/(GaN)1 SL, which is much larger than the value of −153.5 meV for the Al45Ga9N54 disorder alloy. The band gap of the (AlN)5/(GaN)1 SL shows a slight decrease compared with that of the disorder alloy.
This clearly indicates that the light emission remains in the DUV spectral region (∼237 nm) when the (AlN)5/(GaN)1 SL is substituted for the Al45Ga9N54 disorder alloy as the active layer. We further find that the electronic band structures of the (AlN)4/(GaN)2 and (AlN)3/(GaN)3 SLs are similar to those of the (AlN)5/(GaN)1 SL. As the number of GaN monolayers increases, the crystal-field splitting energy and the VBW increase slightly, whereas the band gap decreases. Hence we turn our attention to the (AlN)5/(GaN)1 SL in the following calculations.

It is interesting to note that the HH and LH bands are degenerate at the Γ point in the SL structure, whereas there is a separation of 9.3 meV between them in the disorder alloy (see figure 4). [Figure caption: Here the band gaps are revised based on the LDA-1/2 calculation and the SOC effect is not included. It is worthwhile to note that the typical character of the VBM state, which dominates the interband optical transition, is converted from the CH (C) band of the disorder alloy to the desired HH (A) band of the (AlN)m/(GaN)n SL due to the strong modulation of the ultra-thin GaN layer. The HH and LH bands are degenerate at the Γ point for the SL structure, whereas they split into two bands for the disorder structure.] The physical reason is that the Ga-atom distribution has the same sixfold rotation symmetry in the SL as in the host crystal. To confirm this conclusion, we further calculate the electronic band structure of the Al48Ga6N54 alloy with two different Ga-atom distributions. One preserves the hexagonal symmetry of the Ga distribution and the other does not. Our calculations show that the HH and LH bands, mainly with px- and py-like characters, are split in the structure without hexagonal symmetry. This is caused by the inequivalence of the x and y directions. In contrast, figure 5 indicates that the HH and LH bands are degenerate at the Γ point in the structure with hexagonal symmetry of the Ga-atom distribution.

The electronic band structures and corresponding partial densities of states (PDOSs) for the nanoscale (AlN)5/(GaN)1 SL are presented in figure 6 for three different methods, i.e., HSE, LDA-1/2, and GGA-PBE, in which the SOC effect is considered. For convenience, the top of the valence band at the Γ point is set as the reference energy level for these three different calculations. We can see from figure 6(a) that, according to the HSE calculation, the band gap Eg is 5.24 eV, the energy difference between the HH and LH (LH and CH) bands is 9.1 (216.7) meV, and the VBW is 7.06 eV. Moreover, both the HH and LH bands become flat and dispersionless along the Γ-A direction, which indicates that the HH and LH states are highly localized and have a very large effective mass along the c-axis (listed in table 4). Figure 6(b) shows that the conduction band minimum (CBM) is mainly determined by the Ga-4s state. In particular, the PDOS has a sharp peak in the vicinity of the VBM, which is dominated by the 2p states of the N atoms bonded with Ga atoms. This directly proves the strong localization of the HH and LH states in the GaN monolayer of the SL structure. The interband optical transition acquires the desired TE polarization (E⊥c), because it occurs between the VBM determined by the N atoms bonded with Ga atoms and the CBM dominated by Ga atoms. This clearly indicates that TE polarization can be achieved in the (AlN)m/(GaN)n (m>n) SL instead of in the Al-rich AlxGa1−xN disorder alloy.
Our LDA-1/2 calculation (see figure 6(c)) shows that the band gap E g is 5.23 eV, the energy difference between the HH and the LH (LH and CH) is 11.7 (199.1) meV, and the VBW is 5.48 eV. The corresponding results for the GGA-PBE calculation (refer to figure 6(e)) are E g = 3.48 eV, ΔE AB = 10.7 meV, ΔE BC = 203.7 meV, and VBW = 6.42 eV. Obviously, the band gap derived from the LDA-1/2 calculation is in excellent agreement with the HSE result. Except for the band gap, the valence band structures obtained from the GGA-PBE calculation are in good accordance with the HSE calculation. In addition, the PDOSs obtained from the LDA-1/2 (see figure 6(d)) and GGA-PBE (see figure 6(f)) are similar to those derived from the HSE calculation (see figure 6(b)). Furthermore, we calculate the crystal-field splitting energy in the case of ignoring the SOC effect in order to have a comprehensive comparison among these three methods. Our calculated values are 221.3 meV for HSE, 203.2 meV for LDA-1/2, and 208.6 meV for GGA-PBE. All the foregoing results clearly show that the combination of the LDA-1/2 and GGA-PBE methods, instead of the computationally expensive HSE calculation, can give a reliable prediction of the electronic structures of semiconductors. Band-structure parameters The k p · method [30] is widely used to calculate the band structures of semiconductors near the band edge. Many band-structure parameters related to the k p · Hamiltonian have been well established for WZ GaN and AlN [30,69,82,[88][89][90][91]. Among these parameters, the valenceband effective mass parameters A i (i = 1-7) determine the band dispersion and Δ i (i = 1-3) determine the splitting energy at the Γ point. Usually the corresponding parameters for the ternary compound Al x Ga −x 1 N are obtained by means of the linear interpolation of GaN and AlN values [92,93]. Generally speaking, the parameters calculated in this way are unreliable. More specifically, it is impossible to obtain the k p · parameters of the (AlN) m /(GaN) n SL by linear interpolation. A preferable method to determine these important band-structure parameters for the ternary compound is to fit the k p · Hamiltonian to the first-principles band structures. Our fitted band-structure parameters for the (AlN) 5 /(GaN) 1 SL are presented in table 3. For the sake of comparison, the band-structure parameters for GaN and AlN together with the previous theoretical data are also listed in table 3. It can be seen from table 3 that our calculated band-structure parameters for GaN and AlN are in excellent agreement with the Table 3. Comparison of our calculated k p · parameters with other theoretical results for GaN and AlN. The last row denotes our calculated k p · parameters of the nanoscale (AlN) 5 /(GaN) 1 SL. Here Δ i (i = 1-3) is in units of meV, and A i (i = 1-6) is in units of  m 2 2 0 . The unit of A 7 is eVÅ. previous published values. Naturally, the predicted k p · parameters for the (AlN) 5 /(GaN) 1 SL structure are reliable and can be used for device simulations. The electron and hole effective masses are also calculated based on our first-principles band structures. Considering the anisotropy of the WZ structure, we assume a parabolic energy dispersion for the lowest conduction band with different effective masses in and out of the cplane. The electron effective masses are denoted as ⊥ m e and ∥ m e for perpendicular and parallel to the c-axis, respectively. 
The electronic band structure near the CBM is given by the following formula, assuming a parabolic dispersion with anisotropic effective masses: E(k) = E_CBM + ℏ²(k_x² + k_y²)/(2m_e⊥) + ℏ²k_z²/(2m_e∥). It can be seen from figure 7 that the CBM, HH, and LH states are localized around the GaN monolayer, especially for the HH and LH states. On the other hand, the CBM, CH, HH, and LH bands are delocalized Bloch-like states in the disorder alloy. Both the HH and LH bands are the px- and py-like states, whereas the CH band is the pz-like state. The interband optical transition from the CBM to the VBM (CH) becomes E∥c (TM polarization) for the disorder alloy due to the pz-like symmetry of the CH state. Nevertheless, the strong E⊥c (TE polarization) optical transition from the CBM to the VBM (HH) can be realized in the (AlN)5/(GaN)1 SL due to the strong localization of the px- and py-like HH state. This is suitable for applications of some DUV optoelectronic devices such as LEDs and LDs. To have a comprehensive understanding of the polarization properties of the (AlN)5/(GaN)1 SL, we further calculate the dielectric spectra and absorption coefficient as a function of the incident photon energy with the HSE, LDA-1/2, and GGA-PBE methods (see figure 8). We can see from figure 8 that the results derived from the HSE and GGA-PBE calculations are similar. At the same time, the dielectric spectra obtained from the LDA-1/2 calculation are in good accordance with the HSE result when the energy is less than 8.0 eV. However, there are some minor differences for the higher-energy interband optical transitions between them due to the lack of sufficient precision for the conduction and valence band structures away from the band edge in the LDA-1/2 calculation. Obviously, the HSE calculation not only corrects the well-known band gap underestimation problem inherent in the GGA-PBE calculation, but can also provide accurate electronic structures as well as optical properties. Moreover, we can see from figure 8 that the peak value of ε2^xx and ε2^yy at the fundamental absorption edge (black line in figure 8) is larger than that of ε2^zz (red line in figure 8). This indicates that the interband optical transition is dominated by the TE polarization in the (AlN)5/(GaN)1 SL. This is further confirmed by the absorption spectra (see the insets of figure 8). Physical origin of band inversion It has been known that the crystal-field splitting energy Δcr has a sensitive dependence on the cell-internal structural parameter u and the ratio c/a of the lattice constants [28,29,100]. Furthermore, it is confirmed that Δcr linearly depends on u and c/a in the WZ structure [85]. However, it is still unclear why Δcr is positive in GaN and negative in AlN even though the WZ GaN has the same space group C_6v^4 as AlN. To have a thorough understanding of the physical origin, we calculate Δcr as a function of c/a and u for GaN and AlN (see figures 9 and 10). In our calculations, the a-lattice constant is fixed at the experimental value of 3.180 and 3.112 Å for GaN and AlN, respectively. The internal parameter u is fixed to the ideal value of 0.375 when Δcr is calculated as a function of c/a. We adopt the ideal value c/a = √(8/3) to calculate Δcr as a function of u. It can be seen from figures 9 and 10 that Δcr has a good linear relationship with c/a and u. By virtue of a linear fitting with the formula given in [85], we quantify how Δcr varies with c/a and u. (Figure 6 caption, continued: based on the LDA-1/2 calculation, the band gap is revised to 5.23 eV in (c) and its inset by using the scissor operator value of 1.75 eV.)
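The linear dependence of Δcr on u (and, analogously, on c/a) invoked above can be quantified with an ordinary least-squares fit. The sketch below shows the idea on invented (u, Δcr) pairs; the numbers are placeholders chosen only so that the fitted slope matches the ∼20 meV per 0.001 sensitivity quoted in the following paragraph, not calculated data from this paper.

import numpy as np

def linear_sensitivity(x, delta_cr_mev):
    """Least-squares slope and intercept of Delta_cr against a structural
    parameter x (e.g. u or c/a), assuming the linear dependence reported above."""
    slope, intercept = np.polyfit(x, delta_cr_mev, 1)
    return slope, intercept

# Hypothetical GaN-like data: Delta_cr decreases as u increases
u        = np.array([0.373, 0.374, 0.375, 0.376, 0.377])
delta_cr = np.array([ 76.0,  56.0,  36.0,  16.0,  -4.0])   # meV, made up

slope_u, _ = linear_sensitivity(u, delta_cr)
print(f"d(Delta_cr)/du ~ {slope_u:.0f} meV per unit u")
# A decrease of u by 0.001 then raises Delta_cr by ~ -slope_u * 0.001 = 20 meV.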
It is worthwhile to note that the nanoscale (AlN)5/(GaN)1 SL can be regarded as a 3 × 3 × 3 AlN supercell with one layer of Al atoms replaced by Ga atoms. This will lead to an increase in c/a owing to the larger covalent radius of the Ga atom and a decrease in u due to the competition between bond-bending and bond-stretching forces in the (AlN)5/(GaN)1 SL [101]. Our calculations (see figures 9 and 10) definitely show that the crystal-field splitting energy increases with c/a and decreases with increasing u. By comparison, the internal structure parameter u has a more profound influence on Δcr than c/a. We can see from figures 9 and 10 that Δcr increases by ∼20 meV if u decreases by 0.001, whereas an increase in c/a by 0.001 leads to an increase in Δcr by only ∼3 meV. Considering that both u and c/a are closely related to the bond lengths and bond angles, we thus turn our attention to the relationship of the bond lengths and bond angles to the crystal-field splitting energy in order to explore the physical origin of the valence band inversion in the (AlN)m/(GaN)n SL. To investigate the influence of bond lengths and bond angles on the crystal-field splitting energy, we present the cation-centered tetrahedron in figure 11, in which R1 denotes the nearest neighbor cation-N bond length along the c-axis, R2 is the length of the other three equivalent cation-N bonds, and α1 and α2 are the corresponding bond angles around the cation site. The bond angles, i.e., α1 and α2, are not independent due to the hexagonal symmetry of the WZ structure. According to equation (8), we obtain u = 0.370 (0.374) for the Al-center (Ga-center) tetrahedron in the (AlN)5/(GaN)1 SL. Compared with the value of u in AlN, u decreases by approximately 0.012, which gives rise to an increase of ∼240 meV in the crystal-field splitting energy. Moreover, u decreases by ∼0.003 for the Ga-center tetrahedron in the (AlN)5/(GaN)1 SL compared with GaN. This enlarges the positive crystal-field splitting energy by ∼60 meV. We thus can understand that the decreasing order of the CH and HH bands in the Al-rich AlxGa1−xN disorder alloy can be inverted to HH/LH and CH bands in the (AlN)5/(GaN)1 SL. Conclusions In this paper, we theoretically prove that the valence band order can be inverted and the TE polarized light extraction efficiency can be enhanced in the nanoscale (AlN)m/(GaN)n (m > n) SL with an ultra-thin GaN layer instead of in the Al-rich AlxGa1−xN disorder alloy by using three different approaches, i.e., HSE, LDA-1/2, and GGA-PBE, in which the SOC effect is included. The calculated electronic band structures from the lattice constants optimized with the AM05 XC functional are in excellent agreement with experiments for GaN and AlN. Our calculations show that the microscopic Ga-atom distribution in the Al-rich AlxGa1−xN alloy can effectively modulate its electronic band structures. The valence-band arrangement in the order HH, LH, and CH from the top can be achieved in the (AlN)m/(GaN)n SL. The crystal-field splitting energy in the SL is much larger than that in the corresponding disorder alloy. In addition, both HH and LH bands have a very large effective mass (∼33 m0) in the (AlN)m/(GaN)n SL. The HH and LH bands are degenerate at the Γ point if the Ga-atom distribution holds the hexagonal symmetry. Furthermore, the VBM of the SL structure is the HH band with px- and py-like characters, whereas the pz-like CH band becomes the VBM of the disorder alloy.
High light extraction efficiency with TE polarization (E⊥c) can thus be obtained in the (AlN)m/(GaN)n SL as opposed to TM polarization in the Al-rich AlxGa1−xN disorder alloy. Our calculations further show that the crystal-field splitting energy is mainly determined by the cell-internal structure parameter u, whereas u sensitively depends on the bond lengths and bond angles. It is the variation of bond lengths and bond angles that leads to the inversion of CH and HH bands in the (AlN)m/(GaN)n SL. We also find from our calculations that the computationally expensive HSE calculation for the electronic structures and optical properties can be substituted by the combination of the GGA-PBE and LDA-1/2 methods. The results obtained in this paper are vital for enhancing the TE polarized light extraction efficiency of Al-rich AlGaN-based DUV LEDs and LDs.
8,593.2
2014-11-26T00:00:00.000
[ "Physics", "Engineering", "Materials Science" ]
Future river-flood damage increases under aggressive adaptations The risk of river flooding is expected to rise with climate change and socioeconomic development 1-6, and therefore additional protection measures are required to reduce increased flood damage. Previous studies have investigated the effectiveness of adaptation measures to reduce flood risks 7,8; however, there has been no evaluation of residual flood damage (RFD), which reflects the unavoidable increase in damage even under an aggressive adaptation strategy. Here, we evaluated RFD under several adaptation objectives. We found that China, India, Russia and countries in central Africa and Latin America can achieve a higher level of flood protection that will reduce RFD even under extreme scenarios. However, high RFD exceeding 0.1% of GDP remains, especially in eastern China, northern India, eastern Europe and central Africa. The high RFD are inevitable assuming the average construction period required for hard infrastructure (30 years), implying the need for immediate adaptation measures as well as soft adaptation. Introductory Paragraph The risk of river flooding is expected to rise with climate change and socioeconomic development [1][2][3][4][5][6], and therefore additional protection measures are required to reduce increased flood damage. Previous studies have investigated the effectiveness of adaptation measures to reduce flood risks 7,8; however, there has been no evaluation of residual flood damage (RFD), which reflects the unavoidable increase in damage even under an aggressive adaptation strategy. Here, we evaluated RFD under several adaptation objectives. We found that China, India, Russia and countries in central Africa and Latin America can achieve a higher level of flood protection that will reduce RFD even under extreme scenarios. However, high RFD exceeding 0.1% of GDP remains, especially in eastern China, northern India, eastern Europe and central Africa. The high RFD are inevitable assuming the average construction period required for hard infrastructure (30 years), implying the need for immediate adaptation measures as well as soft adaptation. Main Text River floods are major natural disasters, causing serious economic losses and damage worldwide. Economic damage due to river flooding is projected to increase worldwide in the future, and more threatening conditions can be anticipated with the increasing global population and socioeconomic development [1][2][3][4][5][6]. Immediate effective adaptation measures should therefore be made for mitigating future damage. Conducting effective adaptation measures at the global scale requires information about residual flood damage (RFD), which refers to unavoidable flood damage above the current protection level, even under an adaptation strategy based on feasible adaptation costs. To clarify local differences in the magnitude of RFD, estimations of the affordable adaptation level, which reflect local economic conditions and local costs of adaptation measures, are required to determine the feasibility of the adaptation measures. Adaptation costs at the global scale have been quantified in a few previous studies. For example, Jongman et al. 9 demonstrated that an adaptation cost of approximately €1.75 billion for increasing the flood protection level in all river basins in the EU could decrease the €7 billion total expected annual flood losses by 2050. Winsemius et al. 3 and Ward et al.
7 showed that global adaptation costs for levees could produce a much higher benefit (reduced damage through additional adaptation) in most combinations of climate and socioeconomic scenarios. Here, we estimated global RFD under the feasible maximum adaptation level, i.e. the maximum future flood protection level that is both attainable and economically beneficial. This produced the highest net benefit (i.e. the cost of additional adaptation subtracted from the benefits) and was referred to as the 'optimized adaptation objective'. The reduced damage was estimated by considering damage with and without additional adaptation measures (see "Estimation of RFD and benefits" in the methods). We set a maximum limit of the adaptation level as a 1000-year return period of maximum flood magnitude in the past climate based on the current distribution of flood protection standards 10, which was derived from the FLOod PROtection Standards (FLOPROS) database. The local adaptation level under the adaptation objective was calculated for each subnational administrative unit. It should be noted that RFD is not the total damage due to flooding, but the increase in future damage over that under the current protection level. Interestingly, the RFD was still very high under the low emission scenario (16.7 billion USD per year, RCP2.6/SSP1), which was due to the high level of economic development in the inundation areas exposed to flooding. Because the estimated adaptation costs were similar among the scenarios (8.7- billion USD per year), the flood protection level reached the level required to obtain the maximum net benefit (i.e. reduced flood damage minus the adaptation cost) (see "Estimation of RFD and benefits" in the Methods section). On the other hand, flood protection levels remained low in countries where the adaptation costs were higher than the benefits of adaptation (i.e. the amount of damage reduction), which was observed in many regions of Africa, Bolivia and Paraguay. The estimated RFD under the assumption of an economic limitation identified regions or countries where aid funding agencies or international cooperative frameworks should support adaptation to the effects of climate change in terms of flood risk. To assess the economic limitation on future flood protection levels, we conducted a similar analysis under the maximum adaptation objective, which minimized future flood damage (maximized benefits) without considering the local economic limitation. The maximum adaptation objective would reduce future flood damage by 73.6 billion USD per year. However, a significant RFD still remained in regions such as China, north-eastern Australia, southern and northern India, Siberia, eastern Europe, Nigeria, Alaska and northern Argentina. The main reason for the significant RFD was flood damage that occurred during construction (i.e. 2020-2050) (Supplementary Figure S4). Hardware adaptation measures require a long time to become effective; therefore, early decisions and other soft measures are also needed to reduce the increased flood damage under a warming climate 8. The RFD was high in areas of Asia, central Africa and Latin America that have experienced strong socioeconomic development, where the magnitude and frequency of flooding are projected to increase in the future 2. In these regions, the flood protection standard required a high return period (Figure 1). On the other hand, the RFD in Europe and North America exceeded 0.01% of the GDP for the optimized adaptation objective.
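Stated algorithmically, the 'optimized adaptation objective' described above amounts to scanning the discrete adaptation levels and keeping the one with the largest discounted net benefit, subject to the 1000-year protection cap. The Python sketch below illustrates only that selection logic; the benefit, cost, and protection-level functions are invented placeholders and not the damage model used in this study.

import numpy as np

def optimized_adaptation_level(levels, benefit, cost, fpl_future, fpl_cap=1000.0):
    """Return the adaptation level with the largest net benefit.

    levels     : candidate adaptation levels (e.g. 0.0 to 10.0 in 0.25 steps)
    benefit    : benefit(L), discounted reduction in flood damage
    cost       : cost(L), discounted construction plus O&M cost
    fpl_future : fpl_future(L), resulting protection level (return period, years)
    fpl_cap    : maximum admissible protection level (1000-year return period)
    """
    best_level, best_net = None, -np.inf
    for L in levels:
        if fpl_future(L) > fpl_cap:       # adaptation limit
            continue
        net = benefit(L) - cost(L)
        if net > best_net:
            best_level, best_net = L, net
    return best_level, best_net

# Purely illustrative, saturating benefit and convex cost (million USD)
levels  = np.arange(0.0, 10.25, 0.25)
benefit = lambda L: 120.0 * (1.0 - np.exp(-0.5 * L))
cost    = lambda L: 6.0 * L + 0.8 * L**2
fpl     = lambda L: 50.0 * 2.0**L          # hypothetical doubling per level

print(optimized_adaptation_level(levels, benefit, cost, fpl))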
In Europe and North America, adaptation costs would be greater than the benefits. This is because a high level of flood protection already exists (>50-year return period, Supplementary Figure S1), and because the frequency of large floods in the future (e.g. 100-year floods) would decrease 2. The maintenance of current flood protection levels was the best economic option under the optimized adaptation objectives. This trend did not change with lower or higher adaptation unit costs (Supplementary Figure S5). The regions of eastern Asia, Siberia, western China, southern India, western and central Africa, northeastern Latin America, southern Canada and Alaska had large RFD values (Figure 3a). Among the different parameter-scenario combinations implemented in this study (e.g. SSPs, RCPs, discount rate, unit cost, operation and maintenance (O&M) costs, and protection area), more than 50% produced a significant RFD in these regions for the optimized adaptation objective. However, most regions had a much lower RFD for the maximum adaptation objective (Figure 3b), implying the potential need for an international financial mechanism to increase the resilience of these regions to future increases in flooding. We found a significant RFD under the optimized and maximum adaptation objectives for most parts of the world, indicating a limit to adaptation. In this study, the limit to adaptation was caused mainly by the economic costs in subnational administrative units and the assumed construction period, indicating that early decisions and international funding support are key factors for conducting effective adaptation measures at the global scale. Furthermore, the enhancement of autonomous adaptation via social adaptation activities is important for increasing the limit to adaptation because vulnerability was decreased by autonomous adaptation 6,11,12. Future studies are needed to clarify the relationship between autonomous adaptation and flood protection measures. Method Summary The overall modelling framework consisted of the following steps: (1) global river flood simulation, (2) downscaling flood inundation, (3) damage calculation, (4) estimation of adaptation costs by adaptation level and (5) estimation of RFD and benefits of the two adaptation objectives. Global river flood simulation. The return period of river flooding under various climate scenarios was calculated from the daily total water storage derived from the global river flood simulation. We used the Catchment-based Macro-scale Floodplain (CaMa-Flood) model 13 to conduct a simulation forced by the daily runoff at a 0.5° × 0.5° resolution and output daily total storage at a 0.25° × 0.25° horizontal resolution. For future river flood simulation, we used five general circulation models (GCMs) (GFDL-ESM2M, HadGEM2-ES, IPSL-CM5A-LR, MIROC-ESM-CHEM and NorESM1-M) and four RCPs (2.6, 4.5, 6.0 and 8.5 Wm−2). The cumulative distribution function-based downscaling method (i.e. a nonparametric bias-correction method) using percentiles of empirical cumulative distribution functions removed the biases of climatic variables in the GCMs 14. A global flood inundation simulation was conducted for the period 1961-2005 for historical climate conditions and for the period 2006-2100 for future climate conditions, except for HadGEM2-ES (2006-2098). We did not consider the effects of flood protection levels, human modification of river discharge, or channel bifurcation in the global river flood simulation.
This resulted in uncertainty regarding the inundation areas in mega delta regions (see SI6) 15,16, and caused over- and underestimations of RFD, especially in downstream regions. Downscaling flood inundation. The simulated overflow floodwater volume at a 0.25° × 0.25° resolution was downscaled to obtain the inundation area at a 30′′ × 30′′ resolution. First, the overflow floodwater volume was calculated from the annual maximum total water storage when the return period of the annual maximum flood water exceeded the local protection levels. The return period and corresponding river water storage were estimated based on the Gumbel distribution using the L-moment method 17 and were calculated from the annual maximum total water storage for the period 1961-2005 derived from the river flood reanalysis (see SI1). The current local protection levels were obtained from the model layer of FLOPROS 10 (Supplementary Figure S1). Finally, the overflow floodwater volume was downscaled to a 30′′ × 30′′ horizontal resolution using a high-resolution digital elevation model. The flooded area fraction was calculated at the same resolution. Damage calculation. The RFD was calculated as the increase in flood damage over the present level that would still occur despite the implementation of additional adaptation measures. To quantify RFD, the damage (Risk) was calculated based on the following equation: Risk = Hazards × Exposure × Vulnerability, where Hazards is the magnitude of the flood, Exposure is the value of assets potentially affected by flooding, and Vulnerability is the susceptibility to harm or lack of the socioeconomic capacity to cope with flood risk. Hazards were derived from the overflow flood water depth from the downscaled flood inundation. Exposure was derived from the asset map, which was constructed from a gridded GDP map (see SI2). Exposure was targeted on the assets in the flooded areas, and therefore we used an asset map multiplied by the overflow flooded area fraction derived from the downscaled flood inundation. We used the global damage-depth function derived from Huizinga et al. 18 as a Vulnerability index, which was based on a literature survey, and this index covered each region (Asia, Africa, Europe, Oceania, North America and Central and South America). The damage-depth function was derived from the mean value for the commercial buildings, industrial buildings, transport and infrastructure (roads) sectors. It was noted that flood protection levels were considered as Vulnerability in previous studies 7,19,20; however, we did not explicitly consider flood protection levels in the damage calculation, because we already included flood protection levels in the downscaling of flood inundation. The modelled damage forced by the river flood reanalysis (see SI1) captured not only the global fluctuations of flood damage (Supplementary Figure S6), but also the event-scale damage (Supplementary Figure S7). We compared the modelled damage forced by the historical simulation with the values calculated by other studies. Our estimation was within the range of other estimates (Supplementary Table S1), indicating that it was likely valid. Estimation of adaptation costs by adaptation level. The adaptation costs of hardware measures were considered in this study. They were composed mainly of the costs of construction and O&M. The construction costs were calculated as the dimensions of the required flood protection levels multiplied by their unit costs.
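As a toy illustration of the damage chain described above (fit the annual maxima, check whether the return period exceeds the local protection standard, then evaluate Risk = Hazards × Exposure × Vulnerability for a grid cell), the sketch below uses scipy's Gumbel distribution. Note that scipy's default maximum-likelihood fit stands in here for the L-moment method cited in the text, and the depth-damage curve and all numbers are invented placeholders, not those of Huizinga et al.

import numpy as np
from scipy.stats import gumbel_r

def return_period(annual_maxima, value):
    """Return period (years) of 'value' from a Gumbel fit to annual maxima."""
    loc, scale = gumbel_r.fit(annual_maxima)
    p_exceed = gumbel_r.sf(value, loc=loc, scale=scale)
    return np.inf if p_exceed == 0 else 1.0 / p_exceed

def flood_risk(depth_m, asset_value, damage_depth):
    """Risk = Hazards x Exposure x Vulnerability for one grid cell."""
    return asset_value * damage_depth(depth_m)

# Hypothetical depth-damage curve: full loss is reached around 6 m of water
damage_depth = lambda d: min(1.0, max(0.0, d) / 6.0)

# 45 synthetic annual maxima standing in for 1961-2005 total water storage
maxima = gumbel_r.rvs(loc=100.0, scale=20.0, size=45, random_state=42)
print(round(return_period(maxima, 150.0), 1))            # return period of a 150-unit flood
print(flood_risk(depth_m=2.0, asset_value=3.0e6,          # USD exposure in the cell
                 damage_depth=damage_depth))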
The unit cost was set as 2.404 [million USD/km/log2(flood protection level)], which was derived from the original unit cost database of hardware measures (see SI3). The dimensions of the required protection measures were composed of their construction length and future flood protection levels. The construction length of a flood protection structure was calculated as the river length in a unit catchment (corresponding to a 0.25° × 0.25° horizontal resolution), derived from CaMa-Flood boundary data overlaid on the mask of the protection area. We assumed that the unit catchment was protected when the urban population density derived from the spatially explicit population scenarios in 2050 21 was higher than 400 persons km−2. This value corresponded to the definition of urban in Canada. The total length of the flood protection structure was calculated for subnational administrative units. The future protection levels were determined by adaptation levels for subnational administrative units. Because there were no future scenarios for flood protection levels, we defined the relationship between the future flood protection level (FPL) and the adaptation level by the following equation, where FPL_Future and FPL_Current are the future and current flood protection levels described by return periods [year], respectively, and L is the adaptation level. In this study, L ranged from 0.0 to 10.0 at 0.25 intervals. FPL_Future and FPL_Current ranged from 0 to 1000 years. If FPL_Current was 0.0, FPL_Future was set to 2 years when L = 1.0. We assumed that the costs of workers, materials and land acquisition were included in the construction costs. We assumed the construction period was from 2020 to 2050. The O&M costs that were equal to 1% of the construction costs occurred during the period 2051-2100. Finally, we calculated the adaptation costs as the total cost of construction and O&M, with a 5% discount rate. Estimation of RFD and benefits of the two adaptation objectives. The RFD and benefits were determined under consideration of the adaptation objectives. This analysis was conducted with a discount rate of 5% for subnational administrative units and for the evaluation period 2020-2100. The two adaptation objectives were the 'optimize adaptation objective' and 'maximum adaptation objective'. The optimize adaptation objective maximized the difference between benefits and adaptation costs (i.e. net present value). The adaptation objectives reduced RFD if there were adaptation limitations (i.e. a future protection level within a 1000-year return period). On the other hand, the maximum adaptation objective is an ideal adaptation objective that minimizes RFD as much as possible. The RFD under the optimize adaptation objective was the most affordable option under the specific socioeconomic conditions, while the maximum adaptation objective indicates a limit to adaptation. The RFD was estimated as the difference between future damage with additional adaptation and the relative damage equivalent to the present level.
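A compact sketch of the cost bookkeeping described above (construction cost proportional to the protected river length and to log2 of the target protection level, O&M at 1% of construction cost during 2051-2100, and a 5% discount rate) is given below. The relationship fpl_current * 2**L used here is only an assumption made for illustration, since the study's equation is not reproduced in this text, and the even spreading of construction spending over 2020-2050 is likewise assumed; the other numbers follow the description above.

import math

UNIT_COST = 2.404   # million USD per km per log2(flood protection level)

def future_fpl(fpl_current, L):
    """ASSUMED relationship (illustration only): the protection level doubles per
    adaptation level, capped at a 1000-year return period, with the special case
    FPL_Future = 2 years at L = 1.0 when the current level is zero."""
    if fpl_current <= 0.0:
        return 2.0 if L >= 1.0 else 0.0
    return min(1000.0, fpl_current * 2.0 ** L)

def adaptation_cost(length_km, fpl_current, L, discount=0.05, base_year=2020):
    """Discounted construction (2020-2050) plus O&M (2051-2100) cost, million USD."""
    fpl = future_fpl(fpl_current, L)
    if fpl <= max(fpl_current, 1.0):
        return 0.0
    construction = UNIT_COST * length_km * math.log2(fpl)
    npv = 0.0
    for year in range(2020, 2051):   # ASSUMPTION: construction spending spread evenly
        npv += (construction / 31.0) / (1.0 + discount) ** (year - base_year)
    for year in range(2051, 2101):   # O&M equal to 1% of construction cost per year
        npv += (0.01 * construction) / (1.0 + discount) ** (year - base_year)
    return npv

print(round(adaptation_cost(length_km=120.0, fpl_current=10.0, L=3.0), 1))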
3,610.6
2020-10-28T00:00:00.000
[ "Economics" ]
Enhancer of Zeste Homolog 2 Inhibition Stimulates Bone Formation and Mitigates Bone Loss Caused by Ovariectomy in Skeletally Mature Mice* Perturbations in skeletal development and bone degeneration may result in reduced bone mass and quality, leading to greater fracture risk. Bone loss is mitigated by bone protective therapies, but there is a clinical need for new bone-anabolic agents. Previous work has demonstrated that Ezh2 (enhancer of zeste homolog 2), a histone 3 lysine 27 (H3K27) methyltransferase, suppressed differentiation of osteogenic progenitors. Here, we investigated whether inhibition of Ezh2 can be leveraged for bone stimulatory applications. Pharmacologic inhibition and siRNA knockdown of Ezh2 enhanced osteogenic commitment of MC3T3 preosteoblasts. Next generation RNA sequencing of mRNAs and real time quantitative PCR profiling established that Ezh2 inactivation promotes expression of bone-related gene regulators and extracellular matrix proteins. Mechanistically, enhanced gene expression was linked to decreased H3K27 trimethylation (H3K27me3) near transcriptional start sites in genome-wide sequencing of chromatin immunoprecipitations assays. Administration of an Ezh2 inhibitor modestly increases bone density parameters of adult mice. Furthermore, Ezh2 inhibition also alleviated bone loss in an estrogen-deficient mammalian model for osteoporosis. Ezh2 inhibition enhanced expression of Wnt10b and Pth1r and increased the BMP-dependent phosphorylation of Smad1/5. Thus, these data suggest that inhibition of Ezh2 promotes paracrine signaling in osteoblasts and has bone-anabolic and osteoprotective potential in adults. Decreased bone mineral density (BMD) 3 and matrix material properties are associated with increased fracture risk and an imbalance in the biological activities of bone-forming osteoblasts and bone-resorbing osteoclasts (1)(2)(3). Loss of BMD observed in individuals with osteoporosis, a prevalent skeletal disease, can be mitigated by anti-resorptive agents including bisphosphonates, selective estrogen receptor modulators (raloxifene), or antibodies that inactivate the osteoclast-stimulatory specific ligand RANKL (Denosumab) (4). Therapeutics that stimulate bone formation include bone morphogenetic proteins (i.e. BMP2) and intermittent treatment with parathyroid hormone (PTH) or PTH-related protein (PTHLH), as well as antibody suppression of WNT inhibitors (e.g. SOST), which is both anabolic and anti-resorptive (5)(6)(7). Novel classes of bone-anabolic therapies used alone or in combination with current treatments can potentially increase BMD more effectively with fewer adverse effects than current clinically approved regimens. Therefore, we investigated new boneanabolic mechanisms linked to regulation of osteoblast growth and differentiation. Mesenchymal stromal cells (MSCs) reside in various locations in the body such as fat and bone marrow and can differentiate into a variety of skeletal tissues, including bone and cartilage (8). The commitment of MSCs into osteogenic differentiation is controlled by transcriptional and epigenetic events (9,10). Several signaling pathways (e.g. Bmp, Pth, and Wnt pathways) result in the activation and expression of key osteo-genic transcription factors (e.g. Runx2 and Sp7) that facilitate the commitment of MSC into the osteogenic lineage (11). Osteogenic differentiation is also modulated by epigenetic mechanisms such as microRNAs, DNA methylation, and posttranslational modification of histones (12,13). 
Some epigenetic events suppress whereas others enhance osteogenic differentiation of MSCs (12). Hence, it is important to characterize the epigenetic events that control osteoblast differentiation. Reversible modifications of histones such as acetylation and methylation play a critical role in controlling gene transcription. Depending on the modification and site, these modifications permit or inhibit the transcriptional machinery in osteoblasts (12). For example, trimethylation of histone 3 lysine 4 (H3K4me3) is associated with transcriptionally active genes (14), whereas histone 3 lysine 27 trimethylation (H3K27me3) may epigenetically reduce chromatin accessibility and thus promote gene silencing (15). Because H3K27me3 suppresses gene expression, this mark has been extensively studied as a cancer therapeutic (16). More recently, altering the level of H3K27me3 has been explored in regenerative medicine (17). Formation of H3K27me3 marks is mediated by Ezh2, the catalytic unit of polycomb-repressive complex 2 (PRC2) (18). The PRC2 complex may contain Ezh1 instead of Ezh2. However, Ezh1 possesses low methyltransferase activity and is believed to silence genes through alternative mechanisms (18,19). The methyl-transferase activity of PRC2 is balanced by three major demethylases, Jhdm1d, Kdm6a, and Kdm6b, that catalyze the removal of methyl groups at H3K27 (20). Recent studies have demonstrated that changes in H3K27me3 alter the phenotypic commitment of progenitor cells (21)(22)(23)(24)(25). For example, inhibition of Ezh2 and the resulting reduction of H3K27me3 promotes osteogenic differentiation and inhibits adipogenic differentiation of MSCs (21,22). In this study, we assessed the role of Ezh2 and H3K27me3 levels in preosteoblasts in culture and in bone formation in vivo. We show that Ezh2 inhibition enhances osteogenic differentiation of preosteoblasts by reducing H3K27me3 near transcriptional start sites and enhances the expression of osteogenic genes. Administration of an Ezh2 inhibitor enhances bone formation and prevents bone loss associated with estrogen depletion in vivo. EZH2 Inhibition Enhances Osteogenic Differentiation of MC3T3 Cells-We utilized GSK126, a specific Ezh2 inhibitor, to assess whether enzymatic inhibition of this histone methyltransferase, and therefore suppression of H3K27 trimethylation (H3K27me3), promotes osteogenic differentiation of MC3T3 preosteoblasts. GSK126 exhibits concentration-dependent toxicity as measured by MTS assay (Fig. 1a). Subtoxic concentrations of this Ezh2 inhibitor decrease H3K27me3 levels in a concentration-dependent manner (Fig. 1b). The addition of GSK126 (2 M) inhibits H3K27me3 6 h after drug administration, and this effect perseveres for at least 72 h (Fig. 1c). These results indicate that Ezh2 activity is rate-limiting for global H3K27me3 in MC3T3 cells, which parallels the established molecular function of this epigenetic regulator (18,21). To determine the effects of Ezh2 inhibition on osteogenic commitment of MC3T3 cells, 5 M GSK126 was added to the cells for the first 6 days of osteogenic differentiation (Fig. 1d). This treatment regimen was selected because of the expression pattern of Ezh2 in differentiating MC3T3 cells (Fig. 1e). High expression of Ezh2 is observed in undifferentiated cells, whereas a significant decrease in expression occurs during osteogenic differentiation of MC3T3 cells. 
Ezh2 down-regulation during osteogenic differentiation could be due to transcriptional suppression by the bone master regulator Runx2 (26) and/or post-transcriptional inhibition by microRNA miR-101, which targets Ezh2 (27). Irrespective of exactly how cells regulate Ezh2, we tested whether inactivation of Ezh2 with GSK126 would have downstream functional consequences by biologically enhancing osteogenic differentiation of MC3T3 cells. The RT-qPCR results show that Ezh2 inhibition enhances the expression of several osteogenic markers including Sp7, Bglap, and Alpl (Fig. 1e). Similar expression of Ezh2 is observed in the vehicle and GSK126 treatment groups. Similar to Alpl mRNA expression, increased Alpl activity is observed in GSK126-treated MC3T3 cells (Fig. 1f). Alizarin red staining demonstrates that Ezh2 inhibition accelerates calcium deposition of MC3T3 cells. We note that alizarin red staining is very robust in GSK126-treated cells on day 24 of osteogenic differentiation, whereas few nodules are present in vehicle-treated cells (Fig. 1g). On day 27, more calcium deposition is observed in vehicle-treated cells, but this is significantly less when compared with GSK126-treated cultures (Fig. 1h). Similar to GSK126, another inhibitor of Ezh2, UNC1999, enhances osteogenic differentiation of MC3T3 cells (supplemental Fig. S1). RT-qPCR analysis (Fig. 1e) suggests that Ezh2 inhibition promotes expression of osteogenic genes at early stages of MC3T3 differentiation. To assess mechanistic consequences at a broader scale, we assessed global gene expression by RNA-seq during osteogenic commitment of MC3T3 treated with 5 M GSK126 and vehicle (Fig. 1d). Acta2, a mesenchymal progenitor marker, is down-regulated during the differentiation of MC3T3 cells and is further suppressed when Ezh2 function is inhibited (Fig. 2a). For comparison, the cluster of differentiation marker Cd200 is up-regulated during the differentiation time course and is further enhanced with the presence of Ezh2 inhibitor (Fig. 2b). MC3T3 differentiation results in enhanced expression of several osteogenic transcription factors including Dlx3, Dlx5, and Sp7 (Fig. 2c). The presence of GSK126 further increases expression of these transcription factors, as well as the bone master regulator Runx2. Similarly, extracellular matrixrelated genes (e.g. Sparc, Ibsp, Spp1, Bglap, Bglap2, and Alpl) rise in expression over the differentiation time course, whereas their levels are dramatically increased in the presence of Ezh2 inhibitor (Fig. 2d). Glypicans (Gpc1-6), several of which are implicated in BMP signaling, are also modulated by Ezh2 inhibition. More specifically, the expression of Gpc1 and Gpc3 is enhanced with GSK126 (Fig. 2e). To control for potential nonspecific effects of GSK126 and UNC1999, we performed siRNA transfection targeting Ezh2 in MC3T3 cells using "smart pool siRNA" (GE Healthcare) (Fig. 3a). Two days after transfection, Ezh2 was depleted and coincided with reduced H3K27me3 (Fig. 3b). Similar to enzymatic inhibition via GSK126 and UNC1999, the knockdown of Ezh2 enhances the expression of several osteogenic markers includ-ing Runx2, Sp7, Alpl, and Bglap after 6 and/or 11 days of osteogenic differentiation (Fig. 3c). The transfection of Ezh2 siRNA also results in enhanced alizarin red staining of MC3T3 cultures (Fig. 3d). Collectively, our results indicate that Ezh2 inhibition, and thus reduction of H3K27me3, promotes osteogenic differentiation of MC3T3 preosteoblasts. 
Ezh2 Inhibition Decreases Genome-wide Deposition of H3K27me3 Marks near TSSs-To assess the effect of Ezh2 inhibition on the epigenetic landscape in preosteoblasts, chromatin immunoprecipitation combined with next generation sequencing (ChIP-seq) analysis was performed utilizing a validated H3K27me3 antibody in MC3T3 cells treated for 24 h with vehicle or 5 μM GSK126. EZH2 inhibition using GSK126 rapidly reduces total H3K27me3 levels by severalfold within 6 h, and these reduced levels are sustained for at least 3 days (Fig. 1). Consequently, these data predict a genome-wide change in this histone modification. However, H3K27me3 peaks are typically found near transcriptional start sites (TSSs) throughout the genome, whereas H3K27me1 peaks, for example, are characteristic for distal transcriptional enhancers. Average tag density from 5 kb upstream to 5 kb downstream of the TSSs for high confidence methylation peaks (false discovery rate ≤ 1e-10) based on ChIP-seq analysis upon treatment with either vehicle or GSK126 is plotted (Fig. 4a). Ezh2 inhibition reduces the average tag density in the H3K27me3 plot near TSSs when compared with vehicle (compare Veh H3K27me3 with GSK H3K27me3). The average tag densities for the input DNA are similar between the two treatment groups (compare Veh Input with GSK Input). A comparison of genes showing a greater than 2-fold increase in fragments/kilobase pair/million mapped reads (FPKM) values between input DNA and the corresponding DNA after H3K27me3 ChIP indicates that there are fewer genes showing H3K27me3 in MC3T3 cells after GSK126 treatment (Fig. 4b). Comparison of FPKM values for the input DNA from MC3T3 cells treated with vehicle or GSK126 demonstrates that less than 1% of all genes show a greater than 2-fold difference in FPKMs (Fig. 4c). The small changes in input DNA that are observed may be accounted for by differences in DNA accessibility resulting from changes in chromatin structure following GSK126 treatment. Comparison of FPKM values from vehicle- and GSK126-treated MC3T3 cells after H3K27me3 ChIP shows an increased number of genes with >2-fold difference between the two groups, indicating that Ezh2 inhibition changes the status of H3K27me3 marks near TSSs (Fig. 4d). These data support the concept that enzymatic inhibition of Ezh2 decreases the deposition of H3K27me3 marks across the genome and in particular near TSSs. FIGURE 1 (legend, continued). ...shown in e-h. e-h, RT-qPCR of Ezh2 and osteogenic markers (n = 3) (e), alkaline phosphatase staining (f), and alizarin (Aliz) red staining (g and h) for MC3T3 cells treated with vehicle or GSK126. Alizarin red staining was quantified by ImageJ software. The experiments were repeated three times, and biological triplicates (means ± S.D.) are shown when applicable. We note that it is possible to detect appreciable residual H3K27me3 levels upon longer exposures in Western blots of cells treated with GSK126, indicating that inhibition is not absolute. Tub, tubulin; Veh, vehicle; Osteo., osteoblastic; STD, standard deviation; Norm. Exp., normalized expression. FIGURE 2 (legend, partial). ...(Fig. 1d). a and b, expression of the mesenchymal stem cell marker Acta2 is reduced (a), whereas the cluster of differentiation marker Cd200 is up-regulated (b) with Ezh2 inhibition. c-e, osteogenic transcription factors (c) and extracellular matrix-related genes (d), including glypicans (Gpc1 and Gpc3) (e), are up-regulated with Ezh2 inhibition. Three biological replicates were pooled to generate a single RNA-seq value (RPKM) for each condition time point. Gene expression for many bone-related markers trends upward during differentiation (solid lines) and GSK126 typically increases the slope of this trend (dotted lines).
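For readers unfamiliar with the FPKM metric used in the 2-fold comparisons above, the short Python sketch below shows the arithmetic for a single gene and a simple fold-change filter between two libraries; the counts, gene length, and pseudo-value are invented examples, not values from this study.

def fpkm(fragment_count, gene_length_bp, total_mapped_fragments):
    """Fragments per kilobase of gene per million mapped fragments."""
    return fragment_count * 1.0e9 / (gene_length_bp * total_mapped_fragments)

def fold_enriched(fpkm_a, fpkm_b, threshold=2.0, pseudo=0.1):
    """True if library A is at least 'threshold'-fold higher than library B.
    A small pseudo-value avoids division by zero for unexpressed genes."""
    return (fpkm_a + pseudo) / (fpkm_b + pseudo) >= threshold

# Hypothetical gene: 2.4 kb long, 180 ChIP fragments vs 40 input fragments,
# each library with 20 million mapped fragments in total
chip_fpkm  = fpkm(180, 2400, 20_000_000)
input_fpkm = fpkm(40, 2400, 20_000_000)
print(round(chip_fpkm, 2), round(input_fpkm, 2), fold_enriched(chip_fpkm, input_fpkm))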
FIGURE 3. siRNA depletion of Ezh2 promotes MC3T3 osteoblast differentiation. a, illustration of the experimental protocol for siRNA transfection of MC3T3 cells with control and Ezh2 siRNAs. b, Western blotting of Ezh2 protein and H3K27me3 relative to H3 and to β-actin 2 days after transfection. Arrows indicate molecular mass marker (kDa) location. c, RT-qPCR analysis of osteogenic genes for MC3T3 cells exposed to control or Ezh2 siRNA (n = 3). d, alizarin red staining for MC3T3 cells in the presence of control or Ezh2 siRNA (n = 3). The experiments were repeated three times, and biological triplicates (means ± S.D.) are shown when applicable. Aliz, alizarin; Cont. or Ctrl, control; Osteo., osteogenic; STD, standard deviation; Norm. Exp., normalized expression. Correlation between H3K27me3 and Gene Expression-To correlate H3K27 trimethylation patterns with changes in gene expression, we compared ChIP-seq for H3K27me3 to gene expression by RNA-seq in MC3T3 cells treated with vehicle or 5 μM GSK126. Genes that show H3K27me3 marks in vehicle- and GSK126-treated cells (Fig. 4b) were compared with genes that are up-regulated at least 1.4-fold in GSK126-treated cells on the indicated days of differentiation (Fig. 5a). These experiments show that Ezh2 inhibition results in H3K27 demethylation and subsequent up-regulation of a number of genes with H3K27me3 marks for key osteogenic transcription factors, growth factors and genes that modulate BMP signaling (Figs. 5, b-d, and 6). Thus, the osteogenic effect of enzymatic inhibition of Ezh2 using GSK126 is mechanistically linked to selective changes in the deposition of H3K27me3 marks near TSSs of genes that encode components of principal gene regulatory signaling pathways. Ezh2 Inhibition Stimulates Paracrine Signaling in Osteoblasts-Our initial RNA-seq analysis suggested that Ezh2 inhibition modulates components of the WNT and BMP signaling pathways (Fig. 5). We therefore performed additional analyses and experiments to address whether Ezh2 inhibition affects paracrine signaling in osteoblasts. Several Wnt ligands (e.g. Wnt10b, Wnt10a, and Wnt6) are robustly expressed in differentiating MC3T3 cells (Fig. 7a). Interestingly, the pro-osteogenic Wnt10b is greatly up-regulated by Ezh2 inhibition (Fig. 7b). Similarly, the PTH receptor (Pth1r) is also enhanced by GSK126 in MC3T3 cells (Fig. 7c). Western blot analysis demonstrates that Ezh2 inhibition enhances Smad1/5 phosphorylation, a well established biomarker for the activation of BMP2 signaling, in MC3T3 cells (Fig. 7d). As a result of these findings, we assessed combination treatment of GSK126 and BMP2. As anticipated, GSK126 and BMP2 result in a faster acquisition of alizarin red-positive colonies in MC3T3 cells (Fig. 7e). Interestingly, the addition of GSK126 to BMP2-treated cells further enhances the mineral deposition. Similarly, the combination of GSK126 and BMP2 enhances the expression of Bglap and Ibsp, two key osteogenic markers (Fig. 7f). Based on the combined results from our study, including RNA-seq and ChIP-seq data, we propose a mechanistic working model for Ezh2 as an epigenetic suppressor of paracrine signaling in osteoblasts (Fig. 7g).
The exciting ramification of our study is that inhibitors of Ezh2, which include well tolerated and orally available drugs, may be effective by supporting the endogenous local activation of natural bone stimulatory ligands at physiological doses in bone. Ezh2 Inhibition Is Bone-anabolic and Osteoprotective in Vivo-RNA-seq data obtained during osteogenic differentiation of MSCs (21) or osteoblasts (Fig. 2) consistently indicate that pharmacological inhibition of Ezh2 is pro-osteogenic and enhances expression of skeletal ECM proteins. We therefore assessed the biological effects of decreasing Ezh2 activity on bone homeostasis in adult mice. Because our in vivo studies encompass multiple comparisons, we performed statistical analyses using the Wilcoxon test or Wilcoxon rank sums test (supplemental Tables S2 and S3). Only the most relevant comparisons are presented in the bar graphs (Figs. 8 and 9). Our first study examined whether the Ezh2 inhibitor GSK126 is bone-anabolic after skeletal patterning. We investigated biological effects in mice at 2 months of age (i.e. prior to skeletal maturation) (Fig. 8). Daily intraperitoneal administration of 15 and 50 mg/kg GSK126 for 5 weeks does not result in gross adverse reactions as suggested by similarities in body and spleen weight among the treatment groups (Fig. 8a). Analysis by CT shows a significant increase in cortical bone volume and thickness of the femoral diaphysis, while also revealing a trend toward increased cancellous bone thickness in the distal femoral metaphysis with GSK126 treatment (Fig. 8b and sup-plemental Table S2). Corroborating these results, histomorphological analysis of the distal femoral metaphysis reveals a significant increase in bone formation rate per tissue volume (Fig. 8c), number of osteoblasts per bone perimeter (Fig. 8d), and mineral apposition rate in the 50 mg/kg GSK126 treatment group (supplemental Table S2). Osteoclast number per bone perimeter and tissue area is not significantly different between the groups, indicating that GSK126 stimulates bone formation without affecting bone resorption ( Fig. 8e and supplemental Table S2). We conclude that pharmacological inactivation of Ezh2 has bone-anabolic effects in adult mice. Based on the bone-anabolic effects in normal adult mice, a second study assessed whether GSK126 can mitigate bone loss in female mice with a fully mature skeleton at peak bone mass ( Fig. 9 and supplemental Table S3). We used an ovariectomy (OVX) model for post-menopausal osteoporosis in female mice starting at 3 months of age and administered 50 mg/kg of GSK126 daily for 6 weeks. Body weight or spleen weight are similar between groups, whereas uterus weight is reduced in ovariectomized mice as expected (Fig. 9a and data not shown). As observed in the first study, femoral metaphyseal bone volume is not affected by Ezh2 inhibition in sham and OVX mice treated with GSK126 (Fig. 9b). However, there is a trend for increased cortical thickness and bone volume in the diaphysis of sham and OVX mice following GSK126 administration (Fig. 9c). In OVX mice, cortical thickness of the femoral diaphysis (Fig. 9d) and trabecular thickness of the femoral metaphysis (Fig. 9e) are increased upon administration of GSK126 compared with mice treated with vehicle. L5 vertebral bone volume, trabecular number, and trabecular thickness are reduced in OVX mice compared with sham animals (Fig. 9f). These parameters are at least partially restored in the presence of GSK126 in OVX mice (Fig. 9, f and g). 
Mitigation of the bone phenotype in OVX mice upon treatment by GSK126 suggests that inhibition of Ezh2 in adult females has osteoprotective properties. Intraperitoneal administration of the Ezh2 inhibitor GSK126 is not overtly toxic in mice, because we did not experience lethality in any of our cohorts. In addition, we have examined multiple soft tissues (e.g. heart, liver, kidney, and spleen) from mice treated with intraperitoneal doses of GSK126 for up to 6 weeks. These studies did not reveal any obvious adverse effects at the level of gross anatomy, body weight, and spleen weight. The latter results indicate that GSK126 is well tolerated as previously suggested (28). Discussion The present study assessed the role of Ezh2 in osteoblast differentiation in vitro and whether inhibition of this epigenetic enzyme alters bone parameters in vivo. Knockdown and inhibition of Ezh2 enhances osteogenic differentiation of MC3T3 preosteoblasts. RNA-seq and ChIP-seq analyses suggest that Ezh2 inhibition enhances expression of osteogenic genes by reducing H3K27me3 near TSSs. It remains to be established whether Ezh2 binds near TSSs in immature osteoblasts. We consider it likely that Ezh2 remains bound to the promoters of a number of genes that suppress cell growth or support osteogenic lineage-progression, consistent with data on Ezh2 binding in non-osseous cell types (18). The loss of H3K27me3 upon Ezh2 inhibition is expected to perturb the dynamic balance between H3K27 methylation and demethylation. The very rapid loss of H3K27me3 we observed in cultured osteoblasts indicates that the corresponding demethylases (e.g. Kdm6a/Utx, Kdm6b/Jmjd3, Kdm7a/Jhdm1d) are highly active. Selective localization of Ezh2 and H3K27 demethylases could further modify local methylation kinetics at gene promoters. It is conceivable that the equilibrium between H3K27 methylation and demethylation changes during osteoblast differentiation, consistent with our observation that mRNA levels for Ezh2 and the H3K27 demethylase Jhdm1d are modulated during early stages of differentiation in mesenchymal stromal cells (21). Administration of an Ezh2 inhibitor increases bone density both in wild type adult mice and in an estrogen-deficient mammalian model for osteoporosis, although these quantitative effects are relatively modest. Our study does not formally demonstrate a direct reduction of H3K27me3 in vivo at specific gene promoters in bone, but it has been established in other studies (28). Hence, the potential bone-anabolic effects of GSK126 in vivo may be limited by incomplete demethylation of H3K27me3, partial inhibition of Ezh2, and compensatory mechanisms by other enzymes (e.g. Ezh1). Nevertheless, molecular studies demonstrate that Ezh2 inhibition promotes paracrine signaling by enhancing expression and phosphorylation of key osteogenic signaling pathways. Thus, inhibition of Ezh2 has bone-anabolic and osteoprotective potential (presumably by reducing H3K27me3), leading to enhanced expression and activation of pro-osteogenic pathways. FIGURE 6. Osteogenic genes with a reduction in H3K27me3 and exhibiting enhanced expression with Ezh2 inhibition. We examined RNA-seq data (see Fig. 2) for genes exhibiting up-regulation in gene expression after GSK126 treatment. The genes presented here show decreased H3K27me3 levels (based on ChIP-seq data from Fig. 4) and focus primarily on genes that modulate osteogenesis through transcriptional regulation and cell signaling mechanisms.
Our results are consistent with studies demonstrating that Ezh2 plays a critical role in maintaining proliferation and multilineage differentiation potential of mesenchymal and other progenitor cells (18, 21, 22, 29 -34). Additionally, phosphorylation of Ezh2 promotes osteogenic differentiation of progenitor cells (23). These data collectively indicate that H3K27me3, which is balanced by Ezh2 and the corresponding demethylases (e.g. Jhdm1d and Kdm6a), controls osteogenic lineage commitment. Interestingly, Ezh2 expression is interlocked with the bone-related master regulator Runx2 (26) and long non-coding RNA LncRNA-ANCR (25). Additional regulation of Ezh2 may be attributed to miR-101, which was shown to target Ezh2 in other biological systems (27,35). Together, these regulatory feedback mechanisms may contribute to the observed osteogenic effects of Ezh2 inhibition. Current treatment options for osteoporosis, which affects 200 million people worldwide and is responsible for more than 8.9 million fractures annually, rely on drugs with therapeutic limitations, including anti-resorptive bisphosphonates (linked to pathologic femur fractures and osteonecrosis of the jaw) or the PTH-related bone-anabolic drug teriparatide. Use of the latter is restricted to 18 -24 months because of safety concerns with onset of osteosarcoma, even though this risk is very slight (36). Use of the bone-anabolic agent BMP2 is limited to spine fusion and fracture healing, but its potency provokes heterotopic ossification (37). The critical finding of our studies is that although the loss of Ezh2 function creates abnormalities in skeletal patterning and bone formation in young animals (21), Ezh2 inhibition in older and skeletally mature animals has both bone-anabolic and osteoprotective biological effects. Mechanistically, our results show that epigenetic modifications altered by Ezh2 inhibition promote osteogenic differentiation by stimulating pathways related to WNT, PTH, and BMP2. The latter mechanisms may proceed via paracrine physiological effects that are more controlled than treatment with exogenous ligands that are administered at supraphysiological levels. The more balanced endogenous activation of these pathways by Ezh2 inhibition may perhaps assuage some of the clinical concerns related to the therapeutic use of the corresponding ligands. Interestingly, our studies show that the Ezh2 inhibitor GSK126 enhances BMP2-induced osteogenic differentiation. This finding suggests that GSK126 may perhaps have utility as an adjuvant therapy in current clinical applications for BMP2. Consistent with our studies, Jing et al. (38) demonstrated that Ezh2 is up-regulated in osteoporotic MSCs and treatment of mice with 3-deazaneplanocin A increased bone formation in osteoporotic mice. Because 3-deazaneplanocin A is an inhibitor of S-adenosyl homocysteine hydrolase (39) that globally inhibits several methylation sites on histones (40,41) and does not specifically inhibit Ezh2 methyltransferase activity (16,41,42), it remains to be established whether their work is directly related to effects on Ezh2. In conclusion, a principal finding of our study is that specific enzymatic inhibition of Ezh2 has bone-anabolic and bone-protective effects in vivo. Mechanistically, our data suggest that Ezh2 inhibition promotes paracrine signaling in osteoblasts by up-regulating genes (e.g. Wnt10b and Pth1r) and enhancing phosphorylation of key osteogenic intermediates (Smad1/5), as suggested by other studies (30,38). 
These encouraging findings may lead to new therapeutic bone regenerative strategies to treat osteoporosis. MTS Activity Assay-MC3T3 cells were plated in 96-well plates in maintenance medium (5,000 cells/well). The following day, vehicle (DMSO) or Ezh2 inhibitor (GSK126 and UNC1999) in fresh maintenance medium was added to the cells. Three days later, MTS activity was assayed according to the manufacturer's protocol (Promega). Absorbance was measured at 490 nm using a SpectraMAX Plus spectrophotometer (Molecular Devices). Osteogenic Differentiation-MC3T3 cells were seeded in 6-well plates in maintenance medium (10,000 cells/cm2). The following day, maintenance medium was replaced with osteogenic medium (α-minimal essential medium supplemented with 50 μg/ml ascorbic acid (Sigma) and 4 mM β-glycerol phosphate (Sigma)) containing vehicle or Ezh2 inhibitor (GSK126 or UNC1999). Three days later, vehicle or Ezh2 inhibitors were added again with osteogenic medium. When relevant, BMP2 (50 ng/ml; R&D Systems) was added and removed on the same days as GSK126. On day 6, Ezh2 inhibitor and vehicle were removed, and fresh osteogenic medium was added with medium changes scheduled every 3 days until RNA harvest at the indicated times. On day 6, a subset of the cell cultures was fixed in 10% neutral buffered formalin and stained with 5-bromo-4-chloro-3-indolyl-phosphate/nitro blue tetrazolium to monitor the enzymatic activity of alkaline phosphatase (Promega). Between days 21 and 28 of osteogenic differentiation, the cells were also fixed in 10% neutral buffered formalin and stained with 2% alizarin red to visualize calcium deposition. Absorption of alizarin red stain was quantified with ImageJ software (44). Ezh2 Knockdown and Osteogenic Differentiation-MC3T3 cells were seeded in 6- or 12-well plates in maintenance medium (10,000 cells/cm2). The following day, siRNA transfections with control (D-001810-10-20; GE Healthcare) and mouse Ezh2 (L-040882-00; GE Healthcare) ON-TARGETplus siRNA SMARTpools were performed using RNAiMAX as instructed by the manufacturer (Invitrogen). The next day, MC3T3 osteogenic medium was added, and the cells were cultured until harvest. High Throughput RNA Sequencing and Bioinformatic Analysis-RNA-seq of mRNAs was performed using RNA isolated at days 3, 6, and 10 from MC3T3 treated with vehicle or 5 μM GSK126. To improve sample representation, we pooled three distinct RNA samples (biological triplicates) for each treatment group at each time point. We note that pooling reduces biological variation to yield a single "averaged" sample (n = 1) that does not permit visualization of statistical variation (e.g. error bars) in our figures. High throughput read mapping and bioinformatic analyses for RNA-seq were performed as previously reported (21,45). Gene expression is expressed in reads/kilobase pair/million mapped reads. RNA-seq data were deposited in the Gene Expression Omnibus of the National Center for Biotechnology Information (GSE83506). ChIP-seq and Bioinformatics Analysis-MC3T3 cells (10,000 cells/cm2) were plated in 10 cm plates in maintenance medium. Two days later, 5 μM GSK126 or vehicle was added to the cells in osteogenic medium. The cells were harvested 24 h later by trypsin and analyzed using a ChIP assay as described previously (46) using H3K27me3 (17-622, lot 2213948; Millipore) and control IgG (PP64B, lot 2056666A; Millipore) antibodies.
Sequencing libraries were prepared and massively parallel high throughput sequencing was performed on an Illumina HiSeq2000 system. The alignment, quality assessment, peak calling, and visualization were performed with the HiChIP analysis pipeline (47). Briefly, 50 base pair reads were aligned to the mm10 reference genome using the Burrows-Wheeler Aligner, and Picard was used to mark duplicates. Read pairs without a unique alignment were filtered out using SAMTools (48) and a custom script that only retains pairs with one or both ends uniquely mapped. Enriched regions were identified using SICER (49). Peaks were identified from vehicle- and GSK126-treated cells using the SICER package (49) at 1% FDR. A subset of high-confidence peaks with FDR < 1e-10 was extracted from each library and merged into a single list of peaks if peaks from the two libraries are within 100 bp of each other. The average tag density (normalized to 1 million mapped reads) from upstream 5 kb to downstream 5 kb around the middle of all the merged peaks was estimated using the ngsplot package (50). ChIP-seq data were deposited along with RNA-seq data (see above) with accession number GSE83506. Animal Welfare-All animal studies were conducted according to guidelines provided by the National Institutes of Health and the Institute of Laboratory Animal Resources, National Research Council. The Mayo Clinic Institutional Animal Care and Use Committee approved all animal studies. The animals were housed in an accredited facility under a 12-h light/dark cycle and provided water and food (PicoLab Rodent Diet 20, LabDiet) ad libitum. In Vivo Ezh2 Inhibition Studies-Female C57Bl/6 mice were purchased from Harlan Laboratories. Sample sizes used in this study were determined based on previous studies with bone-anabolic drugs (51). For efficacy studies, 6-week-old mice received daily i.p. injections of vehicle (DMSO) or 50 mg/kg GSK126 in 20% Captisol adjusted to pH 4-4.5 with 1 N acetic acid (28) for 5 weeks. The dosage, delivery schedule, and administration route were selected based on previous studies demonstrating the anti-cancer effects of GSK126 in mice (28). The animals were weighed daily. To label mineralizing bone surfaces, the mice received subcutaneous injections of calcein (10 mg/kg) 5 days and 24 h before euthanasia. The effects of GSK126 administration on the skeleton were evaluated in an estrogen-deficient OVX model. At ~12 weeks of age, female C57BL/6 mice underwent either sham or OVX surgeries. The following day, the animals received daily i.p. injections of vehicle (DMSO) or GSK126 (50 mg/kg body weight) for 6 weeks, as described above. Mice with body weights greater than 1 S.D. from the mean of each group were excluded from further analysis. For both studies, mice were randomly allocated to each group. Investigators performing tissue analysis (μCT and histomorphology) were blinded from the group assignments. Microcomputed Tomography Analysis of in Vivo Ezh2 Inhibition Studies-Quantitative analyses of the femoral metaphysis and fifth lumbar vertebra (L5) were performed using a VivaCT 40 scanner (SCANCO Medical AG) with the following parameters: E = 55 kVp, I = 145 μA, and integration time = 300 ms. A voxel size of 10.5 μm using a threshold of 220 units was applied to all scans at high resolution. Two-dimensional data from scanned slices were used for a three-dimensional interpolation and calculation of morphometric parameters that define cortical and trabecular bone mass and micro-architecture.
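The peak post-processing described in the ChIP-seq bioinformatics paragraph above (retain peaks with FDR < 1e-10 from each library, then merge peaks lying within 100 bp of each other) can be sketched in a few lines of Python. This is only an illustration of the interval-merging step, assuming peaks are represented as (chromosome, start, end, FDR) tuples; it is not the SICER or HiChIP code itself.

def merge_confident_peaks(peaks_a, peaks_b, fdr_cut=1e-10, gap=100):
    """Merge high-confidence peaks from two libraries.

    peaks_* : iterables of (chrom, start, end, fdr)
    Peaks passing the FDR cutoff are pooled, sorted, and any two peaks on the
    same chromosome separated by at most 'gap' bp are merged into one interval.
    """
    pooled = sorted(p[:3] for p in list(peaks_a) + list(peaks_b) if p[3] <= fdr_cut)
    merged = []
    for chrom, start, end in pooled:
        if merged and merged[-1][0] == chrom and start - merged[-1][2] <= gap:
            merged[-1][2] = max(merged[-1][2], end)
        else:
            merged.append([chrom, start, end])
    return [tuple(p) for p in merged]

# Hypothetical peaks from vehicle- and GSK126-treated libraries
veh = [("chr1", 1000, 1800, 1e-12), ("chr1", 5000, 5600, 1e-3)]
gsk = [("chr1", 1850, 2400, 1e-15), ("chr2", 300, 900, 1e-11)]
print(merge_confident_peaks(veh, gsk))
# [('chr1', 1000, 2400), ('chr2', 300, 900)]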
Statistics-When applicable, the data are shown as mean ± S.D. For in vitro studies, statistical analysis was performed with an unpaired Student's t test. Significance is noted in the figures, when applicable (*, p < 0.05; **, p < 0.01; and ***, p < 0.001). For in vivo studies, statistical analysis was performed using the Wilcoxon test or Wilcoxon rank sum test for multiple comparisons with the statistical software JMP Pro.
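The comparisons described in this statistics section could be reproduced in outline with standard scientific Python tools; the sketch below uses SciPy in place of the JMP Pro software actually used, and all group values are hypothetical placeholders rather than study data.

```python
# Minimal sketch of the statistical comparisons described above, using SciPy
# rather than JMP Pro; the numbers below are hypothetical placeholders.
from scipy import stats

def significance_stars(p):
    """Map a p value to the star notation used in the figures."""
    return "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else "ns"

# In vitro example: vehicle vs. GSK126-treated cultures, unpaired Student's t test.
vehicle = [1.00, 0.95, 1.05]
treated = [0.62, 0.58, 0.66]
t_stat, p = stats.ttest_ind(vehicle, treated)          # two-sided, unpaired
print("t test:", round(p, 4), significance_stars(p))

# In vivo example: sham vs. OVX groups, Wilcoxon rank sum test.
sham = [6.1, 5.8, 6.4, 6.0, 5.9]
ovx = [4.9, 5.1, 4.7, 5.0, 5.2]
w_stat, p = stats.ranksums(sham, ovx)
print("rank sum:", round(p, 4), significance_stars(p))
```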
7,133
2016-10-10T00:00:00.000
[ "Biology" ]
Green Road Construction Using Reclaimed Asphalt Pavement with Warm Mix Additive The environmental impact and emissions produced by asphalt road construction promote research on green materials that combine recycling with warm mix asphalt additives to reduce environmental impacts. This study evaluates the effects of a high reclaimed asphalt pavement (RAP) content incorporated with a wax warm mix asphalt additive. Milled reclaimed asphalt pavements obtained from local roads were incorporated with a warm mix additive named RH-WMA. These materials were evaluated for their physical and rheological properties, optimum binder content, and mechanical properties. It was found that RH-WMA has a softening effect on the binder. The addition of 3% RH-WMA into the 40% RAP mixture decreased the optimum binder content and energy consumption. Tests on mechanical properties indicated increased stiffness with the addition of RAP, which suggests better resistance to rutting. The addition of RH-WMA to specimens subjected to the combined effects of moisture and aging showed an improvement in fatigue resistance. Hence, the integration of RAP and RH-WMA shows potential as a green road construction material. Keywords— reclaimed asphalt pavement; warm mix additive; rheological; energy consumption; mechanical properties I. INTRODUCTION Awareness of environmental impact and emissions reduction has been emphasized globally in recent years. For example, green technology and environmental regulation in the asphalt industry have fostered innovation and the improvement of older technology in pavement construction. The asphalt industry continuously explores new technology to enhance material performance, improve construction efficiency, conserve resources, and advance environmental stewardship. The application of reclaimed asphalt pavement (RAP) in road construction and maintenance has gained attention among scientists and the asphalt industry due to its cost-effectiveness and environmental benefits. The environmental advantages of materials recycling include reduced emissions and lower fuel usage related to the extraction and transportation of virgin materials, decreased demand for non-renewable resources, and decreased landfill area for the disposal of used pavements [1]-[2]. In the United States, the use of reclaimed materials alone conserved about 3.7 million tons of virgin binder, saving approximately USD 2.2 billion [3]. The performance of asphalt mixtures with high RAP content can be improved by modifying the mixture, such as by using a softer binder grade, incorporating a warm mix asphalt (WMA) additive, utilizing rejuvenators, and adding anti-stripping agents. The use of WMA has prompted research interest because of its potential to reduce production and compaction temperatures. This study evaluates the use of a high reclaimed asphalt content and a WMA additive in pavement engineering as a contribution to green road construction. Used asphalt pavement materials from road maintenance and rehabilitation are usually disposed of. These materials can be recycled and incorporated into virgin materials for road maintenance and construction if properly treated. Reclaimed asphalt pavement is the term used to describe recycled old pavement in road construction and rehabilitation, and it consists of valuable aggregate and binder. Typically, asphalt ages over time and becomes stiffer; this process is referred to as aging.
Several studies have been conducted in the past to evaluate the properties and performance of RAP such as the chemical compound of RAP binder, rheological properties, aging effect, and performance of mixtures containing RAP [4]- [7]. In general, these findings indicate that the use of RAP will increase the stiffness of binder, increased aging index and mixture containing RAP has higher resilient modulus. Aged binder in the RAP will require a higher production temperature which produces more emissions and high energy consumption. Past research reported that WMA additive could lower the production temperature due to a reduction in viscosity [8]. Other than lowering the production temperature, WMA additive enables larger quantities of RAP to be used in road construction, improves mixtures workability during construction, reduces odor from HMA plants, less aging of binder and better working environment at construction sites [9]- [11]. The incorporation of RAP increases stiffness and may improve the consistency of mixture performance at high service temperatures. Nevertheless, stiffness should be monitored to avoid fatigue failure [12]. Laboratory and field tests on fatigue resistance of mixture incorporated with RAP and WMA indicated that the addition of WMA additive enhances the fatigue resistance [13]- [14]. In order to promote the use of green materials by recycling the used asphalt and WMA additive locally, it is necessary to evaluate these materials on its rheological and mechanical properties as well as the optimum binder content. A. Materials The PG64 binder which equivalent to 80/100 pen and supplied by SHELL Sdn. Bhd was used in this study. This binder is commonly used for local road construction in Malaysia. The granites aggregate was supplied by Kuad Kuari Sdn. Bhd and fulfilled the Jabatan Kerja Raya (JKR) Malaysia aggregate gradation specification for AC14. RAP was obtained by milling process from two local roads along North-South Expressway. North-South Expressway is the main trunk road that links states in Malaysia. In this study, 40% RAP content based on RAP aggregate was incorporated into virgin materials. A wax WMA additive named RH-WMA was incorporated into the RAP and virgin material as a flow improver to decrease the production temperature. A designation shown in Table 1 was adopted to simplify the identification of the mixture blend. B. Characterization of RAP RAP was characterized to ensure its properties are acceptable to be incorporated into the virgin material. The milled RAP was processed in the lab through heating, crushing and sieving. The aggregate gradation of the RAP blends for two different sources of RAP is shown in Table 2. The binder from the RAP was extracted by using solvent named Trichloroethylene, followed by recovery using rotary evaporator. The physical properties of virgin binder and recovered RAP binders are shown in Table 3. C. Specimen Preparation The optimum RH-WMA content was determined prior to adding to RAP binder blend. Various percentages of RH-WMA (1, 2, 3, 4 and 5%) were added to a binder by the total mass of binder and evaluated for its physical and rheological properties. Rheological properties based on G*/Sin δ as stiffness indicator was determined using dynamic shear rheometer (DSR). From the physical and rheological properties tests, an optimum RH-WMA content was selected. Optimum RH-WMA content was added into RAP during mixture preparation. The addition of RH-WMA into mixture was performed by wet and dry mixing. 
Wet mixing was applied to the virgin binder. In wet mixing, RH-WMA was mixed with the virgin binder at 145°C for 15 minutes using a mechanical mixer, based on the manufacturer's recommendation. Similarly, in dry mixing, RH-WMA was added to the batched RAP and then mixed in manually. For mixing, the virgin aggregate and RAP were heated for 4 and 2 hours, respectively, at the mixing temperature. At the same time, the virgin binder was also heated to the target mixing temperature. After mixing, the loose mixture was short-term aged for 2 hours at the anticipated compaction temperature. All specimens were compacted using a gyratory compactor with 100 gyrations to simulate high traffic. Compaction temperature and optimum binder content were determined by optimization using the Response Surface Method (RSM). The input for the optimization also included conventional Marshall mix design parameters such as air voids, voids filled with asphalt, bulk specific gravity, stability, and flow [15]. Optimization of RAP-WMA mixture production, based on the method suggested by Derringer, was performed to propose the optimum binder content and compaction temperature [16]. Energy consumption was calculated by adopting the heat-energy relation Q = Σ m·c·Δθ, summed over the different material types i and j, where Q is the total heat energy (J), m is the mass of material (kg), c is the specific heat capacity coefficient (J/(kg·°C)), and Δθ is the difference between the ambient and mixing temperatures (°C); a short numerical sketch of this estimate is given below. All specimens were fabricated at 10°C higher than the compaction temperature. D. Test Procedures Specimens were evaluated for mechanical properties such as resilient modulus, indirect tensile strength, and fatigue. The cracking potential of a mixture can be estimated from the tensile failure strain. All test specimens for indirect tensile strength were kept in an incubator at 15°C for 4 hours prior to testing, and the indirect tensile strength test was conducted at 15°C. The resilient modulus test was performed using a Universal Asphalt Testing Machine (UTM-25) at 25°C, in accordance with ASTM D7369 procedures [18]. The diametral fatigue test simulates the tensile strains developed along the horizontal direction as a consequence of repeated loading by vehicle tires. Besides traffic loading, other factors, such as moisture and aging, affect fatigue in pavement. Hence, specimens for the fatigue test were conditioned to simulate the effects of moisture and aging. The specimens were conditioned in distilled water with a 6.62 g/liter concentration of sodium carbonate (Na2CO3) to accelerate stripping in the mixture. After moisture conditioning, specimens were aged in a forced oven for five days to simulate mixture aging in the field. The moisture and aging procedures were performed in accordance with ASTM D4867 and AASHTO R30, respectively [19], [20]. The diametral fatigue test was conducted in controlled-stress mode in accordance with the procedures in BS 12697-24 [21]. Specimens were conditioned at 15°C for 4 hours and subjected to three different stress levels. A. Effects of WMA Additive Content on the Physical and Rheological Properties The addition of various RH-WMA contents increases both the penetration and the softening point, as presented in Table 4. The increase in penetration indicates the softening effect of the RH-WMA. On the other hand, the addition of RH-WMA also increases the softening point.
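Before turning to the rheological results, the heat-energy estimate introduced in the specimen-preparation description above can be made concrete with a short sketch. The masses, specific heats, and heating temperatures below are illustrative assumptions, not values measured in this study.

```python
# Minimal sketch of the heat-energy estimate Q = sum(m * c * delta_theta)
# described in the specimen-preparation section. All masses, specific heats,
# and temperatures below are illustrative placeholders, not study data.

def heat_energy(materials, ambient_temp_c):
    """Sum m * c * (T_heating - T_ambient) over the material types (J)."""
    return sum(m * c * (t_heat - ambient_temp_c) for m, c, t_heat in materials)

# (mass kg, specific heat J/(kg*degC), heating temperature degC), assumed values
mixture = [
    (1.00, 920.0, 150.0),   # virgin aggregate
    (0.70, 920.0, 110.0),   # RAP
    (0.09, 2093.0, 150.0),  # virgin binder
]
q_joules = heat_energy(mixture, ambient_temp_c=25.0)
print(f"Estimated heat energy: {q_joules / 1000:.1f} kJ")
```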
According to the Federal Highway Administration, some polymer networks are activated at high temperature when subjected to loading [22]. In the penetration test, the temperature is lower than the testing temperature used for rutting resistance; therefore, the polymer component may not activate at this temperature. The effect of RH-WMA content on the G*/sin δ of the unaged binder is illustrated in Fig. 1. The high-temperature performance grade (PG) of unaged binders is determined based on the Superpave specification criterion of G*/sin δ equal to 1.0 kPa. It was found that the addition of RH-WMA downgrades the PG by one grade for all RH-WMA contents, indicating that the addition of RH-WMA slightly reduces the binder stiffness. The trend of G*/sin δ is divided into two regions: the first region is the concentration of plots for 1 to 4% RH-WMA, and the second region is the result for 5% RH-WMA. This means that when 5% RH-WMA is added to the binder, a greater reduction in the high-temperature performance grade is observed. Considering the physical properties, RH-WMA contents from 1 to 3% lie within the local binder specification for 80/100 pen, while the rheological properties of binders with RH-WMA contents above 4% show a noticeable reduction in G*/sin δ. Hence, 3% was suggested as the optimum RH-WMA content to be incorporated into RAP. B. Optimum Binder Content Optimization using RSM optimizes the factors affecting material production, namely the RAP content, temperature, and binder content. Responses are set according to the local asphalt mix design requirements in terms of volumetric and strength properties. In addition to the local mix design requirements, the optimization also considers energy consumption. From the optimization, a compaction temperature of 130°C is proposed. Table 5 shows the optimum binder content (OBC) and energy consumption for each mixture. Compaction temperature has a remarkable effect on energy consumption: the control mixture, which was compacted at the highest temperature, results in the highest energy consumption even though its OBC is the lowest. The addition of RAP increases the OBC due to the aged and stiffer binder; however, the increase in OBC can be counterbalanced by the reuse of the valuable binder from the RAP. Noticeable benefits from the addition of RH-WMA can be seen in the reduction of OBC and energy consumption. Energy consumption is one of the determinant factors in green road construction. A reduction in energy consumption implies that less energy is used and less smoke is produced during asphalt production and paving, thus providing a healthier environment for the paving workers. C. Mechanical Properties The mechanical properties of the mixtures were evaluated based on the indirect tensile strength (ITS), resilient modulus (MR), and fatigue tests. Fig. 2 shows the ITS results for the various mixture types. The addition of RAP produces a stiffer mixture and hence increases the ITS. Goh and You reported a similar finding of increased ITS with the addition of RAP, but for porous asphalt [23]. The stiffness of RAP is affected by the source of the RAP, whereby RA2 indicates a higher ITS compared with RA1. Penetration and stiffness based on the performance grade indicate that the RA2 binder is stiffer than RA1. Stiffness in RAP is related to aging due to oxidation. According to Lu and Isacson, aging is attributed to the characteristics of the binder, the nature of the aggregates and particle size distribution, the air voids of the mixture, production-related factors, temperature, and time [24].
Modification of RAP mixtures with RH-WMA slightly reduces the ITS for RA1-RH and RA2-RH about 2.69 and 0.54 %. Resilient modulus result is presented in Fig. 3. As shown, the resultant graphs exhibit similar trends with the ITS results whereby the resilient modulus increases with the addition of RAP. This finding is in agreement with the study conducted by Arshad et al. on the RAP-WMA mixtures containing 30, 40 and 50% with Sasobit [25]. The addition of RAP produces a stiffer mixture due to aged binder and aggregate. Mixture with high stiffness will have better resistance to rutting which commonly occurred in a tropic region such as Malaysia. Initial strain and maximum tensile at the center of the specimen are calculated before the determination of fatigue failure. In this study, the initial stiffness is equivalent to the value at 100 th loading cycle which is in accordance with the European Standard EN 12697-24 procedure [26]. Determination of fatigue failure is based on the number of cycles until the specimen failed. Fatigue failure can be indicated by the gradient of the graph or n based on the power equation. Higher gradient shows better fatigue resistance. Comparison of mixtures resistance to fatigue is presented in Table 6. In general, the addition of RAP exhibits lower n and indicates a reduction in the mixture resistance to fatigue. The addition of RH-WMA shows improvement of fatigue resistance in RA1-RH. By comparing the regression coefficient n of PG64 or control with the RA1-RH, the difference is about 31%. This means that even though the RAP mixture stiffens and compacted at slightly lower temperature, its' resistance to fatigue is comparable to the control with the addition of RH-WMA. According to McDaniel and Anderson, the stiffness of the mixture can dramatically increase with the inclusion of RAP [27]. When using high RAP content, the fatigue resistance can be reduced due to the addition of aged binder that stiffened the mixtures [28]. Nevertheless, the addition of RH-WMA improves the fatigue resistance in comparison with a mixture containing RAP only. Similar findings on the fatigue life improvement with the used of WMA additive by using other test methods were also reported in previous research [13], [18]. In terms of moisture and aging effects on the fatigue resistance, Tong et al. stated that moisture and aging are two main components that can significantly increase the potential for fatigue cracking to develop in a mixture [29]. Aging stiffens the binder and hence accelerates the cracking development. On the contrary, moisture weakens the bonding between the aggregate and binder. IV. CONCLUSIONS Based on the test results, a few conclusions are drawn. The additions of various RH-WMA contents into RAP binder blend exhibited a softening effect. Based on the physical and rheological tests results as well as the local specification, the optimum RH-WMA content was 3%. Therefore, 3% RH-WMA was incorporated into RAP mixture. Optimum binder content determined from the optimization using RSM indicated that compaction temperature showed the noticeable effect on the energy consumption and addition of RH-WMA reduced the OBC and energy consumption. With the addition of RH-WMA, the mixture can be compacted at 130°C which is lower than the conventional mixture. As a consequent, greener environment during road construction is expected. In terms of mechanical properties, resilient modulus and ITS increased with the addition of RAP hence produced stiffer mixtures. 
Mixtures with high stiffness exhibited lower fatigue resistance. However, the addition of RH-WMA to the mixture showed potential for improving the fatigue resistance. Specimen RA1-RH showed fatigue resistance comparable to the mixture without RAP, even though it was subjected to the combined effects of moisture and aging. Past research has reported that moisture and aging significantly increase the potential for fatigue failure in mixtures with RAP. In summary, the integration of RAP and a WMA additive such as RH-WMA indicates the material's potential for greener road construction with better mechanical properties.
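As a supplementary illustration of the controlled-stress fatigue characterization reported in the results above, the sketch below fits the power-law relation between applied stress and cycles to failure and extracts the exponent n used to compare mixtures. The stress levels and cycle counts are hypothetical placeholders, not data from this study.

```python
# Supplementary sketch (not code from the study): fitting the power-law
# relation N_f = k * stress**(-n). In the comparison above, a higher n (the
# gradient of the log-log plot) was read as better fatigue resistance.
# The stress levels and cycle counts below are hypothetical.
import numpy as np

stress_kpa = np.array([300.0, 400.0, 500.0])            # applied stress levels
cycles_to_failure = np.array([52000.0, 17000.0, 7500.0])

# Linear fit in log-log space: log(N_f) = log(k) - n * log(stress)
slope, intercept = np.polyfit(np.log(stress_kpa), np.log(cycles_to_failure), 1)
n = -slope
k = np.exp(intercept)
print(f"n = {n:.2f}, k = {k:.3g}")
```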
3,876.6
2018-03-31T00:00:00.000
[ "Environmental Science", "Engineering", "Materials Science" ]
Analysis of Developing Batik Industry Cluster in Bakaran Village Central Java Article Information ________________ History of Article: Received June 2016 Approved July 2016 Published August 2016 ________________ INTRODUCTION In the economic development in Indonesia including in Central Java Province, the Micro, Small and Medium Enterprise (MSME) is always portrayed as a sector that has an important role.SMEs can contribute to poverty alleviation (Ashley, 2006;UNCTAD, 2009) in (Carlisle et al., 2013).Most people in Central Java Province are the low education population.Small business is very appropriate for the people's lives, both traditional and modern ones (Anisyah, 2011: 1).At the time of the economic crisis that occurred in Indonesia a few years ago, many large-scale efforts got stagnated and stop their activities, The Micro, Small and Medium Enterprises was able to survive in facing the crisis.In accelerating the recovery of economic activity due to the crisis, the government aggressively implemented the development and improvement in various sectors of economy, in which one of the strategic factors of concern is the MSME sector (Polnaya, 2015: 1).Besides the attention of the government, the attention of the public is also very important in the development of the Small and Medium Enterprises (SMEs) and the Micro, Small and Medium Enterprises (MSMEs) in order to grow more competitive along with other economic actors.The importance network in the innovation system, then in the development of competitiveness through the system of the regional innovation was needed by collaboration between the academy, the industry/the business and the government (Herliana, 2015). The Micro, Small and Medium Enterprises (MSMEs) grow in a cluster in a certain geographical area.Through this business cluster the members or entrepreneurs grow and develop.Various efforts are made by the government, universities, non-governmental organizations (NGOs), and other parties through the business cluster (Eva, 2013: 68).The proposed framework allows the identification of the factors driving SMEs performances and the capture of holistic firm performance within the craft industry (Rahman & Ramli , 2014).Central Java Province has many business clusters in improving the local economy, one of which is the business cluster that develops well that is batik industry cluster.In its development, the batik industry cluster is one of the potential of local economies in Central Java Province that needs to be developed further.The potential of batik industry cluster in developing the local economy is to preserve the ancestral culture while improving the local economy.Besides, almost all cities and regencies in Central Java Province have their own characteristics in batik, which can be seen from the style, color, culture, and more.The local economic development increases followed by the preservation of local culture.The development of batik industry clusters in Central Java Province can be seen in table 1.1 below. In the data in table 1, it can be seen that the batik industry clusters in Central Java Province are highly fluctuating from the turnover revenue of batik industry cluster in Pati Regency.It is characterized by the turnover decrease in 2011 and 2014.In 2011 the turnover revenue from the batik industry cluster in Pati Regency was Rp 300,000,000.00.In 2012 it increased to be Rp 360,000,000.00and in 2013 there was no decrease or increase in the turnover revenue, in other words the turnover in 2012 was as much as that in 2013. 
In 2014 the turnover revenue decreased to be Rp 148,000,000.00.Compared with the turnover revenue decrease of the batik business cluster in Tegal, the turnover revenue decrease of the Bakaran batik industry cluster in Pati Regency was worse.The turnover decrease of batik industry cluster in Tegal Regency can be seen from the turnover revenue growth in 2012 by 0.14% that decreased in 2013 by 0.06% and continued to decrease in 2014 by 0.03%.While the decrease in turnover revenue of the Bakaran batik industry cluster in Pati Regency can be seen from the turnover growth in 2012 by 0.20%.Afterwards there was no turnover growth in 2013 or 0%, and in 2014 it dropped to -0.59%.So compared with the turnover revenue of the batik business cluster in Pati Regency, the Bakaran batik in Pati Regency has more severe lower revenue from 2011 to 2014.A graph of the turnover development of batik business cluster in Pati Regency can be seen in Figure 1.Besides the amount of turnover, the Bakaran batik industry cluster in Pati Regency also has a number of business units that join the members of the Bakaran batik business cluster in Pati Regency.The development of a number of business units that join Bakaran batik business cluster in Pati Regency can be seen in Table 2. Table 2 shows that the development of the Bakaran batik industry clusters in Bakaran Village in Pati Regency has made a quite good improvement.The development can be seen from the results of the Bakaran batik industry products produced by the Bakaran batik industry clusters that develop their products from only in the form of handmade batik cloths then add their products in the form of Bakaran batik clothes.Besides being seen from the business product, the Bakaran batik industry clusters have been developing quite well in a number of business units, which is incorporated in the Bakaran batik industry clusters.In 2009 -2011 there are six Bakaran batik industry units incorporated in the Bakaran batik industry clusters in Pati Regency.The number of business units continued to increase in 2012 as many as 13 business units of the Bakaran batik, and the next year in 2013 -2014 there was no increase in the number of business units of the Bakaran batik incorporated in the Bakaran batik industry cluster in Pati Regency.Source: The Regional Development Planning Board, Central Java Province All Bakaran batik industry incorporated in the Bakaran batik industry cluster can only be found in Bakaran Kulon Vilage and Bakaran Wetan Village.Besides, the owners of the Bakaran batik industry are the native people of those villages.Many members of the Bakaran batik industry clusters in Pati Regency indicate that there are still many people, especially in Bakaran Kulon Village and Bakaran Wetan Village who want to preserve the culture of the ancestors in the form of the Bakaran batik and also to improve the local economy and the household economy. RESEARCH METHOD The data used in this research is the primary and secondary data.Primary data is the data that is collected and processed by the organization that publish or use it, while the primary data for the formulation of strategic alternatives in Strengths, Weakness, Opportunity, and Threats (SWOT) is obtained through the use of a list of questions (questionnaire) to the batik entrepreneurs who are the members of the Bakaran batik industry clusters. 
The secondary data is obtained from the Department of Cooperatives and SMEs, the Central Bureau of Statistics (BPS), and the Regional Development Planning Board (Bappeda) of Central Java Province and Pati Regency and the literature that is related to this research. The research uses the analysis method of Strengths, Weakness, Opportunity, and Threats (SWOT) in order to determine the appropriate strategy developing the Bakaran batik industry clusters in Pati Regency.Therefore, this research requires some parties such as the respondents and key-persons.The respondents in this research are the batik entrepreneurs in Bakaran Village, Pati Regency, amounted 13 people who have joined in the Bakaran batik industry cluster.The key-persons in this research are the head of the Bakaran batik industry cluster and the head of business in the Regional Development Planning Board, Pati Regency.SWOT analysis is to identify various factors systematically to formulate the company's strategy.The analysis is based on the logics that can maximize the strengths and opportunities, but simultaneously can minimize the weakness and threats.Based on the SWOT matrix, four main strategies can be composed; SO, WO, ST, and WT.Each of these strategies has its own characteristic, and further the strategies should be implemented together and supporting each other. Strategy of Developing Bakaran Batik Industry Cluster through SWOT Analysis Tools This research discusses the strategy of developing batik industry cluster in Bakaran Village, Pati Regency, using the SWOT analysis (Strengths Weaknesses Opportunities Threats).From the results of research, which are knowing the laws of SMEs, the general guidelines of cluster, seeing the overview of batik industry cluster, the available human resources and natural resources, the strategies that have been taken, and the performance that has been achieved, there are some internal and external factors that can be seen from the batik industry in Bakaran Village. The identification of internal factors (IFAS) is conducted in developing the batik industry cluster.Based on the identification result of the internal factors of the batik industry cluster, there are the strengths and weaknesses that can be found in the batik industry cluster in Bakaran Village as follows: 1. Strengths a.There is a division of labors based on the skills and abilities b.There are activities to improve the quality of labors c.Bakaran village is known as the Handmade Bakaran Batik d.It is easy to access to acquire the raw materials e. Bakaran Batik still maintains the quality of the Handmade Batik f.The labors come from Bakaran Village 2. Weaknesses a.The business owners use the private money for running batik industry b.The procedure to borrow the business capital is too complicated c.The tools used for operating the batik industry is still traditional d.The products of Bakaran Batik have not much known by the people Based on the identification result of the external factors (EFAS) of batik industry cluster in Bakaran Village, there are some opportunities and threats that can be found.The opportunities and threats for batik industry incorporated in the cluster are: 3. Opportunities a.The production of batik is not influenced by the external conditions b. 
Promoting the production of batik relies on the exhibition c.There are supports from the government (Department of Industry and Trade, Department of Cooperatives, SMEs, Regional Development Planning Board, and other agencies) d.There is a specific policy that makes Bakaran batik as the uniform of the civil servants in Pati Regency e.The marketing is conducted in Java and outside Java f.The total number of consumers will not decrease if the batik price is increased 4. Threats a.The raw materials are booked from outside the city b.The price of raw materials is often increased following the dollars c. There are some competitors of Bakaran Batik Product from outside the region (Lasem and Pekalongan) d.The marketing is not widespread and still waiting for the orders e.There is a lack of role of the cluster to the members Matrix Analysis of IFAS IFAS matrix is used to determine the internal factors of batik industry cluster in Bakaran Village, Regency, related to the strengths and weaknesses that are considered to be important.Having obtained the internal strategic factors of batik industry cluster in Bakaran Village, Pati Regency, which include the weaknesses and strengths, then the questionnaire filling on weighting by using the method of paired comparison matrix is conducted, followed by the ranking (rating) to the variables of the strengths and weaknesses. The following table 1.3 is the analysis result of IFAS matrix on batik industry cluster in Bakaran Village. 3 above shows that the internal strategic factor as the strength indicator that obtains the highest weighting score is that the Bakaran batik still maintains the quality of its handmade batik with the score of 0.45.It is because the batik industry owners really maintain the quality of the Bakaran batik in order to compete with the quality of batik from other regions.Besides, by maintaining its quality, it will make the consumers' trust to the batik products also increased.The high quality of the Bakaran batik products are as follows: the batik paint is not easily diluted, the product can survive long if worn, the product has the unique motifs and contains coastal area and agricultural cultures of Pati Regency, and many others.Preserving the quality of batik products will be an opportunity to develop the batik industry. The result of the internal strategy with the weakness indicator for the aspect factor of the Bakaran batik product that has not known yet by many people becomes the aspect that has the highest weighting score of 0.23.The conditions show that the aspect of the Bakaran batik product being less familiar becomes the biggest weakness in developing the business.Being less familiar, the demand for batik products decreases and batik industry is difficult to develop.The Bakaran batik is only known by the people who live in Pati Regency and the surrounding areas. 
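The weighted-score arithmetic used in the IFAS matrix above (and in the EFAS matrix below) is straightforward to illustrate: each strategic factor receives a weight from the paired-comparison questionnaire and a rating, their product is the factor score, and the scores are summed. The factor names, weights, and ratings in the sketch below are placeholders, not the study's questionnaire results.

```python
# Illustrative sketch of IFAS/EFAS scoring: weight * rating per factor,
# summed to a total. Entries are placeholders, not the study's results.

def weighted_total(factors):
    """factors: list of (name, weight, rating); returns the total weighted score."""
    return sum(weight * rating for _, weight, rating in factors)

ifas = [  # internal factors (strengths and weaknesses), assumed values
    ("Maintains handmade batik quality", 0.15, 3),
    ("Products not widely known",        0.08, 3),
]
efas = [  # external factors (opportunities and threats), assumed values
    ("Civil-servant uniform policy",     0.12, 4),
    ("Raw material prices follow USD",   0.10, 3),
]
print("IFAS total:", weighted_total(ifas))
print("EFAS total:", weighted_total(efas))
# The totals from the full factor lists are then located on the
# internal-external (IE) matrix to select a development strategy.
```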
Analysis of EFAS Matrix EFAS matrix is used to determine the external factors of batik industry cluster in Bakaran Village, Pati Regency, which relates to the threats and opportunities that are considered to be important.Having obtained the external strategic factors of batik industry cluster in Bakaran Village that is considered to be important, the questionnaire filling on weighting by using the method of paired comparison matrix is conducted, followed by giving ranking (rating) to the variables of opportunities and threats.Below is table 1.4 as the analysis result of EFAS matrix on the Batik Industry Cluster in Bakaran Village.shows that the external strategic factor as the opportunity aspect, which is the specific policy to make the Bakaran batik as the uniform of the civil servants in Pati Regency, obtains the highest score of weighting compared with the other factors.The aspect of a specific policy to make the Bakaran batik as the uniform of the civil servants in Pati Regency obtains a score of 0.46.It shows that the specific policy by the government of Pati Regency has encouraged the handicraftsmen and the owners of the Bakaran batik industry to develop their business and involve them in a policy. The aspect that obtains the highest score of weighting on the external strategy with the threat indicator is the price of the raw materials that is often increased following the dollars with the weighted score of 0.31.It shows that the increase in the raw materials will burden the batik industry owners in Bakaran Village.The frequent increase in the price of the raw materials following the dollar foreign currency is because the raw materials for making batik are imported from abroad.The business owners are forced to follow the increased price of the raw materials. Figure 2. Total Score of Strategy Factors Source: Primary Data Processed In the internal-external matrix above, the score of weighting obtained from the internal factor is 2.94 and the external factor is 2.53.It points the coordinates in the growth area V.The right strategy used for developing the batik industry cluster in Bakaran Village, Pati Regency, is the concentration strategy through the horizontal integration or stability, which means the strategy that adopt more defensively, by optimizing the management of the batik industry to avoid the threats.Based on the SWOT analysis, some strategies can be proposed to develop the batik industry clusters in Pati Regency.The first strategy is by applying the SO strategy, which is a strategy that uses the strengths to take advantage of the opportunities in the batik industry cluster in Pati Regency.The SO strategies include: utilizing the labors from Bakaran Village to increase the number of products and the quality of batik products, getting involved actively in supporting the local government in improving the labor quality and the batik product marketing, marketing the batik product out of Java and the entire Java by maintaining the quality of batik products, and optimizing the management of batik industry in Bakaran Village. 
Analysis of SWOT Matrix The second strategy is the WO strategy, which minimizes the weaknesses to take advantage of the opportunities. The WO strategies include: the local government providing facilities and information about capital loans and capital aid so that business owners can develop their businesses, increasing and optimizing government support in promoting batik in order to reach a greater market, increasing the government's role in donating tools, and encouraging innovation and the use of more modern technology. The third strategy is the ST strategy, which uses the strengths to overcome the threats, including: maintaining the quality of batik products to compete with those from outside the region (Lasem and Pekalongan), relying on the customers the owners already have as raw material suppliers even though they come from outside the region, continuing labor training despite the limited role of the cluster, and increasing and optimizing the cluster's role in developing the batik industry among the members of the Bakaran batik cluster. The fourth strategy is the WT strategy, which minimizes the weaknesses and avoids the threats, including: following the local government programs that have been designed for business owners to develop the batik industry, and building partnerships with various competent parties to develop the batik industry in Bakaran Village. CONCLUSION The results of the research can be concluded as follows. The internal strategy factor as the strength aspect in developing the batik industry cluster in Bakaran Wetan Village and Bakaran Kulon Village in Pati Regency is that the Bakaran batik maintains the quality of its handmade batik, obtaining a score of 0.45. The internal strategy factor as the weakness aspect is that the Bakaran batik products are not yet known by many people, obtaining a score of 0.23. The external strategy factor as the opportunity aspect is that there is a specific policy making the Bakaran batik the uniform of the civil servants (PNS) in Pati Regency, obtaining a score of 0.46. The external strategy factor as the threat aspect is that the raw material price often increases following the dollar or other foreign currencies, obtaining a score of 0.31. The strategy for developing the batik industry cluster in Bakaran Village that should be conducted is, among others, optimizing the management of the batik industry clusters in Bakaran Kulon Village and Bakaran Wetan Village to avoid threats such as: the raw materials being ordered from outside the town, the price of the raw materials often increasing following the dollar, the presence of competitors of the Bakaran batik products from outside the region (Lasem and Pekalongan), the marketing being narrow and still dependent on orders, and the lack of the cluster's role for its members.
Based on the analysis result and the discussion, some suggestions can be submitted as follows (1) The batik industry owners in Bakaran Kulon Village and Bakaran Wetan Village should maintain the quality of the Bakaran batik in order to be able to compete with the batik products from other regions, and the owners also should maintain the quality of batik to maintain the consumer's trust in wearing batik.One way to maintain the quality of the Bakaran batik is by using the high quality of the raw materials as well.(2) The batik industry owners in Bakaran Kulon Village and Bakaran Wetan Village should actively participate in the exhibition organized by the government, and through the Bakaran batik industry cluster, the government also should provide space to the members of the Bakaran batik industry clusters to showcase their Bakaran batik products alternately at the exhibition held by the government.The cluster members that are the owners of the Bakaran batik industry should be able to market their batik by taking part in their own exhibition. The Local Government of Pati Regency should not change the Decree (SK) on the wearing of the Bakaran batik as the uniform of the civil servants (PNS) so that the Bakaran batik can be worn sustainably.(4) The government should provide subsidies or aids to the entrepreneurs of the Bakaran batik, especially when the value of the rupiah gets weakened against the dollar to ease the burden of the Bakaran batik entrepreneurs due to the increase in the raw material prices.(5) The parties who have a role in the development of batik industry cluster in Bakaran Village in Pati Regency such as the Local Government (the Regional Development Planning Board, the Department of Cooperativs, SMEs, the Department of Industry and Trade, Pati Regency) and the batik entrepreneurs in Bakaran Village incorporated in the cluster should cooperate in developing the Bakaran batik industry clusters in Pati Regency.To avoid the threats by providing such assistance like subsidies due to the increase in the raw materials prices of the batik making, the batik handicraftsmen should maintain the quality of the Bakaran batik in competing with the batik products from other regions.The government agencies in Pati Regency related to the development of the batik industry clusters should hold the exhibitions frequently, and the owners of the Bakaran batik industry should be able to take part in their own exhibition and expand their marketing area. Table 1 . Turnover Development of Batik Industry Cluster in Central Java Province In 2011 -2014 (in Rupiah and Percentage) Table 2 . Number of Business Unit in Bakaran Batik Industry Clusters in Pati Regency In 2009 -2014 Table 3 . Analysis of IFAX Matrix on Batik Industry Cluster in Bakaran Village Table 4 . Analysis of EFAS Matrix of Batik Industry Cluster in Bakaran Village
4,954.6
2018-03-14T00:00:00.000
[ "Economics" ]
Rapid Impregnating Resins for Fiber-Reinforced Composites Used in the Automobile Industry As environmental regulations become stricter, weight- and cost-effective fiber-reinforced polymer composites are being considered as alternative materials in the automobile industry. Rapidly impregnating resin into the reinforcing fibers is critical during liquid composite molding, and the optimization of resin impregnation is related to the cycle time and quality of the products. In this review, various resins capable of rapid impregnation, including thermoset and thermoplastic resins, are discussed for manufacturing fiber-reinforced composites used in the automobile industry, along with their advantages and disadvantages. Finally, vital factors and perspectives for developing rapidly impregnated resin-based fiber-reinforced composites for automobile applications are discussed. Introduction Recently, lightweight automobiles have been increasingly used to save energy and reduce pollution in light of strict regulations.It has been reported that reducing automobile weight by 10 wt.% could decrease fuel consumption by 6-8% and reduce CO 2 emissions [1].The key to solving this problem is to replace metal components with highperformance, fiber-reinforced, polymer-based composites.As for environmental issues, extensive research has been conducted on biocompatible and environmentally-friendly resins, including the use of the poly (ethylene glycol) diacrylate (PEGDA) monomer [2], their applications in dental materials [3], and their utilization in 3D printed materials [4][5][6].This research focuses on polymer composite molding processes, and understanding these molding processes is essential. Polymer composites, including thermosets and thermoplastic composites, have been used in panels, modules, structures, and other parts of automobiles after being reinforced with continuous or discontinuous fibers that have undergone liquid composite molding (LCM) processes, such as resin transfer molding (RTM), vacuum infusion, and reaction injection molding [7].Takahashi et al. [8] reported that the weight of carbon fiber reinforced polymer (CFRP) composites is one third that of steel panels, and the flexural strength of CFRPs is approximately three times higher.As for the polymer composite, fiber-reinforced thermoset composites (FRTSCs) generally exhibit good mechanical properties, thermal stability, and dimensional stability [9].Thermoset resins have a relatively low viscosity compared with thermoplastic resins, which is an important factor in the RTM process.Traditionally, depending on the part size and geometry, the cycle time of a standard RTM is 30-60 min with a 10-20 bar injection pressure, whereas that of high-pressure resin transfer molding is less than 10 min with a 20-120 bar injection pressure. Fiber-reinforced thermoplastic composites (FRTPCs) are also extensively used because of their high processability and recyclability.However, the high viscosity of thermoplastic resins requires high temperatures and pressures for the materials to be impregnated with fiber reinforcements.For example, thermosetting resins, such as epoxy, can be impregnated with fiber reinforcements and cured below 200 • C, whereas thermoplastic resins must be heated above the melting temperature, which is typically well above 200 • C, and impregnated with high pressures (10-50 bar).Moreover, the high pressure and viscosity of the resins may cause a misalignment of the fiber reinforcements. 
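The review does not present an explicit flow model, but a common first-order way to see why resin viscosity, injection pressure, and flow length govern impregnation speed in liquid composite molding is a one-dimensional Darcy-type fill-time estimate. The sketch below uses assumed preform permeability, porosity, and resin viscosities, not values from the cited studies.

```python
# Hedged illustration (not from the review): a one-dimensional Darcy-type
# estimate of mold fill time under constant injection pressure,
#   t_fill = (porosity * viscosity * L**2) / (2 * K * delta_P),
# showing why low-viscosity resins and higher pressures shorten impregnation.
# Permeability, porosity, flow length, and viscosities are assumed values.

def fill_time_s(viscosity_pa_s, length_m, permeability_m2, porosity, delta_p_pa):
    return (porosity * viscosity_pa_s * length_m**2) / (2.0 * permeability_m2 * delta_p_pa)

K = 1e-10        # preform permeability, m^2 (assumed)
phi = 0.5        # preform porosity (assumed)
L = 0.5          # flow length, m (assumed)
dP = 10e5        # injection pressure difference, 10 bar in Pa

for label, mu in [("thermoset, ~0.5 Pa*s", 0.5), ("thermoplastic melt, ~100 Pa*s", 100.0)]:
    t = fill_time_s(mu, L, K, phi, dP)
    print(f"{label}: fill time ~ {t / 60:.1f} min")
```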
In the automobile industry, cost and cycle time reductions are key issues.In the case of thermoset resins, there is a need for research into fast-curing resins like rapid curing epoxies and endo-dicyclopentadiene.On the other hand, for thermoplastic resins, research is required on the development of rapidly impregnating resins that can be applied at low temperatures and low pressures.Regarding fast-curing thermoset resins, recent research by Zhang et al. [10], Odom et al. [11], Gan et al. [12], and Reichanadter et al. [13] presented the application of fast-curing epoxy resins and its processes.Boros, Róbert, et al. [14], Ota et al. [15], and Willicombe, K., et al. [16] showed the rapid impregnation process and feasibility of a new approach using thermoplastic resin.In many instances, research tends to emphasize methodologies and approaches that are focused on specific manufacturing systems.Consequently, it can be challenging to find examples and studies that provide a comprehensive exploration of various types of resins. The main objective of the present study is to provide a comprehensive overview of various types of fast-curing and rapidly impregnating resins using a broad array of resin cases.In the present study, a comprehensive overview of commonly used, rapidly impregnating, low-viscosity resins in the automobile industry was provided, including thermosets and thermoplastic resins.Additionally, various reactive processes, and the parameters that influence them, were presented, alongside an examination of the properties associated with Fast-Curing Resin Thermosetting Composites (FRTSCs) and Fast-Curing Resin Thermoplastic Composites (FRTPCs).Finally, the current status of related research and insights into future perspectives in this field were addressed. Knowledge Gaps and Current Challenges Numerous methods for rapidly impregnating thermoset and thermoplastic resins in fiber-reinforced composites have focused their attention on various aspects requiring further development.This section presents the main knowledge gaps that need to be filled in order to address the current challenges regarding the rapid impregnation of thermoset and thermoplastic resins and their fiber-reinforced composites, to understand the development method and the its applications. Thermoset resins, such as epoxy, polyester, and vinyl ester, are used in a wide range of automobile parts, such as headlamp housings, battery covers, and frames for windows or sunroofs.Thermoset composites have excellent dimensional and chemical stabilities and high impact strengths, which are necessary for the interior and exterior parts of automobiles.In this section, thermoset resins and their fiber-reinforced composites are discussed. 
If the manufacturing cycle time can be reduced by using thermoset resin, several advantages become particularly noteworthy. To address these issues, various state-of-the-art methods have been introduced to achieve the fast impregnation of thermoset resin, as shown in Figure 1. The current state-of-the-art method for the rapid impregnation of thermoset resins and their fiber-reinforced composites primarily involves material selection, including the choice of resin and curing agent, as well as the utilization of advanced mixing techniques. Utilizing optimal materials (such as the epoxy resin with phenolic novolac epoxy and bisphenol-A epoxy, and the curing agent aliphatic polyamine dicyandiamide (DICY)), a specific manufacturing process was applied to achieve the rapid impregnation of the resin into the reinforcing fibers for resin transfer molding (RTM), vacuum infusion, and the reaction injection molding process [7]. The approach to achieving a fast and cost-effective impregnation and curing process was closely related to factors such as curing time, gel time for the resin, and curing kinetics. Consequently, finding the most suitable resin and curing agent ultimately resulted in a substantial reduction in manufacturing costs (a 10-20% reduction in fiber, mandrel, tooling, and system set-up costs, with a process time under 240 min) [17]. The current state-of-the-art rapidly impregnating thermoplastic resins and their fiber-reinforced composites have been tested with impregnation techniques, material selections, processing parameters, etc. Several impregnation techniques, such as melt impregnation, powder impregnation, and resin transfer molding, have been explored to achieve the rapid impregnation of thermoplastic resins into fiber reinforcements; they have shown promise in terms of achieving efficient impregnation, reducing cycle times, and enhancing the overall quality of the composite [38,39]. Generally, higher temperatures can lower the viscosity of thermoplastic resins, and pressure also helps to drive the resin into the fiber reinforcement, thus enabling better fiber impregnation and complete wetting, as well as the removal of voids trapped within the fiber-reinforced composites. However, excessively high
temperatures may lead to resin degradation, and excessive pressure may lead to fiber deformation or damage.Therefore, optimizing the processing parameters is essential for achieving uniform resin distribution, complete fiber wetting, and strong interfacial bonding between the matrix and reinforcement, leading to the minimization of void content, as well as enhanced mechanical properties [40,41].Epoxy resins have been used in the automobile industry since the 1980s, owing to their superior mechanical properties, low shrinkage and creep, and outstanding chemical resistance.The estimated size of the global epoxy resin market was USD 12.5 billion in 2021, and it is anticipated to reach approximately USD 23.4 billion by 2030, with an expected annual growth rate (CAGR) of 7.22% during the forecast period from 2022 to 2030 [18].A representative commercial epoxy resin is the epoxy-dicyandiamide system, and many strategies have been implemented to develop low-viscosity, fast-curing epoxy resins. Conventionally, low viscosity can be achieved and controlled by incorporating various diluents, such as epoxy-based reactive diluents, which participate in the polymerization reaction and contribute to the cross-linking network.For example, the preferred viscosity range for the resins used in manufacturing composite materials via liquid molding is generally between 200 and 1000 cP at room temperature for 2~3 h of curing time [42].Epoxy-based reactive diluents come in various forms, including vegetable oil-based epoxy resins, glycidyl ethers of phenol and paraalkyl substituted phenols, vinylcyclohexane dioxide, the phenyl glycidyl ether, and the trimethylol propane triglycidyl ether [19][20][21][22][23][24].In addition, by introducing the catalytic mechanisms wherein epoxy crosslinks with the curing agent, a rapid curing time below 3 h can be obtained with tertiary amines. Fast-curing epoxy resins can be obtained by adding glycol diglycidyl ether (GDE) series.For example, a low-viscosity acrylate-based epoxy resin (AE)/GDE system was developed by Yang et al., and its rheological behavior is shown in Figure 2A [43].Seraji et al. [25,44,45] developed a rapid-curing epoxy amine resin with low viscosity, which consists of the diglycidyl ether of bisphenol F, an epoxy phenolic novolac resin, diethyl toluene diamine, and 2-ethyl-4-methylimidazole. The resin system exhibited good thermal and mechanical properties, and superior flame retardancy.Based on these trends, lowviscosity, fast-curing epoxy resins were obtained using the synthesized epoxies.Two resins, the diglycidyl ether of ethoxylated bisphenol-A (BPA) with two and six oxyethylene units (DGEBAEO-2 and DGEBAEO-6), respectively, were synthesized and characterized; the curing exothermic enthalpy decreased with increasing oxyethylene units (Figure 2B) [26].The viscosities of the blends decreased as the DGEBAEO-6 content increased.In addition, difunctional aromatic epoxy-divinylbenzene dioxide, which was synthesized with epoxidizing divinylbenzene as the catalyst, had a low molecular weight and viscosity, as well as excellent thermal (T g was approximately 201 • C) and mechanical properties (tensile strength was 131.99 MPa).Wu, Xiankun, et al. [27] and Chen et al. 
[29] developed a series of epoxy systems with a soft butyl glycidyl ether and rigid nano silica, and a viscosity lower than 600 mPa•s, thus providing an excellent processing performance for the large-scale production of composites in automobile manufacturing.In addition, this system demonstrated improvements in terms of tensile strength and modulus, as well as in elongation at break.Wang et al. [46] reported an epoxy resin-1-(cyanoethyl)-2-ethyl-4-methylimidazol system.The epoxy cured in a few minutes at 120 • C with an acceptable pot life and low water absorption. The reaction time decreased with the addition of the various particles.Chikhi et al. [30] developed a modified epoxy resin using liquid rubber (ATBN).All reactivity characteristics (gel time, temperature, curing time, and exothermic peaks) decreased.The addition of ATBN led to a reduction in either the glass transition temperature or the stress at break, accompanied by an increase in the elongation at break and the appearance of yielding.Zhang et al. [48] designed a tetrafunctional eugenol-based epoxy resin with a cyclosiloxane structure.Allyl glycidyl ether was selected as the reference compound to generate a silylation epoxy resin.The viscosity of the silicone-containing tetrafunctional epoxy monomers (<0.315Pa•s) was significantly lower than that of conventional oil-based epoxy resins (14.320Pa•s) (Figure 2C) [43].Moreover, the low viscosity of epoxy resin-based component epoxy systems has recently been obtained for thermal latent curing agents and flame-retardant epoxies (generally below 200 cP at room temperature) [25][26][27]42].Thermal latent curing agents of Imidazole are widely employed to fabricate single-component epoxy systems, and they meet the requirements for large-scale industrial production [49][50][51].Several phosphorus-modified imidazole derivatives have been developed to combine fast curing rates (below 3 h [25][26][27] and great flame retardancy characteristics [52][53][54]. Polyester Regarding polyester resins, low-viscosity polyester resins can be obtained via particle synthesis.Low-viscosity polyester resins can be applied to produce environmentally friendly coatings, as well as to toughen and reinforce unsaturated polymers.The global market size of unsaturated polyester resins was estimated to be USD 12.2 billion in 2022, and it is expected to grow at an annual growth rate (CAGR) of 7.1% from 2023 to 2030 [55]. Traditionally, low-viscosity polyester resins can be obtained through various methods, including the use of solvents, mechanical mixing methods, etc.For instance, solvents such as styrene, methyl ethyl ketone peroxide (MEKP), and cobalt octoate are typically used.Control and mixing methods involving alcohol are frequently utilized.Nurazzi et al. [56] developed a method to reduce the gel time of unsaturated polyester (UPE) by blending it with methyl ethyl ketone peroxide (MEKP) and various percentages of cobalt.Using this method, the gel time can be reduced by up to 36%. Recently, alternative approaches have been employed to achieve a lower viscosity (<300 mPa•s) in the compound [57], which facilitates the formation of crosslinking networks.These methods include synthetic techniques, particle synthesis using nanomaterials, microwave irradiation, among others [58,59]. Chen et al. 
Chen et al. [31] prepared a series of silica particles with different sizes and surface groups through the sol-gel process, using tetraethyl orthosilicate, and they were directly introduced into polyester polyol resins via in situ polymerization. The resulting nanocomposites exhibited lower viscosities than the resins obtained using the blending method; the viscosity increased as the particle concentration increased (Figure 3A). Zhang et al. [28] examined a low-viscosity unsaturated hyperbranched polyester resin (<10,000 cP) using a synthetic method involving a reaction between a maleic anhydride monoisooctyl alcohol ester and a hydroxyl-ended hyperbranched polyester resin prepared from phthalic anhydride and trimethylolpropane. Zhou et al. [57] synthesized a series of unsaturated polyester resins with low viscosities (<300 mPa·s), for a vacuum infusion molding process, by simply controlling the amount of alcohol used in the reactants. Yuan et al. [60] developed a series of low-viscosity transparent UV-curable polyester methacrylate resins, derived from renewable biologically fermented lactic acid (LA), and they reduced the viscosity from 34,620 mPa·s to 160-756 mPa·s by randomly copolymerizing LA and ε-caprolactone.

The curing time can be reduced by using various solvents and applying microwaves. Nasr and Abdel-Azim [33] investigated unsaturated polyester resins, in which styrene, methyl ethyl ketone peroxide (MEKP), and cobalt octoate were selected as the solvent (monomer), catalyst, and accelerator, respectively. A significant reduction in curing time occurred when the cobalt octoate concentration was increased to 0.02 wt.%. Furthermore, the curing time decreased when the catalyst concentration was increased from zero to 2 wt.%. Mo et al. [32] applied microwave irradiation to the curing of an unsaturated polyester resin with CaCO3 particles, and they showed that microwave irradiation heated the unsaturated polyester resin evenly and rapidly, causing a chain growth reaction which greatly reduced the curing time (Figure 3B). Chirayil et al. [61] prepared nanocellulose-reinforced unsaturated polyester composites via mechanical mixing. The curing time required for gelation in the nanocellulose-filled unsaturated polyester was lower than that for the neat resin, indicating the catalytic action of nanocellulose in the curing reaction (Figure 3C). Kalaee et al. [34] utilized nanoparticles of CaCO3 (nCaCO3) and found that a decrease in the number of carboxyl groups in the formulation leads to a higher degree of crosslinking.

Vinyl Ester

The extensive use of vinyl ester resins as matrix materials in reinforced composites is due to their low viscosity, rapid curing capabilities at room temperature, and cost advantages. Typically, in highly viscous vinyl ester resins, a low-viscosity environment can be achieved by utilizing dispersants and various acids, which effectively reduce their surface activities.
Yong and Hahn [35] conducted a rheological analysis of SiC nanoparticle-filled vinyl ester resin systems using the Bingham, power-law, Herschel-Bulkley, and Casson models. The incompatibility between the hydrophilic SiC and the hydrophobic vinyl ester resin can act as the driving force for the formation of SiC aggregates, even at low particle loading (<0.04 volume fraction), resulting in a high resin viscosity. The optimum fractional weight percentage of dispersant (wt.% dispersant/wt.% SiC) for dispersion stabilization is 1-3% for particles in the 0.1-3 µm range; the addition of a dispersant at the optimum dosage lowers the viscosity of SiC/vinyl ester suspensions by 50% (Figure 4A). Gaur et al. [36] obtained the zero-shear viscosity of vinyl ester resins containing styrene (40 wt.%) as the reactive diluent. The curing of vinyl ester resins can be controlled by reacting the epoxy novolac resin with methacrylic acid. They found that the curing and decomposition behavior of vinyl ester resins worsened with an increase in methacrylic acid content (11, 22, 32, 38, and 48 mg KOH g−1 solid). The cured product with the lowest acid value was the most thermally stable product. Cook et al. [37] analyzed the gel time and reaction rate of a vinyl ester resin and found that the cobalt species played a dual role in initializing the formation of radicals from MEKP and destroying primary and polymeric radicals. Based on these results, the reaction rate (determined using differential scanning calorimetry (DSC)) increased and the gel time decreased with increasing concentrations of MEKP. However, the cobalt octoate cocatalyst slows the reaction rate, except at very low concentrations. The gel time decreased as MEKP and cobalt octoate concentrations increased. Curing vinyl ester resins with modified silicone-based additives was achieved by Mazali et al. [63]. Silicone-based additives were used to modify the properties of the vinyl ester resin. For the resin cured in the absence of N,N-dimethylaniline, the silicone-based additives acted as retardants of the curing reaction, which is a typical diluent effect, whereas in the presence of this promoter, the reaction enthalpy and rate improved.

The viscosity of the vinyl ester resin could be reduced by increasing the reactive diluent content. Rosu et al. [64] found a linear correlation between the reactive diluent content and the logarithm of viscosity, showing that the presence of reactive diluents accelerated the curing reaction and diminished the gel time. Dang et al. [62] proposed reinforcements for a comonomer vinyl ester (cVE) resin at different weight fractions of up to 2% via a direct polymerization process with a eutectic gallium-indium (EGaIn) alloy and graphene nanoplatelets, showing that sub-micron-sized EGaIn (≤1 wt.%) could promote the curing reaction of cVE without changing the curing mechanism (Figure 4B).
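The four constitutive models named above are standard rheological laws; the sketch below writes them out explicitly so the comparison in Ref. [35] is easier to follow. The parameter values are placeholders chosen for illustration, not fitted values from that study.

```python
import numpy as np

# Shear stress tau(gamma_dot) for the four constitutive models compared by
# Yong and Hahn; the apparent viscosity is tau / gamma_dot. Parameter values
# here are illustrative placeholders, not fitted values from Ref. [35].

def bingham(gd, tau0, mu):              # tau = tau0 + mu * gd
    return tau0 + mu * gd

def power_law(gd, K, n):                # tau = K * gd**n
    return K * gd**n

def herschel_bulkley(gd, tau0, K, n):   # tau = tau0 + K * gd**n
    return tau0 + K * gd**n

def casson(gd, tau0, mu):               # sqrt(tau) = sqrt(tau0) + sqrt(mu*gd)
    return (np.sqrt(tau0) + np.sqrt(mu * gd)) ** 2

gamma_dot = np.logspace(-1, 2, 50)                       # shear rate, 1/s
tau = herschel_bulkley(gamma_dot, tau0=5.0, K=2.0, n=0.8)
apparent_viscosity = tau / gamma_dot                     # Pa*s
```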
Polydicyclopentadiene (p-DCPD)

Dicyclopentadiene (DCPD) is a commercially available monomer that is derived from low-viscosity petrochemicals, making it easy to impregnate into fibers. Due to its impregnation characteristics, its market revenue reached approximately USD 0.86 billion in 2020 and is expected to grow at a CAGR of 5.7% between 2022 and 2030 [65]. Polydicyclopentadiene (PDCPD) is a highly crosslinked polymer formed by the ring-opening metathesis polymerization (ROMP) of its monomer precursor. Exothermic characteristics were observed during the polymerization process because of the relief of the ring strain energy initiated by the transition-metal/alkylidene complexes. Several studies investigated the effects of these catalysts.

Li et al. [66] conducted the ROMP of DCPD using the catalyst systems WCl6-Et2AlCl and (WCl6-PhCOMe)-Et2AlCl, and their polystyrene-supported counterparts. The acetophenone-modified catalyst system exhibited better catalytic properties than the unmodified system. Moreover, as the polymer yield of ROMP increased, the notched impact strength (NIS) and tensile strength (TS) of the synthesized PDCPD increased. Kessler et al. [67] investigated the curing kinetics of PDCPD, prepared via ROMP, with three different concentrations of Grubbs' catalyst using differential scanning calorimetry (Figure 5A). The catalyst concentration had a large effect on the curing kinetics, and the activation energy increased significantly at 30 °C.
Yang and Lee [68] investigated the curing kinetics of endo-dicyclopentadiene (DCPD) with two types of Grubbs' catalysts (1st and 2nd generation), using dynamic DSC at different heating rates (Figure 5). Experimental DSC data obtained at different heating rates were used to evaluate the kinetic parameters with model-free iso-conversional and model-fitting methods. In the single DSC exotherm of the 1st generation system (Figure 5(Ai)), the appearance of a shoulder above the single exotherm of the 2nd generation system (Figure 5(Aii)) suggests that reaction mechanisms other than ROMP, involving the norbornene and cyclopentene units, may be involved in this catalyst system. The 2nd generation catalyst system showed a slower initiation rate but a faster polymerization rate compared with the 1st generation.

Yang and Lee [69] also studied two Grubbs' catalysts that exhibited apparent differences in the isothermal curing of endo-dicyclopentadiene (endo-DCPD) via ROMP, using the 1st and 2nd generation Grubbs' catalysts as polymerization initiators. The 2nd generation catalyst was more efficient than the 1st generation catalyst in terms of catalytic activity, as evidenced by the reaction rates and fractional conversions (Figure 6A).
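The model-fitting analyses cited in this and the following paragraphs extract kinetic parameters from DSC conversion data. As a hedged illustration of what such a fit involves (not the specific model used in Refs. [67-70]), the sketch below fits a common autocatalytic (Kamal-type) rate law to synthetic conversion-rate data standing in for a DSC-derived data set:

```python
import numpy as np
from scipy.optimize import curve_fit

# Autocatalytic (Kamal-type) cure rate law often used in model-fitting of DSC data:
#   d(alpha)/dt = (k1 + k2 * alpha**m) * (1 - alpha)**n
# where alpha is the fractional conversion obtained by integrating the exotherm.

def kamal_rate(alpha, k1, k2, m, n):
    return (k1 + k2 * alpha**m) * (1.0 - alpha)**n

# Synthetic isothermal data standing in for DSC-derived conversion and rate.
alpha = np.linspace(0.01, 0.95, 60)
rate = kamal_rate(alpha, k1=1e-3, k2=0.05, m=0.7, n=1.5)
rate_noisy = rate * (1 + 0.03 * np.random.default_rng(0).normal(size=alpha.size))

# Least-squares fit recovers the kinetic parameters from the noisy data.
popt, _ = curve_fit(kamal_rate, alpha, rate_noisy, p0=[1e-3, 0.05, 0.5, 1.0])
k1, k2, m, n = popt
```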
Recent state-of-the-art research on vinyl esters and DCPD has focused on controlling the curing time; this is due to their extremely fast curing times, as shown in Figure 7. Yoo et al. [70] obtained the curing kinetics of endo-DCPD, using isothermal differential scanning calorimetry, by experimentally acquiring kinetic parameters in accordance with model-fitting approaches. Due to the rapid curing of DCPD, a decelerator was included in the manufacturing process. Therefore, the effect of the decelerator was investigated using the curing kinetics of endo-DCPD with different amounts of decelerator solutions, and it was found that the decelerator delayed the reaction and slowed the curing process (Figure 6B,C).

Rapidly Impregnating Thermoplastic Resins and Their Fiber-Reinforced Composites

In recent times, to a certain extent, thermoplastic composites (TPCs) have started to replace thermosetting composites and lightweight metal materials. Worldwide, the market value of TPCs increases every year, from 28 billion U.S. dollars in 2019 to an estimated 36 billion U.S. dollars by 2024; this is because they are very tough, the manufacturing process is faster, they are highly processable and recyclable, they can be welded, etc. [72].

Generally, the high melting viscosities of thermoplastic polymers require high processing temperatures and pressures to fully impregnate fibers and reduce defects in products [73]. Subsequently, in situ polymerization methods for fiber-reinforced TPCs have been developed using low-viscosity monomers or oligomeric precursors, such as caprolactam [74][75][76][77], laurolactam [78,79], methyl methacrylate (MMA) [80], and cyclic butylene terephthalate (CBT) [81,82], to fabricate fiber-reinforced polyamide 6 (PA6), polyamide 12 (PA12), polymethyl methacrylate (PMMA), and polybutylene terephthalate (PBT) composites, respectively. The global market size for PA6 is estimated at USD 12.7 billion, and for PA12, it is estimated at USD 19.43 billion [83]. For PMMA, the global market size is expected to reach USD 8.33 billion by 2032 and USD 5382 million by 2029 [84]. These monomers (or oligomeric precursors) are polymerized via the addition of catalysts and activators. Table 1 lists several processing parameters and applications of commonly used monomers (or oligomeric precursors) with low viscosities that are suitable for LCM. In this section, we mainly introduce PA6, PA12, PMMA, and PBT thermoplastic composites, and we provide an overview of thermoplastic composites fabricated via in situ polymerization during LCM. Moreover, the effects of reactive processing parameters on the mechanical properties are discussed.
Polyamide 6 (PA6)

PA6 was synthesized via the anionic ring-opening polymerization of ε-caprolactam, which is a crystalline cyclic amide with a melting temperature of 70 °C, and it is polymerized at 130-170 °C in the presence of a catalyst and activator [85] (Figure 8). PA6-based fiber-reinforced composites can be fabricated within 3-60 min, depending on the type and amount of the catalyst and activator used. Ahmadi et al. [91] suggested that the correct ratio of monomer, catalyst, and activator is a key component in anionic ε-caprolactam polymerization, and it provides the lowest monomer residue and best properties for the PA6 samples. In addition, polymerization time directly affects the production cycle and cost. Our previous research [76] focused on the effect of polymerization temperature on the degree of polymerization and polymerization time in order to produce perfect products with the shortest molding cycle time. The results showed that the polymerization and crystallization of PA6 occurred simultaneously during heating. As the heating rate increased, the crystallinity decreased, but the degree of polymerization increased. Furthermore, the viscosity of ε-caprolactam varied almost linearly with time in the early stages, whereas it increased exponentially from 20 s after the start of polymerization, indicating a limited injection window within the molding cycle (see the illustrative sketch at the end of this subsection). Ben et al. fabricated glass and carbon fiber hybrid PA6 composites (Figure 9) with A (caprolactam and activator) and B (caprolactam and catalyst) mixtures via vacuum-assisted resin transfer molding (VaRTM) in order to evaluate their mechanical properties when applied to automobile structures [75]. The results showed that the bending, tensile, and compressive strengths of the hybrid-fiber-reinforced PA6 were 594, 315, and 297 MPa, respectively, which were comparable to those of the hybrid-fiber-reinforced fast-curing epoxy (597, 327, and 318 MPa, respectively). However, the flammability of polyamides, which is a key issue in the automobile industry, limits their widespread application. The main challenges include the inhibition of in situ polymerization in the presence of flame retardants and the insolubility of flame retardants due to the filtration of reinforcements such as titanium dioxide, multiwalled carbon nanotubes, phosphorus compounds, etc. [92,93]. In addition, recycling is also an issue for PA6. As a non-degradable plastic, PA6 is extremely challenging to recycle, and it cannot be recycled using traditional methods. Recently, Wursthorn et al. [94] developed lanthanide trisamido catalysts with which PA6 can be depolymerized to ε-caprolactam with a high selectivity (more than 95%) and yield (more than 90%), and no solvents or toxic chemicals are used in the whole process. The generated ε-caprolactam can be used as a monomer to obtain new PA6; thus, it is feasible to employ this method to recycle PA6 products.
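As a hedged illustration of the processing-window argument above, the sketch below assumes a simple piecewise viscosity model (a slow linear rise followed by exponential growth after an induction time) and finds when the resin exceeds a nominal injection limit. All parameter values are invented placeholders, not measurements from Ref. [76].

```python
import numpy as np

# Illustrative processing-window estimate for a reactive resin whose viscosity
# rises slowly at first and then exponentially after an induction time t0.
# All parameters are placeholders, not data from the cited study.

def viscosity(t, eta0=0.01, slope=1e-4, t0=20.0, k=0.15):
    """Viscosity in Pa*s as a function of time in seconds (piecewise model)."""
    eta_lin = eta0 + slope * t
    return np.where(t <= t0, eta_lin, (eta0 + slope * t0) * np.exp(k * (t - t0)))

t = np.linspace(0, 120, 1201)
limit = 1.0                                  # nominal injection limit, Pa*s
window = t[viscosity(t) <= limit].max()      # last time the resin is still injectable
print(f"Usable injection window ~ {window:.0f} s for this parameter set")
```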
Polyamide 12 (PA12)

PA12, also called nylon 12, is synthesized via the anionic ring-opening polymerization of ω-laurolactam, as shown in Figure 10. ω-Laurolactam has a low initial viscosity above its melting point at 153 °C, facilitating the easy and complete impregnation of fibers in the mold. Similar to PA6, PA12-based fiber-reinforced composites can be fabricated using LCM processes, such as thermoplastic resin transfer molding (T-RTM). The desired injection temperature was found to be 170-205 °C, and polymerization started at 180-250 °C, after introducing the catalyst and initiator. Mairtin et al. [79] developed carbon-fiber-reinforced PA12 composites, with a 60% carbon fiber volume fraction, which exhibited high tensile strength (788.3 MPa) and high compression strength (365.7 MPa). It is also reported that the polymerization time is related to the polymerization temperature; that is, it takes 8.5 min and 20 min at molding temperatures of 240 °C and 200 °C, respectively.
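Taking the two molding times just quoted at face value, a back-of-envelope Arrhenius estimate (assuming the cycle time is inversely proportional to a single thermally activated rate constant, which is a deliberate simplification) gives an apparent activation energy for the polymerization:

```python
import math

# Back-of-envelope Arrhenius estimate from the two PA12 molding times quoted
# in the text (20 min at 200 C, 8.5 min at 240 C), assuming t ~ exp(Ea / (R*T)).
# Illustrative only; the cited study does not report this analysis.

R = 8.314                       # J mol^-1 K^-1
t1, T1 = 20.0, 200 + 273.15     # min, K
t2, T2 = 8.5, 240 + 273.15      # min, K

Ea = R * math.log(t1 / t2) / (1 / T1 - 1 / T2)
print(f"Apparent activation energy ~ {Ea / 1000:.0f} kJ/mol")   # ~43 kJ/mol
```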
As listed in Table 1, PA12 is commonly used in fuel filter housings and fuel pipe connectors, which are close to the engine and exposed to fuel and high service temperatures. Therefore, fuel uptake and aging behavior are important factors. Wei et al. [95] found that pure PA12 showed fast and remarkably high fuel uptake when exposed to a mixture of ethanol and gasoline at 120 °C; however, a lower uptake was observed for glass-fiber-reinforced PA12 composites. As shown in Figure 11, the PA12 and glass-fiber-reinforced PA12 composites gradually changed color from white to yellow as the exposure time increased; this is due to the oxidation of PA12, and the cracks in PA12 were larger than those in glass-fiber-reinforced PA12, indicating a suppression effect of the glass fiber on fuel uptake. In addition, PA12 can be recycled and used in automobiles. It has been reported that an automobile fuel-line clip, produced with recycled PA12 through a selective laser sintering method, provides an 8% reduction in life-cycle global warming potential and life-cycle primary energy demand compared with conventional PA66 [96], thus improving sustainability properties.
Polymethyl Methacrylate (PMMA) (Elium®)

PMMA is extensively used in the automobile industry to produce various parts and components of vehicles, such as external, rear, and indicator light covers; decorative trims; ambient lighting; door entry strips; and automobile glazing. This is due to its light weight, high scratch resistance, and low stress birefringence. As shown in Figure 12, PMMA is synthesized via the free-radical vinyl polymerization of methyl methacrylate (MMA) in the presence of peroxide initiators. The melting temperature of MMA is −48 °C, and it is polymerized at relatively low temperatures (120-160 °C); however, the boiling temperature of MMA is 100 °C, which means that it boils easily and can cause voids in the final products. Moreover, a long cycle time (>900 min) is required to fully polymerize MMA below its boiling temperature.

Recently, a novel liquid-reactive MMA, named Elium®, has been developed by Arkema. It has a low viscosity and a low processing temperature (room temperature), and it is used in conjunction with a dibenzoyl peroxide initiator [88]. Elium® can also be used to impregnate fibers via the LCM process, which is the same method used for traditional MMA. Several studies have been conducted to evaluate its mechanical properties. Kazemi et al. [97] studied the dynamic response of carbon-fiber-reinforced Elium® and carbon-fiber-reinforced epoxies (Epolam, Sikafloor) using low-velocity impact tests. This study demonstrated the higher plasticity of Elium®-based composites compared with epoxy-based composites, resulting in less structural loss and less absorbed energy, as shown in Figure 13. In addition, many studies have reported on the good mechanical properties of Elium®-based fiber-reinforced composites, such as good toughness, flexural and tensile strength, welding performance, etc. [98][99][100]. However, Elium® has a much higher shrinkage rate than that of common PMMA due to its fast polymerization, which is a problem to be solved in future studies [101].
In addition, some recycling technologies for Elium®, such as mechanical and chemical recycling methods, have already been developed to obtain recycled materials or to recover the monomers [102]. Generally, the recycled materials are reused together with virgin materials to enhance mechanical properties, whereas recovered monomers can be polymerized to obtain new products. Although a few studies on the characterization and analysis of recycled products have been reported, more intensive work is needed to evaluate the life cycle of these recycled products.

Polybutylene Terephthalate (PBT)

PBT is widely used in the automobile industry owing to its high stiffness and strength. Indeed, 1,4-butanediol and dimethyl terephthalate are used as monomers to produce macrocyclic oligomers of CBT with two to seven repeat units [103], and this is followed by the polymerization of semicrystalline PBT in the presence of an initiator (Figure 14). The initial viscosity of CBT is 20 mPa·s at 190 °C, which is suitable for LCM processing, for instance RTM [104]. It is reported that PBT polymerized from CBT via RTM is more brittle than conventional PBT due to the high crystallinity of the polymerized PBT [105]. Its toughness could be improved with the addition of nanoparticles [106,107], fibers [108], etc. Baets et al. found that the addition of 0.05-0.1 wt.% of multi-walled carbon nanotubes (MWCNTs) could increase the toughness, stiffness, and strength of PBT composites [109]. They also prepared polycaprolactone-blended CBT/glass-fiber composites to improve the toughness of the composites [109]. Yang et al. [110] found that woven carbon fabric and glass fabric hybrid PBT composites, which are fabricated via a vacuum-assisted prepreg process, have a higher impact resistance than PBT/carbon fiber (CF) composites, although the presence of fibers may reduce the conversion of CBT. Furthermore, non-isothermal production processes, solvent blending, the addition of plasticizers, and chemical modification can enhance the toughness of CBT composites [81,111,112].

In addition, PBT can be recycled via depolymerization into CBT or monomers (1,4-butanediol and dimethyl terephthalate) which exhibit properties comparable to those of baseline materials. Cao et al. [113] prepared super-tough PBT/MWCNT/epoxidized elastomer composites with excellent mechanical properties for a wide range of PBT applications in the automobile industry.
Despite significant progress, several challenges persist in terms of the rapid impregnation of thermoplastic resins to produce fiber-reinforced composites in the automobile industry. These challenges include achieving uniform resin distribution, controlling fiber wetting, minimizing void content, and maintaining mechanical properties. Recent advancements have focused on addressing these challenges through innovative approaches [41,114]. Furthermore, thermoplastic-based automobile parts are also required to increase automotive plastic reuse, recycling, and recovery, in order to reduce overall automotive plastic waste generation for environmental sustainability. Many studies have reported that plastic parts could be reused or recovered from end-of-life vehicles, and some parts could be recycled via high-vacuum extraction, melt filtration, the introduction of additives, and so on. However, it is challenging to recycle fiber-reinforced plastic composites or multi-component-blended composites. Recently, a physicochemical recycling method has been developed to recover matrices and fibers while preserving the fibers' lengths [115]. Furthermore, more intensive work on the environmental impacts and life cycle assessments of these recycled products should be conducted [116,117].

Current Research Gaps and Future Research Outlook

The current research concerning rapidly impregnating resins and the production of fiber-reinforced composites in the automobile industry has identified several research gaps. Addressing these gaps and focusing on future research can lead to advancements and improvements in this field. Here are some of the current research gaps and potential future research directions [118][119][120].
Enhanced impregnation efficiency: Achieving the uniform impregnation of reinforced fibers with resin is critical for high-quality composites. Research has focused on exploring different impregnation techniques and parameters to minimize voids, ensure uniformity, and enhance interfacial adhesion. Although progress has been made in terms of impregnation techniques, there is a need to further enhance the impregnation efficiency of rapidly impregnating resins, which would increase the amount of pore space penetrated by the resin [121][122][123][124]. Future research should focus on improving resin flow and wetting behavior to achieve the better impregnation of reinforcing fibers. This includes studying the effects of resin viscosity, fiber architecture, and processing parameters on impregnation efficiency.

Optimization of curing processes: A rapid curing process is crucial for the efficient production of fiber-reinforced composites [10,125]. Future research should aim to optimize curing processes by investigating advanced heating methods, optimizing curing temperatures and times, and exploring the use of catalysts or additives to accelerate the curing reaction. Such research will help reduce cycle times and improve the overall productivity of composite manufacturing.

Characterization and optimization of mechanical properties: Understanding and tailoring the mechanical properties of rapidly impregnating resins for specific automotive applications is essential. Future research should focus on exploring the development of new resin formulations that are specifically designed for rapid impregnation; this will involve modifying the viscosity, curing kinetics, or surface tension of the resin to improve its flowability and fiber-wetting characteristics. In addition, the resin composition, placement of fiber reinforcement, and processing conditions should be optimized to achieve the desired mechanical properties. This can be achieved through a combination of experimental testing, numerical modeling, and material characterization techniques [126][127][128].

Durability and long-term performance: Regarding environmental issues, sustainable fibers are of great interest to the automotive industry. However, automotive parts are usually exposed to environmental factors, such as UV radiation, temperature, humidity, and chemical exposure, resulting in poor interfacial properties, water absorption, swelling, etc. [129,130]. Therefore, future research should focus on developing bio-based materials, enhancing the environmental resistance of the composites, and understanding their degradation mechanisms [131,132].

Environmental sustainability and recyclability: Given the increasing environmental concerns, future research should focus on developing sustainable and recyclable rapidly impregnating resins [133]. This includes exploring the use of bio-based or recycled materials as resin matrices, investigating recycling techniques for end-of-life composites, and assessing the environmental impact of these materials throughout their lifecycle.

By addressing these research gaps in future research, the automobile industry can benefit from improved rapidly impregnating resins that offer enhanced impregnation efficiency, optimized curing processes and mechanical properties, and improved durability and sustainability.
Conclusions and Outlook

The impregnation performance of thermoset and thermoplastic resins is crucial for manufacturing composite materials such as glass-fiber- or carbon-fiber-reinforced polymers. The resins act as a matrix that binds the fibers together, providing strength, stiffness, and durability to the composites. They enable the production of lightweight, yet high-performance, materials that are widely used in the aerospace, automotive, and construction industries. One key aspect of impregnating resins is their ability to improve the structural integrity of composite materials. By impregnating fibers or porous structures, the resins enhance the strength, stiffness, and impact resistance of the composite. This is particularly important in industries where lightweight and high-performance materials are sought after, such as the aerospace and automotive sectors. Therefore, impregnating resins are of significant importance in the polymer resin market due to their ability to enhance the mechanical properties, durability, and protection of materials; this highlights their value and the demand for such specialized resins.

In this paper, rapidly impregnating resins for fiber-reinforced composites are discussed as alternatives to high-performance metal components. An overview of suitable rapidly impregnating resins with low viscosities is given, and the differences between thermoset and thermoplastic composites are identified.

Thermoset resins, such as epoxy, polyester, vinyl ester, and DCPD, have excellent dimensional and chemical stabilities and high impact strengths. For the epoxy-dicyandiamide system, a representative commercial epoxy resin, many strategies have been implemented to develop low-viscosity, fast-curing epoxy resins, such as the addition of a GDE series and synthesized epoxies, and the reaction time decreased with the addition of various particles. The reinforcement of low-viscosity unsaturated polyester resins has also been introduced. Low-viscosity polyester resins can be obtained via particle synthesis, and the curing time can be reduced by using various solvents and applying microwaves. Low viscosity, coupled with a rapid curing rate at room temperature and the relatively low cost of vinyl ester resins, can be obtained using dispersants and various acids to reduce the surface-active properties.

Regarding the thermoplastic resins, PA6, PA12, PMMA (Elium®), and PBT were introduced; these have high melting viscosities, and they require high processing temperatures and pressures to fully impregnate the fibers and reduce defects in the products. Therefore, in situ polymerization methodologies for fiber-reinforced thermoplastic composites with low viscosities have been developed, and they are suitable for liquid molding processes.
Overall, extensive studies have been conducted on the characterization, analysis, and simulation of rapidly impregnating, resin-based, fiber-reinforced composites. However, the large-scale production of such composites has been rare. Therefore, future research should focus on the large-scale production of composites for the automobile industry, a reduction in their manufacturing time, and an improvement in their performance. In addition, as environmental regulations become stricter, the requirements for automobile materials are also becoming stricter. Some heavy metals and organic substances are banned or restricted for use in automobiles, and automobile parts which cannot be further divided should be merged with homogeneous resins so that they can be recycled more efficiently. Therefore, alternative materials should be harmless to the human body and the environment, and the materials themselves also need to satisfy certain performance criteria so that they are comparable to fiber-reinforced or polymer-blended composites. As technologies and industries continue to advance, the importance of rapidly impregnating resins is expected to grow, driven by the need for improved performance, longevity, and reliability of materials and products.

Figure 3. (A) Effect of silica content on the viscosity of nanocomposite resins embedded with silica sol S2 or S9. Reproduced with permission [31]. Copyright 2005, Elsevier. (B) DSC curves of liquid UPR, and cured samples with microwave curing and thermal curing, respectively. Reproduced with permission [32]. Copyright 2022, MDPI. (C) Variation of viscosity over time for NC-filled composites. Reproduced with permission [61]. Copyright 2014, Elsevier.

Figure 4. (A) (i) Viscosity curves of SiC/vinyl ester resin systems with and without MPS/W966. (ii) Viscosity curves of SiC/vinyl ester resin systems with and without 1-octanol. Reproduced with permission [35]. Copyright 2006, John Wiley and Sons. (B) The DSC graphs (a), degree of conversion at 60 °C (b), graphs of TGA (c), and DTG (d) at a heating rate of 5 °C/min for the LM-filled and unfilled comonomer vinyl ester composites. Reproduced with permission [62]. Copyright 2022, MDPI.

Figure 5. (A) The DSC curves for (i) low-concentration, (ii) medium-concentration, and (iii) high-concentration DCPD and Grubbs' catalyst samples; (iv) predictions for isothermal curing at 30 °C based on the model-free iso-conversional method for low, medium, and high catalyst concentrations. Reproduced with permission [67]. Copyright 2002, John Wiley and Sons. (B) DSC scans at different heating rates for endo-DCPD with (i) 1st generation and (ii) 2nd generation Grubbs' catalysts (inset shows the shoulder region). Reproduced with permission [68]. Copyright 2013, Elsevier.

Figure 8. Schematic of the anionic ring-opening polymerization of PA6.
Figure 10. Schematic of the anionic ring-opening polymerization of PA12.

Figure 12. Schematic of the vinyl polymerization of PMMA.

Figure 14. Schematic of the anionic ring-opening polymerization of PBT.

Table 1. Processing temperatures and processing times of various monomers.
Shortwave infrared-absorbing squaraine dyes for all-organic optical upconversion devices

ABSTRACT Shortwave infrared (SWIR) optical sensing and imaging are essential to an increasing number of next-generation applications in communications, process control or medical imaging. An all-organic SWIR upconversion device (OUC) consists of an organic SWIR-sensitive photodetector (PD) and an organic light-emitting diode (OLED), connected in series. OUCs directly convert SWIR to visible photons, which potentially provides a low-cost alternative to the current inorganic compound-based SWIR imaging technology. For OUC applications, only a few organic materials have been reported with peak absorption past 1000 nm and simultaneously small absorption in the visible. Here, we synthesized a series of thermally stable, high-extinction-coefficient, donor-substituted benz[cd]indole-capped SWIR squaraine dyes. First, we coupled the phenyl-, carbazole-, and thienyl-substituted benz[cd]indoles with squaric acid (to obtain the SQ dye family). We then combined these donors with the dicyanomethylene-substituted squaraine acceptor unit, to obtain the dicyanomethylene-functionalized squaraine DCSQ family. In the solid state, the absorbance of all dyes extended considerably beyond 1100 nm. For the carbazole- and thienyl-substituted DCSQ dyes, even the peak absorptions in solution were in the SWIR, at 1008 nm and 1014 nm. We fabricated DCSQ PDs with an external photon-to-current efficiency over 30%. We then combined the PD with a fluorescent OLED and fabricated long-term stable OUCs with peak sensitivity at 1020 nm, extending to beyond 1200 nm. Our OUCs are characterized by a very low dark luminance (<10−2 cd m−2 below 6 V) in the absence of SWIR light, and a low turn-on voltage of 2 V when SWIR light is present.

Introduction

SWIR photodetection and imaging offer new application fields in passive night vision, airborne remote sensing or machine vision solutions, including silicon wafer inspection, product quality control and sorting [1,2]. In bio-imaging applications, the so-called second biological window between 1000 nm and 1700 nm allows for deep penetration of light with low autofluorescence and high spatial resolution, because absorption and scattering from (de-)oxygenated blood, skin and tissue are low compared to the first biological window between 700 nm and 950 nm. The great benefits of SWIR light for deep-tissue bioimaging could be revealed by exploring fluorescent carbon nanotubes, rare-earth materials, quantum dots and organic materials emitting above 1000 nm [3][4][5][6][7][8]. The SWIR band matches the spectral sensitivity range of the semiconductor compound indium gallium arsenide (InGaAs). Most SWIR cameras have an InGaAs sensor and the highest-performance ones typically detect light between around 900 nm and 1700 nm [9]. InGaAs sensor arrays are still cost-prohibitive for most consumer and low-end applications, although the growing market and improvements in sensor fabrication have resulted in a decrease of the technology costs [10]. A SWIR-to-visible upconversion device, also named upconversion PD [11], upconversion OLED [12] or SWIR visualization device [13], is made by integrating an SWIR PD with a visible light-emitting unit. Such devices potentially offer an alternative route to true low-cost, pixel-free SWIR imaging. The basic idea of any upconverter is that photocurrent generated in the SWIR PD layer drives the serially connected visible light-emitting unit.
Upconversion devices convert low-energy SWIR photons directly into a visible image, avoiding intermediate electronics and an external display for image visualization. Note that the functionality of an upconversion device is different from the several known photon upconversion processes. Photon upconversion describes a process that converts two or more sequentially absorbed low-energy photons into a photon of higher energy. The status and progress until 2018 for optical upconverters that are entirely made with organic and hybrid materials [14], including perovskites and quantum dots, is summarized in reference [15]. Since then, several solution-processed upconverters based on quantum dots were reported [11,16,17]. A colloidal lead sulfide quantum dot layer harvested the near-infrared (NIR)/SWIR light, and a cadmium selenide quantum dot layer was used for visible light emission. In one example, the device detected SWIR photons out to 1600 nm and the NIR photon-to-visible photon conversion efficiency (940 nm to 525 nm) was 6.5% [11]. A broad-band absorbing polymer-based PD was combined with a phosphorescent OLED and the device sensitivity extended to 1100 nm [12]. In a similar manner, an upconverter was demonstrated by monolithic integration of a low-bandgap polymer:SWIR dye blend PD with sensitivity out to around 1400 nm and a perovskite light-emitting (at 516 nm) diode [13]. Cyanine dye-based PDs with NIR-selective absorption between 600 nm and 1000 nm were integrated with a fluorescent OLED. Devices converted light at 830 nm to green light, and the luminance turn-on was at a low voltage of 2 V [18].

For an all-organic SWIR upconverter (OUC) it is advantageous to use an organic PD material with selective absorption in the SWIR region. This is because visible light absorption of a broad-band absorber material results in a non-selective response of the device, and visible light emitted from the OLED can be reabsorbed by the PD unit. While low-bandgap organic materials with broad-band absorption extending out to around 1700 nm are known [19][20][21][22], relatively few organic materials with selective SWIR absorption have been reported [23][24][25]. As an extension to a recent review on NIR-absorbing organic dyes [26], a list of representative dyes with peak absorption in the SWIR region, i.e. not merely a tail in the absorption spectrum extending beyond 1000 nm, is compiled in the Supporting Information, Table S1. Most of the dyes belong to the families of cyanines [27] or rylenes [28], or are charge-transfer chromophores [29]. For optoelectronic device applications, these SWIR dyes possess some potential drawbacks, such as decreased photo- and thermal stability as well as incompatibility with neutral electron acceptors in blend films (cyanines), limited solubility (rylenes) or low extinction coefficients (charge-transfer dyes).

Here, we report the synthesis and OUC device integration of SWIR squaraine dyes. Squaraines are known for their straightforward and scalable synthesis, narrow and intense absorption and emission properties in the NIR [30,31], as well as good thermal and photostability [30,32,33]. The squaraine family is a large material library suitable for a variety of applications in chemical sensing, optoelectronic devices, photodynamic therapy and bioimaging [30,31,34,35]. Squaraines contain an electron-deficient central four-membered ring and two electron-donating groups in a donor-acceptor-donor configuration with a resonance-stabilized π-conjugated zwitterionic structure.
Appropriate choice of the donor and acceptor moieties allows tuning the optical properties of the dyes, and the combination of strong donors with strong acceptors as well as a high degree of conjugation leads to a bathochromic shift of the absorption maximum [36,37]. Recently, we investigated the synthesis and properties of the symmetrical benz[cd]indolium-capped squaraine dye, in this work referred to as SQ1. The dye exhibits strong absorption in the NIR and we demonstrated visibly transparent and solution-processed OUCs with peak sensitivity at 980 nm [38,39]. Here, we expand the family of SQ dyes and increase the donor strength of the benz[cd]indolium unit by coupling with phenyl-, carbazole-, and thiophene moieties at the 6-position of the substituent. In a second series of squaraines (the DCSQ family), we coupled these donors with the dicyanomethylene-substituted squaraine acceptor unit. We found that the carbazole- and thiophene-substituted DCSQ dyes in solution have peak absorptions above 1000 nm, and therefore represent the first SWIR-absorbing squaraine dyes, to the best of our knowledge. We then fabricated DCSQ-fullerene PDs with a maximum external photon-to-current conversion efficiency (EQE) at 1025 nm. By combining this PD with a fluorescent Alq3-based OLED, OUCs were obtained that convert SWIR photons directly to visible green photons, with good performance in terms of a low turn-on voltage, a low dark luminance and a highly linear optical response.

Synthesis

To study the influence of the donor substitution on the properties of the squaraine dyes, four different donor building blocks based on the benz[cd]indole heterocycle were synthesized as shown in Scheme 1. Lactam 1 was first N-alkylated using octyl bromide to yield compound 2, which was used as an intermediate for the brominated lactam 3 and the unsubstituted iminium iodide 7. The bromine functionality allowed the introduction of different (hetero-)aromatic substituents. The phenyl- and carbazole-substituted benz[cd]indoles 4 and 5 were synthesized via a Suzuki-type cross-coupling reaction. The thiophene substituent (6) was introduced in a direct heteroarylation (DHA) coupling using excess thiophene, potassium carbonate as base and palladium(II) acetate as catalyst. The DHA coupling was chosen due to its high atom economy, cheap reactants, and avoidance of toxic organometallic precursors. Methylation of compounds 2, 4, 5, and 6 with methylmagnesium chloride, followed by a condensation and an iodide ion exchange reaction, led to the formation of iminium iodides 7-10 in 45%-79% isolated yield. The squaraine dyes SQ1-SQ4 were synthesized according to an adapted procedure from the literature [40]. Iminium salts 7-10 were condensed with squaric acid by refluxing in a mixture of n-butanol and toluene. Recrystallization from ethanol gave the corresponding dyes in yields of 42%-69%. For SQ1-SQ4, six different stereoisomers are conceivable (Figure S1). In Scheme 1, the dyes are drawn as the most stable 'trans-anti-out' isomer. Here, the heterocycles are attached on opposite sides of the polymethine chain ('trans'), and the nitrogen atoms of the benz[cd]indolium cycle are facing in opposite directions ('anti'), both away from the squaric acid core ('out'). A reversible trans-cis exchange process could be observed in solution. From NMR experiments we concluded that in these cases a minor SQ isomer (cis-syn-out) was present, in agreement with a population analysis using DFT calculations (SI, isomers of squaraine dyes).
The occurrence of squaraine isomers has been reported on several occasions [41][42][43]. Dicyanomethylene-substituted squaraine dyes DCSQ1-DCSQ4 were synthesized by condensation of dicyano squarate 11 and iminium iodides 7-10 in a mixture of n-butanol and toluene that was heated to 130°C-140°C for 2-4 hours (Scheme 2) [37]. Dyes were isolated after purification by column chromatography and precipitation from DCM/n-heptane in 45%-61% yield. The steric demand of the dicyanomethylene group forces the DCSQ dyes into the cis-syn-out conformation, and no other isomers were detectable from NMR spectra. The synthesis of the N-ethyl-substituted DCSQ1 derivative was reported recently [44]. In that case, the dye was synthesized via a stepwise condensation of the iminium salt with the diethyl squarate, followed by introduction of the dicyanomethylene group to the semisquaraine, and finally the condensation with the iminium salt. Experimental details for the synthetic procedure of all our dyes are compiled in the SI.

Molecular properties

Absorption spectra of SQ1-SQ4 in toluene solution are shown in Figure 1. The narrow absorption peaks with vibronic shoulders are characteristic of squaraines [45]. The absorption maxima increased from 900 nm for SQ1 to 948 nm for SQ4 (Table 1). This indicates that the aromatic substituents enlarge the π-conjugated system, which is confirmed by quantum chemical calculations of the electron density distributions in the frontier molecular orbitals (Figure S4). Typical for squaraine dyes are the very high (>150,000 M−1 cm−1) molar absorption coefficients of the main S0 → S1 (HOMO → LUMO) optical transition. This band shows a negative solvatochromism; for example, for SQ4 the wavelength of the absorption maximum is at 920 nm in ethanol, compared to 935 nm in chloroform or 928 nm in acetonitrile (Figure S5). The hypsochromic shift with increasing solvent polarity indicates a relatively more polar ground state and is well known for squaraine dyes [46,47]. The absorption spectra show an additional weak absorption at around 500 nm, as well as a band between 300 nm and 400 nm that is attributed to the absorption of the (substituted) benz[cd]indole moieties [48]. The dicyanomethylene acceptor group induced a substantial redshift of the absorption maxima, and the corresponding dyes absorb between 958 nm (DCSQ1) and 1014 nm (DCSQ4), see Table 1. To our knowledge, DCSQ3 and DCSQ4 are the first squaraine dyes with an absorption maximum beyond 1000 nm. The bathochromic shift of the dicyanomethylene-substituted squaraines comes along with a decrease of the experimental molar absorption coefficient at λmax, also confirmed by the calculated oscillator strengths for the SQ and DCSQ dyes (Table 1). The additional hypsochromic absorption band in the 450 nm-600 nm wavelength range for the DCSQ dyes (molecular symmetry C2v) can be assigned to the HOMO → LUMO+1 transition; this transition is symmetry-forbidden for the SQ dyes (C2h symmetry) and therefore very weak (Table 1, Figure 1a) [37,49]. Figure 1 also shows the fluorescence spectra of the dyes. The Stokes shifts are larger for the DCSQ dyes, with a discernible trend that the shifts increase with the π-conjugation and donor strength of the aromatic substituents for both dye families. An increase of the Stokes shift indicates a more pronounced change of the molecular structure in the excited state.
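Stokes shifts of the kind discussed above are conveniently expressed in wavenumbers from the absorption and emission maxima. The short sketch below shows that standard conversion; the example wavelengths are placeholders, not the values tabulated for these dyes.

```python
# Stokes shift in wavenumbers from absorption and emission maxima given in nm:
#   delta_nu [cm^-1] = 1e7 * (1/lambda_abs - 1/lambda_em)

def stokes_shift_cm1(lambda_abs_nm: float, lambda_em_nm: float) -> float:
    return 1e7 * (1.0 / lambda_abs_nm - 1.0 / lambda_em_nm)

# Placeholder example: absorption maximum at 1000 nm, emission maximum at 1060 nm
print(f"{stokes_shift_cm1(1000, 1060):.0f} cm^-1")   # ~566 cm^-1
```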
Experimental constraints have prevented to determine the fluorescence quantum yields, which are <0.05% for DCSQ1 and DCSQ4. A low fluorescence quantum yield in our case is likely due to the exponential increase of non-radiative losses for molecules with smaller energy gap [50]. It has been shown that squaraines can undergo trans-cis photoisomerization via a twisted intramolecular charge transfer state, a reaction that provides a nonemissive decay channel of the excited state. However, this process is inhibited if rotations are hindered and dyes are conformationally locked, as it applies to our dyes [51,52]. The absorbance spectra of spin coated films ( Figure 2) were considerably broadened and the maxima were red-shifted (by 60-90 nm) compared to the solution spectra. This can be explained with strong intermolecular interactions and increased molecular ordering in the solid state. For DCSQ2 and DCSQ3, a broad dimer peak covering the vibrational band appeared at shorter wavelength. The pronounced attenuance feature for DCSQ1 (at 854 nm) and DCSQ4 (at 908 nm) is characteristic for H-aggregates, suggesting that these dyes self-organize during film formation [53]. Dye aggregation in the film is disrupted when blended with the fullerene derivative PCBM ([6,6]-phenyl-C 61 -butyric acid methyl ester), see below. The cyclic voltammograms and thermal gravimetric analysis graphs for the DCSQ dyes are shown in Figure 3, the corresponding data for the SQ dyes are compiled in Figure S6 and S7. In the cyclic voltammograms, all dyes showed two reversible one-electron oxidations and one reversible one-electron reduction. Elongation of the π-system at the benz[cd]indol substituents resulted in a narrowing of the electrochemical gap in each series. Assuming that the half-wave oxidation and reduction potentials correspond to the HOMO and LUMO levels and with an energy level of −5.1 eV vs. vacuum for the ferrocene/ferrocenium redox couple, the redox levels vs. vacuum can be calculated ( Table 2). Introduction of the dicyanomethylene acceptor group decreased both the HOMO and LUMO levels, while the second half wave oxidation potential was hardly influenced. The optical band gaps from the onset absorption edge (from Figure 1) differ from the electrochemical band gaps by around +0.1 eV for the SQ dyes, and by around +0.7 eV for the DCSQ family [55]. Thermal gravimetric analysis under nitrogen atmosphere showed that both dye classes are stable up to 200 °C. Onset decomposition temperatures of around 200 °C have been reported for related squaraine dyes with different donor substituents [47,56], pointing to the lability of the central four-membered ring. As a reference point also the often cited temperatures at 5% mass loss are included in Table 2. It is clear that these values scale with the molecular weights of the corresponding dyes and therefore overestimate the thermal stability of the higher molecular weight squaraine dyes. Details of the thermal analysis, including Differential Scanning Calorimetry (DSC) data, are discussed in Figure S8. SWIR upconversion devices The OUC consisted of a SWIR-sensitive PD and an OLED, stacked in series ( Figure 4a). As SWIR absorber for the PD part, we chose DCSQ1 from the DCSQ dye family and charges were photogenerated using a dye donor/PCBM acceptor heterojunction. From the absorbance spectra shown in Figure 4(b), it can be seen that dye aggregation in the blend films is suppressed for all DCSQ dyes, as opposed to the pure dye films shown in Figure 2(b). 
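Referring back to the cyclic voltammetry analysis above, the conversion from half-wave potentials to frontier orbital energies (with the ferrocene/ferrocenium couple placed at −5.1 eV vs. vacuum) can be sketched as follows. The potentials used in the example are hypothetical placeholders, not the measured values reported in Table 2.

# Estimate frontier orbital energies from cyclic voltammetry, following the
# convention described above: E_HOMO = -(E_ox,1/2 + 5.1) eV and
# E_LUMO = -(E_red,1/2 + 5.1) eV, with half-wave potentials referenced to
# ferrocene/ferrocenium (-5.1 eV vs. vacuum).
FC_VS_VACUUM_EV = 5.1

def frontier_levels(e_ox_half_v, e_red_half_v):
    """Return (HOMO, LUMO, electrochemical gap) in eV."""
    homo = -(e_ox_half_v + FC_VS_VACUUM_EV)
    lumo = -(e_red_half_v + FC_VS_VACUUM_EV)
    return homo, lumo, lumo - homo

# Hypothetical dye with first oxidation at +0.30 V and first reduction at
# -0.95 V vs. Fc/Fc+ (placeholder values, not data from this work):
homo, lumo, gap = frontier_levels(0.30, -0.95)
print(f"HOMO = {homo:.2f} eV, LUMO = {lumo:.2f} eV, gap = {gap:.2f} eV")
# -> HOMO = -5.40 eV, LUMO = -4.15 eV, gap = 1.25 eV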
OUC devices were completed by combining the PD with a fluorescent tris(8-hydroxyquinolinato)aluminium (Alq 3 )-based OLED. The functionality of the OUC can be explained as follows: both in the dark (off-state) and in the presence of NIR/SWIR light (on-state), a voltage bias is applied to the device. In the off-state, holes are blocked at the ITO/TiO 2 interface and electrons are blocked at the N, N'-bis (3-methylphenyl)-N,N'-diphenylbenzidine (TPD)/Alq 3 interface. Therefore, in the dark, no current is flowing and no visible light is emitted. With increasing voltage bias, a rising dark current can result in an undesirable dark current-induced luminance. In the on-state, light is absorbed in the PD unit and free charges are photogenerated. Electrons are extracted via the TiO 2 anode and holes are driven via the holetransporting MoO 3 layer in the OLED where they recombine with electrons from the cathode under the emission of green light. Figure 4(c) shows EQE spectra of the PD for different bias voltages, limited to a cutoff wavelength of 1100 nm by our instrument. The active layer thickness was around 65 nm, optimized in terms of a low dark current and high EQE value ( Figure S9). We ascribe the apparent EQE peak at 480 nm to an optical interference effect (weak microcavity) because the absorbing layer is sandwiched between a weakly (glass/ITO) and strongly (Ag) reflecting interface ( Figure S9) [58]. In the NIR/SWIR spectral range, the EQE of the PD followed the film absorbance spectrum. The EQE increased linearly with the applied voltage and reached a value of 33% at 1025 nm for −8 V. The reproducibility of device fabrication is demonstrated in Figure S10. Figure 4(d) shows the corresponding EQE spectra for the OUC. Again, the EQE followed the film absorbance spectrum, which confirms that the device is sensitive in the SWIR range, out to a wavelength of around 1200 nm (Figure 4b). For the OUC, the EQE increased superlinear and reached a value of 13.3% for 8 V. When evaluating the EQE dependence on the electric field for the two devices, we found that values matched for high fields, but the EQE of the OUC dropped below the linear trend of the PD for lower fields. We ascribe this to small energetic barriers for carrier transport in the OUC that can be effectively overcome when the electric field is increased. The luminance vs voltage trend of the OUC is shown in Figure 5. In the dark, the luminance stayed below the detection limit of our setup (10 −2 cd m −2 ) up to 6 V, and the dark luminance increased to a small value of 0.3 cd m −2 at 12 V. A low dark luminance is an important performance metrics of an OUC. Under ambient light conditions, a dark luminance level of below 10 −2 cd m −2 is hardly detectable by the human eye. Therefore, even a small SWIR light-induced luminance results in a high image contrast that can visually be clearly differentiated against the (black) background. We quantified the OUC on-state by using a light source at 980 nm. Device turn-on was at a low voltage of 2 V. The luminance increased with increasing voltage bias and reached, for example, 27 cd m −2 at 6 V, 76 cd m −2 at 8 V, or 260 cd m −2 at 12 V. The reproducibility of device fabrication is demonstrated in Figure S10. During light illumination, the luminance is limited by the number of hole charge carriers that are photogenerated in the PD and that are injected into the OLED. 
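The EQE values quoted above for the PD relate the extracted photocurrent to the incident photon flux, EQE = (J_ph/q)/(P_opt/(hc/λ)). The following sketch illustrates that conversion; the photocurrent density and irradiance are placeholder numbers, not measured device data.

# External quantum efficiency of a photodetector: ratio of extracted
# electrons to incident photons.
Q = 1.602176634e-19   # elementary charge, C
H = 6.62607015e-34    # Planck constant, J*s
C0 = 2.99792458e8     # speed of light, m/s

def eqe(j_ph_a_cm2, irradiance_w_cm2, wavelength_nm):
    photon_energy = H * C0 / (wavelength_nm * 1e-9)   # J per photon
    electrons = j_ph_a_cm2 / Q                        # extracted, per s and cm^2
    photons = irradiance_w_cm2 / photon_energy        # incident, per s and cm^2
    return electrons / photons

# e.g. 0.27 mA/cm^2 of photocurrent under 1 mW/cm^2 of 1025 nm illumination
print(f"EQE ~ {eqe(0.27e-3, 1e-3, 1025):.1%}")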
In our case, we do not observe a saturation of the luminance at higher voltage, because the EQE of the PD steadily increases when the voltage is increased. The luminance on-off ratio was evaluated in the voltage range where the dark luminance could be measured. Between 6 V and 12 V, the ratio peaked at a value of 3200 at 8 V. Note that the actual value of the on-off ratio depends on the NIR light intensity. If the light intensity is increased, more charges are generated and consequently the luminance increases. The inset of Figure 5 shows the linear response of the luminance when the NIR light intensity was varied over two orders of magnitude. A linear device response is beneficial for a direct imaging OUC. These results indicate that high-contrast images can be obtained by using a simple low-voltage battery to drive such OUCs. The photon-to-photon conversion efficiency (P2PCE) describes the ratio between the number of visible photons emitted to the number of incident NIR photons. For the data shown in Figure 5, P2PCE increased with bias voltage and reached a value of 0.1% at 8 V and 0.3% at 12 V [38]. This low value is clearly limited by the EQE (~1% [59]) of the OLED. The P2PCE ≈ EQE(PD) x EQE(OLED) can be approximated from the individual EQEs of the PD and OLED. The EQE of the PD part was ~12% at an electric field of 8 V/active layer thickness. Therefore, the value of the approximated P2PCE ≈ 0.12 × 0.01 = 0.12% is expected to be in the range of 0.1%, in agreement with the experimental results. Initial OUC stability tests are promising. As a qualitative statement, we found that the device performance is stable over a period of several weeks, when stored under inert conditions between subsequent measurements. We also stressed devices under constant NIR light illumination and voltage bias over a period of 1 day and found that the luminance output was constant ( Figure S11). Conclusions We synthesized a series of squaraines with the aim to obtain dyes with selective light absorption extending considerably into the SWIR wavelength range. We demonstrated efficient SWIR-sensitive PDs and OUCs. In general, OUCs can be fabricated using low-cost manufacturing processes on large-area flexible substrates and can be operated at room temperature. Therefore, it is anticipated that OUCs can provide an interesting alternative to the existing SWIR imaging technology for novel consumer and low-end applications. In ongoing synthetic work, we are trying to shift the dye absorption further into the SWIR range by increasing the donor strength and using different acceptors on the squaric acid core. A major advantage of narrowband polymethine dyes compared to colloidal quantum dot absorbers is that light absorption in the visible is small, resulting in upconverters with selective SWIR response. For squaraines with extended absorption into the SWIR, it is fair to mention a potential tradeoff between a further bathochromic shift and SWIR selectivity. As we found for the DCSQ family, bulky substitution at the acceptor unit locks the dyes in the cis conformation, resulting in allowed optical transitions in the visible. Therefore, the synthetic challenge lies in the design of next-generation SWIR squaraine dyes with a most stable trans conformation.
5,301.8
2021-04-13T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Algorithmic Aspects of Some Variations of Clique Transversal and Clique Independent Sets on Graphs : This paper studies the maximum-clique independence problem and some variations of the clique transversal problem such as the { k } -clique, maximum-clique, minus clique, signed clique, and k -fold clique transversal problems from algorithmic aspects for k -trees, suns, planar graphs, doubly chordal graphs, clique perfect graphs, total graphs, split graphs, line graphs, and dually chordal graphs. We give equations to compute the { k } -clique, minus clique, signed clique, and k -fold clique transversal numbers for suns, and show that the { k } -clique transversal problem is polynomial-time solvable for graphs whose clique transversal numbers equal their clique independence numbers. We also show the relationship between the signed and generalization clique problems and present NP-completeness results for the considered problems on k -trees with unbounded k , planar graphs, doubly chordal graphs, total graphs, split graphs, graphs, and dually chordal graphs. Introduction Every graph G = (V, E) in this paper is finite, undirected, connected, and has at most one edge between any two vertices in G. We assume that the vertex set V and edge set E of G contain n vertices and m edges. They can also be denoted by V(G) and E(G). A graph G = (V , E ) is an induced subgraph of G denoted by G[V ] if V ⊆ V and E contains all the edge (x, y) ∈ E for x, y ∈ V . Two vertices x, y ∈ V are adjacent or neighbors if (x, y) ∈ E. The sets N G (x) = {y | (x, y) ∈ E} and N G [x] = N G (x) ∪ {x} are the neighborhood and closed neighborhood of a vertex x in G, respectively. The number deg G (x) = |N G (x)| is the degree of x in G. If deg G (x) = k for every x ∈ V, then G is k-regular. Particularly, cubic graphs are an alternative name for 3-regular graphs. A subset S of V is a clique if (x, y) ∈ E for x, y ∈ S. Let Q be a clique of G. If Q ∩ Q = Q for any other clique Q of G, then Q is a maximal clique. We use C(G) to represent the set {C | C is a maximal clique of G}. A clique S ∈ C(G) is a maximum clique if |S| ≥ |S | for every S ∈ C(G). The number ω(G) = max{|S| | S ∈ C(G)} is the clique number of G. A set D ⊆ V is a clique transversal set (abbreviated as CTS) of G if |C ∩ D| ≥ 1 for every C ∈ C(G). The number τ C (G) = min{|S| | S is a CTS of G} is the clique transversal number of G. The clique transversal problem (abbreviated as CTP) is to find a minimum CTS for a graph. A set S ⊆ C(G) is a clique independent set (abbreviated as CIS) of G if |S| = 1 or |S| ≥ 2 and C ∩ C = ∅ for C, C ∈ S. The number α C (G) = max{|S| | S is a CIS of G} is the clique independence number of G. The clique independence problem (abbreviated as CIP) is to find a maximum CIS for a graph. The CTP and the CIP have been widely studied. Some studies on the CTP and the CIP consider imposing some additional constraints on CTS or CIS, such as the maximum-clique independence problem (abbreviated as MCIP), the k-fold clique transversal problem (abbreviated as k-FCTP), and the maximum-clique transversal problem (abbreviated as MCTP). Definition 4. Suppose that G is a graph. A function f is a signed clique transversal function (abbreviated as SCTF) of G if the domain and range of f are V(G) and {−1, 1}, respectively, and f (C) ≥ 1 for C ∈ C(G). If the domain and range of f are V(G) and {−1, 0, 1}, respectively, and f (C) ≥ 1 for C ∈ C(G), then f is a minus clique transversal function (abbreviated as MCTF) of G. 
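To make the preceding definitions concrete, the brute-force Python sketch below (using networkx, exponential time, intended only for very small graphs) computes the clique transversal number, the clique independence number, and, for the signed clique transversal function of Definition 4, the corresponding minimum weight. It is an illustration of the definitions only, not an algorithm proposed in this paper.

from itertools import combinations, product
import networkx as nx

def clique_transversal_number(G):
    cliques = [frozenset(c) for c in nx.find_cliques(G)]   # maximal cliques of G
    for r in range(len(G) + 1):
        for S in map(set, combinations(G.nodes, r)):
            if all(S & c for c in cliques):                 # S meets every maximal clique
                return r

def clique_independence_number(G):
    cliques = [frozenset(c) for c in nx.find_cliques(G)]
    for r in range(len(cliques), 0, -1):
        for fam in combinations(cliques, r):
            if all(a.isdisjoint(b) for a, b in combinations(fam, 2)):
                return r
    return 0

def signed_clique_transversal_number(G):
    nodes = list(G.nodes)
    cliques = [set(c) for c in nx.find_cliques(G)]
    best = None
    for vals in product((-1, 1), repeat=len(nodes)):
        f = dict(zip(nodes, vals))
        if all(sum(f[v] for v in c) >= 1 for c in cliques):
            best = sum(vals) if best is None else min(best, sum(vals))
    return best

G = nx.cycle_graph(5)                       # maximal cliques of C5 are its five edges
print(clique_transversal_number(G))         # 3 (tau_C)
print(clique_independence_number(G))        # 2 (alpha_C)
print(signed_clique_transversal_number(G))  # 5: every edge needs both endpoints at +1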
The number τ s C (G) = min{ f (V(G)) | f is an SCTF of G} is the signed clique transversal number of G. The minus clique transversal number of G is τ − C (G) = min{ f (V(G)) | f is an MCTF of G}. The signed clique transversal problem (abbreviated as SCTP) is to find a minimum-weight SCTF for a graph. The minus clique transversal problem (abbreviated as MCTP) is to find a minimum-weight MCTF for a graph. Lee [4] introduced some variations of the k-FCTP, the {k}-CTP, the SCTP, and the MCTP, but those variations are dedicated to maximum cliques in a graph. The MCTP on chordal graphs is NP-complete, while the MCTP on block graphs is linear-time solvable [7]. The MCTP and SCTP are linear-time solvable for any strongly chordal graph G if a strong elimination ordering of G is given [5]. The SCTP is NP-complete for doubly chordal graphs [6] and planar graphs [5]. According to what we have described above, there are very few algorithmic results regarding the k-FCTP, the {k}-CTP, the SCTP, and the MCTP on graphs. This motivates us to study the complexities of the k-FCTP, the {k}-CTP, the SCTP, and the MCTP. This paper also studies the MCTP and MCIP for some graphs and investigates the relationships between different dominating functions and CTFs. Definition 5. Suppose that k ∈ N is fixed and G is a graph. A set S ⊆ V(G) is a k-tuple dominating set (abbreviated as k-TDS) of G if |S ∩ N G [x]| ≥ 1 for x ∈ V(G). The number γ ×k (G) = min{|S| | S is a k-TDS of G} is the k-tuple domination number of G. The k-tuple domination problem (abbreviated as k-TDP) is to find a minimum k-TDS for a graph. Notice that a dominating set of a graph G is a 1-TDS. The domination number γ(G) of G is γ ×1 (G). Definition 6. Suppose that k ∈ N is fixed and G is a graph. A function f is a {k}-dominating function (abbreviated as {k}-DF) of G if the domain and range of f are V(G) and {0, 1, 2, . . . , k}, respectively, and The {k}-domination problem (abbreviated as {k}-DP) is to find a minimum-weight {k}-DF for a graph. Definition 7. Suppose that G is a graph. A function f is a signed dominating function (abbreviated as SDF) of G if the domain and range of f are V(G) and {−1, 1}, respectively, and f (N G [x]) ≥ 1 for x ∈ V(G). If the domain and range of f are V(G) and {−1, 0, 1}, respectively, and f (N G [x]) ≥ 1 for x ∈ V(G), then f is a minus dominating function (abbreviated as MDF) of G. The number γ s (G) = min{ f (V(G)) | f is an SDF of G} is the signed domination number of G. The minus domination number of G is γ − (G) = min{ f (V(G)) | f is an MDF of G}. The signed domination problem (abbreviated as SDP) is to find a minimum-weight SDF for a graph. The minus domination problem (abbreviated as MDP) is to find a minimum-weight MDF for a graph. Our main contributions are as follows. 1. We prove in Section 2 that γ − (G) = τ − C (G) and γ s (G) = τ s C (G) for any sun G. We also prove that We prove in Section 3 that τ We also prove that the SCTP is a special case of the generalized clique transversal problem [8]. Therefore, the SCTP for a graph H can be solved in polynomial time if the generalized transversal problem for H is polynomial-time solvable. 3. We show in Section 4 thatγ ×k (G) = τ k C (G) and γ {k} (G) = τ {k} C (G) for any split graph G. Furthermore, we introduce H 1 -split graphs and prove that γ − (H) = τ − C (H) and γ s (H) = τ s C (H) for any H 1 -split graph H. We prove the NP-completeness of SCTP for split graphs by showing that the SDP on H 1 -split graphs is NP-complete. 4. 
We show in Section 5 that τ {k} C (G) for a doubly chordal graph G can be computed in linear time, but the k-FCTP is NP-complete for doubly chordal graphs as k > 1. Notice that the CTP is a special case of the k-FCTP and the {k}-CTP when k = 1, and thus 5. We present other NP-completeness results in Sections 6 and 7 for k-trees with unbounded k and subclasses of total graphs, line graphs, and planar graphs. These results can refine the "borderline" between P and NP for the considered problems and graphs classes or their subclasses. Suns In this section, we give equations to compute τ Let p ∈ N and G be a graph. An edge e ∈ E(G) is a chord if e connects two nonconsecutive vertices of a cycle in G. If C has a chord for every cycle C consisting of more than three vertices, G is a chordal graph. A sun G is a chordal graph whose vertices can be partitioned into W = {w i | 1 ≤ i ≤ p} and U = {u i | 1 ≤ i ≤ p} such that (1) W is an independent set, (2) the vertices u 1 , u 2 , . . . , u p of U form a cycle, and (3) every w i ∈ W is adjacent to precisely two vertices u i and u j , where j ≡ i + 1 (mod p). We use S p = (W, U, E) to denote a sun. Then, |V(S p )| = 2p. If p is odd, S p is an odd sun; otherwise, it is an even sun. Figure 1 shows two suns. Proof. It is straightforward to see that U is a minimum 2-FCTS and W ∪ U is a minimum 3-FCTS of S p . This lemma therefore holds. We define a function h : W ∪ U → {0, 1, . . . , k} by h(w i ) = 0 for every w i ∈ W, h(u i ) = k/2 for u i ∈ U with odd index i and h(u i ) = k/2 for every u i ∈ U with even index i. Clearly, a maximal clique Q of S n is either the closed neighborhood of some vertex in W or a set of at least three vertices in U. We show the weight of h is pk/2 by considering two cases as follows. Case 1: p is even. We have Case 2: p is odd. We have Following what we have discussed above, we know that h is a minimum {k}-CTF of S n and thus τ {k} C (S p ) = pk/2 . Lemma 3. For any sun S Theorem 1 (Lee and Chang [9]). Let S p be a sun. Then, Corollary 1. Let S p be a sun. Then, Proof. The corollary holds by Lemmas 1-3 and Corollary 1. Clique Perfect Graphs Let G be the set of all induced subgraphs of G. If τ C (H) = α C (H) for every H ∈ G, then G is clique perfect. In this section, we study the {k}-CTP for clique perfect graphs and the SCTP for balanced graphs. Proof. Assume that D is a minimum CTS of G. Then, |D| = τ C (G). Let x ∈ V(G) and let f be a function whose domain is V(G) and range is {0, 1, . . . , k}, and Hence, the theorem holds by Lemma 4. Corollary 2. The {k}-CTP is polynomial-time solvable for distance-hereditary graphs, balanced graphs, strongly chordal graphs, comparability graphs, and chordal graphs without odd suns. Definition 8. Suppose that R is a function whose domain is C(G) and range is {0, 1, . . . , ω(G)}. If R(C) ≤ |C| for every C ∈ C(G), then R is a clique-size restricted function (abbreviated as CSRF) of G. A set D ⊆ V(G) is an R-clique transversal set (abbreviated as R-CTS) of G if R is a CSRF of G and |D ∩ C| ≥ R(C) for every C ∈ C(G). Let τ R (G) = min{|D| | D is an R-CTS of G}. The generalized clique transversal problem (abbreviated as GCTP) is to find a minimum R-CTS for a graph G with a CSRF R. Theorem 3. The SCTP on balanced graphs can be solved in polynomial time. Proof. Suppose that a graph G has n vertices v 1 , v 2 , . . . , v n and maximal cliques C 1 , C 2 , . . . , C . Let i ∈ {1, 2, . . . , } and j ∈ {1, 2, . . . , n}. 
Let M be an × n matrix such that an element M(i, j) of M is one if the maximal clique C i contains the vertex v j , and M(i, j) = 0 otherwise. We call M the clique matrix of G. If the clique matrix M of G does not contain a square submatrix of odd order with exactly two ones per row and column, then M is a balanced matrix and G is a balanced graph. We formulae the GCTP on a balanced graph G with a CSRF R as the following integer programming problem: , . . . , R(C )) is a column vector and X = (x 1 , x 2 , . . . , x n ) is a column vector such that x i is either 0 or 1. Since the matrix M is balanced, an optimal 0-1 solution of the integer programming problem above can be found in polynomial time by the results in [15]. By Lemma 5, we know that the SCTP on balanced graphs can be solved in polynomial time. Split Graphs Let G be such a graph that V(G) = I ∪ C and I ∩ C = ∅. If I is an independent set and C is a clique, G is a split graph. Then, every maximal of G is either C itself, or the closed neighborhood N G [x] of a vertex x ∈ I. We use G = (I, C, E) to represent a split graph. The {k}-CTP, the k-FCTP, the SCTP, and the MCTP for split graphs are considered in this section. We also consider the {k}-DP, the k-TDP, the SDP, and the MDP for split graphs. For split graphs, the {k}-DP, the k-TDP, and the MDP are NP-complete [16][17][18], but the complexity of the SDP is still unknown. In the following, we examine the relationships between the {k}-CTP and the {k}-DP, the k-FCTP and the k-TDP, the SCTP and the SDP, and the MCTP and the MDP. Then, by the relationships, we prove the NP-completeness of the SDP, the {k}-CTP, the k-FCTP, the SCTP, and the MCTP for split graphs. We first consider the {k}-CTP and the k-FCTP and show in Theorems 4 and 5 that τ k C (G) = γ ×k (G) and τ {k} C (G) = γ {k} (G) for any split graph G. Chordal graphs form a superclass of split graphs [19]. The cardinality of C(G) is at most n for any chordal graph G [20]. The following lemma therefore holds trivially. Proof. Let S be a minimum k-FCTS of G. Consider a vertex y ∈ I. By the structure of G, N G [y] is a maximal clique of G. Then, |S ∩ N G [y]| ≥ k. We now consider a vertex x ∈ C. If C ∈ C(G), then there exists a vertex y ∈ I such that N G Hence, S is a k-TDS of G. We have γ ×k (G) ≤ τ k C (G). Let D be a minimum k-TDS of G. Recall that the closed neighborhood of every vertex in I is a maximal clique. Then, D contains at least k vertices in the maximal clique N G [y] for every vertex y ∈ I. If C ∈ C(G), D is clearly a k-FCTS of G. Suppose that C ∈ C(G). We consider three cases as follows. Case 1: y ∈ I \ D. Then, |D ∩ C| ≥ |D ∩ N G (y)| ≥ k. The set D is a k-FCTS of G. By the discussion of the three cases, we have τ k C (G) ≤ γ ×k (G). Hence, we obtain that γ ×k (G) ≤ τ k C (G) and τ k C (G) ≤ γ ×k (G). The theorem holds for split graphs. Proof. We can verify by contradiction that G has a minimum-weight {k}-CTF f and a minimum-weight {k}-DF g of G such that f (y) = 0 and g(y) = 0 for every y ∈ I. By the structure of G, N G [y] ∈ C(G) for every y ∈ I. Then, f (N G [y]) ≥ k and g(N G [y]) ≥ k. Since f (y) = 0 and g(y) = 0, f (N G (y)) ≥ k and g(N G (y)) ≥ k. For every y ∈ I, N G (y) ⊆ C and f (C) ≥ f (N G (y)) ≥ k. For every x ∈ C, f (N G [x]) ≥ f (C) ≥ k. Therefore, the function f is also a {k}-DF of G. We have γ {k} (G) ≤ τ {k} C (G). We now consider g(C) for the clique C. If C ∈ C(G), the function g is clearly a {k}-CTF of G. Suppose that C ∈ C(G). Notice that g is a {k}-DF and g(y) = 0 for every y ∈ I. 
Then, g(C) = g(N G [x]) ≥ k for any vertex x ∈ C. Therefore, g is also a {k}-CTF of G. Proof. The corollary holds by Theorems 4 and 5 and the NP-completeness of the {k}-DP and the k-TDP for split graphs [16,18]. A graph G is a complete if C(G) = {V(G)}. Let G be a complete graph and let x ∈ V(G). The vertex set V(G) is the union of the sets {x} and V(G) \ {x}. Clearly, {x} is an independent set and V(G) \ {x} is a clique of G. Therefore, complete graphs are split graphs. It is easy to verify the Lemma 7. Lemma 7. If G is a complete graph and k ∈ N, then (1) τ k C (G) = γ ×k (G) = k for k ≤ n; For split graphs, however, the signed and minus domination numbers are not necessarily equal to the signed and minus clique transversal numbers, respectively. Figure 2 shows a split graph G with τ s C (G) = τ − C (G) = −3. However, γ s (G) = γ − (G) = 1. We therefore introduce H 1 -split graphs and show in Theorem 6 that their signed and minus domination numbers are equal to the signed and minus clique transversal numbers, respectively. H 1 -split graphs are motivated by the graphs in [17] for proving the NP-completeness of the MDP on split graphs. Figure 3 shows an H 1 -split graph. Definition 9. Suppose that G = (I, C, E) is a split graph with 3p + 3 + 2 vertices. Let U, S, X, and Y be pairwise disjoint subsets of V(G) such that The graph G is an H 1 -split graph if V(G) = U ∪ S ∪ X ∪ Y and G entirely satisfies the following three conditions. (1) I = S ∪ Y and C = U ∪ X. Proof. We first prove τ s C (G) = γ s (G). Let G = (I, C, E) be an H 1 -split graph. As stated in Definition 9, I can be partitioned into S = {s i | 1 ≤ i ≤ } and Y = {y i | 1 ≤ i ≤ p + + 1}, and C can be partitioned into U = {u i | 1 ≤ i ≤ p} and X = {x i | 1 ≤ i ≤ p + + 1}. Assume that f is a minimum-weight SDF of G. For each y i ∈ Y, |N G [y i ]| = 2 and y i is adjacent to only the vertex x i ∈ X. Then, f (x i ) = f (y i ) = 1 for 1 ≤ i ≤ p + + 1. Since C = U ∪ X and |U| = p, we know that f (C) = f (U) + f (X) ≥ (−p) + (p + + 1) ≥ + 1. Notice that f (N G [y]) ≥ 1 and N G [y] ∈ C(G) for every y ∈ I. Therefore, f is also an SCTF of G. We have τ s C (G) ≤ γ s (G). Assume that h is a minimum-weight SCTF of G. For each y i ∈ Y, |N G [y i ]| = 2 and y i is adjacent to only the vertex x i ∈ X. Then, h(x i ) = h(y i ) = 1 for 1 ≤ i ≤ p + + 1. Consider the vertices in I. Since N G [y] ∈ C(G) for every y ∈ I, h(N G [y]) ≥ 1. We now consider the vertices in C. Recall that C = U ∪ X. Let u i ∈ U. Since |U| = p and |S| = , we know that . Following what we have discussed above, we have τ s C (G) = γ s (G). The proof for τ − C (G) = γ − (G) is analogous to that for τ s C (G) = γ s (G). Hence, the theorem holds for any H 1 -split graphs. Theorem 7. The SDP on H 1 -split graphs is NP-complete. We construct an H 1 -split graph G = (I, C, E) by the following steps. (1) Let I = S ∪ Y be an independent set and let C = U ∪ X be a clique. (2) For each vertex s i ∈ S, a vertex u ∈ U is connected to s i if u ∈ C i . (3) For 1 ≤ i ≤ p + + 1, the vertex y i is connected to the vertex x i . (4) For 1 ≤ i ≤ , the vertex s i is connected to the vertex x i . Let τ h (3, 2) be the minimum cardinality of a (3,2)-hitting set for the instance (U, C). Assume that U is a minimum (3,2)-hitting set for the instance (U, C). Then, |U | = τ h (3, 2). Let f be a function whose domain is V(G) and range is {−1, 1}, and Assume that f is minimum-weight SDF of G. For each y i ∈ Y, |N G [y i ]| = 2 and y i is adjacent to only the vertex x i ∈ X. 
Then, f (x i ) = f (y i ) = 1 for 1 ≤ i ≤ p + + 1. For any ). It contradicts the assumption that the weight of f is minimum. Therefore, there exists a minimum-weight There are at least two vertices in C i with the function value 1. Then, the set U = {u ∈ U | h(u) = 1} is a (3,2)-hitting set for the instance (U, C). We have Following what we have discussed above, we know that γ s (G) = p + + 2τ h (3, 2) + 2. Hence, the SDP on H 1 -split graphs is NP-complete. Proof. The corollary holds by Theorems 6 and 7 and the NP-completeness of the MDP on split graphs [17]. Doubly Chordal and Dually Chordal Graphs Assume that G is a graph with n vertices x 1 , x 2 , . . . , x n . Let i ∈ {1, 2, . . . , n} and let , then the ordering (x 1 , x 2 , . . . , x n ) is a maximum neighborhood ordering (abbreviated as MNO) of G. A graph G is dually chordal [21] if and only if G has an MNO. It takes linear time to compute an MNO for any dually chordal graph [22]. A graph G is a doubly chordal graph if G is both chordal and dually chordal [23]. Lemma 8 shows that a dually chordal graph is not necessarily a chordal graph or a clique perfect graph. Notice that the number of maximal cliques in a chordal graph is at most n [20], but the number of maximal cliques in a dually chordal graph can be exponential [24]. Lemma 8. For any dually graph G, τ C (G) = α C (G), but G is not necessarily clique perfect or chordal. Proof. Brandstädt et al. [25] showed that the CTP is a particular case of the clique rdomination problem and the CIP is a particular case of the clique r-packing problem. They also showed that the minimum cardinality of a clique r-dominating set of a dually chordal graph G is equal to the maximum cardinality of a clique r-packing set of G. Therefore, Assume that H is a graph obtained by connecting every vertex of a cycle C 4 of four vertices x 1 , x 2 , x 3 , x 4 to a vertex x 5 . Clearly, the ordering (x 1 , x 2 , x 3 , x 4 , x 5 ) is an MNO and thus H is a dually chordal graph. The cycle C 4 is an induced subgraph of H and does not have a chord. Moreover, τ C (H) = α C (H) = 1, but τ C (C 4 ) = 2 and α C (C 4 ) = 1. Hence, a dually chordal graph is not necessarily clique perfect or chordal. Assume that S is a minimum (k − 1)-FCTS of G. By the construction of H, each maximal clique of H contains the vertex x. Therefore, S ∪ {x} is a k-FCTS of H. Then τ k C (H) ≤ τ k−1 C (G) + 1. By contradiction, we can verify that there exists a minimum k-FCTS D of H such that Following what we have discussed above, we have τ k C (H) = τ k−1 C (G) + 1. Notice that τ C (G) = τ 1 C (G) and the CTP on chordal graphs is NP-complete [14]. Hence, the k-FCTP on doubly chordal graphs is NP-complete for doubly chordal graphs. Proof. The clique r-dominating problem on doubly chordal graphs can be solved in linear time [25]. The CTP is a particular case of the clique r-domination problem. Therefore, the CTP on doubly chordal graphs can also be solved in linear time. By Lemmas 4 and 8, the theorem holds. ] is a clique for 1 ≤ i ≤ n, then the ordering (x 1 , x 2 , . . . , x n ) is a perfect elimination ordering (abbreviated as PEO) of G. A graph G is chordal if and only if G has a PEO [26]. A chordal graph G is a k-tree if and only if either G is a complete graph of k vertices or G has more than k vertices and there exists a PEO (x 1 , x 2 , . . . , x n ) such that N i [x i ] is a clique of k vertices if i = n − k + 1; otherwise, N i [x i ] is a clique of k + 1 vertices for 1 ≤ i ≤ n − k. 
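Since perfect elimination orderings play a central role in what follows, the sketch below illustrates maximum cardinality search, whose reversed visit order is a PEO exactly when the graph is chordal, together with the direct PEO check suggested by the definition above. This is an illustration only, not part of the results of this paper.

from itertools import combinations
import networkx as nx

def mcs_ordering(G):
    """Maximum cardinality search; the reversed visit order is a candidate PEO."""
    weights = {v: 0 for v in G}
    visit = []
    while weights:
        v = max(weights, key=weights.get)   # unvisited vertex with most visited neighbours
        visit.append(v)
        del weights[v]
        for u in G[v]:
            if u in weights:
                weights[u] += 1
    return list(reversed(visit))

def is_peo(G, order):
    """Check that the later neighbours of every vertex form a clique."""
    pos = {v: i for i, v in enumerate(order)}
    for i, v in enumerate(order):
        later = [u for u in G[v] if pos[u] > i]
        if not all(G.has_edge(a, b) for a, b in combinations(later, 2)):
            return False
    return True

G = nx.Graph([("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")])  # chordal
order = mcs_ordering(G)
print(order, is_peo(G, order), nx.is_chordal(G))   # e.g. ['d', 'c', 'b', 'a'] True True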
Figure 4 shows a 2-tree with the PEO (v 1 , v 2 , . . . , v 13 ). In [3], Chang et al. showed that the MCTP is NP-complete for k-trees with unbounded k by proving γ(G) = τ M (G) for any k-tree G. However, Figure 4 shows a counterexample that disproves γ(G) = τ M (G) for any k-tree G. The graph H in Figure 4 is a 2-tree with the perfect elimination ordering (v 1 , v 2 , . . . , v 13 ). The set {v 5 , v 10 } is the minimum dominating set of H and the set {v 5 , v 10 , v 11 } is a minimum MCTS of H. A modified NP-completeness proof is therefore desired for the MCTP on k-tree with unbounded k. Proof. The CTP and the CIP are NP-complete for k-trees with unbounded k [8]. Since every maximal clique in a k-tree is also a maximum clique [27], an MCTS is a CTS and an MCIS is a CIS. Hence, the MCTP and the MCIP are NP-complete for k-trees with unbounded k. Theorem 11. The SCTP is NP-complete for k-trees with unbounded k. Let Q be a clique with k 1 + 1 vertices. Let H be a graph such that V(H) = V(G) ∪ Q and E(H) = E(G) ∪ {(x, y) | x, y ∈ Q} ∪ {(x, y) | x ∈ Q, y ∈ V(G)}. Let X i = C i ∪ Q be a clique for 1 ≤ i ≤ . Clearly, C(H) = {X i | 1 ≤ i ≤ }. Let k 2 = 2k 1 + 1. Then, H is a k 2 -tree and |X i | = k 2 + 1 = 2k 1 + 2 for 1 ≤ i ≤ . Clearly, we can verify that there exists a minimum-weight SCTF h of H of such that h(x) = 1 for every x ∈ Q. Then, C i = X i \ Q contains at least one vertex x with h(x) = 1 for 1 ≤ i ≤ . Let S = {x | x ∈ V(H) \ Q and h(x) = 1}. Then, S is a CTS of G. Since τ s C (H) = |Q| + 2|S| − |V(G)|, we have |Q| + 2τ C (G) − |V(G)| ≤ τ s C (H). Assume that D is a minimum CTS of G. Let f be a function of H whose domain is V(H) and range is {−1, 1}, and (1) f (x) = 1 for every x ∈ Q, (2) f (x) = 1 for every x ∈ D, and (3) f (x) = −1 for every x ∈ V(G) \ D. Each maximal clique of H has at least k 1 + 2 vertices with the function value 1. Therefore, f is an SCTF. We have τ s C (H) ≤ |Q| + 2τ C (G) − |V(G)|. Following what we have discussed above, we know that τ s C (H) = |Q| + 2τ C (G) − |V(G)|. The theorem therefore holds by the NP-completeness of the CTP for k-trees [8]. Theorem 12. Suppose that κ ∈ N the κ-FCTP is NP-complete on k-trees with unbounded k. Proof. Assume that k 1 ∈ N and G is a k 1 -tree with |V(G)| > k 1 . Let H be a graph such that V(H) = V(G) ∪ {x} and E(H) = E(G) ∪ {(x, y) | y ∈ V(G)}. Clearly, H is a (k 1 + 1)tree and we can construct H in linear time. Following the argument analogous to the proof of Theorem 8, we have τ κ C (H) = τ κ−1 C (G) + 1. The theorem therefore holds by the NP-completeness of the CTP for k-trees [8]. Theorem 13. The SCTP and κ-FCTP problems can be solved in linear-time for k-trees with fixed k. Proof. Assume that κ ∈ N and G is a graph. The κ-FCTP is the GCTP with the CSRF R whose domain is C(G) and range is {κ}. By Lemma 5, τ s C (G) can be obtained from the solution to the GCTP on a graph G with a particular CSRF R. Since the GCTP is linear-time solvable for k-trees with fixed k [8], the SCTP and κ-FCTP are also linear-time solvable for k-trees with fixed k. Planar, Total, and Line Graphs In a graph, a vertex x and an edge e are incident to each other if e connects x to another vertex. Two edges are adjacent if they share a vertex in common. Let G and H be graphs such that each vertex x ∈ V(H) corresponds to an edge e x ∈ E(G) and two vertices x, y ∈ V(H) are adjacent in H if and only if their corresponding edges e x and e y are adjacent in G. Then, H is the line graph of G and denoted by L(G). 
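A quick empirical check of Lemma 9(1) on a small triangle-free graph can be written with networkx; edge names are normalised to unordered pairs so that the maximal cliques of L(G) can be compared with the edge stars of G. Note that the star at a degree-one vertex need not be a maximal clique of L(G), so the set equality printed last holds for this particular example rather than in general.

import networkx as nx

G = nx.complete_bipartite_graph(2, 3)    # triangle-free, minimum degree two
L = nx.line_graph(G)                     # vertices of L(G) are the edges of G

cliques_of_L = {frozenset(frozenset(e) for e in c) for c in nx.find_cliques(L)}
stars = {frozenset(frozenset((u, w)) for w in G[u]) for u in G}

print(cliques_of_L <= stars)   # True: every maximal clique of L(G) is an edge star, as in Lemma 9(1)
print(cliques_of_L == stars)   # True for this example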
Let H be a graph such that V(H ) = V(G) ∪ E(G) and two vertices x, y ∈ V(H ) are adjacent in H if and only if x and y are adjacent or incident to each other in G. Then, H is the total graph of G and denoted by T(G). Lemma 9 ([28] ). The following statements hold for any triangle-free graph G. (1) Every maximal clique of L(G) is the set of edges of G incident to some vertex of G. (2) Two maximal cliques in L(G) intersect if and only if their corresponding vertices (in G) are adjacent in G. Theorem 14. The MCIP is NP-complete for any 4-regular planar graph G with the clique number 3.
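Returning to the formulation used in the proof of Theorem 3, the generalized clique transversal problem can be written as the 0-1 program min 1'X subject to MX ≥ R, where M is the clique matrix of G. The sketch below sets this up with scipy; it also shows how an SCTP instance can be fed into it by taking R(C) = ⌈(|C| + 1)/2⌉ and assigning +1 to the selected vertices and −1 to the rest, which yields a signed clique transversal function. This is an illustration of the formulation assumed from the surrounding text, not code from the paper.

import numpy as np
import networkx as nx
from math import ceil
from scipy.optimize import milp, LinearConstraint, Bounds

def min_R_clique_transversal(G, R):
    nodes = list(G.nodes)
    cliques = [set(c) for c in nx.find_cliques(G)]
    # Clique matrix M: rows indexed by maximal cliques, columns by vertices.
    M = np.array([[1 if v in c else 0 for v in nodes] for c in cliques])
    r = np.array([R(c) for c in cliques], dtype=float)
    res = milp(
        c=np.ones(len(nodes)),                        # minimise |D|
        constraints=LinearConstraint(M, lb=r, ub=np.inf),
        integrality=np.ones(len(nodes)),              # all variables binary
        bounds=Bounds(0, 1),
    )
    return {v for v, x in zip(nodes, res.x) if x > 0.5}

G = nx.cycle_graph(6)   # triangle-free and bipartite, so its clique matrix is balanced
D = min_R_clique_transversal(G, R=lambda c: ceil((len(c) + 1) / 2))
print(len(D), 2 * len(D) - G.number_of_nodes())   # |D| and the weight of the resulting signed CTF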
7,940.8
2021-01-13T00:00:00.000
[ "Mathematics" ]
Population Genomics of the Blue Shark, Prionace glauca, Reveals Different Populations in the Mediterranean Sea and the Northeast Atlantic ABSTRACT Populations of marine top predators have been sharply declining during the past decades, and one‐third of chondrichthyans are currently threatened with extinction. Sustainable management measures and conservation plans of large pelagic sharks require knowledge on population genetic differentiation and demographic connectivity. Here, we present the case of the Mediterranean blue shark (Prionace glauca, L. 1758), commonly found as bycatch in longline fisheries and classified by the IUCN as critically endangered. The management of this species suffers from a scarcity of data about population structure and connectivity within the Mediterranean Sea and between this basin and the adjacent Northeast Atlantic. Here, we assessed the genetic diversity and spatial structure of blue shark from different areas of the Mediterranean Sea and the Northeast Atlantic through genome scan analyses. Pairwise genetic differentiation estimates (F ST) on 203 specimens genotyped at 14,713 ddRAD‐derived SNPs revealed subtle, yet significant, genetic differences within the Mediterranean sampling locations, and between the Mediterranean Sea and the Northeast Atlantic Ocean. Genetic differentiation suggests some degree of demographic independence between the Western and Eastern Mediterranean blue shark populations. Furthermore, results show limited genetic connectivity between the Mediterranean and the Atlantic basins, supporting the hypothesis of two distinct populations of blue shark separated by the Strait of Gibraltar. Although reproductive interactions may be limited, the faint genetic signal of differentiation suggests a recent common history between these units. Therefore, Mediterranean blue sharks may function akin to a metapopulation relying upon local demographic processes and connectivity dynamics, whereby the limited contemporary gene flow replenishment from the Atlantic may interplay with currently poorly regulated commercial catches and large‐scale ecosystem changes. Altogether, these results emphasise the need for revising management delineations applied to these critically endangered sharks. . This is far from easy, as intermediate scenarios between panmixia and absence of gene flow (Waples and Gaggiotti 2006) are particularly difficult to discern in large pelagic species (Bailleul et al. 2018;Puncher et al. 2018;Rodríguez-Ezpeleta et al. 2019).Mismatches between biologically independent entities and management units are also common, with negative effects on the conservation of population complexes managed as a single entity (Reiss et al. 2009).The identification of these boundaries is essential for an accurate estimation of individual stock delineation. The Mediterranean Sea harbours a high percentage of threatened sharks and rays, with more than half of the species being threatened with extinction (Walls and Dulvy 2021).Overfishing, including bycatch (non-target species caught incidentally), is the main cause of the decline of shark populations (Pacoureau et al. 2021;Dulvy et al. 2014Dulvy et al. , 2021)), and as several sharks and rays are top predators, their demographic decline is expected to affect the functioning of marine ecosystems (Estes et al. 2011;Myers et al. 2007). The blue shark Prionace glauca, L. 
1758 is no exception.Besides being targeted by commercial fishing, this viviparous K-selected species with an average generation time of 9.8 years in the North Atlantic (Cortés et al. 2015;Nakano and Stevens 2008) is a major bycatch of longline and driftnet fisheries (Parra et al. 2023;Megalofonou, Damalas, and Yannopoulos 2005). As a result of the impact of fishing on blue shark global populations (Fowler et al. 2005), the species has been classified as globally 'Near Threatened' on the IUCN Red List (Rigby et al. 2019).More importantly, blue shark is classified as 'Critically Endangered' in the Mediterranean Sea (Sims et al. 2016), where high fishing pressure is associated with a dramatic decrease in estimated abundance over the last decades (Ferretti et al. 2008).Yet, the population genetic structure, the spatial dynamics and the level of connectivity of the Mediterranean blue shark with the Atlantic are still poorly understood, despite the importance of this information for the correct management of the species in the region. Previously published tagging studies and the analysis of fisheries-dependent data (Kohler, Casey, and Turner 1998;Kohler et al. 2002;Kohler and Turner 2008;Ferretti et al. 2008;Megalofonou et al. 2009) suggest that the vast majority of blue sharks tagged in the Mediterranean Sea were immature and remained in the tagging area, with no migration movements towards the adjacent southern areas of the Northeast Atlantic.The only exception was one subadult female that moved a short distance to reach the adjacent Northeast Atlantic area (Kohler et al. 2002). These tag-recapture surveys, carried out from 1962 to 2000, suggest that North Atlantic blue sharks form a single stock, separate from the Mediterranean Sea stock, and that migratory movements within the Atlantic basin are quite frequent (Kohler et al. 2002). The analysis of two mitochondrial markers highlighted an apparent lack of geographical differentiation between the Mediterranean and the Northeast Atlantic on the basis of haplotype networks (Leone et al. 2017).However, the use of Ф ST integrating haplotype divergence detected significant genetic structure among four geographical groups, suggesting that the analysis of spatial genetic structure in relation to sex ratio and size could indicate some level of sex/age-biased migratory behaviour (Leone et al. 2017). On the contrary, distribution and behavioural data suggest widespread panmixia, and the first genetic data using microsatellites confirmed this hypothesis (Veríssimo et al. 2017;Vandeperre et al. 2014). Genetic studies have been carried out on Atlantic and Pacific blue shark populations using microsatellites, suggesting restricted gene flow between oceans (Ussami et al. 2011;Fitzpatrick et al. 2010;Veríssimo et al. 2017).However, the analysis of juvenile specimens (<2 year) from Atlantic Ocean nurseries (Western Iberia, Azores and South Africa) using both mitochondrial and microsatellite markers reported a lack of genetic differentiation, suggesting the presence of a panmictic population in the whole Atlantic Ocean (Veríssimo et al. 2017). Similar results were reported by Bailleul et al. 
(2018), with microsatellite data supporting the occurrence of a single panmictic worldwide blue shark population, except for hints of faint genetic differentiation of Mediterranean populations compared with Pacific populations.As the level of exchange required to maintain genetic homogeneity is much lower than that required to maintain demographic interdependency, particularly for large populations (Waples and Gaggiotti 2006), Bailleul et al. (2018) performed simulations suggesting that the apparent panmixia in blue shark could be explained by a genetic lag-time effect.In other terms, demographic changes are not likely detectable using standard genetic analysis before a long transitional period of time (coined the 'population grey zone effect').More recent worldwide scale population genomic studies detected a subtle but significant level of differentiation between the Mediterranean and the North Atlantic (F ST comprised between 0.0007 and 0.0010; Nikolic et al. 2023).These results, including a handful of Mediterranean specimens, confirmed the hypothesis previously made by Bailleul et al. (2018) that a more granular genome-representation approach would allow exiting the 'grey zone of population differentiation' and reveal genetic differentiation if present.Nevertheless, these recent studies only included a limited sample of Mediterranean origin, particularly in the Eastern part, which precludes a thorough understanding of microevolutionary dynamics in the basin.The International Commission for the Conservation of Atlantic Tunas (ICCAT), which assesses the blue shark stocks, manages the species as separate stocks in the Atlantic Ocean and Mediterranean Sea, solely based on the results of previous tagging studies with a limited number of sharks tagged in the Atlantic and recaptured in the Mediterranean Sea (ICCAT 2009;Fitzmaurice et al. 2005).However, the need for more data to better delineate stock boundaries has been stressed (ICCAT 2023). As the blue shark population structure within the Mediterranean Sea remains largely unknown, this work aimed to fill this knowledge gap, while also shedding further light on the connectivity between the Atlantic and Mediterranean-using a large set of genome-wide SNPs-with the aim to contribute to the improved management and conservation of this species, and further expanding our understanding of how marine populations are formed and maintained. | Sampling A total of 291 individuals were sampled in four areas, mostly as bycatch from commercial fisheries (Figure 1; Appendix S4): the Mediterranean (East Mediterranean, EMED: n = 111; West Mediterranean, WMED: n = 116), adjacent Northeast Atlantic areas from Gibraltar to Azores (Northeast Atlantic, EATL: n = 34) and from Southern Ireland and Great Britain (Celtic Sea, CELT: n = 30). Muscle or skin tissue samples (ca 0.1-0.2g) were collected using sterile scissors or tweezers and stored in 96% ethanol at −20°C.Specimens biological data as fork Length (in cm) and sex (female/male) as well as sampling data such as fishing date, geographical coordinates (longitude/latitude) and depth (in m) were collected whenever possible (Appendix S4). | Genomic Libraries Preparation and Sequencing Genomic DNA (gDNA) was extracted using a modified salting-out extraction protocol (Cruz et al. 2017).A modified ddRAD sequencing protocol was used to simultaneously genotype individuals at thousands of SNPs (Peterson et al. 2012;Brown et al. 2016). 
Three ddRAD libraries were constructed, including individuals from different geographical areas distributed across three different libraries (Table S2) to avoid library bias.Briefly, for each individual, a standard quantity of 30 ng of gDNA was digested with Sbf I and SphI (0.43 U of each, New England Biolabs).P1 and P2 barcoded adapters, compatible with the Sbf I and SphI overhangs respectively, were mixed with T4 ligase and added to each sample.After enzyme heat inactivation, individual samples were pooled and cleaned up with MinElute PCR Clean Up Kit (Qiagen, Venlo, Netherlands). Each library was run on an agarose gel (1.1%), to select fragments of 200-300 bp.Size-selected DNA was then extracted from the gel.The eluted library was PCR amplified with generic P1 and P2 complementary primers after optimising the PCR conditions.The amplified library was purified using AMPure XP Magnetic Beads (Beckman Coulter, Pasadena, California, USA).Two reference individuals were included as replicates in each library to assess the sequencing/genotyping error rate.The obtained ddRAD libraries were paired-end (PE) sequenced in three lanes using an Illumina HiSeq 4000.Demultiplexed reads are available on the NCBI Short Read Archive BioProject PRJNA1053301. | Bioinformatic Analysis and Loci Filtering Raw sequencing data were checked for quality using FASTQC (version 0.11.8, Andrews 2010).Reads were demultiplexed using the program 'process_radtags' implemented by STACKS v. 1.42 (Catchen et al. 2011(Catchen et al. , 2013) ) avoiding -c and -q parameters, as suggested by the dDocent pipeline manual.The dDocent pipeline (www.ddoce nt.com; Puritz et al., 2014aPuritz et al., , 2014b) ) was then used for reference construction, mapping reads and SNP calling.The pipeline dDocent has been specifically designed to analyse ddRADseq data of marine species, which are often characterised by high diversity and low differentiation (Puritz, Gold, and Portnoy 2016;Hollenbeck et al. 2017). Characterising genotype data without the help of a reference genome presents several challenges, such as the pipeline trade-off between splitting or lumping alleles into different clusters or a single locus, inflating homozygosity and heterozygosity, respectively.Similar issues have been addressed at the clustering step level using a high sequence similarity, from which a consensus sequence is derived.Additionally, haplotyping informative variants identified by dDocent using the rad_haplotyper.plscript by Willis et al. (2017) allowed for resolving any artificial clustering due to physical linkage between SNPs within locus typical at low levels of divergence among populations (Figures S1-S4).The haplotyping post-clustering step mitigated also the effect of high levels of repeats and duplications expected in shark genomes. Detailed assembly, SNP calling and filtering steps are described in the Appendices S1 and S2.Genomic data were then converted to the appropriate file format for subsequent population genetic analysis with PDGSpider (Lischer and Excoffier 2012).The final SNPs dataset was screened for outlier loci with three different approaches: the software Bayescan v. 
2.1 (Foll and Gaggiotti 2008), the packages pcadapt (Luu, Bazin, and Blum 2017) and OutFLANK (Whitlock and Lotterhos 2015), implemented in the R environment version 4.0.5 (R Core Team 2021).See Appendix S3 for a detailed explanation of each of the three genome scan methods for outlier detection.All the resulting outlier loci were annotated for specific functions by matching the SNP flanking regions against the GenBank database (www.ncbi.nlm.nih.gov/ genba nk/ ) using BLAST (Altschul et al. 1990), and then removed from the dataset, producing a neutral loci dataset used for downstream analysis. | Population Genetics Analysis Basic statistics of genetic diversity, heterozygosity, homozygosity and Hardy-Weinberg test were computed using the diveRsity R package (Keenan et al. 2013). Genetic differentiation and population structure were inferred using three distinct families of approaches.First, pairwise F ST and relative p-values, following the Weir and Cockerham model (1984), were computed using the StAMPP R package (Pembleton, Cogan, and Forster 2013).Second, principal components analysis (PCA) and discriminant analysis of principal components (DAPC) were performed using the R package adegenet (Jombart 2008;Jombart, Devillard, and Balloux 2010;Jombart and Ahmed 2011) and plotted using the ggplot2 package (Wickham 2016).Third, the genetic ancestry of each individual was estimated using the admixture model as implemented in the Bayesian clustering approach in STRUCTURE version 2.3.4 (Pritchard, Stephens, and Donnelly 2000).Results were obtained for K values (i.e., number of distinct genetic clusters) set from 1 to 5, and from 300,000 iterations following a burn-in period of 100,000 iterations.The output from each K value (K from 1 to 5) was examined with (Jakobsson and Rosenberg 2007) to identify common modes, and results were plotted using DISTRUCT (Rosenberg 2004).The value of K that best fits the data was identified according to the Evanno method (Evanno, Regnaut, and Goudet 2005), as implemented in StructureHarvester (Earl and vonHoldt 2012), and according to Puechmaille (2016). A Mantel test was used to test for isolation by distance per population.Four geographical points were chosen to be representative of the Celtic Sea, Northeast Atlantic, Western Mediterranean and Eastern Mediterranean (see Appendix S1 for details), and the minimum distance possible by seaway (Figure S5) was estimated using the R package marmap (Pante and Simon-Bouhet 2013). | Results Among the 291 specimens initially collected (Appendix S4), the sex of 263 individuals (118 males and 145 females) was determined, while for 28 individuals, no information was gathered, and after selecting for DNA extractions that met the quality standard for RAD sequencing, libraries were built and sequenced for a total of 212 blue sharks, plus four replicates (n = 216).This led us to discard 79 samples with poor preservation state that did not permit to obtain the high-quality DNA extraction required by the protocol.Steps with sample selection and discard due to quality post-sequencing are detailed in the Appendix S1. Of these 14,729 SNPs, no SNPs were identified as outliers by BayeScan, 1 by OutFLANK and 15 SNPs by pcadapt, representing in total 0.11% of the retained SNPs.After removing these outliers, a final dataset in vcf format of 14,713 SNPs was created.Annotation of each of the outlier SNPs is available in Table S4. 
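As an illustration of the multivariate step described above (which was carried out with the R package adegenet), the same idea can be sketched in Python on a centred individuals-by-SNPs genotype matrix. The population labels, allele frequencies and genotypes below are simulated placeholders and do not represent the blue shark data.

import numpy as np

rng = np.random.default_rng(1)
n_ind, n_snp = 60, 500
pops = np.repeat(["WMED", "EMED", "EATL"], n_ind // 3)

# Simulate weakly differentiated allele frequencies per sampling area.
base = rng.uniform(0.05, 0.95, n_snp)
freqs = {p: np.clip(base + rng.normal(0, 0.02, n_snp), 0.01, 0.99) for p in set(pops)}
geno = np.array([rng.binomial(2, freqs[p]) for p in pops], dtype=float)  # 0/1/2 coding

# Centre each SNP and take the leading principal components via SVD.
geno -= geno.mean(axis=0)
u, s, vt = np.linalg.svd(geno, full_matrices=False)
pcs = u[:, :2] * s[:2]                               # individual coordinates on PC1, PC2
explained = (s**2 / np.sum(s**2))[:2]
print("variance explained by PC1, PC2:", np.round(explained, 3))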
Overall, the allelic richness observed was higher in the Mediterranean Sea than the Atlantic Ocean (Table 1).Heterozygosity values were similar among localities (Table 1).Significant Hardy-Weinberg disequilibrium and heterozygote deficiency were observed in the Western Mediterranean sample.The highest value of heterozygosity was observed in the Eastern Mediterranean samples (0.159), whereas the lowest value was observed in the two Atlantic samples (0.151) (Table 1). Pairwise F ST values were low but significant for most comparisons after false discovery rate correction for multiple tests (Benjamini and Yekutieli 2001;Benjamini and Hochberg 1995; Table 2), with the exception of the comparisons between the Celtic Sea and either the Northeast Atlantic or the Western Mediterranean. Overall, multivariate PCA and DAPC analyses did not show any clear pattern of genetic structure among areas, despite a few Eastern Mediterranean individuals being genetically different from the rest (Figures 2 and 3).Similarly, the STRUCTURE Bayesian clustering, using the best K values according to the Puechmaille and Evanno methods (K = 2 and 4, respectively), showed no clear geographic clustering when K = 2, yet highlighted a few well-differentiated individuals from the Eastern Mediterranean (Figure 4), in agreement with the PCA (Figure 2), while the DAPC more closely reflected the results observed with the pairwise F ST analysis (Figure 3, Table 2). A significant correlation between geographical and genetic distance, expressed as pairwise F ST , was detected through the Mantel test performed on the four geographical regions (Mantel statistic r = 0.7790, y = −0.00031+ 4e-07x, R 2 = 0.61, p = 0.0417, Figure S6). | Discussion Our study reveals the existence of subtle yet significant genetic differentiation between the Mediterranean and the Northeast Atlantic blue shark populations, confirming the Mediterranean singularity recently reported by Nikolic et al. (2023). Our findings also suggest some substructure within the Mediterranean.These results contrast with previous studies based on low-density genotyping, where no departure from largescale panmixia was detected in the entire Northeast Atlantic and Mediterranean areas (Bailleul et al. 2018;Veríssimo et al. 2017). Furthermore, the use of larger sample sizes, including both adult and juvenile specimens, and denser sampling in the Mediterranean allowed the present study to highlight a faint but significant genetic differentiation between Western and Eastern Mediterranean groups (Table 2, Figures 2-4). These results support the phylogeographic signal previously suggested based on mitochondrial DNA (Leone et al. 2017). The limited heterozygote deficiency and F IS values in our study are comparable to results obtained by Bailleul et al. (2018) using microsatellites.When comparing our findings to those obtained with SNPs by Nikolic et al. (2023), we observed lower values of F IS in both Northeast Atlantic and Mediterranean areas (Table 1). The genetic diversity of subsampled groups (Figure S7) confirms the patterns observed in Table 1 (Table S5), and pairwise F ST values among subgroups confirm the significant differentiation of the Eastern Mediterranean blue sharks.Some comparisons, however, show nonsignificant values after correction for multiple test, possibly due to the limited sample size and associated statistical power of split groups. 
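The Mantel test reported above correlates the off-diagonal entries of a geographic and a genetic distance matrix and assesses significance by permuting one of the matrices. A compact permutation version is sketched below; the 4 × 4 matrices are hypothetical placeholders for the CELT, EATL, WMED and EMED comparisons, not the distances or F_ST values computed in this study.

import numpy as np

def mantel(d_geo, d_gen, n_perm=9999, seed=0):
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d_geo, k=1)            # upper triangle, off-diagonal
    x, y = d_geo[iu], d_gen[iu]
    r_obs = np.corrcoef(x, y)[0, 1]
    count = 0
    n = d_geo.shape[0]
    for _ in range(n_perm):
        perm = rng.permutation(n)
        y_perm = d_gen[np.ix_(perm, perm)][iu]        # permute rows and columns together
        if np.corrcoef(x, y_perm)[0, 1] >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)          # one-sided permutation p-value

# Hypothetical seaway distances (km) and pairwise F_ST, ordered CELT, EATL, WMED, EMED.
d_geo = np.array([[0, 1800, 3200, 4800],
                  [1800, 0, 1600, 3400],
                  [3200, 1600, 0, 1900],
                  [4800, 3400, 1900, 0]], dtype=float)
d_gen = np.array([[0, 0.0002, 0.0004, 0.0011],
                  [0.0002, 0, 0.0003, 0.0009],
                  [0.0004, 0.0003, 0, 0.0007],
                  [0.0011, 0.0009, 0.0007, 0]])
r, p = mantel(d_geo, d_gen)
print(f"Mantel r = {r:.3f}, one-sided p = {p:.3f}")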
Interestingly, the pairwise value between the Eastern Ionian Sea and the Adriatic Sea within the Eastern Mediterranean is still significant. However, these results may also be affected by low sample size, and more samples are needed to better resolve any other substructuring within the Mediterranean Sea (Table S6). Similar to Nikolic et al. (2023), these significant F ST values were accompanied by a lack of a clear clustering pattern (when using multivariate and model-based clustering methods), likely due to the low genetic signal of differentiation. A significant exception in the present study is the remote position of some Eastern Mediterranean individuals that seem to be well differentiated from all others (Figures 2-4), associated with higher F ST values between the Eastern Mediterranean and all other areas (Table 2). Of these divergent individuals, three are from the Adriatic Sea, one from the Eastern Ionian Sea and one from Crete (Figure S7).
TABLE 1 | Genetic diversity estimates of blue sharks per geographic area: allelic richness (Ar) with the low and high CI, number of individuals (Nb), observed heterozygosity (Hobs), expected heterozygosity (Hexp), unbiased expected heterozygosity (Hexp_un), inbreeding coefficient (F IS ) with the low and high CI on the F IS wrapper, p-values from the chi-squared goodness-of-fit test to Hardy-Weinberg equilibrium globally (hwe_glb), and test significance for directional HWE on homozygote and heterozygote deficiency (hwe_hom; hwe_het).
The amount of divergence observed in these Eastern Mediterranean specimens may also suggest cases of Lessepsian migration from the Red Sea by this species, although this has never been reported. Such migrations are more commonly associated with small bony fishes and invertebrates (via ships/cargos), but have also been observed in elasmobranchs, such as Carcharhinus melanopterus and Carcharhinus brevipinna (Bradai, Saidi, and Enajjar 2012). A dedicated study with larger sample sizes, including samples from the Red Sea, would be necessary to test this hypothesis. Even removing the five most divergent specimens from the Eastern Mediterranean (see Appendix S1 for details), the overall genetic diversity and divergence do not change significantly (Tables S6 and S7). This suggests that the genetic structure observed in the present study is not the result of just a few divergent individuals, but rather reflects a genuine, subtle structuring of blue shark populations within the Mediterranean Sea.
The genetic divergence of the Eastern Mediterranean sharks is also observed in split groups within the Mediterranean Sea (Figure S7) in both PCA and in DAPC analysis using a priori number of groups, supporting genetic differentiation within the Mediterranean (Figures S8 and S9).The correct number of clusters cannot be ascertained through the successive K-means as in adegenet (Jombart 2008;Jombart, Devillard, and Balloux 2010;Jombart and Ahmed 2011), probably due to the subtle signal detected.In fact, generalised linear models applied on the relationship between clustering success and F ST values on simulated data, examining the influence of a priori versus de novo group designations in DAPC analysis, highlight that the successive K-means method does not reliably detect signal when F ST between groups is not very high, particularly for large pelagic species (<0.1).This pleads for the use of a priori number of clusters based on the knowledge of the biology and behaviour of species under study (Miller, Cullingham, and Peery 2020). Recent observations relating ecological data on blue shark distribution showed that large females may be more tolerant to cooler waters (Druon et al. 2022).This raises questions about the influence of sex on the spatial distribution of genetic diversity, as previously suggested based on mitochondrial phylogeography (Leone et al. 2017). In the present study, the Celtic Sea is the only area where such a sex ratio (and life stage) bias is observed, as the majority of Celtic specimens sampled are large females.However, a larger dataset and a wider range of sampling will be needed to better investigate the relationship between genetic structure and sex. Long-term (four decades) tagging studies suggest that the large majority of blue sharks tagged in the Mediterranean Sea are immature and remain in the tagging area, avoiding movements towards the adjacent Northeast Atlantic.The only exception is one subadult female that moved a short distance to reach the adjacent Northeast Atlantic area (Kohler et al. 2002).Similarly, on the other side of the Strait of Gibraltar, only one adult male tagged in the Northeast Atlantic has been recaptured in the Mediterranean Sea (Kohler et al. 2002). Telemetry data from blue sharks equipped with satellite tagging in the Western Mediterranean suggest a lack of connectivity with the Northeast Atlantic and with the adjacent Eastern Mediterranean blue sharks (Poisson et al. 2024).Altogether, these observations indicate a limited level of exchange among those areas, reflecting weak differentiation between these major basins (Northeast Atlantic, Western and Eastern Mediterranean; Nikolic et al. 2023; present study). Furthermore, the result from the Mantel test is consistent with the existence of an isolation by distance in blue sharks, which implies non-random mating and restricted gene flow among individuals from different sampled locations (see Results and Figures S5 and S6). The lack of panmixia within the Mediterranean Sea may be explained by the environmental factors of the western and eastern Mediterranean, respectively.In fact, the Mediterranean Sea is characterised by different seas with very different oceanographic conditions (Tanhua et al. 2013).An environmental niche and habitat analysis of the blue shark on a global scale highlighted how biotic and abiotic factors may shape blue shark population distribution (Druon et al. 
2022).In other pelagic species with similar spatial ecology, such as swordfish (Xiphias gladius), significant genetic structure has been observed between the Mediterranean Sea and the Atlantic Ocean, and within the Mediterranean Sea (Righi et al. 2020;Viñas et al. 2010). Philopatric behaviour was suggested to be the main driver of swordfish population differentiation within the Mediterranean Sea because of distinct phylogeographic histories of populations in the eastern and the western Mediterranean basins, maintained by contemporary life-history traits (Viñas et al. 2010).Evidence of philopatry and regional site fidelity has been observed in blue sharks, with interannual resighting of blue sharks in the same spots in the mid-North Atlantic (Fontes et al. 2024;Vandeperre et al. 2014).This philopatric behaviour, in combination with local demographic dynamics and potential site fidelity, may have shaped the current population differentiation of the blue shark between the Northeast Atlantic and the Mediterranean, and within the Mediterranean Sea. | Evolutionary Perspective of Subtle Genetic Structure Accounting for the limited dispersal through the Gibraltar Strait observed with tagging data, the allele frequencies among even distant locations can be maintained at similar levels by very few migrants per generation.This can partially mask the existence of different demographic stocks (Palsbøll, Bérubé, and Allendorf 2007).In fact, even low migration rates, combined with a relatively large effective population size, can mask the existence of two demographically independent populations, suggesting a near-panmictic scenario (Waples and Gaggiotti 2006). This observed pattern could be explained by a marine metapopulation model as proposed by Kritzer and Sale (2004), in which genetic drift and gene flow determine 'the dynamics of local populations strongly dependent upon local demographic processes, but also influenced by a nontrivial element of external replenishment'.If the dynamics of each potential population can be modelled per se (i.e., neglecting any potential external influence), the metapopulation scenario is not appropriate (Kritzer and Sale 2004).Otherwise, if the potential populations dictate their own population dynamics together with an external replenishment that cannot be ignored, then the metapopulation scenario is appropriate (Kritzer and Sale 2004).In these cases, it is the amount of demographic connectivity among potential populations set by migrant individuals that determine whether they form a metapopulation or not, and the rate of gene flow among units will determine the shape and fate of a given metapopulation complex and its components. However, in the presence of small values of genetic differentiation, such as the F ST values observed in the present study, the amount of gene flow is difficult to estimate under an island model of migration.This difficulty arises because of the relationship between F ST and number of migrants among populations per generation (Lowe and Allendorf 2010).Furthermore, many biological assumptions necessary to estimate the gene flow under an island model of migration, are unrealistic and will be violated (Whitlock and McCauley 1999). 
A metapopulation model has been used to explain the recent decline observed in three species of sharks when assuming unstructured demographic models, with the presence of a neglected population structure (Lesturgie, Planes, and Mona 2021).Beyond speculating about the existence of a metapopulation structure, even faint but significant genetic structure implies limited demographic exchange between populations.This is evident in the results observed in the present study and is in line with the stronger signal recently reported between the Atlantic and the Mediterranean by Nikolic et al. (2023).Furthermore, the pattern of isolation by distance (Figure S6) and those two concordant studies increase the confidence in the biological relevance of such subtle, yet significant, genetic structure (Palumbi 2003). Based on the above results, the Mediterranean and Northeast Atlantic populations should be considered demographically independent, subject to area-related population processes and different vulnerabilities to exploitation.Furthermore, even within the Mediterranean Sea (western Mediterranean vs. eastern Mediterranean), there is evidence of substructuring, with the presence of at least two subpopulations with independent demographic dynamics. | Management Implications of Multiple Discrete Population The small number of sharks tagged in the Atlantic and recaptured in the Mediterranean Sea led management organisations to consider the Mediterranean as a separate stock (Kohler and Turner 2008;ICCAT 2005;Fitzmaurice et al. 2005;Kohler et al. 2002).For stock assessment purposes, separate analyses have been carried out for the North Atlantic and the Mediterranean for more than a decade.The ICCAT Sub Committee on bycatches assumed three different stocks in the North Atlantic, South Atlantic and Mediterranean (ICCAT 2005).The limited amount of tagging data made the separation of the Northeast Atlantic and Mediterranean blue shark in two different stocks a precautionary approach, as limited data from the Mediterranean blue shark were available.There was thus an acknowledged need for targeted studies to fill the knowledge gap about the existence of two separated populations on both sides of the Strait of Gibraltar (ICCAT 2016), a gap now filled through population genomics (present study; Nikolic et al. 2023), confirming the validity of this precautionary approach. The present study may also serve to update future stock assessment and management plans.In fact, the genetic differentiation with significant F ST values supports the existence of independent demographic entities for the blue shark within the Mediterranean as well, calling for a revision of recognised management units.The present study, echoing results from Nikolic et al. (2023), confirms the importance of using genomewide markers and dedicated sampling design in resolving the population genetic structure of the Northeast Atlantic and Mediterranean blue shark populations, especially considering the potential 'grey zone' effect in studies based on a handful of molecular markers (Bailleul et al. 2018). 
Another possible area of future research would be to increase the sample size in the Mediterranean and in the Atlantic Ocean, including the westernmost and easternmost parts of the blue shark's distribution in the Atlantic and the Mediterranean Sea. This would clarify the relationships of western and eastern Atlantic blue sharks with those from the western and eastern Mediterranean Sea. This is especially important in light of the extensive transatlantic migrations observed, and the gene flow that follows from them. A similar pattern of genetic differentiation within the Mediterranean has been reported thus far for the benthic small-spotted catshark using both mitochondrial and microsatellite markers (Gubili et al. 2014; Kousteni et al. 2014; Melis et al. 2023), and in the blackmouth catshark using microsatellite markers (Di et al. 2022). The significant differentiation observed in blue sharks between the Eastern and the Western Mediterranean suggests that the presence of discrete populations within the Mediterranean may also extend to pelagic sharks. Given the implications of such independence for the management of exploited or impacted populations, extending this study to other chondrichthyan species would be important for the conservation of these often declining groups.

FIGURE 1 | Sampling locations of blue sharks in the Celtic Sea (green dots), Northeast Atlantic (red dots), Western Mediterranean (purple dots) and Eastern Mediterranean (blue dots). Blue shading indicates bathymetry (i.e., depth, in metres).

FIGURE 4 | Genetic clustering from the STRUCTURE software for K = 2 and K = 4, as suggested by the Puechmaille and Evanno methods, respectively. CELT, Celtic Sea; EATL, Northeast Atlantic; EMED, Eastern Mediterranean; WMED, Western Mediterranean. Each bar on both plots represents the same individual.

TABLE 2 | Pairwise F ST values (below diagonal) and associated p-values (above diagonal) between blue shark samples based on the 14,713 neutral SNPs. *Values significant after false discovery rate correction for multiple tests.
6,988.8
2024-09-01T00:00:00.000
[ "Environmental Science", "Biology" ]
Study on a susceptible–exposed–infected–recovered model with nonlinear incidence rate

1 Model formulation

The spread and control of infectious diseases [1–3] have long been described by mathematical models, with the aim of investigating their dynamical properties. Since the pioneering work of Kermack and McKendrick [4], epidemic models have received increasing attention. Usually, epidemic models involve three states within a total population: the susceptible S, the infected I, and the recovered R. For instance, Cai et al. [5] introduced an SIS model incorporating media coverage to investigate the effects of environmental fluctuations. However, for some diseases, such as Hepatitis B, Hepatitis C, and AIDS, the exposed hosts E play a vital role in the dynamical behaviour. As mentioned in the recent literature, a class of epidemic models accounting for this compartment has been considered by several authors; it is called the susceptible–exposed–infected–recovered model (SEIR model for short, see [6–18]).

During the development of epidemic models, incidence rates, which describe the relationship between the susceptible and the infected/exposed, play an important role, and their form has evolved from the bilinear case to nonlinear cases. For example, [19–22] used the bilinear incidence rate βSI to explore epidemic models with fluctuations. If the number of the infected within a population is large, three types of saturated incidence rates are usually used in epidemiological models: the standard (proportionate mixing) incidence rate βSI/N [23–25], the nonlinear incidence rate βS^q I^p [26, 27], and the saturated incidence rate βSI/(1 + αS).

In this paper, we still use four states, namely the susceptible S, the exposed E, the infected I, and the recovered R, to describe our model with environmental fluctuations. Motivated by the above discussion, we assume that individuals within the total population are well mixed and live in the same environment. Our model follows the idea that the susceptible and the infected make contact at a constant rate β; after contact with the infected, a susceptible individual becomes exposed and turns infectious once the incubation period (also called the latent period, see [8, 11, 18]) has elapsed; exposed individuals then become infected and subsequently recovered; and part of the recovered individuals re-enter the susceptible state. According to this transmission cycle, we build the model equation by equation, starting with the susceptible:

Ṡ(t) = A − μS(t) + δR(t) − βS(t)I(t)/ϕ(I(t)),

where A and μ respectively denote the recruitment rate of new individuals and the disease-free death rate, δ is the rate at which recovered individuals become susceptible again, and βSI/ϕ(I) is a nonlinear incidence rate with the properties that ϕ(I) is increasing, ϕ(0) = 1, and ϕ additionally satisfies a growth condition governed by a constant l > 0. For the exposed we have

Ė(t) = βS(t)I(t)/ϕ(I(t)) − (μ + σ)E(t),

where σ is the rate at which exposed individuals become infected individuals. Further, the changes of infected and recovered individuals at time t are assumed to follow two ordinary differential equations:

İ(t) = σE(t) − (μ + ρ + γ)I(t),
Ṙ(t) = γI(t) − (μ + δ)R(t),

where ρ is the death rate caused by the disease and γ is the recovery rate of infected individuals. Collecting these equations, we obtain the following system of four ordinary differential equations:

Ṡ(t) = A − μS(t) + δR(t) − βS(t)I(t)/ϕ(I(t)),
Ė(t) = βS(t)I(t)/ϕ(I(t)) − (μ + σ)E(t),
İ(t) = σE(t) − (μ + ρ + γ)I(t),                 (1)
Ṙ(t) = γI(t) − (μ + δ)R(t).

Then

lim sup_{t→+∞} (S(t) + E(t) + I(t) + R(t)) ≤ A/μ.

Thus the feasible region for system (1) is

Ω = { (S, E, I, R) ∈ R^4_+ : S + E + I + R ≤ A/μ }.

Let Int Ω denote the interior of Ω.
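Before noise is added, the deterministic system (1) and its basic reproduction number can be explored numerically. The sketch below integrates the equations above with SciPy under the illustrative choice ϕ(I) = 1 + αI; all parameter values are assumptions for demonstration, not values taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed, not from the paper)
A, mu, delta, beta = 1.0, 0.1, 0.05, 0.6
sigma, rho, gamma, alpha = 0.3, 0.1, 0.2, 0.5

phi = lambda I: 1.0 + alpha * I        # increasing, phi(0) = 1

def seir(t, x):
    S, E, I, R = x
    inc = beta * S * I / phi(I)
    return [A - mu * S + delta * R - inc,
            inc - (mu + sigma) * E,
            sigma * E - (mu + rho + gamma) * I,
            gamma * I - (mu + delta) * R]

# Basic reproduction number of the deterministic model (next-generation matrix)
R0 = A * beta * sigma / (mu * (mu + sigma) * (mu + rho + gamma))
print("R0 =", round(R0, 3))

sol = solve_ivp(seir, (0, 400), [A / mu - 0.01, 0.0, 0.01, 0.0])
S, E, I, R = sol.y
print("final I =", I[-1])   # decays to 0 when R0 < 1, settles at an endemic level when R0 > 1
```

With the placeholder values above R0 is well above 1, so the trajectory approaches an endemic equilibrium; lowering β or A below the threshold makes I(t) decay, matching the threshold role of R0 described in the text.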
It is easy to verify that the region Ω is positively invariant with respect to system (1) (i.e., solutions with initial conditions in Ω remain in Ω). Hence, system (1) is mathematically and epidemiologically well posed in Ω.

One concern for further investigation is to find an expression for the basic reproduction number R0 of model (1) by using the next-generation matrix (see [34]). The basic reproduction number, sometimes called the basic reproductive rate or basic reproductive ratio, is one of the most useful threshold parameters characterizing mathematical problems concerning infectious diseases. It is useful because it helps determine whether or not an infectious disease will spread through a population. Next, we calculate the basic reproduction number of system (1). Let x = (S, E, I, R)^T, where T denotes the transpose of a matrix (or vector), and write model (1) in the vector form required by the next-generation method. The infected classes number m = 2, namely the exposed compartment (E) and the infected compartment (I), and the disease-free equilibrium of model (1) is x0 = (A/μ, 0, 0, 0)^T. Following the detailed treatment in [34–36], the new-infection and transition matrices evaluated at x0 are

F = [ 0   βA/μ ;  0   0 ],    V = [ μ + σ   0 ;  −σ   μ + ρ + γ ],

and the inverse matrix of V is

V^{-1} = 1/((μ + σ)(μ + ρ + γ)) [ μ + ρ + γ   0 ;  σ   μ + σ ].

Therefore FV^{-1} is the next-generation matrix of model (1). It follows that the spectral radius of FV^{-1} is ρ(FV^{-1}) = Aβσ/(μ(μ + σ)(μ + ρ + γ)). According to Theorem 2 in [34], the basic reproduction number is

R0 = Aβσ/(μ(μ + σ)(μ + ρ + γ)).

When a disease attacks a real population, the effects of external environmental fluctuations are inevitable and distinct; temperature, air humidity, and similar factors are typical examples. We assume here that these effects are proportional to the states of the model. Therefore we consider the following epidemic model with environmental fluctuations and nonlinear incidence rate:

dS(t) = [A − μS(t) + δR(t) − βS(t)I(t)/ϕ(I(t))] dt + σ1 S(t) dB1(t),
dE(t) = [βS(t)I(t)/ϕ(I(t)) − (μ + σ)E(t)] dt + σ2 E(t) dB2(t),
dI(t) = [σE(t) − (μ + ρ + γ)I(t)] dt + σ3 I(t) dB3(t),                 (2)
dR(t) = [γI(t) − (μ + δ)R(t)] dt + σ4 R(t) dB4(t),

where Bi(t) are standard one-dimensional independent Wiener processes and σi > 0, i = 1, 2, 3, 4, are the intensities of the white noise. Throughout the paper, unless otherwise specified, let (Ω, F, {Ft}t≥0, P) be a complete probability space with a filtration {Ft}t≥0 satisfying the usual conditions, that is, it is increasing and right continuous, while F0 contains all P-null sets.

The rest of this paper is organized as follows. In Sect. 2, we show that model (2) admits a unique global positive solution for any initial value. In Sect. 3, we establish sufficient conditions for extinction of the disease. In Sect. 4, we verify persistence in the mean under certain conditions. Finally, we prove that model (2) has an ergodic stationary distribution by constructing suitable Lyapunov functions.

Existence and uniqueness of a positive solution

Throughout this paper, we set inf ∅ = ∞. For each integer m ≥ m0 (with m0 > 0 large enough that the initial value lies within [1/m0, m0]), define the stopping time

τm = inf{ t ∈ [0, τe) : min{S(t), E(t), I(t), R(t)} ≤ 1/m or max{S(t), E(t), I(t), R(t)} ≥ m },

where τe denotes the explosion time. It is obvious that τm is increasing as m → ∞ (details can be found in [37]). We also denote lim_{m→∞} τm = τ∞; obviously τ∞ ≤ τe. If we can show that τ∞ = ∞ almost surely, then τe = ∞ and the solution remains positive for all t ≥ 0. The proof goes by contradiction. Assume that τ∞ < ∞ with positive probability; then there exists a pair of constants T > 0 and ε ∈ (0, 1) such that P{τ∞ ≤ T} ≥ ε. Hence there exists an integer m1 ≥ m0 such that P{τm ≤ T} ≥ ε for each integer m ≥ m1. We define a C²-function V : R^4_+ → R_+ as a combination of terms of the form x − 1 − ln x over the four compartments, where b is a positive constant weight that will be determined later.
Then, applying Itô's formula to V(S, E, I, R), we obtain

dV(S(t), E(t), I(t), R(t)) = LV(S(t), E(t), I(t), R(t)) dt + (stochastic integral terms),

where LV(S(t), E(t), I(t), R(t)) is the corresponding drift term. We choose b = (μ + ρ)/β and denote by K the resulting constant upper bound of LV. For any t ∈ [0, T] and m ≥ m1, we integrate from 0 to τm ∧ T and then take expectations on both sides. Letting m → ∞ yields a contradiction; as a consequence, we have τ∞ = ∞. The proof is complete.

Extinction of diseases

Extinction and persistence are two of the most important issues in the study of epidemic models. For the sake of simplicity, we denote the time average ⟨x(t)⟩ = (1/t) ∫_0^t x(s) ds.

Lemma 3.1 For any initial value (S(0), E(0), I(0), R(0)) ∈ R^4_+, the solution (S(t), E(t), I(t), R(t)) has the properties that S(t)/t, E(t)/t, I(t)/t and R(t)/t tend to 0 almost surely as t → ∞, together with the corresponding vanishing time averages of the associated stochastic integrals. The proof of Lemma 3.1 is similar to the approach used in [38, 39], and we omit it here.

Theorem 3.1 Let (S(t), E(t), I(t), R(t)) be the solution of model (2) with any initial value in R^4_+. If the basic reproduction number satisfies R0 < 1 and the noise-dependent quantity ν introduced below satisfies ν < 0, then the disease dies out almost surely.

Proof We first obtain inequality (3). Integrating both sides of (3) from 0 to t and using Lemma 3.1, we obtain (4). Now we define a C²-function W : R^2_+ → R_+ depending on (E, I). Making use of Itô's formula, we obtain (5). Based on the fundamental inequality (a² + b²)(c² + d²) ≥ (ac + bd)² for positive a, b, c, and d, we obtain (6). Therefore, from (5) and (6), we arrive at estimate (7) for L ln W(E(t), I(t)). Integrating both sides of (7) and dividing by t yields expression (8), in which the stochastic integrals are local martingales whose quadratic variations are given in (9). Taking the upper limit on both sides and combining (7), (8), and (9), we obtain the bound involving ν. If ν < 0, then lim sup_{t→∞} ln I(t)/t < 0 a.s., which implies lim_{t→∞} I(t) = 0. This indicates that the disease tends to extinction. The proof is complete.

Persistence in the mean

In this section we demonstrate some useful results about the persistence of the disease.

Theorem 4.1 Let (S(t), E(t), I(t), R(t)) be a solution of system (2) with any initial value in R^4_+. If R0 > 1, then system (2) is persistent in the mean; that is to say, the disease will be prevalent.

Proof In order to establish persistence, we construct a C²-function V1 : R^4_+ → R as in (10), where c1, c2, c3 are positive constants to be determined later. Applying Itô's formula to (10) gives the estimate of LV1 used below.

Stationary distribution

In this section, we establish sufficient conditions for the existence of a unique ergodic stationary distribution. First of all, we present a lemma that will be used later. Let x(t) be a homogeneous Markov process in E_l (E_l denotes an l-dimensional Euclidean space) described by the stochastic differential equation

dx(t) = b(x) dt + Σ_r σ_r(x) dB_r(t),

with diffusion matrix

A(x) = (a_{ij}(x)),   a_{ij}(x) = Σ_r σ_r^{(i)}(x) σ_r^{(j)}(x).

Lemma 5.1 ([40]) The Markov process x(t) has a unique ergodic stationary distribution μ(·) if there exists a bounded domain U ⊂ E_l with regular boundary Γ such that (A1) there is a positive number M with Σ_{i,j} a_{ij}(x) ξ_i ξ_j ≥ M|ξ|² for all x ∈ U and ξ ∈ R^l, and (A2) the mean time at which a path issuing from x reaches U is finite for every x ∈ E_l \ U, with a finite supremum over every compact subset. Moreover, for any function f(·) integrable with respect to the measure μ, the time averages (1/T) ∫_0^T f(x(t)) dt converge almost surely to ∫_{E_l} f(x) μ(dx).

Proof The diffusion matrix of system (2) is diagonal, A = diag(σ1² S², σ2² E², σ3² I², σ4² R²). Take D_k = [1/k, k] × [1/k, k] × [1/k, k] × [1/k, k], where k > 1 is a sufficiently large integer. Then condition (A1) holds with E_l = R^4_+ and U = D_k. Next we construct a nonnegative C²-function Ṽ : R^4_+ → R of a suitable form. It is easy to check that Ṽ is continuous and tends to +∞ as (S, E, I, R) approaches the boundary of R^4_+; hence Ṽ(S, E, I, R) must admit a minimum point (S*, E*, I*, R*) in the interior of R^4_+.
Then we define a nonnegative C²-function Ṽ built from V1 (introduced in (10)) together with additional terms, where n is a sufficiently small constant and M > 0 is chosen to satisfy a suitable condition. Following discussions similar to those in Theorem 4.1, we obtain the corresponding bounds, and therefore an estimate for LṼ, where N = 5μ + δ + ρ + ⋯ and where εi > 0 (i = 1, 2, 3, 4) are sufficiently small constants satisfying conditions involving the quantities P, Q, T, F, G, H, L presented in (21), (22), (23), (24), (25), (26), (27), respectively. For convenience, we divide R^4_+ \ D into eight domains D1, …, D8; obviously, D^C = D1 ∪ D2 ∪ ⋯ ∪ D8. Next we only need to show that LṼ(S, E, I, R) ≤ −1 on D^C.

Case 1. If (S, E, I, R) ∈ D1, then by (13), LṼ(S, E, I, R) ≤ −1.
Case 2. If (S, E, I, R) ∈ D2, then by (14), LṼ(S, E, I, R) ≤ −1.
Case 3. If (S, E, I, R) ∈ D3, then by (15), LṼ(S, E, I, R) ≤ −1.
Case 4. If (S, E, I, R) ∈ D4, then by (16), LṼ(S, E, I, R) ≤ −1.
Case 5. If (S, E, I, R) ∈ D5, then by (17), LṼ(S, E, I, R) ≤ −1.
Case 6. If (S, E, I, R) ∈ D6, then by (18), LṼ(S, E, I, R) ≤ −1.
Case 7. If (S, E, I, R) ∈ D7, then by (19), LṼ(S, E, I, R) ≤ −1.
Case 8. If (S, E, I, R) ∈ D8, then by (20), LṼ(S, E, I, R) ≤ −1.
The proof is complete.

Figure 1 A realization of extinction of the exposed and the infected for model (2). Figure 2 A realization of extinction of the exposed and the infected for model (2). Figure 3 Histogram of the susceptible, the exposed, the infected, and the recovered for model (2).

We take the parameters of model (2) as indicated in Fig. 1, with n = 25,000. At the same time, we find that the disease reaches extinction faster as the environmental disturbance increases; for example, when σ1 = 0.054, σ2 = 0.6, σ3 = 0.6, σ4 = 0.6, the corresponding dynamics are shown in Fig. 2, with n = 5000.

Conclusions

In this paper, we investigate an epidemic model with four stages: the susceptible, the exposed, the infected, and the recovered. We focus on extinction, persistence, and the stationary distribution of the positive solution of an epidemic model with nonlinear incidence rate and independent environmental fluctuations. We first show, by constructing an appropriate function, that model (2) admits a unique global positive solution for any initial value. Moreover, we find that extinction of the disease depends on the basic reproduction number R0 (a threshold for the corresponding deterministic model). When R0 < 1 and ν < 0, the disease under independent environmental fluctuations dies out, as demonstrated in Theorem 3.1; the corresponding dynamics can be found in Fig. 1. By constructing several C²-functions, we further derive, under the condition R0 > 1, sufficient conditions for persistence and for the existence of a unique ergodic stationary distribution of model (2); the corresponding realizations can be found in Fig. 2 and Fig. 3, respectively. We also present numerical simulations on the ergodicity of model (2) at the end of this paper and point out that the extinction time of infected individuals decreases when the intensities of the environmental fluctuations σi (i = 1, 2, 3, 4) increase. These results provide readers with a biological perspective for understanding an epidemic model in a fluctuating environment.
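The qualitative behaviour summarized above (extinction speeding up as the noise intensities grow) can be reproduced with a simple Euler-Maruyama discretization of system (2). The sketch below is an assumption-laden illustration: the drift uses the model equations, the noise intensities follow the second example quoted in the text (σ1 = 0.054, σ2 = σ3 = σ4 = 0.6), and the remaining parameter values are placeholders rather than the values used for the paper's figures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder drift parameters (assumed); noise intensities from the text's second example
A, mu, delta, beta = 1.0, 0.1, 0.05, 0.6
sig, rho, gam, alpha = 0.3, 0.1, 0.2, 0.5
s1, s2, s3, s4 = 0.054, 0.6, 0.6, 0.6

dt, T = 0.01, 200.0
n_steps = int(T / dt)
x = np.array([5.0, 1.0, 1.0, 1.0])      # (S, E, I, R) initial value in R^4_+
path_I = np.empty(n_steps)

for k in range(n_steps):
    S, E, I, R = x
    inc = beta * S * I / (1.0 + alpha * I)
    drift = np.array([A - mu * S + delta * R - inc,
                      inc - (mu + sig) * E,
                      sig * E - (mu + rho + gam) * I,
                      gam * I - (mu + delta) * R])
    noise = np.array([s1 * S, s2 * E, s3 * I, s4 * R]) * rng.normal(0.0, np.sqrt(dt), 4)
    x = np.maximum(x + drift * dt + noise, 1e-12)   # crude positivity guard for the sketch
    path_I[k] = x[2]

# Empirical check of the extinction criterion lim sup ln I(t)/t < 0
t_end = n_steps * dt
print("ln I(t)/t at the end of the run:", np.log(path_I[-1]) / t_end)
```

Re-running the loop with larger σ2, σ3, σ4 makes the final value of ln I(t)/t more negative, which is the numerical counterpart of the statement that stronger environmental disturbance drives the infected class to extinction sooner.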
3,456.6
2020-05-12T00:00:00.000
[ "Mathematics" ]
Anti-Insulin Receptor Autoantibodies Are Not Required for Type 2 Diabetes Pathogenesis in NZL/Lt Mice, a New Zealand Obese (NZO)-Derived Mouse Strain The New Zealand obese (NZO) mouse strain shares with the related New Zealand black (NZB) strain a number of immunophenotypic traits. Among these is a high proportion of B-1 B lymphocytes, a subset associated with autoantibody production. Approximately 50% of NZO/HlLt males develop a chronic insulin-resistant type 2 diabetes syndrome associated with 2 unusual features: the presence of B lymphocyte–enriched peri-insular infiltrates and the development of anti-insulin receptor autoantibodies (AIRAs). To establish the potential pathogenic contributions ofBlymphocytes and AIRAs in this model, a disrupted immunoglobulin heavy chain gene (Igh-6) congenic on the NZB/BlJ background was backcrossed 4 generations into the NZO/HlLt background and was then intercrossed to produce mice that initially segregated for wild-type versus the mutant Igh-6 allele and thus permitted comparison of syndrome development. A new flow cytometric assay (AIRA binding to transfected Chinese hamster ovary cells stably expressing mouse insulin receptor) showed IgM and IgG subclass AIRAs in serum from Igh-6 intact males, but not in Igh6null male serum. However, the absence of B lymphocytes and antibodies distinguishing mutant from wild-type males failed to significantly affect diabetes-free survival. The Igh6nullmales gained weight less rapidly than wild-type males, probably accounting for a retardation, but not prevention, of hyperglycemia. Thus, AIRA and the Blymphocyte component of the peri-insulitis in chronic diabetics were not essential either to development of insulin resistance or to eventual pancreatic beta cell failure and loss. A new substrain, designated NZL, was generated by inbreeding Igh-6 wild-type segregants. Currently at the F10 generation, NZL mice exhibit the same juvenile-onset obesity as NZO/HlLt males, but develop type 2 diabetes at a higher frequency (> 80%). Also, unlike NZO/HlLt mice that are difficult to breed, the NZL/Lt strain breeds well and thus offers clear advantages to obesity/diabetes researchers. New Zealand obese (NZO) is an inbred mouse strain derived in New Zealand from outbred stock from the Imperial Cancer Research Fund Laboratories in London [1]. NZO mice have been studied primarily as a mouse model of obesity-induced insulin-resistant diabetes [2]. Males of the NZO/HlLt strain develop diabetes at a frequency of 40% to 50% whereas NZO/HlLt females develop marked obesity without diabetes [3]. Because of the early development of obesity, the strain is difficult to breed and has not been widely studied. Thus, despite the relatedness of the NZO strain to the autoimmune-prone New Zealand black (NZB) and New Zealand white (NZW) strains, only limited analysis of the immune system of NZO mice has been reported [4,5]. These studies indicated that NZO shared some of the classic autoimmune abnormalities found in the NZB strain, and suggested that the development of diabetes is associated with autoimmunity. Among the immune anomalies reported were the development of immunoglobulin M (IgM) autoantibodies to native and single-stranded DNA as well as IgM immune complex deposition in kidney [4]. Another immune anomaly shared with the NZB strain was splenic hypertrophy with increased basal unstimulated splenocyte proliferation in vitro, but reduced mitogen-stimulated proliferation. 
However, this hypoproliferative phenotype was reversed by insulin administration only in NZO, but not in NZB mice [5], suggesting that the immune anomaly in NZO was related to insulin resistance in these mice. That autoimmunity may contribute, in part, to the metabolic syndrome in NZO mice was suggested by the report that this strain spontaneously develops autoantibodies to the insulin receptor [6]. Recent work from our laboratory found a high frequency of B1 B lymphocytes in NZO/HlLt mice, a subset associated with autoantibody production [7]. Furthermore, pancreatic histopathology of the peri-insular infiltrates commonly observed in chronically diabetic NZO/HlLt males showed a higher frequency of B lymphocytes than T lymphocytes and included plasma cells (antibody producers) [8]. To establish whether the type 2 diabetes syndrome developing in NZO/HlLt males was dependent upon humoral immunity, we developed a new flow cytometry-based methodology for detecting anti-insulin receptor autoantibody (AIRA) and generated a B lymphocyte-deficient stock on the NZO strain background for comparison of syndrome development in the presence versus absence of AIRAs and other autoantibodies. Mice NZO/HlLt and C57BL/6J mice were housed in a specific pathogen-free (SPF) environment at The Jackson Laboratory. NZO/HlLt male progeny were aged to 24 weeks, with body weight measured every 2 weeks and plasma glucose (glucose analyzer; Beckman Instruments, Palo Alto, CA) measured every 4 weeks. Diabetes (plasma glucose levels >250 mg/dL) was 50% in NZO males by 24 weeks of age. All animals were handled in accordance with the guidelines of the National Institutes of Health, and the Institutional Animal Care and Use Committee of The Jackson Laboratory. Generation of a New Recombinant Congenic Strain (NZL/Lt) Segregating for B-Lymphocyte Deficiency NZB/BlnJ mice congenic for a disrupted Igh-6 allele on chromosome 12 encoding the IgM heavy chain [9] (formal designation NZB,129-Igh-6 tm1Cgn ) were kindly provided by Dr. L. D. Shultz (The Jackson Laboratory) [10]. Males from this stock (at N11) were outcrossed with NZO/HlLt females and F1 females were backcrossed to NZO/HlLt males. In 4 subsequent backcross cycles, heterozygous Igh-6 null carriers were identified by polymerase chain reaction (PCR) for the neomycin resistance cassette used in gene targeting. At N5, heterozygotes were intercrossed, and N5F1 Igh-6 null homozygotes were identified by flow cytometric demonstration of the absence of B220 + B lymphocytes in peripheral blood (monoclonal antibody [mAb] clone RA3-6B2; BD PharMingen). Homozygous mice generated between N5F1 and N5F4 were analyzed. At N5F2, a control line, now designated NZL, was selected that was wild type (e.g., NZO alleles) at the Igh-6 locus and at flanking polymorphic microsatellite markers distinguishing NZO from NZB. Analysis of Diabetic Syndrome Progression Wild-type NZL and NZL-Igh-6 null males were accumulated between N5F1 and N5F4 and longitudinally profiled for biweekly changes in body weight (BW) and nonfasting plasma glucose (PG) determined on a glucose analyzer (Beckman Instruments, Fullerton, CA). Mice were maintained on a 6% fatcontaining chow (NIH-31; Purina, Richmond, IN) and acidified drinking water. Because the immunodeficient NZL-Igh-6 null mice were very susceptible to infections, both genotypes were maintained in pressurized individually ventilated (PIV) caging and received sulfamethoxazole thiomethoprim (Goldline Laboratories, Ft. Lauderdale, FL) for 3 days per week. 
Serum samples for AIRA analysis from both genotypes were collected at the 20-week termination point for analysis in the AIRA assay (see below). Chinese hamster ovary (CHO) cells (American Type Culture Collection, Rockville, MD) were transfected with 10 µg of the pREP4/mIR construct using LipofectAMINE PLUS (Gibco BRL, Gaithersburg, MD) per manufacturer's instructions. Transfected CHO cells were grown in F12K media (Gibco) with 10% fetal calf serum and selected with 600 µg/mL hygromycin B (Calbiochem, LaJolla, CA). Hygromycin-resistant clones were screened for IR expression by 125 I-insulin-binding assay [12,13]. HIR3.5 (Jonathan Whittaker; NYU School of Medicine, NY, NY), stably transfected NIH 3T3 fibroblasts expressing 10 6 human IRs per cell [14], served as a positive control in the screening experiments. Untransfected parental CHO cells were used as a negative control to account for any background expression of endogenous hamster IRs. Clone 36, the transfected CHO clone with the highest level of mouse IR expression, was labeled with fluorescein isothiocyanate (FITC)labeled insulin (Sigma, St. Louis, MO) and cells expressing high density of IRs were positively sorted using a Becton-Dickinson flow cytometer (Wayne State University, Detroit, MI). Sorted cells were subjected twice to limiting dilution until a subclone (mIR36.11.1) with stable cell surface expression of mIR was obtained. Subclone mIR36.11.1, untransfected parental CHO cells, and positive control HIR3.5 were retested for IR expression by 125 I-insulin-binding assay [12,13]. Functionality of the mIR on the mIR 36.11.1 CHO Cells-Transfected Clone Specific binding of 125 I-insulin (human recombinant; Amersham Biosciences, Piscataway, NJ) on mIR36.11.1 and control HIR 3.5 cells was defined by competition with unlabeled insulin as previously described [12,13]. Briefly, 125 I-insulin (50,000 cpm, 11.36 × 10 −12 M) was added to cells (1 × 10 6 per well) either alone to determine maximum binding or in the presence of increasing concentrations of unlabeled insulin (10 −10 to 10 −6 M) in a final incubation volume of 1 mL. Following binding overnight at 4 • C, cells were washed with phosphatebuffered saline (PBS; JRH Biosciences, Lenexa, MO), and the amount of bound 125 I-insulin in 0.4 N NaOH-treated cells was counted in a Beckman Gamma 5000 counter (Medical College of Ohio, Toledo, OH). All samples were done in duplicate. Specific binding of biotinylated insulin was also defined by competition with unlabeled insulin using the fluorescence activated cell sorter (FACS) analysis. mIR36.11.1 cells at 5 × 10 5 were incubated with 5 × 10 −6 M biotinylated insulin (a gift from Dr. Francis Finn, University of Pittsburg, retired) either alone to determine maximum binding or in the presence of increasing concentrations of unlabeled insulin (10 −9 to 10 −3 M) for 45 minutes on ice followed by washing. The secondary reagent streptavidin-phycoerythrin (SA-PE; Pharmingen, San Diego, CA) was then added for 45 minutes on ice. Control cells were treated with 30 µg/mL unlabeled insulin and SA-PE following the procedure above to determine background binding. After washing, cells (10,000 per sample) were analyzed immediately using a Becton-Dickinson flow cytometer (University of Toledo, Toledo, Ohio) to determine the geometric mean fluorescence (GMF) of each sample. Anti-mIR Antibody Assay Using mIR-Transfected CHO Cells and Flow Cytometry Serum AIRA was quantified by FACS analysis of mouse Ig binding to mIR 36.11.1 compared to untransfected CHO cells. 
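The statistical comparisons described in the Statistics paragraph above (ANOVA with a Bonferroni/Dunn correction or Student's t test, significance at P < .05) can be mirrored with SciPy. The body-weight vectors below are invented placeholders standing in for the biweekly measurements, since the raw per-animal data are not reproduced in the paper; only the group sizes match the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Placeholder body weights (g) at one sampling age for the two genotypes
bw_wildtype = rng.normal(52.0, 3.0, size=14)   # NZL wild type, n = 14
bw_igh6null = rng.normal(48.0, 3.0, size=19)   # NZL-Igh-6 null, n = 19

# Student's t test for a single time point
t_stat, p_val = stats.ttest_ind(bw_wildtype, bw_igh6null)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# One-way ANOVA across several groups, followed by a Bonferroni-style adjustment
groups = [rng.normal(m, 3.0, size=10) for m in (45.0, 50.0, 52.0)]
f_stat, p_anova = stats.f_oneway(*groups)
n_comparisons = 3                               # number of post hoc pairwise tests
print(f"ANOVA p = {p_anova:.4f}; Bonferroni-adjusted alpha = {0.05 / n_comparisons:.4f}")
```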
NZO/HlLt sera used in flow cytometry were precipitated by 25% saturated ammonium sulfate (SAS) to obtain an IgMenriched fraction. The supernatant was then subjected to an additional 45% SAS precipitation and purified further by protein A conjugated beads to enrich for IgG antibodies. Sample binding to untransfected CHO cells was used to establish a baseline in FACS analysis. Incubation with biotinylated insulin and SA-PE served as a positive control for staining. Serum samples were pooled from 5 to 6 mice from each of the following: C57BL/6J female 30 weeks of age, C57BL/6J males 30 weeks of age, NZO/HlLt female 16 to 17 weeks of age, NZO/HlLt females 34 weeks of age, NZO/HlLt males 16 to 17 weeks of age, and NZO/HlLt males 20 to 28 weeks of age, all with normal blood glucose levels. An additional group of 9 diabetic males between 20 and 28 weeks of age (plasma glucose >250 mg/dL) were also analyzed. In a separate set of experiments serum was collected from 20-week-old male NZL (pool of 5), NZL-Igh-6 null (pool of 10), and NZO diabetic mice (pool of 5). IgM and IgG fractionation on pooled serum samples was performed using a protein G spin chromatography kit (Pierce, Rockford, IL) according to manufacturer's instructions. Fractionated antibody (Ab) was added to 0.5 × 10 6 cells for 45 minutes on ice, washed off and replaced by secondary antibody, affinity purified F(ab') 2 donkey anti-mouse IgG (H + L)-PE (minimal cross-reactivity with bovine, chicken, goat, guinea pig, hamster, horse, human, rabbit, or sheep serum proteins; Jackson Immunologicals, West Grove, PA). After an incubation of 45 minutes on ice, cells were washed, iced, and analyzed immediately using a Coulter Elite flow cytometer (Medical College of Ohio, Toledo, OH) or a Becton-Dickinson flow cytometer (University of Toledo, Toledo, OH). No detectable shift was seen with parental or transfected mIR 36.11.1 cells labeled with secondary antibody alone or SA-PE alone. Radioimmunoassay with the β Subunit of the Human IR AIRAs that were detected by flow cytometry analysis of Ig-binding to mIR were also detected by radioimmunoassay. Sera from NZO/HlLt and control mice were used to precipitate the in vitro transcribed/translated 35 S-Met-labeled β subunit of the human IR. Protein G was applied to precipitate IgG immunocomplexes and several washes were preformed to remove any unbound antigen. Results are expressed in cpm. A rabbit anti-human IR β subunit polyclonal antisera was used as positive control (Research Diagnostic, Flanders, NJ) and normal rabbit serum as negative control. Statistics Survival statistics were performed using Kaplan-Meier analysis (STATVIEW; Abacus Software, Palo Alto, CA). Significance of differences for other data were analyzed using ANOVA with Bonferroni/Dunn correction or the Student's t test with significant differences set at a P value of less than .05. Functionality of the mIR on the mIR36.11.1-Transfected CHO Clone In order to develop an assay for the detection of antibodies to the IR, we transfected CHO cells with an IR-coding plasmid. IR-expressing cells were screened by 125 I-insulin-binding assay [12,13], sorted by flow cytometry using FITC-insulin, cloned, and subcloned by limiting dilution until a stable IR-expressing CHO transfectant clone was isolated. Positive control HIR3.5 and mIR-transfected CHO subclone 36.11.1 (mIR36.11.1) cells bound 125 I-insulin to a comparable extent and significantly more effectively than untransfected CHO cells (P < .03) ( Figure 1A). 
The specific binding of 125 I-insulin to the IR on HIR3.5 and mIR36.11.1 cells was analyzed by competition with increasing concentrations of unlabeled insulin (10 −10 to 10 −6 M). As shown in Figure 1B (HIR3.5) and Figure 1C (mIR36.11.1), increasing concentrations of unlabeled insulin competed for the insulin binding site and prevented binding of a constant concentration of 125 I-insulin. 125 I-insulin binding was completely inhibited with 2 × 10 −6 M unlabeled insulin in both cell lines. Furthermore, based on Scatchard analysis, the HIR3.5 cell line expressed 0.5 × 10 6 IRs per cell as compared to the mIR36.11.1 clone that expressed 2 × 10 6 IRs per cell. In comparison to the flow cytometric profile of untransfected CHO cells, a rightward shift of fluorescence intensity was observed following labeling of the stable mIR36.11.1 transfectant subclone with biotinylated insulin and SA-PE, confirming cell surface expression of mIR ( Figure 1D). Specific binding of biotinylated insulin was confirmed by competitive binding in the presence of increasing concentrations of unlabeled insulin (10 −9 to 10 −3 M). Unlabeled insulin competed with biotinylated insulin for the insulin binding site on mIR36.11.1 cells ( Figure 1E). Binding of biotinylated insulin was completely inhibited in the presence of 5 × 10 −4 M unlabeled insulin ( Figure 1E). Overall, the results from Figure 1 show that mIR36.11.1 cells express IR ( Figure 1A and D) comparable to that expressed in the positive-control HIR3.5 cells ( Figure 1A and Scatchard results) and insulin specifically binds the IR expressed in mIR36.11.1 cells ( Figure 1C and E). Furthermore, the transfected IR in mIR36.11.1 was tyrosine phosphorylated in the presence of insulin (data viewable at the website http://mbc.pharm.utoledo.edu/mbc/mfm.html), indicating that IR in mIR36.11.1 cells is functionally capable of insulin signal transduction. AIRAs in NZO Mice The use of the IR-transfected CHO cell system permitted confirmation of the previous report of low-affinity IgM AIRAs present in NZO serum [6]. Figure 2A (extreme right hand bars) documents that the enriched IgM fraction (IgM concentration 9.7 µg/mL) from NZO/HlLt males at 16 to 17 weeks of age contained AIRAs. Most pathological autoimmune antibodies undergo a class switch from IgM to IgG isotypes. Therefore, supernatants from 25% SAS cuts were precipitated with 45% SAS and this fraction was passed over protein A columns to obtain samples enriched in IgG. The major protein peak of the acid eluate was collected in a single fraction for all samples except the sample from diabetic NZO/HlLt males, which was sufficiently large to collect in multiple fractions. Specific AIRA was detected in the diabetic 20-to 28-week NZO/HlLt male pool only in the leading fraction from the column that had a relatively low Ig concentration (e.g., IgM 1.3 µg/mL, IgG1 14 µg/mL) (Figure 2A). However, IgG AIRA was not present in agematched (20-to 28-week) normoglycemic NZO/HlLt male mice that had more than double the amount of IgG1 (36 µg/mL). Nor was IgG AIRA detected in any other tested pools when compared to background B6 mice (Figure 2A). When immunoprecipitate fractions (IgG enriched) were tested in a radioimmunoassay with 35 S-Met-labeled recombinant β chain of the human IR, only the NZO/HlLt serum obtained from diabetic males contained significant amounts of AIRAs ( Figure 2B). 
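The competition experiments described above (Figure 1B, 1C and 1E) amount to fitting a one-site competitive binding curve: bound tracer falls from a plateau toward background as the unlabeled insulin concentration sweeps past the IC50. A minimal SciPy sketch of such a fit is shown below; the data points are fabricated for illustration and the four-parameter logistic form is our assumption, not the analysis actually reported in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site_competition(log_conc, top, bottom, log_ic50):
    """Bound tracer (% of maximum) versus log10 of the unlabeled-insulin concentration (M)."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** (log_conc - log_ic50))

# Illustrative data: log10([insulin]/M) and bound tracer (% of maximum binding)
log_c = np.log10([1e-10, 3e-10, 1e-9, 3e-9, 1e-8, 3e-8, 1e-7, 1e-6])
bound = np.array([98, 95, 88, 70, 45, 22, 8, 3], dtype=float)

popt, pcov = curve_fit(one_site_competition, log_c, bound, p0=[100.0, 0.0, -8.0])
top, bottom, log_ic50 = popt
print(f"IC50 = {10 ** log_ic50:.2e} M")
```

A Scatchard transformation of saturation-binding data (bound/free versus bound) gives the receptor number per cell quoted in the text; the same curve_fit call, applied to the untransformed saturation data, is the modern equivalent of that analysis.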
(Diabetic 20- to 28-week NZO/HlLt males versus normal 20- to 28-week NZO/HlLt males, P < .0005; diabetic 20- to 28-week males versus 30-week C57BL/6J males, P < .0009). Therefore, the same results on IgG-enriched AIRAs in diabetic NZO/HlLt male mice were obtained in 2 independent assay systems. Furthermore, at least some NZO/HlLt AIRAs are cross-reactive, because AIRAs from diabetic NZO/HlLt males bound to the β chain of the human IR.

Figure 2 (legend, continued): IgG1 concentrations are given in parentheses on the x-axis label for each group tested, as an indicator of relative immunoglobulin concentrations. A 25% SAS IgM-enriched fraction from the 16- to 17-week-old male NZO pool was also tested and is labeled on the x-axis as NZO M 16-17 wks (IgM); the IgM concentration of this 25% SAS fraction was 9.7 µg/mL (see parentheses). The y-axis is the mean fluorescence intensity. The IgG results are representative of duplicate samples and the IgM results are representative of 2 experiments. (B) Immunoprecipitation of the 35 S-Met-labeled β subunit of the human IR by serum fractions from control female and male C57BL/6J mice and from female and male nondiabetic and diabetic NZO mice of various ages, as described in A above. Protein G was applied to precipitate IgG-enriched immunocomplexes. Results are expressed as cpm (mean ± SD) of at least 2 independent sets of experiments performed in triplicate. *Significant (P < .0009) difference between diabetic male NZO mice and the other groups (C57BL/6J mice as well as nondiabetic male and female NZO mice; Student's t test).

B Lymphocyte-Deficient NZL-Igh-6 null Mice Do Not Express AIRAs

In order to determine whether AIRAs are pathogenic, NZB/BlnJ mice congenic for a disrupted Igh-6 allele on chromosome 12 encoding the IgM heavy chain [9] were crossed with NZO/HlLt mice as described above to obtain NZL-Igh-6 null mutants and NZL wild-type mice that express NZO Igh-6 alleles. Analysis of serum IgM and IgG fractions obtained from NZL wild-type, NZL-Igh-6 null , and NZO males is shown in Figure 3A. As expected, B cell-deficient NZL-Igh-6 null mice expressed neither IgM nor IgG AIRAs, whereas wild-type NZL and NZO mice both expressed IgM, and to a lesser extent IgG, AIRAs (Figure 3A).

B-Lymphocyte Deficiency Retards, But Does Not Prevent, Diabetes in NZL Males

The genome of the NZL strain (selected for the absence of the targeted Igh-6 gene and of closely linked alleles deriving from 129S2/SvPas on chromosome 12) should derive approximately 97% of its genome from NZO and 3% from the NZB donor strain. The NZL genome was typed at F9 for 38 informative MIT microsatellite markers (Research Genetics, Huntsville, AL) and 116 informative single nucleotide polymorphisms (SNPs; KBioscience, Oxford, England). Although analytic gaps remain on some chromosomes, most of the genome scanned is NZO derived. NZB-derived genome was found only on Chromosome 9 (Mb 34 to 103) and Chromosome 18 (Mb 55 to 80). A map is viewable at http://www.jax.org/staff/leiter/labsite/type2.html. Consistent with the genome scan data, development of severe obesity in NZL males (and females, data not shown) was typical of that observed in NZO/HlLt males. NZL-Igh-6 null males, although markedly obese as well, weighed significantly less between 8 and 16 weeks of age (P < .01) (Figure 3B).

Figure 3 (legend fragment): There is no significant difference between the 2 genotypes for diabetes incidence; n = 14 for NZL and n = 19 for NZL-Igh-6 null males.
Both NZL wild-type and Igh-6 null mutant males developed maturity-onset hyperglycemia ( Figure 3C). At the 8-week sampling, the NZL wild-type mean was significantly higher than the mutant mean; however, the means did not differ at the later time points. Of the groups shown in Figure 3C, 13 of 14 wild-type males (93%) and 15 of 19 Igh-6 null males (79%) exhibited hyperglycemia >250 mg/dL at the 20-week sampling point. These frequencies are higher than the 50% frequency of diabetes observed in standard NZO/HlLt males [15]. Diabetes-free survival analysis showed no significant differences between the 2 genotypes (Figure 3D). Hence, elimination of humoral autoimmunity slightly retarded but did not prevent development of hyperglycemia. Although NZL wild-type mice of both sexes did not differ from standard NZO/HlLt mice in terms of rapid development of juvenile obesity and maturity-onset development of hyperglycemia, a major difference was observed in reproductive performance. Whereas 40% or fewer of NZO matings were productive, almost all NZL pair matings established at weaning were productive. Thus, the recombinant congenic strain exhibits reproductive vigor that makes this strain much easier to breed. The results in Figure 3 show that the absence of B lymphocytes and correspondingly AIRAs did not protect NZL-Igh-6 null mice from the development of diabetes. Therefore, AIRAs are not essential for the pathogenicity associated with insulin resistance and hyperglycemia in type 2 diabetes in the NZL mouse. DISCUSSION Previously published preliminary results showed that a small cohort of B lymphocyte-deficient NZL-Igh-6 null males did not transit to overt diabetes [7]. However, 2 factors indicated caution in interpretation of this finding. First, diabetes frequency in standard NZO/HlLt males is only 50%. Therefore, the small sample size of B lymphocyte-deficient males (n = 4) available for anlaysis was inconclusive. Second, these immunodeficient males were highly susceptible to infections, with all 4 dying of undiagnosed illness before 20 weeks of age. It has been our further experience maintaining the NZO and NZL strains that the males are susceptible to urogenital tract infections, sperm plugs, and pyelonephritis. We avoided these complications in the present study by supplementing drinking water with an antibiotic and maintaining breeding stock and aging males in pressurized, individually ventilated cages. Under these conditions, a group of 19 NZL-Igh-6 null males were produced for aging to 20 weeks without age-associated loss in body weight. With the larger group size, we failed to confirm the preliminary observation that B-lymphocyte deficiency significantly protected against eventual development of diabetes. A significant retardation of plasma glucose rise was observed at the 8-week sampling. Although mean plasma glucose level was also lower at the 12-and 16-week intervals, overall diabetic frequencies attained by 20 weeks were not significantly different between genotype (NZL 93%, NZL-Igh-6 null 79%). This result is consistent with an earlier finding in another mouse diabesity model, the C57BLKS/J-Lepr db mouse. Numerous studies had documented immune anomalies associated with this strain background susceptibility to diabesity; however, elimination of T-and Blymphocyte components by genetic means only retarded severity without preventing establishment of chronic hyperglycemia [16]. 
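The diabetes-free survival comparison summarized above (93% versus 79% hyperglycemic by 20 weeks, no significant genotype difference by Kaplan-Meier analysis) can be reproduced in Python with the lifelines package. The onset ages below are invented placeholders; only the group sizes (14 and 19) and the numbers of diabetic animals (13 and 15) are taken from the text, and the added Fisher's exact test on the 20-week incidence is our choice, not the statistic reported in the paper.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test
from scipy.stats import fisher_exact

# Placeholder onset ages (weeks); event flag 1 = plasma glucose > 250 mg/dL by that age, 0 = censored at 20 weeks
weeks_wt = np.array([10, 12, 12, 14, 14, 16, 16, 16, 18, 18, 20, 20, 20, 20], dtype=float)
event_wt = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0])                       # 13 of 14 diabetic
weeks_ko = np.array([12, 14, 14, 16, 16, 16, 18, 18, 18, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20], dtype=float)
event_ko = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0])        # 15 of 19 diabetic

kmf = KaplanMeierFitter()
kmf.fit(weeks_wt, event_observed=event_wt, label="NZL wild type")
print("median diabetes-free survival, wild type:", kmf.median_survival_time_)
kmf.fit(weeks_ko, event_observed=event_ko, label="NZL-Igh-6 null")
print("median diabetes-free survival, Igh-6 null:", kmf.median_survival_time_)

# Logrank test for a genotype difference in diabetes-free survival
res = logrank_test(weeks_wt, weeks_ko, event_observed_A=event_wt, event_observed_B=event_ko)
print("logrank p =", res.p_value)

# Simple 2 x 2 check of the 20-week incidence (13/14 vs. 15/19)
odds, p = fisher_exact([[13, 1], [15, 4]])
print("Fisher exact p =", round(p, 3))
```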
Similarily, in the current study, B-lymphocyte immunodeficiency only retarded, but failed to prevent, the ultimate development of diabesity in NZL-Igh-6 null males maintained in PIV caging with antibiotic supplement to prevent infections. Hence, we conclude that our initial finding of suppressed diabesity [7] likely reflected the compromised health status (including severe pyelonephritis) of the initial small group of males in the absence of special procedures to protect against infections. Interestingly, diabetes development in NZL males achieves the same high frequency as the (NZO × NON)F1 hybrid male [17]. This increased penetrance of diabesity, coupled with the improved reproductive performance despite comparable obesity, makes the new NZL strain (without the Igh-6 null mutation) an attractive substitute for the NZO/HlLt strain. The presence of low affinity IgM AIRAs in the sera of male NZO mice has been reported [6]. Using the mIR-transfected CHO cell system (mIR36.11.1) and flow cytometry, the presence of AIRAs in sera from young male NZO/HlLt was confirmed. In sera from 16-to 17-week-old NZO/HlLt males, when the diabetes incidence is about 10%, AIRAs were present in the IgM-enriched 25% SAS precipitate, but not in the IgG-enriched protein A eluate. This indicates that the initial appearance of AIRAs in NZO/HlLt males is primarily in the form of IgM. The presence of IgG AIRA activity in the sera of diabetic NZO/HlLt males by flow cytometry and radioimmunoassay, and its absence in mature normoglycemic NZO/HlLt males, appeared to correlate AIRA activity with the development of type 2 diabetes. In humans, AIRAs are associated with a rare type B syndrome of extreme insulin resistance. AIRAs block insulin binding and mimic the biological effects of insulin leading to insulin resistance and receptor desensitization [18,19]. Recently, it was shown in a human patient that AIRAs produced insulin resistance by desensitizing signaling through the IR via inducing a stable association of the IR with IR substrates 1 and 2 [20]. In summary, comparison of NZL/Lt males with and without the ability to produce AIRAs has established that although these autoantibodies may be useful as markers of the disease state, AIRAs are not essential for development of type 2 diabetes in this model.
5,702.2
2004-07-01T00:00:00.000
[ "Biology", "Medicine" ]
Rotation and Turbulent Instability in Peripheral Heavy Ion Collisions

In recent years fluid dynamical processes became a dominant direction of research in high energy heavy ion reactions. The Quark-gluon Plasma formed in these reactions has low viscosity, which leads to significant fluctuations and special instabilities or flow patterns. One has to study and separate these two effects, but this has not yet been done in a systematic way. This presentation discusses the most interesting collective flow instabilities, their possible ways of detection, and their separation from random fluctuations arising from the randomness of the initial configuration in the transverse plane.

nucleus should also be taken into account. Only a few models fully satisfy all conservation laws, and we will discuss the construction of realistic initial state configurations for the Global Collective flow component.

Figure 1. The initial state energy density distribution shown in the Reaction Plane, i.e., the [x, z] plane. This initial state is constructed based on a Glauber model, via fire-streaks, which extend longitudinally. This extension is slowed down by the attractive, chromo-electric, coherent Yang-Mills fields. The resulting string-rope tension is smaller when there are fewer color charges at the ends of the streaks, and this results in longer streaks and smaller energy density on the top (projectile) and bottom (target) sides. The central streaks, which are stopped more strongly, start a 1D Riemann scaling expansion. This initial state conserves energy, momentum, and angular momentum, and shows initial vorticity and longitudinal shear. From refs. [4, 5].

There are few realizations where the conservation laws are fully satisfied. Models generating the initial state from a realistic molecular dynamics or cascade model may reach states close to equilibrium, and the smooth average of such states can serve as a realistic 3+1D initial state. One can also construct a good analytic initial state by taking into account all symmetries, all conserved quantities, and their conservation laws. Such an initial state is described in [4, 5] and presented in Fig. 1.

Splitting of Global Collective Flow and Fluctuations

The high multiplicities in high energy heavy ion collisions have enabled us to study fluctuations and the distribution of the azimuthal harmonic components. For traditional reasons, the azimuthal distributions are parametrized in terms of cosine functions and a separate, event-by-event fitted Event Plane azimuth, which did not correlate with the Reaction Plane and had nontrivial correlations among the Event Planes of the different harmonic components. The non-fluctuating Global Collective (background) flow, if the event-by-event center of mass and Reaction Plane are identified (which can be done experimentally, see e.g. ref. [6]), can be written as a harmonic expansion with respect to the Reaction Plane, with coefficients depending on (y − y_CM) and p_t. Ψ_RP and y_CM can be determined experimentally event by event, as described in ref. [6]. Notice that the event-by-event c.m. fluctuates strongly in the beam direction, due to the large rapidity difference between the projectile and target, leading to y_CM ≠ 0, but also in the transverse plane, leading to a modified Ψ_RP. This second effect was taken into account in ref. [7], without referring to [6], but the stronger longitudinal fluctuations were not studied and were considered just as "dipole like initial fluctuations".
In contrast to the above formulation, fluctuating flow patterns are analysed by using the ansatz

dN/dφ ∝ 1 + 2 Σ_n v_n(y, p_t) cos[n(φ − Ψ_n^EP)],

which is adequate for exactly central collisions, where the Global Collective flow does not lead to azimuthal asymmetries. Here Ψ_n^EP maximizes v_n(y, p_t) in a rapidity range, and both φ and Ψ_n^EP are measured in the laboratory (collider) frame. If this formulation is used for peripheral collisions, the analysis is rather problematic, because Global Collective flow patterns and fluctuations get mixed up. This is actually also true in central, spherical or cylindrical events, but there the separation is more subtle and does not show up directly in the azimuthal flow harmonics. Still, in special model calculations fluctuations in the transverse plane were studied, and the Global Collective flow (background flow) was separated from fluctuations [8].

Here we show that the ansatz of the flow analysis can be reformulated in a way which makes the splitting or separation of the Global Collective flow from the Fluctuations easier, based on the symmetry requirements arising from the symmetries of peripheral heavy ion collisions. This formulation is also an ortho-normal series expansion for both φ-even and φ-odd functions. Using the relation cos(α − β) = cos α cos β + sin α sin β, we can write each term of the harmonic expansion in the form

v_n cos[n(φ − Ψ_n^EP)] = v_n cos(nΨ_n^EP) cos(nφ) + v_n sin(nΨ_n^EP) sin(nφ).

If we consider that the Reaction Plane angle Ψ_RP can also be determined event by event experimentally [6], we can introduce Φ_n^EP ≡ Ψ_n^EP − Ψ_RP and φ' ≡ φ − Ψ_RP, so that from these data we get Ψ_n^EP = Φ_n^EP + Ψ_RP. Here φ' is the azimuth angle with respect to the Reaction Plane. Now we can also define the new flow harmonic coefficients c_v_n ≡ v_n cos(nΦ_n^EP) and s_v_n ≡ v_n sin(nΦ_n^EP), and we get for the terms of the harmonic expansion

v_n cos[n(φ − Ψ_n^EP)] = v_n cos[n(φ' − Φ_n^EP)] = c_v_n cos(nφ') + s_v_n sin(nφ').   (1)

Thus we have reformulated the azimuthal harmonic expansion, originally given in terms of cosines and a separate Event Plane angle for each harmonic component, into both sines and cosines with the Reaction Plane as the reference plane, and with the corresponding new coefficients c_v_n = c_v_n(y − y_CM, p_t) and s_v_n = s_v_n(y − y_CM, p_t). These can be obtained directly from the measured data v_n, Ψ_n^EP and Ψ_RP. This form has the advantage that in peripheral collisions the Global Collective (non-fluctuating) flow component c_v_n has to be an odd function of (y − y_CM) for odd harmonics and an even function of (y − y_CM) for even harmonics. As the Global Collective flow has to be ±y symmetric in the transverse plane, all coefficients of the sin(nφ') terms should vanish, s_v_n = 0. These symmetry properties provide a possibility to separate the fluctuating and the global flow (background flow) components.²

When the new coefficients c_v_n = c_v_n(y − y_CM, p_t) and s_v_n = s_v_n(y − y_CM, p_t) are constructed, we can conclude that s_v_n can be due to fluctuations only. Furthermore, for the Global Collective flow, c_v_n(y − y_CM, p_t) must be an even (odd) function of (y − y_CM) for even (odd) harmonic coefficients. Due to the fluctuations this is usually not satisfied, and one has to construct the even (odd) combinations from the measured data. These then represent the Global Collective component, while the odd (even) combination represents the Fluctuating component.
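The reformulated coefficients are straightforward to build from the measured quantities: given v_n, Ψ_n^EP and the event-by-event Ψ_RP, one forms c_v_n = v_n cos(nΦ_n^EP) and s_v_n = v_n sin(nΦ_n^EP) with Φ_n^EP = Ψ_n^EP − Ψ_RP, and then splits c_v_n into its even and odd parts in (y − y_CM). The NumPy sketch below is a schematic illustration with randomly generated inputs; binning, acceptance corrections and event-plane resolution corrections used in a real analysis are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

n_harm = 2                                   # harmonic order n
y = np.linspace(-2.0, 2.0, 21)               # rapidity bins, already shifted by y_CM
v_n = 0.05 * y + 0.01 * rng.normal(size=y.size)   # toy v_n(y - y_CM) per bin
psi_ep = 0.3 + 0.1 * rng.normal(size=y.size)      # event-plane angle per bin (rad)
psi_rp = 0.25                                     # reaction-plane angle (rad)

phi_ep = psi_ep - psi_rp                     # Phi_n^EP = Psi_n^EP - Psi_RP
c_vn = v_n * np.cos(n_harm * phi_ep)         # coefficient of cos(n phi')
s_vn = v_n * np.sin(n_harm * phi_ep)         # coefficient of sin(n phi'); fluctuations only

# Symmetry split of c_vn in (y - y_CM): for odd n the Global Collective part is the odd
# combination, for even n the even combination; the complementary part is fluctuating.
c_rev = c_vn[::-1]                           # c_vn evaluated at -(y - y_CM), symmetric binning assumed
even_part = 0.5 * (c_vn + c_rev)
odd_part = 0.5 * (c_vn - c_rev)
global_part = odd_part if n_harm % 2 else even_part
fluct_part = even_part if n_harm % 2 else odd_part
print(global_part[:3], fluct_part[:3])
```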
This separation provides an upper limit for the magnitude of the Global Flow component, because the fluctuations may in some events show the same symmetries as the Global Collective flow. On the other hand, the Fluctuating component, s_n^v, provides an upper limit as well, as this component cannot be caused by the Global Collective flow. (In ref. [8], for the longitudinal motion, the Bjorken scaling flow approximation was assumed uniformly, which is inadequate to describe the odd (y − y_CM) components; thus that analysis is limited in its possibility of separating the two components. This is already included in the ansatz of the assumed distribution function, δf_i in eq. (2.9), where longitudinal fluctuations were excluded and only transverse fluctuations were studied.) A last essential guidance may be given by the condition that the fluctuations must have the same magnitude for sine and cosine components as well as for odd and even rapidity components. Evaluating the experimental results this way may provide a better insight into both types of flow patterns. Furthermore, this can also help judge theoretical model results and the theoretical assumptions regarding the initial states. Other experimental methods, like two particle correlations [9] or polarization measurements [10], may also take advantage of this splitting of the flow pattern components. The Initial State As we have shown, the Initial State can be constructed in a way such that all conservation laws are satisfied and no simplifying assumptions are used which would violate the conservation laws. In addition, there are other principles, like causality, which should also be satisfied by the initial state. A frequent simplification in x, y, η, τ coordinates is to assume uniform longitudinal Bjorken scaling flow (this leads to a simple separable initial state distribution function), and in order to satisfy the angular momentum conservation at different transverse points the energy density or mass distribution is made such that on the projectile side a substantial part of the mass is at rapidities exceeding the target and projectile rapidity [11][12][13]. In most cases this leads to acausal distributions where part of the matter is situated beyond the target and projectile rapidities. This acausality is corrected by Karpenko et al. [14] by cutting the distributions at the target and projectile rapidities. Still, the attractive chromo field is not taken into account in this approach, which would reduce the initial limiting rapidities by up to 2.5 units on each side [15,16]. Furthermore, the Bjorken scaling flow approach eliminates any possibility for initial shear flow and vorticity, which is a dominant source of simple flow patterns and of strong and visible instabilities in classical physics, like rotation and the turbulent Kelvin-Helmholtz Instability. Apart from the semi-analytic initial state model mentioned in the introduction, other initial state models exist which satisfy all conditions of a realistic initial state. First of all, initial state molecular dynamics and multiparticle cascade models which satisfy all conservation laws, boundary conditions and causality will provide a realistic Global Collective initial state as the average of many such realistic events. Also, analytic models can be constructed based on these principles, which are different from the one mentioned in the introduction.
The initial uniform Bjorken scaling flow is maintained during the fluid dynamical development, so that the lack of shear flow persists in these solutions. It follows that no viscous dissipation takes place in the longitudinal direction, which makes these model configurations anisotropic and not very reliable. New Global Collective Flow Patterns As mentioned in the introduction, in collisions with finite impact parameter at high energies we have a large angular momentum, which can be as high as J = 10^6 at LHC. The angular momentum is conserved, but due to the explosive expansion of the system the angular velocity of the participant system is rapidly decreasing, and thus the local rotation, the vorticity, decreases with time. It depends on the balance between the expansion and the angular momentum whether the rotation will manifest itself in observable quantities at the Freeze Out (FO). Figure 2. The rotation during the fluid dynamical evolution is indicated by the red arrows pointing to the initial central and corner points on the surface. The motion of these points shows the rotation of the system. The fluid dynamical initial state is preceded by a pre-equilibrium Yang-Mills longitudinal field theoretical model, which took 6.25 fm/c. Thus after 2.00 fm/c of fluid dynamical evolution the length of the matter is 8.25 fm (l.h.s.). The configuration on the r.h.s. is at 8 fm/c of fluid dynamical evolution, which is 14.25 fm/c after the initial touch of the two nuclei. This is just after the estimated freeze out time of 10-12 fm/c. Based on ref. [17]. Due to the widespread use of the uniform longitudinal Bjorken scaling flow in the initial condition, the rotation did not occur in fluid dynamical model calculations, and it was studied only recently. It was first noticed in ref. [17], see Fig. 2. This rotation acts against the 3rd flow component or antiflow, and may decrease the measured directed flow, or even reverse the direction from antiflow to directed flow. According to the calculations [17] the v_1 was expected to peak at positive rapidities, but this prediction is strongly dependent on the competition between the rotation and the expansion. The small amplitude of v_1 is difficult to identify in the strongly fluctuating background without identifying the event-by-event center of mass and Reaction Plane. The fluid dynamical calculations with the same method showed for the first time the possibility of the turbulent Kelvin-Helmholtz Instability [18], see Fig. 3. Stability estimates confirmed the possibility of the occurrence of this instability, which could also be obtained in a simple analytic model [19]. Detecting the New Flow Patterns via Polarization The rotation and the turbulence have a small effect on the directed flow, which is weak at RHIC and LHC energies anyway, so alternative ways of detection should be considered. The angular momentum, in the case of distributed shear flow, shows up in local vorticity. The simplest classical expression of the vorticity in the reaction plane, [x-z], is ω_zx = (1/2)(∂v_x/∂z − ∂v_z/∂x), where the x, y, z components of the 3-velocity are denoted by v_x, v_y, v_z respectively. Figure 3. The configuration on the r.h.s. is at 6 fm/c of fluid dynamical evolution, which is 12.25 fm/c after the initial touch of the two nuclei. This is just around the estimated freeze out time of 10-12 fm/c. Based on ref. [18].
In 3-dimensional space the vorticity can be defined as ω = (1/2) ∇ × v. For the relativistic case, the vorticity tensor ω_{μν} is defined as ω_{μν} = (1/2)(∇_ν u_μ − ∇_μ u_ν), where for any four vector q_μ the quantity ∇_α q_μ ≡ Δ^β_α ∂_β q_μ = Δ^β_α q_{μ,β} and Δ_{μν} ≡ g_{μν} − u_μ u_ν. The relativistic generalization of vorticity leads to an increase of the magnitude of the vorticity [20]. The local vorticity decreases with the expansion, but it is still significant at Freeze Out in peripheral collisions due to the huge initial angular momentum. The local vorticity reaches 3 c/fm in the reaction plane [20], which is more than an order of magnitude larger than the vorticity in the transverse plane arising from random fluctuations [21]. This vorticity may lead to two other measurable consequences. According to the equipartition principle, different degrees of freedom carry the same amount of energy, and the same applies to angular momentum. Here the local orbital rotation and the spin of the particles may equilibrate with each other. If equilibrium is reached by freeze out, the final polarization should have the same direction and magnitude as the local vorticity. Interestingly, high temperature acts against polarization, so the polarization is governed by the so-called thermal vorticity, where, instead of the four velocity u^μ, the inverse temperature four-vector β^μ = u^μ/T is used to determine the thermal vorticity [10]. If β^μ is measured in natural units, the thermal vorticity becomes dimensionless. For the polarization studies it is of utmost importance to identify the proper global directions in a collision event-by-event, see Fig. 4. Figure 4. The [x, z] Reaction Plane, where the direction of the Projectile and Target matter is indicated. The arising angular momentum, J, points into the −y direction. When the event-by-event center of mass and the reaction plane are identified, this angular momentum is divided between orbital rotation and spin, i.e. polarization. The polarization is transverse to the motion of the Λ and Λ̄ particles and has the same direction as the angular momentum, J. Thus, this polarization may be detected in Λ and Λ̄ particles which are emitted into the ±x directions. From [10]. Without identifying the center of mass rapidity, the Reaction Plane, and the projectile and target sides of the reaction plane, the detection of the angular momentum and polarization is not possible, and earlier measurements at RHIC, where all azimuth angles were averaged over, gave results where the measured polarization was consistent with zero. The Λ particle is well suited for measuring its polarization because its dominant decay mode is Λ → p + π−, and the proton is emitted in the direction of the polarization. Notice that due to the thermal and fluid mechanical equilibration process the polarizations of Λs and Λ̄s are the same. This distinguishes the process from electro-magnetic polarization mechanisms. The thermal vorticity projected to the Reaction Plane is shown in Fig. 5. The thermal vorticity is more pronounced than the standard vorticity at the external edges of the matter, where the temperature is lower. The thermal vorticity is somewhat larger at RHIC, where the amount of data and the available detector acceptance are larger. Figure 5. The thermal vorticity of the matter arising from a fluid dynamical calculation for two different beam energies. The thermal vorticity is inversely proportional to the temperature, which increases faster with beam energy than the local vorticity does. Thus the thermal vorticity at RHIC is larger.
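As a small illustration of the classical reaction-plane expression quoted above (not taken from the papers cited here), the sketch below evaluates ω_zx = (1/2)(∂v_x/∂z − ∂v_z/∂x) on a velocity field sampled on a regular [x, z] grid; the grid layout and spacings are assumptions.

```python
# Sketch: classical vorticity in the reaction plane from a gridded velocity field.
import numpy as np

def vorticity_zx(v_x, v_z, dx, dz):
    """v_x, v_z: 2-D arrays of velocity components on an (x, z) grid,
    axis 0 = x, axis 1 = z, with uniform spacings dx and dz."""
    dvx_dz = np.gradient(v_x, dz, axis=1)   # d v_x / d z
    dvz_dx = np.gradient(v_z, dx, axis=0)   # d v_z / d x
    return 0.5 * (dvx_dz - dvz_dx)
```

The same finite-difference pattern extends to the full 3-D curl, while the relativistic and thermal vorticities additionally require the projector Δ and the temperature field.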
Also the side regions are cooler, and this also increases the thermal vorticity, which enhances the polarization due to equipartition. Based on ref. [10]. The resulting polarization is shown in Fig. 6. Thus for this measurement the determination of the proper directions of the collision axes is vital. The polarization should be measured for Λs emitted into the ±x directions, which will then be polarized in the −y direction. This thermal and fluid mechanical polarization would not exist if the source, the participant system in heavy ion reactions, did not have a significant vorticity. This is realized in peripheral heavy ion reactions, which have a high initial angular momentum. Unfortunately, even some 3+1D fluid dynamical calculations assume oversimplified initial states where the initial shear and vorticity vanish, and these are not able to show these effects. Detecting the New Flow Patterns via Two Particle Correlations The detection described in the previous section was sensitive to the local vorticity. Two particle correlation measurements are sensitive to the integrated emission from the freeze out space-time zone, the so-called "homogeneity" region, where the dominant emission is directed toward the detector, i.e. in the (out)-direction. Recently we proposed the Differential Hanbury Brown and Twiss method to study the rotation of the source via two particle correlations [9,22]. The method is based on a simple observation: if we have a spherically symmetric or cylindrically symmetric source with a rotation axis, or any source which is left/right symmetric with respect to a given "out-direction" of momentum k, then we can construct the usual two particle correlation function C(k, q) with momenta p_1 = k + q/2 and p_2 = k − q/2. This correlation function does not depend on the direction of k for static, spherically or cylindrically symmetric sources, and gives the same value for two momentum vectors, k_+ and k_−, which are tilted to the left/right by the same tilt angle, in case of a source which is left/right symmetric with respect to k. Even if the source is not static but has local motion with local velocities, the correlation functions have the same value if the velocity of motion is radial, i.e. points in the local out-direction. On the other hand, this is not true if the local velocities have a "side" component, i.e. when the source is rotating. This can be tested by the introduction of the Differential Correlation function, ΔC(k, q), which is defined as ΔC(k, q) = C(k_+, q) − C(k_−, q). Now let us assume that the rotation axis is the y-axis, the momentum vector k points into the x direction, and the tilted vectors satisfy k_{+x} = k_{−x} and k_{+z} = −k_{−z}. In a heavy ion reaction z could be the beam direction and the x, z plane is the reaction plane. E.g. for central collisions or spherical expansion, ΔC(k, q) would vanish! It would become finite if the rotation introduces an asymmetry. We have studied the differential ΔC(k, q)-function, and for symmetric sources its amplitude increases with the speed of rotation [22], as expected. The DCF for the original fluid dynamical configuration with the rotation included is different from the one obtained from the rotation-less configuration, see Fig. 8. At α = −11 degrees the correlation function is distinctly different from the rotation-less one and has a minimum of −0.085 at q = 0.63/fm.
Unfortunately this is not possible experimentally, so the direction of the symmetry axes should be found with other methods, like global flow analysis and/or azimuthal HBT analysis. To study the dependence on the angular momentum, the same study was performed for lower angular momentum also, i.e. for lower energy (RHIC) Au+Au collisions at the same impact parameter and time. We identified the angle where the rotation-less DCF was minimal, which was α = −8 degrees, less than the deflection at the higher angular momentum. The original, rotating configuration was then analyzed at this deflection angle, and a minimum of −0.046 appears at q = 0.76/fm. Thus, the magnitude of the DCF at the angle of the symmetry axis increased by nearly a factor of two with the higher angular momentum. Thus, the method is straightforward for symmetric emission objects, while for a general Global Collective Flow pattern one has to extract the shape symmetry axis with other methods. There are several methods for this task, and it will take some experimental tests to determine which of these methods is the most adequate for the task.
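A minimal sketch of the DCF construction described above follows. It is not the implementation of refs. [9,22]; the two-particle correlation function C(k, q) is assumed to be supplied by the user (for example from a model emission function), and the tilt geometry follows the left/right construction around the out-direction.

```python
# Sketch: Differential Correlation Function Delta C(k, q) = C(k_plus, q) - C(k_minus, q).
import numpy as np

def dcf(corr, k, q, tilt_deg):
    """corr(k_vec, q): user-supplied two-particle correlation function.
    k points along +x in the reaction plane; the rotation axis is y, so the tilted
    vectors satisfy k_{+x} = k_{-x} and k_{+z} = -k_{-z}."""
    a = np.radians(tilt_deg)
    k_plus = np.array([k * np.cos(a), 0.0, k * np.sin(a)])
    k_minus = np.array([k * np.cos(a), 0.0, -k * np.sin(a)])
    return corr(k_plus, q) - corr(k_minus, q)
```

For a non-rotating, left/right symmetric source this quantity vanishes by construction, so any statistically significant deviation from zero signals the "side" velocity component discussed in the text.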
5,098.8
2014-04-29T00:00:00.000
[ "Physics" ]
A New Forecasting Approach for Oil Price Using the Recursive Decomposition–Reconstruction–Ensemble Method with Complexity Traits The subject of oil price forecasting has attracted a great deal of interest from academics and policymakers in recent years due to the widespread impact that it has on various economic fields and markets. Thus, a novel method based on decomposition–reconstruction–ensemble for crude oil price forecasting is proposed. Based on the Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) technique, in this paper we construct a recursive CEEMDAN decomposition–reconstruction–ensemble model considering the complexity traits of crude oil data. In this model, the steps of mode reconstruction, component prediction, and ensemble prediction are driven by complexity traits. For illustration and verification purposes, the West Texas Intermediate (WTI) and Brent crude oil spot prices are used as the sample data. The empirical result demonstrates that the proposed model has better prediction performance than the benchmark models. Thus, the proposed recursive CEEMDAN decomposition–reconstruction–ensemble model can be an effective tool to forecast oil price in the future. Introduction Crude oil, which is the world's most important chemical raw material and strategic resource, ensures the normal operation of the national economy and people's livelihoods, and it is a critical support for the development of the entire modern industrial society. Crude oil plays an important role in the global economy, political situation, and military strength of various countries as a basic energy source. As a result, changes in crude oil prices have sparked widespread concern worldwide. Because of the interactive impact of various factors such as the global economy, exchange rate changes, speculative behavior, and geopolitics, the oil price always exhibits non-linearity, non-stationarity, and high complexity, which poses significant challenges to crude oil price forecasting. In the literature, various linear and nonlinear models have been used separately or in combination to make forecasts (see, e.g., Buyuksahin & Ertekin [1]). Linear methods assume that a given time series is regular with no sudden movements. This is challenging because sudden movements with variation and extreme values are normal in many real-world time series such as financial data and renewable energy data (see, e.g., Xu et al. [2]). Numerous nonlinear time series prediction methods (see, e.g., Kantz & Schreiber [3]) have been proposed in the literature to capture these nonlinearities. Conventional linear methods can better approximate time series without high volatility and multicollinearity. Zhang et al. [4] and Elman [5] show that nonlinear methods have advantages when modeling complex structures in time series with high accuracy. No universal model is suitable for all circumstances because each type of method outperforms others in different domains. Individually capturing general patterns in the time series data using only one linear or nonlinear model appears to be difficult (see, e.g., Khashei & Bijari [6]). To overcome this limitation, Taskaya & Casey [7] proposed hybrid techniques with both linear and nonlinear models. The hybrid methodology is a synthesis of various prediction methods. It is usually a combination of traditional econometric models and AI algorithms (see, e.g., Wang et al. [8]) or a combination of different econometric models or AI algorithms.
In addition to the hybrid methodology, the ensemble learning algorithm is an important paradigm to overcome the limitations of single methods. Both the hybrid methodology and the ensemble method consider the shortcomings of single models. With the divide-and-conquer strategy (see, e.g., Yu et al. [9] and Dong et al. [10]), the decomposition-ensemble learning methods are an important branch of ensemble learning paradigms. Because it takes a lot of time to make individual predictions for all decomposed components, the number of decomposed components must be reduced. Yu et al. [11] first proposed a decomposition-ensemble model with a reconstruction step that considered some data characteristics. Recently, Yu & Ma [12] introduced a memory-trait-driven reconstruction method into the decomposition and ensemble framework. Inspired by their work, a new model based on decomposition-ensemble learning with a reconstruction step that considers the data complexity traits is used to explore the price predictions of crude oil. In this model, all steps of mode reconstruction, component prediction, and ensemble prediction are driven by complexity traits. First, a decomposition-ensemble approach is used to decompose the oil price time series. Second, the complexity of these decomposed components is separately computed. Then, each component can be identified based on its complexity ranking from high to low. Different components are predicted through appropriate models. Finally, the forecasts for the different components can be aggregated to produce the final prediction output. The contributions of the article are as follows: i. A novel decomposition-reconstruction-ensemble method is proposed with clustering capability to capture the inner complexity traits. The performance of the proposed recursive CEEMDAN for different complexity traits of data is tested and validated using popular single models and several decomposition-reconstruction-ensemble models. ii. The proposed recursive CEEMDAN technique is used to improve the performance of the CEEMDAN decomposition method by recursively decomposing the rapidly fluctuating components into less volatile sub-components. iii. In the proposed recursive CEEMDAN decomposition-reconstruction-ensemble forecasting methodology, the reconstruction method, prediction method, and ensemble method are determined by the complexity traits of the crude oil data themselves. The remainder of this paper is organized as follows. Section 2 compares related works. Research data and the decomposition-reconstruction-ensemble method are discussed in Section 3. Section 4 presents the error measures used to validate the prediction models. Some main findings are illustrated by comparing the results of the proposed model to the benchmark models. The prediction performance of the proposed model is further discussed in Section 5. Section 6 summarizes this paper and provides the improvement direction of future research. Forecasting by Statistical Models Statistical models, which are also known as random time series models, include exponential smoothing (ES) (see, e.g., Kourentzes et al. [13]), the auto-regressive integrated moving average (ARIMA) model (see, e.g., Guo [14]), the generalized auto-regressive conditional heteroskedasticity (GARCH) model (see, e.g., Zhang et al. [15]), the hidden Markov model (HMM) (see, e.g., Isah & Bon [16]), and vector auto-regression (VAR) (see, e.g., Mirmirani & Li [17]).
For example, Zolfaghari & Gholami [18] showed that ARIMA models had a good forecasting impact on international crude oil prices. To model the mean and variance of the log returns of crude oil prices, Zhu et al. [19] introduced a hidden Markov model to capture the behavior of random events and subjective factors in time series fluctuations. Using a VAR model, Drachal [20] applied the global economic policy uncertainty index, production, the volatility index, and crude oil volatility to predict crude oil prices. Despite their simplicity and ease of implementation, these statistical models cannot directly process time series with nonlinear characteristics due to their linear correlation structure. Meanwhile, as soft computing technology has advanced, many different intelligent algorithms have been developed and widely used in various data predictions. However, conventional statistical and econometric models are constrained by stringent theoretical assumptions, including linearity, stationarity, and dependence on specific distributional properties. As a result, these methods may encounter limitations in accurately forecasting crude oil price time series that are non-stationary, nonlinear, and characterized by complex dynamics. Forecasting by Artificial Intelligence and Machine Learning Methods A crucial presumption in the application of econometric models is that the time series data under study are a linear process. However, crude oil prices do not satisfy this requirement, which can result in less accurate forecasting outcomes. In contrast, various nonlinear intelligence and machine learning methods (e.g., the support vector machine (SVM) proposed by Yu et al. [21] and the extreme learning machine (ELM) proposed by Wang et al. [22]) have emerged to satisfy these requirements, and they can be applied to time series prediction tasks. Moreover, deep learning is gaining popularity in machine learning, since conventional machine learning techniques employ shallow structures. Recently, artificial neural networks (ANNs) (see, e.g., Jammazi & Aloui [23]), back-propagation neural networks (BPNNs) (see, e.g., Khashei & Bijari [6]), long short-term memory (LSTM) networks (see, e.g., Urolagin et al. [24]), and convolutional neural networks (CNNs) (see, e.g., Li et al. [25]) can model time series with nonlinear characteristics with high prediction precision. For example, Wang & Wang [26] created a crude oil price forecasting model that utilized a random Elman recurrent neural network, and the predictive power of the model was analyzed in comparison to other models. Yu et al. [27] incorporated the cutting-edge AI method of EELM into an ensemble model formulation to forecast crude oil prices, and findings showed that the suggested unique ensemble learning paradigm statistically outperformed all investigated benchmark models. However, these models have some drawbacks, including local minima, over-fitting, and the need for a large sample size. While it has been demonstrated that ensemble models can outperform individual models, they are still susceptible to issues such as overfitting and being trapped in local extrema, which can limit their ability to generalize effectively. Forecasting by Hybrid Models To overcome the limitations of the aforementioned techniques, hybrid models have been proposed. It is not uncommon for researchers to employ a combination of econometric models and artificial intelligence algorithms, or even a combination of different artificial intelligence algorithms.
For example, Cheng et al. [28] predicted crude oil prices in 2018 using the vector error correction and nonlinear auto-regressive neural network (VEC-NAR) model. To enhance technical indicator-based crude oil price forecasting, He et al. [29] implemented a unique hybrid forecast approach using scaled principal component analysis (s-PCA). In-sample and out-of-sample performance comparisons revealed that the s-PCA model was superior to the compared models. Wang & Fang [30] developed a novel combination of the FNN model and a stochastic time effective function for crude oil price forecasting, i.e., the WT-FNN model, and the findings revealed that the WT-FNN model had the best predictive impact. Zhang et al. [15] offered a novel hybrid technique to predict crude oil prices based on the least square support vector machine, particle swarm optimization, and the GARCH model. The experimental findings demonstrated that this approach could accurately estimate crude oil prices. To predict crude oil prices accurately, Wang et al. [31] employed a Markov model to implement the GARCH-MIDAS model for both short-term and long-term state conversion, but they discovered that short-term predictions were more accurate. Like the hybrid approach, our proposed decomposition-ensemble method also takes into account the shortcomings of single models. The biggest difference is that ensemble learning employs several identical individual methods for ensemble prediction. Forecasting by the Decomposition-Ensemble Learning Method Recent studies have established a novel ensemble forecasting approach called the decomposition ensemble to manage the challenge of forecasting nonlinear time-series data. Similar to the hybrid method, this approach considers the limitations of single models. Ensemble learning employs multiple identical single techniques for ensemble prediction, whereas the hybrid model employs multiple distinct single models for combination prediction. Several significant studies have applied this approach to oil price prediction. For example, Li et al. [25] and Li et al. [32] decomposed the monthly crude oil futures price data into multiple modes using VMD. Then, they forecast each mode using an SVM that was optimized by a genetic algorithm and a BPNN that was optimized by a genetic algorithm. Using the Akaike information criterion (AIC) to determine a reasonable lag, Ding [33] proposed a decomposition ensemble model using ensemble empirical mode decomposition (EEMD) for crude oil forecasting. Yu et al. [9] used empirical mode decomposition (EMD) to decompose crude oil prices and the feedforward neural network (FNN) to forecast the components. Zheng et al. [34] recently proposed a method combining an empirical mode decomposition algorithm, quadratic surface support vector regression, and the autoregressive integrated moving average method for stock index and futures price forecasting. The study obtained better forecasting results than the direct forecasting model. However, the existing literature on constructing the decomposition-ensemble framework has some limitations. It primarily focuses on selecting decomposition-reconstruction-prediction-ensemble methods based on the characteristics of the model, rather than taking into account the characteristics of the data themselves. Therefore, the method proposed in this paper has the ability to select appropriate decomposition methods, reconstruction methods, prediction methods, and ensemble methods based on the specific traits of the data.
Recursive Decomposition Method In this paper, we propose a recursive CEEMDAN-based technique for time series forecasting, which attempts to extract more stable sub-components from rapidly changing components to improve the prediction accuracy. The architecture of the proposed method is given in Figure 1. The proposed method recursively calls the CEEMDAN decomposition technique (see, e.g., Torres et al. [35]) for each component until it satisfies one of the following two conditions: • The component becomes less complex than the given series. • The correlation between the component and the given series exceeds a specified threshold. The first condition takes into account the sample entropy values of each component. According to the method proposed by Richman & Moorman [36], the sample entropy value is greater for more complicated components. Therefore, the more complicated components are decomposed again into their own sub-components via CEEMDAN in the algorithm. The second condition employs the Pearson correlation (see, e.g., Hauke & Kossowski [37]) to determine the similarity between the specified component and the series. High correlation is a termination criterion for this recursive method: recursive decomposition is halted if a sub-component is substantially correlated with its parent component, regardless of its fluctuation rate. Then, based on the recursive CEEMDAN algorithm, different decomposed components of the original data and their sub-components are obtained. The decomposed components are identified as low-complexity components when they have smaller complexity traits than the original time series after the first decomposition. The decomposed components with larger complexity traits than the original time series will be recognized as high-complexity components when they are recursively decomposed only once. Then, other decomposed components are recognized as medium-complexity components, which implies that these components have larger complexity traits than the original time series and will be recursively decomposed two or more times. Performance Evaluation Criteria To verify the validity of a forecast, the model outcomes are assessed. Numerous experiments are conducted to evaluate the forecasting performance of the proposed hybrid model and the reference models. In this paper, we use three popular accuracy measures, reported below as the MAE, RMSE, and MAPE; in their definitions, d_t and O_t are the real and predicted values at time t (t = 1, 2, ..., N), N is the number of samples in the testing data set, and d̄ and Ō are the average values of the actual and predicted values, respectively. In addition, a Diebold-Mariano (DM) test (see, e.g., Yu et al. [38]) is chosen to prove the superiority of the proposed model. Furthermore, popular single models and several decomposition-reconstruction-ensemble models are built as benchmark models to test the effectiveness of the proposed model. In detail, ES is constructed as the single benchmark model for the traditional econometric models. For AI models, SVR, ELM, and ANN are developed as single benchmark models. As benchmarks for the decomposition-reconstruction-ensemble models, four similar decomposition-reconstruction-ensemble frameworks with different basic prediction models are built. Research Data In this paper, the weekly WTI and Brent crude oil spot prices from the US Energy Information Administration (EIA) (http://www.eia.doe.gov/ (accessed on 11 August 2022)) were selected as sample data.
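The sketch below illustrates the recursive step with the two stopping rules just described. It is a simplified reading of the method, not the authors' code: it assumes the PyEMD package for CEEMDAN (pip install EMD-signal), and the correlation threshold, maximum depth, and sample-entropy parameters are illustrative choices rather than the values used in the paper.

```python
# Sketch: recursive CEEMDAN with sample-entropy and correlation stopping rules.
import numpy as np
from PyEMD import CEEMDAN  # assumed dependency

def sample_entropy(x, m=2, r_factor=0.2):
    """Plain SampEn(m, r) in the spirit of Richman & Moorman; r scaled to the std."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        c = 0
        for i in range(len(templ)):
            d = np.max(np.abs(templ - templ[i]), axis=1)
            c += np.sum(d <= r) - 1          # exclude the self match
        return c
    a, b = count(m + 1), count(m)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def recursive_ceemdan(series, corr_thr=0.9, depth=0, max_depth=3):
    """Re-decompose a component only while it is more complex than its parent
    and not strongly correlated with it; returns (component, depth) leaves."""
    if depth >= max_depth:
        return [(series, depth)]
    parent_se = sample_entropy(series)
    leaves = []
    for imf in CEEMDAN()(series):
        corr = abs(np.corrcoef(imf, series)[0, 1])
        if sample_entropy(imf) <= parent_se or corr >= corr_thr:
            leaves.append((imf, depth + 1))
        else:
            leaves.extend(recursive_ceemdan(imf, corr_thr, depth + 1, max_depth))
    return leaves
```

The recorded depth of each leaf is what later distinguishes high-, medium-, and low-complexity components in the reconstruction step.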
The sampling period was from 1 January 2010 to 31 December 2021, and there are 627 observations in total. The training set accounts for 70% of the total sample size, which includes 418 observations, and the test set accounts for 30% of the total sample size, which includes 209 observations. The test data set is used to evaluate how well the proposed model performs compared to the benchmark models. Table 1 displays these initial crude oil price series with their statistical measurements, which include the minima, maxima, means, and standard deviations. The Anderson-Darling test rejects the null hypothesis of a Gaussian distribution, which is consistent with the nonzero skewness and positive excess kurtosis of the time series data. Overall, the chosen observations are not stationary, and the model construction should consider the necessary data preprocessing. Experimental Result Analysis First, the original time series of WTI and Brent crude oil prices are decomposed by CEEMDAN, as shown in Figures 2 and 3. In particular, the price series of WTI and Brent crude oil are decomposed into 8 IMF components and one residual term. Each of the intrinsic mode functions can be categorized into high and low frequencies, with each component showcasing unique characteristics. The decomposition analysis reveals that the residue component exhibits noteworthy long-term trends, while sub-components 1 to 8 are stationary or nearly stationary, as illustrated in Figures 2 and 3. However, the effectiveness of the decomposition process in improving crude oil price forecasting performance remains an open topic for further discussion in subsequent sections. In the second step, component reconstruction is performed to reduce the computational time complexity. According to Tables 2 and 3, different decomposed modes have different degrees of complexity, and the complexity traits of each decomposed mode show a downward trend with an increasing time scale. Subsequently, based on the recursive CEEMDAN algorithm, all components are classified as high-complexity, medium-complexity, or low-complexity components. More concretely, IMFs and residual components are identified as low-complexity when they have smaller complexity traits than the original time series after the first decomposition. The IMFs with larger complexity traits than the original time series will be recognized as high-complexity components when they are recursively decomposed with only one step. Then, other IMFs are recognized as medium-complexity components. These components have larger complexity traits than the original time series, and they will be recursively decomposed with two or more steps. Tables 2 and 3 report the test results of the complexity traits for each decomposed component of WTI and Brent crude oil prices, respectively. Next, it is necessary to select a suitable method to predict the different components. According to the complexity test results, the nine components are reduced to six components after the reconstruction. In addition, the complexity traits of the decomposition components will change when the components change. Based on the reconstruction method, three kinds of components with different degrees of complexity, namely the high-complexity component, medium-complexity component, and low-complexity component, can be obtained. Then, the selection of suitable predictive methods driven by complexity traits is achieved through a trial-and-error approach.
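A small sketch of the complexity-driven classification just described is given below; it reuses the hypothetical sample_entropy helper from the previous sketch and is not the paper's implementation. The paper additionally merges components within a class (reducing nine components to six); here the components are only grouped.

```python
# Sketch: classify first-level components by complexity relative to the original series.
import numpy as np

def classify_components(series, imfs, extra_passes):
    """imfs: first-level CEEMDAN components; extra_passes[i]: how many further
    recursive decompositions component i required (0, 1, or >= 2)."""
    base_se = sample_entropy(series)          # hypothetical helper from above
    groups = {"low": [], "high": [], "medium": []}
    for imf, passes in zip(imfs, extra_passes):
        if sample_entropy(imf) < base_se:
            groups["low"].append(imf)         # simpler than the original series
        elif passes == 1:
            groups["high"].append(imf)        # more complex, one extra pass
        else:
            groups["medium"].append(imf)      # more complex, two or more passes
    return groups
```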
Tables 4-9 present the selection results for predicting the different decomposed components of WTI and Brent crude oil prices with complexity traits. Tables 4-6 show the performance values of different combination models such as X-SVR-SVR, SVR-X-SVR, and SVR-SVR-X. For example, in the X-SVR-SVR model (see Table 4), the second and third SVR methods indicate that the medium-complexity component and low-complexity component use the SVR model, while X tries four different methods (i.e., ES, SVR, ELM, ANN) to find a suitable model for the high-complexity component. For computational convenience, simple addition (ADD) is temporarily employed as the ensemble method while investigating the match between the reconstructed components and the prediction methods. Based on the aforementioned explanations, Table 4 presents the experimental findings regarding the high-complexity components. For the parameter of the ES, a simple first-order ES with a smoothing constant is chosen. The smoothing constant is determined using the principle of the minimum root mean square error. For the parameters of the SVR model, the Gaussian RBF kernel function is adopted, and the grid search method is used to set the regularization and kernel parameters. For the ELM and ANN models, the number of nodes in the hidden layer is set to 30. Tables 4-6 illustrate that an ANN is suitable for high-complexity component forecasting, while SVR is suitable for both medium-complexity and low-complexity component forecasting. The ANN-SVR-SVR has better prediction accuracy than the other model combinations for WTI crude oil price forecasting. Tables 7-9 show the experimental results of the high-complexity component, medium-complexity component, and low-complexity component, respectively. Similarly, the SVR-SVR-SVR has better prediction performance than the other model combinations for Brent crude oil price forecasting according to Tables 7-9. Prediction Performance Comparison In this part, the proposed model, four single models (i.e., ES, SVR, ELM, ANN), and four decomposition-reconstruction-ensemble models (i.e., D-R-ES, D-R-SVR, D-R-ELM, D-R-ANN), which are considered benchmark models, are used to predict the testing dataset of WTI and Brent crude oil prices. Here, "D" denotes the chosen decomposition method, and "R" denotes the proposed reconstruction rule of the component. The results are shown in Tables 10-13. According to these results, the proposed model outperforms almost all of the considered benchmark models. The final form of the proposed model is simply the decomposition-reconstruction-ensemble model with the form of "D-R-SVR" for Brent crude oil price forecasting. Thus, the model with the form of "D-R-SVR" is not considered a target model in Table 13. Furthermore, the decomposition-reconstruction-ensemble models make better predictions than the single models according to Tables 10 and 11. In particular, for WTI crude oil price forecasting, the decomposition-reconstruction-ensemble models have average MAE, RMSE, and MAPE values of 1.0867, 1.6535, and 0.0345, respectively, while the single models have average MAE, RMSE, and MAPE values of 1.2334, 1.8382, and 0.0408, respectively. For the Brent crude oil price forecasting, the prediction accuracy values for the decomposition-reconstruction-ensemble models are 1.1463, 1.5782, and 0.0233, while those for the single models are 1.3429, 1.9266, and 0.0290.
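The following sketch illustrates the trial-and-error selection of a predictor for one reconstructed component, using scikit-learn stand-ins (SVR with an RBF kernel and an MLPRegressor with 30 hidden nodes in place of the ANN); ES and ELM are omitted. The lag features, parameter values, and the error measures shown here are illustrative assumptions, not the paper's exact settings.

```python
# Sketch: pick the best candidate model for a reconstructed component by hold-out RMSE.
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor

def make_lagged(series, lags=4):
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]
    return X, y

def mae(y, yhat):  return np.mean(np.abs(y - yhat))
def rmse(y, yhat): return np.sqrt(np.mean((y - yhat) ** 2))
def mape(y, yhat): return np.mean(np.abs((y - yhat) / y))

def select_model(component, split=0.7):
    X, y = make_lagged(np.asarray(component, dtype=float))
    n = int(split * len(y))
    candidates = {
        "SVR": SVR(kernel="rbf", C=10.0, gamma="scale"),
        "ANN": MLPRegressor(hidden_layer_sizes=(30,), max_iter=2000, random_state=0),
    }
    scores = {}
    for name, model in candidates.items():
        model.fit(X[:n], y[:n])
        scores[name] = rmse(y[n:], model.predict(X[n:]))
    return min(scores, key=scores.get), scores

# Final forecast: predict each reconstructed component with its selected model and
# aggregate by simple addition, i.e. the "ADD" ensemble mentioned above.
```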
The main reason is that the decomposition-reconstruction-ensemble approach can reduce the complexity of the crude oil data, which boosts its prediction performance compared to the benchmark single models. Compared with the eight benchmark models, i.e., ES, SVR, ELM, ANN, D-R-ES, D-R-SVR, D-R-ELM, and D-R-ANN, the proposed model shows superior performance in crude oil price forecasting. In Table 10, the proposed model improves the prediction accuracy by 59.89%, 63.33%, and 61.82% on average compared to the benchmark single models and by 52.42%, 55.06%, and 53.10% on average compared to the benchmark decomposition-reconstruction-ensemble models. Then, Table 11 shows that the proposed model improves the accuracy of the Brent crude oil price forecasting by 62.01%, 65.88%, and 65.66% on average compared to the benchmark single models and by 52.96%, 51.17% and 51.47% on average compared with the benchmark decomposition-reconstruction-ensemble models. Therefore, the proposed recursive CEEMDAN decomposition-reconstruction-ensemble prediction method can effectively improve the prediction performance of WTI and Brent crude oil prices. In addition, the DM test is used to compare the prediction performance of the different models among the benchmark models in Tables 12 and 13 to statistically prove the superiority of the proposed model for WTI and Brent crude oil price forecasting. These conclusions are statistically supported by the DM test, as indicated by the p-values (in brackets). First, at a significance level of 5%, the proposed model outperforms all benchmark models, which suggests that the proposed recursive CEEMDAN decomposition-reconstruction-ensemble prediction model is better than the listed benchmark models for WTI and Brent crude oil price forecasting. Second, when the decomposition-reconstruction-ensemble models among the benchmark models are tested as the target models in Tables 12 and 13, only the D-R-SVR can be proven to be better than all single models at the significance level of 5%. Third, focusing on the different decomposition-reconstruction-ensemble models among the benchmark models, although the D-R-SVR can be statistically demonstrated to be better than the other D-R-based counterparts at the confidence level of 5%, it is essential to choose the appropriate prediction model for the reconstructed components with different degrees of complexity. Further Discussion In this section, we apply the EEMD decomposition method and two different reconstruction rules to compare the prediction performance of the proposed model. The two rules are mode reconstruction based on the threshold setting of SE (see, e.g., Zhang et al. [39]) and fine-to-coarse (FTC) (see, e.g., Yu et al. [38] and Zhang et al. [39]). Different models are constructed as benchmark models, which are denoted in the form of R-D-R-SA, where "R-D" indicates the different recursive decomposition methods to be compared, "R" indicates the different reconstruction rules, and "SA" represents the selected predictive methods driven by the complexity traits and simple addition for the final ensemble. Tables 14 and 15 and Figures 4 and 5 show the results of the different models. Similarly, the DM test is performed to evaluate the accuracy of the different prediction models, and the corresponding results are presented in Table 16. According to Tables 14-16 and Figures 4 and 5, the main findings are as follows. First, as Tables 14 and 15 show, no model can outperform the other models under all indicators.
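For readers who want to reproduce the pairwise comparisons, the sketch below implements a basic Diebold-Mariano statistic with squared-error loss, a lag-0 variance estimate, and a normal approximation; the paper's exact DM implementation (following Yu et al. [38]) may differ in these details.

```python
# Sketch: basic Diebold-Mariano test for comparing two forecasts of the same series.
import numpy as np
from scipy import stats

def dm_test(y, f1, f2):
    """Negative DM statistic with a small p-value favours forecast f1 over f2."""
    d = (y - f1) ** 2 - (y - f2) ** 2          # loss differential (squared-error loss)
    dbar = d.mean()
    var = d.var(ddof=1) / len(d)
    dm = dbar / np.sqrt(var)
    p = 2 * (1 - stats.norm.cdf(abs(dm)))      # two-sided normal p-value
    return dm, p
```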
Compared with the EEMD decomposition-based models, the proposed model for WTI crude oil price forecasting improves the prediction accuracy by 10.10%, 13.28%, and 11.35% on average, and the proposed model for Brent crude oil price forecasting improves the prediction accuracy by 17.27%, 21.0%, and 16.50% on average. One possible reason is that CEEMDAN reduces the complexity of the WTI and Brent crude oil price data. Thus, it can effectively filter out the meaningful components and significantly enhance the forecast accuracy. Second, the proposed model is better than the benchmark models based on the other reconstruction rules. Concretely, compared with the benchmark models with different reconstruction rules, the proposed model for WTI crude oil price forecasting improves the prediction accuracy by 3.51%, 6.75%, and 4.59% on average, and the proposed model for Brent crude oil price forecasting improves the prediction accuracy by 7.78%, 9.40%, and 7.05% on average. Table 16 also shows that the DM test at the 10% level of significance confirms the superiority of the suggested model. Thus, the WTI and Brent crude oil data can be better predicted using the proposed reconstruction approach based on the complexity traits. Third, the proposed model has lower MAE, RMSE, and MAPE than the other models based on the EEMD decomposition and the other reconstruction rules, as shown in Figures 4 and 5. For example, compared with the different reconstruction methods among the benchmark models, the proposed model for WTI crude oil price forecasting improves the prediction accuracy by 10.89%, 16.03%, and 12.75% on average, and the proposed model for Brent crude oil price forecasting improves the prediction accuracy by 20.04%, 24.32%, and 18.84% on average. Thus, the proposed model improves the prediction performance in WTI and Brent crude oil price forecasting. Meanwhile, as shown in Table 16, when the proposed model is used as the target model, all p-values of the DM test fall below the threshold of 10%, so the proposed model has a significantly higher level of accuracy in its predictions than the benchmark models. Conclusions and Future Directions This paper proposes a new complexity-traits-driven recursive CEEMDAN decomposition-reconstruction-ensemble method for WTI and Brent crude oil price forecasting. All steps of component reconstruction for the decomposed components, component prediction, and ensemble prediction are driven by the complexity traits, and the proposed method proves to be more effective than the benchmark models. In the empirical analysis, the proposed recursive CEEMDAN decomposition-reconstruction-ensemble learning paradigm is significantly better than the most popular single models, different decomposition-reconstruction-ensemble models, and ensemble models based on the EEMD decomposition method or different reconstruction rules. Based on the empirical experiments, four insightful conclusions can be summarized. First, the prediction accuracy of the WTI and Brent crude oil price data demonstrates that the proposed model outperforms all benchmark models. Specifically, compared with the different benchmark models, the proposed model for WTI crude oil price forecasting improves the prediction accuracy by 56.16%, 59.19%, and 57.46% on average, and the proposed model for Brent crude oil price forecasting improves the prediction accuracy by 57.48%, 58.53%, and 58.56% on average. Therefore, the proposed model can be a useful tool to forecast WTI and Brent crude oil prices in the near future.
Second, CEEMDAN can achieve better prediction performance than the EEMD decomposition-based method. For example, compared with the EEMD decomposition-based models, on average, the proposed model improves the prediction accuracy by 10.10%, 13.28%, and 11.35% for WTI crude oil price forecasting and by 17.27%, 21.0%, and 16.50% for Brent crude oil price forecasting. Third, the prediction performance of the crude oil price data can be further improved by selecting appropriate prediction models for the reconstructed components with different degrees of complexity. For example, compared with the benchmark decomposition-reconstruction-ensemble models (i.e., D-R-ES, D-R-ELM, D-R-SVR, and D-R-ANN), on average, the proposed model improves the prediction accuracy by 52.42%, 55.06%, and 53.10% for WTI crude oil price forecasting and by 52.96%, 51.17%, and 51.47% for Brent crude oil price forecasting. Therefore, it is essential to choose the appropriate prediction models according to the complexity traits. Finally, compared with the existing reconstruction rules, the recursive decomposition-reconstruction method based on the complexity traits can reduce the modeling complexity well, which shows its usefulness and efficacy in WTI and Brent crude oil price forecasting. For example, on average, the proposed model improves the prediction accuracy by 10.89%, 16.03%, and 12.75% for WTI crude oil price forecasting and by 20.04%, 24.32%, and 18.84% for Brent crude oil price forecasting. Thus, mode reconstruction driven by complexity traits is effective. In addition to the sample entropy used by our recursive CEEMDAN method, other time series features such as the frequency change rate and autocorrelation can be used. Future research extensions will focus on the following: (1) verifying more advanced decomposition methods under the proposed framework in this paper and (2) exploring more results in other research areas such as the stock market, power market, and other emerging markets using the proposed complexity-trait-driven reconstruction-ensemble learning paradigm.
6,806.8
2023-07-01T00:00:00.000
[ "Computer Science" ]
Shear Thickening Polishing of Quartz Glass Quartz glass is a typical optical material. In this research, colloidal silica (SiO2) and colloidal cerium oxide (CeO2) are used as abrasive grains to polish quartz glass in the shear thickening polishing (STP) process. The STP method employs the shear-thickening mechanism of a non-Newtonian power-law fluid to achieve high-efficiency and high-quality polishing. The different performance in material removal and surface roughness between SiO2 and CeO2 slurries was analyzed. The influence of the main factors, including polishing speed, abrasive concentration, and pH value, on the MRR, workpiece surface roughness, and surface topography was discussed. Both slurries can achieve a fine quartz surface in shear thickening polishing with a polishing speed of 100 rpm and a pH value of 8. The quartz glass surface roughness Ra decreases from 120 ± 10 nm to 2.3 nm in 14 minutes of polishing with 8 wt% 80 nm SiO2 slurry, and the MRR reaches 121.6 nm/min. The quartz glass surface roughness Ra decreases from 120 ± 10 nm to 2.1 nm in 12 minutes of polishing with 6 wt% 100 nm CeO2 slurry, and the MRR reaches 126.2 nm/min. Introduction Quartz glass has been widely used in aerospace, high-power lasers, detection systems, optical communication, and laser fusion devices due to its advantages of strong resistance to laser damage, low thermal expansion coefficient, good spectral characteristics, and good thermal shock resistance [1]. Modern optical systems have increasingly stringent requirements on the surface roughness of optical components. However, quartz glass is a typical material with high hardness and low fracture toughness, which leads to its difficult-to-machine characteristics [2,3]. The traditional lapping and polishing process can achieve nanometer-level workpiece surface roughness. However, traditional contact-processing technology mainly uses mechanical action to remove material, which easily causes surface/subsurface damage and affects the performance of optical components [4]. In recent years, many polishing methods have been successfully applied to polishing optical parts, such as magnetorheological finishing (MRF), ion beam figuring (IBF), chemical mechanical polishing (CMP), and so on. Zhao et al. used IBF to process a quartz wafer; the RMS value of the workpiece surface decreased from 35.598 nm to 5.060 nm after three iterations [5]. CMP greatly improves the polishing efficiency and workpiece surface quality through the chemical and physical effects of the polishing slurry on the optical glass [6]. Wang et al. [7] obtained a good optical glass surface with an RMS of 4.7 Å over a 1 mm × 1 mm area by the CMP method, and an MRR of 675 nm/min was achieved. Yin et al. used MRF to process K9 glass and used a slotted polishing head to obtain a surface with a roughness of 40 nm under optimized processing parameters [8]. Mosavat et al. [9] simulated the deformation of monocrystalline silicon wafers in the magnetic abrasive finishing (MAF) process, and the workpiece surface roughness R a decreased from 401 nm to 63 nm after processing with optimized parameters. Mosavat et al. [10] studied the effect of process parameters on the reduction rate of the surface roughness of monocrystalline silicon wafers during the MAF process. The research shows that the maximum reduction rate of the silicon wafer is 3.7 nm, and the workpiece surface roughness is 31 nm after processing. Fukushima et al. [11] proposed a new grinding and CMP process to remove burrs.
Both sides of the silicon wafer were ground and precisely polished after etching to obtain better angular resolution. Shear-thickening polishing (STP), based on the rheological characteristics of non-Newtonian fluids, was proposed to realize flexible polishing of the curved surface of a workpiece [12]. The complex cutting edge of a cemented carbide insert was polished by STP, and the surface roughness R a at the cutting edge was reduced from 121.8 nm to 7.1 nm after 15 minutes of polishing [13]. The surface roughness R a /R z of a black LT substrate was reduced rapidly from 200.5/1374.6 nm to 4.2/22.1 nm after 4 min of polishing by the STP method [14]. D. N. Nguyen et al. obtained a good surface on alloy steel SCM435 gears with a surface roughness of 13 nm by the STP method under optimal machining parameters [15]. M. Li et al. used the adaptive shearing-gradient thickening polishing (AS-GTP) method to improve surface accuracy and restrain subsurface damage on lithium niobate (LiNbO 3 or LN) crystal. Under certain processing conditions, the surface roughness and subsurface damage depth also declined to a minimum critical threshold (<1 nm) [16]. Min Li et al. obtained a super-smooth KDP surface with a surface roughness of 1.37 nm and high shape accuracy by anhydrous-based STP [17]. Binghai Lyu et al. utilized the STP method to achieve high-efficiency and high-quality polishing of the concave surface of a high-temperature nickel-based alloy turbine blade. The concave surface roughness R a of the turbine blade was reduced rapidly from 72.3 nm to 4.2 nm after 9 min of polishing [18]. SiO 2 and CeO 2 are two kinds of abrasive grains commonly used in the polishing process of quartz glass. The purpose of this article is to clarify the different performance of SiO 2 and CeO 2 slurries in the material removal mechanism of quartz glass and the chemical reaction between the polishing slurry and the workpiece, and to give a reference for slurry selection in the shear thickening polishing process of quartz glass workpieces. The effects of different concentrations, polishing slurry pH values, and polishing speeds on the surface quality and MRR of the workpiece were investigated through experiments. Principle of Shear Thickening Polishing The macroscopic schematic diagram of the shear thickening polishing of a plane quartz glass workpiece is shown in Figure 1a. The STP slurry is prepared by uniformly dispersing abrasive particles in a base fluid with a shear thickening effect [19]. The rheological properties of the STP slurry change when the shear strain rate applied to the slurry exceeds a critical value. The viscosity of the slurry rises sharply, and the slurry converts to a "flexible fixed abrasive tool" that can adapt to the polishing of various curved surfaces. Although STP can effectively realize the polishing of curved quartz workpieces, such as lenses and hemispherical resonators, a quartz glass plane is selected in this study for the convenience of observation and analysis. The results can provide a reference for the curved workpiece polishing process. The micro schematic illustration of the material removal mechanism of quartz glass in the STP process is shown in Figure 1b. The abrasive particles are wrapped in particle clusters, which are composed of solid particles, as the shear thickening effect is triggered. The solid particle, a kind of organic soft matter, does not affect the removal of workpiece material during the STP process.
Under different shear rates, the solid particles have different holding forces on the abrasive particles. As a result, the applied force on the abrasive particle is enhanced dramatically, and the material removal rate is accelerated. At the same time, a soft layer is generated on the workpiece surface by the chemical reaction between quartz glass and the hydroxide ion (OH−). The material removal is further improved. Experimental Process and Conditions The research experiments were carried out on the experimental device shown in Figure 2. The quartz glass was fixed on the fixture. During the polishing process, the workpiece was immersed in the polishing slurry and rotated about the Z-axis to ensure that the workpiece surface could be uniformly polished. It is necessary to ensure that the polishing slurry forms an effective polishing pressure and speed on the workpiece surface and to reduce the speed loss during the polishing process. More importantly, a speed gradient should be generated to apply a shear action on the polishing slurry effectively and produce a thickening effect. Therefore, the inclination angle θ between the plane and the horizontal direction is set as 13° [14]. To study the influence of polishing parameters on the surface of quartz glass during the STP process, to optimize the polishing parameters, and to improve the polishing efficiency of quartz glass, the processing conditions shown in Table 1 were used. The diameter of the quartz glass is 20 mm. The polishing speed and abrasive concentration have been limited to a small variation range according to basic research. Quartz glass undergoes chemical reactions under alkaline conditions, so the polishing effect was studied at slurry pH values of 7, 8, 10, and 12. Citric acid and potassium hydroxide were used as pH adjusters. The properties of the quartz glass in this study are shown in Table 2. The diameter of the polishing tank is 400 mm, and the polishing speed in this study is defined as the rotation speed of the polishing tank. The workpiece surface was observed every five minutes during the polishing process. The roughness was measured at five different positions on the processed surface, as shown in Figure 3: four points on a circle with a diameter of 15 mm and one point at the center of the workpiece surface. The workpiece surface topography was measured by a scanning electron microscope (SU8010, HITACHI) and a large-field-depth digital microscope (VHX-7000). The roughness of the processed surfaces was measured by a Taylor roughness tester (Form Talysurf i-Series 1) and a white light interferometer (Super View W1). The Taylor sampling length for each measurement point is 2 mm. The sampling range of the white light interferometer is 0.5 × 0.5 mm. An energy dispersive spectrometer (EDS) is used to test the elements on the processed surface. The mass change of the workpiece material before and after polishing was measured by a precision balance (MSA225S-CE) with an accuracy of 0.01 mg. The material removal rate can be calculated by Equation (1), where Δm is the weight loss after polishing, ρ is the density, and S is the processing area. Table 1 (excerpt). Abrasive particles: SiO2 (80 nm on average) and CeO2 (100 nm on average); diameter of the polishing tank: 400 mm.
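A minimal sketch of the mass-loss based MRR calculation is shown below. Since Equation (1) is not reproduced here and the MRR is reported in nm/min, the polishing time t is included in this sketch as an assumption about the exact form; the numerical example values are illustrative only.

```python
# Sketch: material removal rate from mass loss, density, area, and polishing time.
def material_removal_rate(delta_m_mg, rho_g_cm3, area_mm2, t_min):
    """Returns MRR in nm/min from mass loss (mg), density (g/cm^3),
    polished area (mm^2) and polishing time (min)."""
    volume_mm3 = (delta_m_mg / 1000.0) / rho_g_cm3 * 1000.0   # mg -> g -> cm^3 -> mm^3
    depth_mm = volume_mm3 / area_mm2                           # average removed depth
    return depth_mm * 1e6 / t_min                              # mm -> nm, per minute

# Example (illustrative numbers): 0.50 mg removed from a 20 mm diameter disc
# (about 314 mm^2), rho = 2.2 g/cm^3, 14 min polishing -> roughly 51.7 nm/min.
```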
Preparation of STP Slurry The STP slurry is the key to the STP method. In this research, the STP slurry is obtained by uniformly dispersing abrasive particles in a non-Newtonian base fluid, which includes a thickening-phase polymer and a dispersant. It is necessary to stir the slurry for 30 min and disperse it for 15 min with an ultrasonic device to make the slurry uniform. Figure 4 shows the viscosity curves of the STP slurry with different abrasive particle concentrations under different shear rates. All rheological curves were measured by a stress-controlled rheometer (MCR 302, Anton Paar, Graz, Austria); a cone-and-plate geometry (Ø 25 mm diameter, 2° cone angle, and 0.103 mm gap) was used, and the testing temperature was controlled at 25 °C by the Peltier heating jacket. Every measurement was repeated three times to quantify the measurement error. There are three viscosity zones at different shear rates, which is consistent with the viscosity curve of a typical three-stage shear thickening fluid [12]. A slight shear-thinning behavior is found at low shear rates, a strong shear thickening behavior appears once the critical shear rate is exceeded, and shear-thinning behavior is observed as the shear rate increases further.
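Purely as an illustration of the three-regime behaviour described above (the actual curves in Figure 4 were measured, not modelled), the following toy piecewise power-law sketch reproduces the qualitative shape: slight shear-thinning at low shear rates, shear thickening beyond a critical shear rate, and thinning again at high shear rates. Every parameter in it is an assumed placeholder.

```python
# Toy illustration of a three-regime viscosity curve (shear-thinning ->
# shear-thickening -> shear-thinning). All parameters are assumed placeholders;
# the real curves in Figure 4 were measured with the MCR 302 rheometer.
import numpy as np

def toy_viscosity(shear_rate, eta0=5.0, gamma_c=50.0, gamma_b=300.0):
    """Piecewise power-law viscosity (Pa.s) versus shear rate (1/s)."""
    g = np.asarray(shear_rate, dtype=float)
    eta = np.empty_like(g)
    low = g <= gamma_c                        # regime 1: slight thinning
    mid = (g > gamma_c) & (g <= gamma_b)      # regime 2: strong thickening
    high = g > gamma_b                        # regime 3: thinning again
    eta[low] = eta0 * (g[low] / gamma_c) ** -0.1
    eta[mid] = eta0 * (g[mid] / gamma_c) ** 1.5
    eta_b = eta0 * (gamma_b / gamma_c) ** 1.5  # value at the onset of regime 3
    eta[high] = eta_b * (g[high] / gamma_b) ** -0.8
    return eta

rates = np.logspace(0, 3, 7)                  # 1 ... 1000 1/s
print(np.round(toy_viscosity(rates), 2))
```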
Material Removal Mechanism of Quartz Glass with Different Slurry The schematic diagram of the material removal process is shown in Figure 5. The main component of quartz glass is SiO2, and the Mohs hardness of SiO2 is similar to that of CeO2. Quartz glass reacts with water to form silanol in a water environment, and the reaction is shown in Equation (2) [20]. The surface reactants and workpiece material are then removed by the mechanical action of the SiO2 abrasive, as shown in Figure 5a. Polishing under alkaline conditions can improve the MRR because quartz glass can react with OH−, and the reaction is shown in Equation (3) [21]. When the polishing speed is 90 rpm and the abrasive concentration is 6 wt%, the MRR of SiO2 increased from 57.6 nm/min at pH 7 to 69.4 nm/min at pH 12, and the MRR of CeO2 increased from 89.2 nm/min at pH 7 to 99.5 nm/min at pH 12; the MRR comparison is shown in Figure 6.
Figure 5b presents the process with CeO2; it shows not only the removal mode of SiO2 but also the additional chemical reactions that occur when CeO2 is used for polishing. Cerium hydroxides, the product of cerium atoms and water as shown in Equation (3) [22], react with silanol to form Ce-O-Si bonds as shown in Equation (4) [22]. The bond energy of Ce-O-Si is greater than that of Si-O-Si in the quartz glass, so with the relative movement of the abrasive particles and the workpiece, SiO2 can be pulled out of the quartz glass [23]. During STP processing, the CeO2 abrasive surface can adsorb more OH− than the SiO2 abrasive because CeO2 is more OH−-friendly than SiO2 [20], as shown in Figure 5. This promotes, to a certain extent, the chemical reaction between the quartz glass surface and the alkaline slurry when the abrasive grains are in contact with the workpiece surface; the reactant is then taken away from the material surface by the abrasive particles. In addition, it is also conducive to the stable existence of CeO2 particles in the alkaline polishing slurry [23]. Therefore, the MRR of CeO2 is higher than that of SiO2 under the same polishing parameters.
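The bodies of Equations (2)-(4) are not reproduced in the extracted text. The LaTeX sketch below lists the reactions commonly cited for water and alkaline attack on fused silica and for ceria-silica bonding, as a plausible reading of what Equations (2)-(4) refer to; they are not the paper's own equations.

```latex
% Hedged reconstruction of the reactions referred to as Equations (2)-(4);
% these are the forms commonly cited in the CMP literature, not the paper's originals.
\begin{align*}
  \mathrm{SiO_2} + 2\,\mathrm{H_2O} &\rightarrow \mathrm{Si(OH)_4} \\
  \mathrm{SiO_2} + 2\,\mathrm{OH^-} &\rightarrow \mathrm{SiO_3^{2-}} + \mathrm{H_2O} \\
  \mathrm{Ce{-}OH} + \mathrm{Si{-}OH} &\rightarrow \mathrm{Ce{-}O{-}Si} + \mathrm{H_2O}
\end{align*}
```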
Polishing at Different pH Values The polishing at different slurry pH values was carried out with a polishing speed of 90 rpm and an abrasive concentration of 6 wt%. The MRR of the workpiece during STP processing is shown in Figure 7a, and the evolution of surface roughness is shown in Figure 7b. It can be seen that the MRR increases as the polishing slurry pH value increases: under alkaline conditions, the polishing slurry contains a higher concentration of OH−, which readily reacts with the quartz glass material. At the same slurry pH value, the MRR of the CeO2 abrasive particles is higher than that of SiO2. CeO2 is more OH−-friendly than SiO2, which promotes the contact of OH− with the workpiece surface and improves the MRR during polishing; therefore, CeO2 has a higher MRR than SiO2 in an alkaline environment. As the polishing slurry pH value increases, the surface roughness of the quartz glass first decreases and then increases, and better surface roughness can be achieved when the pH is 8. When the polishing slurry pH value is too high, the polishing slurry over-corrodes the workpiece surface during STP processing, which leads to uneven material removal; pits appear on the surface after polishing and the surface roughness increases, as shown in Figure 8.
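As a quick check on the numbers reported earlier for 90 rpm and 6 wt%, the relative MRR gain from pH 7 to pH 12 can be computed directly; the values are taken from the text, and the helper itself is merely illustrative.

```python
# Relative MRR gain from pH 7 to pH 12, using the values reported in the text
# (polishing speed 90 rpm, abrasive concentration 6 wt%).
mrr = {
    "SiO2": {"pH7": 57.6, "pH12": 69.4},   # nm/min
    "CeO2": {"pH7": 89.2, "pH12": 99.5},   # nm/min
}
for abrasive, v in mrr.items():
    gain = (v["pH12"] - v["pH7"]) / v["pH7"] * 100
    print(f"{abrasive}: {v['pH7']} -> {v['pH12']} nm/min (+{gain:.1f}%)")
# SiO2 gains about 20.5% and CeO2 about 11.5%: the alkaline boost is larger in
# relative terms for SiO2, while CeO2 remains higher in absolute MRR.
```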
Polishing at Different Speeds The polishing slurries were prepared with concentrations of 6 wt% SiO2 and CeO2, and the polishing process was performed at a polishing slurry pH value of 7. The polishing experiment was carried out at different polishing speeds. Figure 9 shows how the MRR and roughness change at different polishing speeds. Figure 9a shows that the MRR increases greatly as the polishing speed increases, because the shear stress applied by the polishing slurry on the workpiece increases with the polishing speed. At the same polishing speed, the MRR of the CeO2 abrasive is higher than that of SiO2, because the CeO2 polishing slurry has a higher viscosity and a higher ability to hold abrasive grains than the SiO2 polishing slurry at the same shear rate. Figure 9b shows that the surface roughness decreases as the speed increases, but the surface roughness increases when the polishing speed reaches 110 rpm. Figure 10 shows the workpiece surface topography after 20 minutes' polishing by SiO2 and 15 minutes' polishing by CeO2 at polishing speeds of 100 rpm and 110 rpm. There is almost no defect on the polished surface when the polishing speed is 100 rpm, as shown in Figure 10b,e. When the polishing speed is 110 rpm, there are always some pits on the processed surface, as shown in Figure 10c,f; the SEM topography of the pits is shown in Figure 10d,g. The schematic diagram of pit formation is shown in Figure 11. The particle clusters apply a pressure F and a polishing speed v on the workpiece surface, and there are translational and rotational movements during the polishing process. When the polishing speed is increased to 110 rpm, the force F applied by the particle clusters on the workpiece surface exceeds the brittle fracture threshold of quartz glass, and the particle clusters are pressed into the workpiece surface like an indenter, causing brittle damage and forming pits on the workpiece surface.
Polishing at Different Concentrations The polishing slurries were prepared with concentrations of 2 wt%, 4 wt%, 6 wt%, and 8 wt% SiO2 and CeO2. The polishing process was performed at a polishing slurry pH value of 7 and a polishing speed of 90 rpm. The MRR of the workpiece during STP processing is shown in Figure 12a, and the evolution of surface roughness is shown in Figure 12b. As the concentration of abrasive particles increases, the number of abrasive particles acting on the workpiece surface increases, and the MRR increases. The MRR of the CeO2 abrasive particles is higher than that of the SiO2 abrasive particles when the abrasive concentration is 2 wt% to 6 wt%. As shown in reaction Equations (2)-(4), there is a certain amount of adsorption-assisted removal when CeO2 abrasive grains are used to process quartz glass, whereas material removal is achieved mainly by mechanical action when quartz glass is processed by the SiO2 abrasive. Therefore, at the same abrasive grain concentration, the polishing efficiency of CeO2 abrasive grains is higher than that of SiO2 abrasive grains, and the workpiece surface roughness is lower. When the concentration of abrasive particles is 8 wt%, the fluidity of the polishing slurry prepared with CeO2 is weakened and the thickening strength declines: the high concentration of CeO2 causes hydrolysis of the polyhydroxy aldehyde polymer, leading to changes in the rheological properties (the viscosity curves of the STP slurry are shown in Figure 4). During the polishing process, the shear thickening effect of the polishing slurry then decreases sharply, which leads to a low holding force on the CeO2 particles, and the MRR decreases; the polishing effect becomes lower than that of SiO2.
Polishing Experiment with Selected Parameters It can be concluded from Sections 4.2 and 4.3 that better surface roughness can be obtained with a polishing slurry pH of 8, and that a higher material removal rate and better surface quality can be obtained with a polishing speed of 100 rpm. Section 4.4 also indicates that a better polishing effect can be achieved with the 8 wt% SiO2 slurry or the 6 wt% CeO2 slurry. The optical quartz glass was therefore polished under the selected conditions, with a polishing speed of 100 rpm and a slurry pH value of 8. The workpiece surface roughness Ra decreased from 120 ± 10 nm to 2.3 nm in 14 min, with an MRR of 121.6 nm/min, using the 8 wt% SiO2 slurry. The workpiece surface roughness Ra decreased from 120 ± 10 nm to 2.1 nm in 12 minutes' polishing by the 6 wt% CeO2 slurry, with an MRR of 126.2 nm/min. The workpiece surface scanning electron microscope (SEM) topography before and after polishing is shown in Figure 13. The images of the quartz glass before and after polishing are shown in Figure 14; a smooth quartz glass surface is obtained.
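A small consistency check on the selected-parameter results just quoted: multiplying the reported MRR by the polishing time gives the total thickness removed in each run. The values come from the text; the computation itself is merely illustrative.

```python
# Total thickness removed in the selected-parameter runs, from the reported
# MRR and polishing times (values quoted in the text above).
runs = {
    "8 wt% SiO2": {"mrr_nm_per_min": 121.6, "time_min": 14},
    "6 wt% CeO2": {"mrr_nm_per_min": 126.2, "time_min": 12},
}
for name, r in runs.items():
    depth_nm = r["mrr_nm_per_min"] * r["time_min"]
    print(f"{name}: ~{depth_nm / 1000:.2f} um removed in {r['time_min']} min")
# ~1.70 um (SiO2) and ~1.51 um (CeO2): both comfortably exceed the initial
# roughness of 120 +/- 10 nm, consistent with the damaged layer being removed.
```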
Conclusions The shear thickening polishing experiments of quartz glass with SiO2 slurry and CeO2 slurry were carried out in this study, and the performance difference between the two slurries and the underlying mechanism were discussed. Based on the experimental and theoretical analysis presented above, the following important conclusions can be drawn: although both slurries can achieve a smooth surface in the STP process of quartz materials, the CeO2 slurry gives a greater MRR and lower surface roughness than the SiO2 slurry under the same processing conditions.
The MRR is improved under alkaline conditions, and a better surface can be obtained with a pH 8 slurry; when the pH value is higher than 8, pits appear on the workpiece surface and the surface roughness increases. The reduction rate of surface roughness increases with increasing polishing speed, but an excessive polishing speed applies too high a pressure on the workpiece surface and causes surface pits; a polishing speed of 100 rpm is considered the optimal value in this study when the MRR and the surface quality are evaluated together. A high MRR and low roughness can be achieved with the 8 wt% SiO2 slurry or the 6 wt% CeO2 slurry. The quartz glass was polished under the selected conditions: the surface roughness Ra decreased from 120 ± 10 nm to 2.3 nm in 14 minutes' polishing by the SiO2 slurry, with an MRR of 121.6 nm/min, and from 120 ± 10 nm to 2.1 nm in 12 minutes' polishing by the CeO2 slurry, with an MRR of 126.2 nm/min. The results show that STP is a promising, efficient polishing method for quartz glass, and research on the STP process for complex curved surfaces of quartz glass will be carried out.
9,952.6
2021-08-01T00:00:00.000
[ "Materials Science" ]
The Neglect of Epistemic Considerations in Logic: The Case of Epistemic Assumptions The two different layers of logical theory—epistemological and ontological—are considered and explained. Special attention is given to epistemic assumptions of the kind that a judgement is granted as known, and their role in validating rules of inference, namely to aid the inferential preservation of epistemic matters from premise judgements to conclusion judgement, while ordinary Natural Deduction assumptions (that propositions are true) serve to establish the holding of consequence from antecedent propositions to succedent proposition. Logic may be Considered as the Science, and also as the Art, of Reasoning When reasoning we carry out acts of passage, "inferences", from granted premises to novel conclusions. Logic is Science because it investigates the principles that govern reasoning, and Logic is Art because it provides practical rules that may be obtained from those principles. Reasoning is par excellence an epistemic matter, dependent on a judging agent. If the ultimate starting points for such a process of reasoning are items of knowledge, accordingly a chain of reasoning in the end brings us to novel knowledge. In today's logic, on the other hand, inferences are not primarily seen as acts, but as production-steps in the generation of derivations among metamathematical objects known as wff's, that is, well-formed formulae. Furthermore, by the side of this metamathematical change regarding the status of inferences, an ontological approach has largely taken over from the previous epistemological one. This ontological approach in logic began with another nineteenth century cleric, namely the Bohemian Bernard Bolzano and his Wissenschaftslehre (1837). As is by now well known, Bolzano avails himself of certain denizens in a Platonic "Third Realm" that are known as Sätze an sich, that is, propositions-in-themselves, precisely half of which, namely the truths-in-themselves, are true. This notion of truth(-in-itself), also considered as a Platonist in-itself notion, when applied to a proposition(-in-itself), serves as the pivot for this novel rendering of logic. In particular, Bolzano reduces the epistemic evaluative notions with respect to judgements and inferences, namely correctness and validity, to various matters of ontology pertaining to these propositions-in-themselves. Thus the judgement [A is true], in which truth is ascribed to the proposition(-in-itself) A that serves in the role of judgemental content, is deemed to be right, or correct (German richtig), if the proposition(-in-itself) in question really is a truth. Similarly the inference-scheme, or figure, I, leading from premise-judgements [A1 is true], ..., [Ak is true] to the conclusion-judgement [C is true], is deemed to be valid if the corresponding relation of logical consequence holds from the contents of the premise-judgements to the content of the conclusion. Another way of formulating the second Bolzano reduction may be found in Wittgenstein's Tractatus (5.11, 5.13, 5.132, 5.133, 6.1201, 6.1221): the inference I is valid if the implication A1 & A2 & ... & Ak ⊃ C is a logical truth, or, in the Tractarian terminology, a tautology. Both formulations of this Bolzano reduction are close enough to what Bolzano actually says; his particular cavils regarding the compatibility of the antecedent propositions, and his conjunctive, rather than the customary current disjunctive, reading of consequences with multiple consequent propositions we may, at the present level of generality, disregard. 1 The epistemic conception of traditional logic is all-out Aristotelian and stems from the early sections of the Posterior Analytics.
The Aristotelian conception of demonstrative science organizes a field of knowledge by using axioms that are self-evident in terms of primitive concepts and proceeds to gain novel insights by application of similarly self-evident rules of inference. Frege's great innovation in logic can be seen as refining this traditional Aristotelian axiomatic conception by joining it to his notion of a formal language, with its concomitant notion of logical inference. Frege's deployment of a novel form of judgement, namely proposition ("Thought") A is true, where the content A has function/argument structure P(a), allowed him to develop a much richer view of what follows from what, in particular when drawing upon quantification theory. He did not change anything, though, with respect to epistemic demonstration (Beweis), which remains Aristotelian through and through. Thus, both the Preface to the Begriffsschrift as well as §3 of Grundlagen der Arithmetik bear a strong resemblance to the well-known regress argument unto first principles, with which Aristotle opens the Posterior Analytics. Two Views on Logical Language Aristotle's detailed account of consequence from the Prior Analytics, on the other hand, was of course superseded by Frege's introduction of the formal ideography that comprises also quantification theory. Frege's conception of a formal language, though, was different from our modern notion of a formal language (or perhaps better today: formal system) that distinguishes between syntax and semantics and deploys two turnstiles: one "syntactic" turnstile ⊢, which really is a metamathematical theorem-predicate with respect to wff's and indicates the existence of a suitable formal derivation, and one semantic turnstile ⊨, which indicates "satisfaction" in a suitable model. Both turnstiles furthermore are relativized by including also assumptions in the guise of antecedent-formulae to the left of the respective turnstile, thereby making matters even more complex. The second, model-theoretic notion plays no role in Frege, and his use of the "syntactic" turnstile is radically different from the modern one: Frege's sign serves as a pragmatic assertion indicator, whereas the modern one is a predicate - a propositional function if you want - that is defined on well-formed formulae. This difference is symptomatic of the difference in use between Frege's formal language, i.e. his ideography (Begriffsschrift), on the one hand, and modern formal languages that, as a rule, are construed meta-mathematically, on the other hand. 2 The latter can only be talked about; they are objects of study only, but are not intended for use. For instance, in Solomon Feferman's authoritative treatment of Gödel's two Incompleteness Theorems one finds no "object language"; instead Feferman (1960) proceeds directly to the Gödel numbers. Since the object "language" in question is never used for saying anything - its "metamathematical expressions" are not real expressions and do not express, but instead are expressed as the referents of real expressions - there is no need to display such an object language: it is only talked about, but in contradistinction to other languages, it is not a vehicle for the expression of thoughts. 3
Frege's ideography, on the other hand, is an interpreted formal language, and he spent a tremendous effort on meaning explanations, for instance, in the early sections of Begriffsschrift, for the predicate logic version of the ideography from 1879, and in the opening sections §§1-32 of Grundgesetze der Arithmetik, Vol. I, from 1893, especially §§27-31. It should be noted that this Grundgesetze version of the Fregean ideography is not a predicate logic, but a term logic, which sometimes serves to make matters hard to understand when viewed from the prevalent standard of today, where theories are routinely formulated in predicate logic. In Frege's late piece of writing, the Nachlass fragment Logische Allgemeinheit that was left uncompleted at the time of his death, we find a distinction between a Hilfssprache and a Darlegungssprache. The editors of Frege's "Posthumous Writings" deliberately point to Tarski and translate Hilfssprache as object-language and Darlegungssprache as meta-language. This translation, however, is not felicitous. The term Hilfssprache is the German rendering of the French langue auxiliaire, which term stands for the artificial languages that were considered in the artificial languages movement, of which Frege's correspondents Couturat and Peano were prominent members. 4 Examples that spring to mind are Volapük, Bolak, Esperanto, and today also Klingon, and on the scientific side Interlingua and Latino sine flexione, in which Peano wrote a famous paper on differential equations. Frege's Begriffsschrift is precisely such an artificial auxiliary language - a Hilfssprache - and the difference between it and other auxiliary languages is that it is a formal one. Nevertheless, just like Esperanto and Volapük, it was intended for expressing meaning, and accordingly one needs a "language of display" in order to set it out properly. All the languages in the Russell-Tarski tower of "meta-languages" (over the first object-language) are also object-languages, and are ultimately only spoken about. 5 The real meta-language is Curry's "U language" - U for use - and it needs a vantage point outside the Russell-Tarski hierarchy in question. 6 Frege's Darlegungssprache matches Curry's U language, and his Hilfssprache is an auxiliary language like Volapük, Bolak, and Esperanto, or those championed by Couturat and Peano (Interlingua, Latino sine flexione). Of course, the two different versions of Frege's ideography in Begriffsschrift and Grundgesetze are Hilfssprachen and must be explained, that is, dargelegt, or spelled out. The editors of the Nachlass compliment Frege for having here anticipated the precise object-language/meta-language distinction that was put firmly onto the philosophical firmament a decade later by Carnap (1934) in Logische Syntax der Sprache and by Tarski in Der Wahrheitsbegriff in den formalisierten Sprachen. However, as we saw, Frege's Hilfssprache is not an artefact void of meaning, that is, it is not an uninterpreted "object-language": on the contrary, it is an auxiliary language in the terminology of the artificial language movement. Up to ± 1930 every logician of note followed Frege's lead when constructing formal calculi, marrying their formal languages to the Aristotelian conception of Science: Whitehead and Russell, Ramsey, Lesniewski, early Carnap (Aufbau and Abriss), Curry, Church, early Heyting …. 7 Their systems were interpreted calculi intended as epistemological tools.
The mathematical study of mathematical language was naturally begun by Hilbert as part of his ideological programme of applying positivistic verificationism to mathematics. Here equations between finitistically computable terms serve as analogues of positivist observation sentences. Such formulae [s = t] are even known as "verifiable propositions" in the magisterial Hilbert and Bernays (1934, 1939). 8 In the Warsaw seminar of Łukasiewicz and Tarski during the second half of the 1920s, the study of formal languages and formal systems - Many-valued Logics! - was liberated from the Göttingen finitist ideological shackles of Hilbert. From then on ordinary mathematical means were allowed in the meta-mathematical study of formal systems, much in the same way that naïve set theory was used in the development of set-theoretic topology and cardinal arithmetic, at which Polish mathematicians then excelled. With this liberating move, yet a further radical shift of perspective occurs. The formal systems no longer serve any epistemological role per se. Instead, strictly speaking, the "well-formed formulae" lack meaning, and do not as such express. They are mathematical objects on a par with other mathematical objects; in fact, formally speaking, the meta-mathematical expressions are elements of freely generated semi-groups of strings. With this shift in the role of the "languages" of logic, epistemic matters are driven even further into the background. The logical calculi are not used for epistemological purposes anymore. One only proves theorems about them. During the 1920s the Grundlagenstreit came to the fore and sharp epistemological problems were raised. After Brouwer's criticism of the unlimited use of the Law of Excluded Middle, there appear to be only two viable options with respect to logic. We may keep Platonistic impredicativity and LEM as freely used in classical analysis after the fashion of Weierstrass, or we may jettison them. We have already seen the other dichotomy of options, namely to consider formal systems based on languages with meaning, on the one hand, and based on uninterpreted formal calculi, on the other. After Gödel's work, attempts to resuscitate Fregean logicism, for instance by Carnap, no longer seemed viable and were abandoned: retaining classical logic as well as impredicativity, while insisting on explicit meaning-explanations that render axioms and rules of inference self-evident, simply seems to be asking too much. Thus we may either jettison meaning for the full formal language, while retaining classical logic and impredicativity, which is the option chosen by Hilbert's formalism. Only his "real" sentences, that is, the "verifiable" equations between finitist terms, which serve as the analogue to the observation sentences of positivism, have meaning, whereas other sentences, the "ideal" ones, strictly speaking, are not given meaning-explanations. For the second option, on the other hand, we may jettison classical logic and Platonist impredicativity, but then offer meaning explanations for a constructivist language after the now familiar fashion of Heyting. 9 The hope of Carnap and others for meaning-explanations for the full language of, say, second-order analysis that render evident classical logic and impredicativity appears to be forlorn.
We may then follow Hilbert in confining meaning only to a "real" fragment, while the "ideal sentences" of the full language remain uninterpreted, or we may jettison classical logic and impredicativity, and follow Heyting's by now well-known way of giving constructive meaning-explanations with respect to the full language. Constructive Meaning-Explanations and the Two Layers of Logic With his Constructive Type Theory Per Martin-Löf has given streamlined form to Heyting's "Proof Explanation of the intuitionistic logical constants": a proposition A is explained by laying down how its canonical proofs may be put together out of parts (and when two such canonical proofs are equal canonical proofs of the proposition A). 10 Accordingly, for each proposition A, we have a "type" Proof(A) and define a notion of truth for propositions by means of an application of the truthmaker analysis: A is true = Proof(A) exists. 11 Here the relevant notion of existence cannot be, on pain of an infinite regress, that of the existential quantifier. Classically, we may choose it to be Platonist set-theoretic existence, and drawing upon classical reasoning one readily checks that the semantics verifies the Law of Excluded Middle. Thus, if we are prepared to reason Platonistically when justifying the rules of inference and axioms, casting the semantics in terms of the Heyting proof-explanation does not force us to abandon classical logic. This, however, yields no epistemic benefits, and so I prefer to use the Brouwer-Weyl constructive notion of existence with respect to types α. 12 When α is a type (general concept), [α exists] is a judgement and its assertion condition is given by a rule of instantiation. We note that propositions are given by truth-conditions that are defined in terms of (canonical) proofs, and (epistemic) judgements are explained in terms of assertion conditions. Thus we get an ensuing bifurcation of notions both at the ontological level of propositions, their truth, and their proofs (that is, their truthmakers), and on the epistemic level of judgements and their demonstrations. 13 In the table below the epistemological and ontological sides of logic are spelled out for a fairly large number of notions, and in other writings I have dealt with most of the lines. In the sequel of the present paper I intend to deal with the line contrasting an assumption that a proposition is true with an epistemic assumption that a judgement is known, with, as a special case, an assumption that a proposition is known to be true. Epistemic notion / Ontological ("Alethic") notion. Footnotes: … (1977). See also the paper by Ansten Klev in the present issue of TOPOI. That Heyting's explanation of truth as existence of a proof(-object) is a kind of truth-maker analysis was first suggested in my (1994a). 12 As is well known, Tarski's definition of truth does not on its own yield the Law of Excluded Middle for the notion of truth thus defined. Classical reasoning in the meta-theory is required for that. In my (2004) I carry out the pendant reasoning and show that, when classical meta-theory is allowed, it is very easy to validate LEM, also under the Heyting semantics. 13 In my (1997), (2000), and (2012) the demonstration versus proof distinction is given more substance. 9 The various options regarding retention of classical reasoning and meaning explanations are spelled out in some detail in my (1998a).
Four Different Notions of Consequence Apart from the two changes already indicated - the metamathematical shift and the Bolzano reduction of inferential validity to logical truth (or logical consequence) in "all variations" - we then have occasion to consider another major invention of the early 1930s, namely Gentzen's Natural Deduction derivations and his Sequent Calculi. Within the perspective of an interpreted formal language, with respect to two propositions A and B, there are at least four relevant notions of consequence here: (1) the implication proposition A⊃B, which may be true (or even logically true "in all variations"); (2) the conditional [if A is true then B is true], which may hold; (3) the consequence [A ⇒ B], which may hold; (4) the inference [A is true. Therefore: B is true], which may be valid. 14 Fact 1 "implies" takes that-clauses, whereas "if-then" takes complete declaratives. Ergo: implication and conditional are not the same. The conditional (2) is a hypothetical judgement in which hypothetical truth is ascribed to the proposition B. Its verification-object is a dependent proof-object b : Proof(B) [x : Proof(A)], that is, b is a proof of B under the assumption (hypothesis, supposition) that x is a proof of A. The consequence (3) is a Gentzen sequent (German Sequenz). (Why, we may ask, did Gentzen drop the prefix Kon here?) The judgement that it holds is a generalization of [A is true] and demands for its verification a mapping (higher-level function) f : Proof(A) → Proof(B). Since implication and conditional are different, this is not the proof-object demanded for the truth of an implication: such proof-objects have the canonical form λ(A, B, [x]b) (or the corresponding form in the logical, rather than the set-theoretical, formulation), where b is a dependent proof of B under the assumption that x is a proof of A, and they come with a special application function ap(y, x), whereas application in the case of f is primitive. Fact 2 The judgements (1)-(3) have different meaning-explanations - their assertion conditions are not the same - and accordingly do not mean the same, are not synonymous, while (4) indicates acts of passage. The first three notions, however, are equi-assertible. Given a verification-object for one of the three, verification-objects for the other two are readily found in a couple of trivial steps. Furthermore, all four relations are refuted by the same counter-example, namely a situation in which A is known to be true and B known to be false. This might serve to explain why the four notions have sometimes been hard to keep apart, especially from the classical point of view. 15 Fact 3 Bolzano deals ably with consequence, whereas his account of inference is inadequate and quite psychologistic in terms of Gewissmachungen. 16 Frege, on the other hand, deals ably with inference, but (logical) consequence has no place in his system. Only with Gentzen's 1936 sequential formulation of Natural Deduction, where the derivable objects are sequents, that is, consequences, and where the principal introduction and elimination inferences all take place to the right of the sequent-arrow, do we get a system that can cope both with inference and consequence. 17 Fact 4 Consequence, not logical consequence, is the primary notion. Gentzen's system deals with arithmetic; his rules of inference that take us from premise-sequent(s) to conclusion-sequent are obviously valid, but they do not hold logically in all variations. They are only "arithmetically valid".
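For reference, the four notions and the verification-objects assigned to them above can be restated compactly as follows; this LaTeX summary is only a restatement in the paper's own notation, not an addition to it.

```latex
% Compact restatement of the four notions and their verification conditions,
% in the notation used in the text.
\begin{align*}
  &(1)\ A \supset B \ \text{is true}
      &&\text{verified by } \lambda(A,B,[x]b) : \mathrm{Proof}(A \supset B)\\
  &(2)\ B \ \text{is true, on hypothesis that } A \ \text{is true}
      &&\text{verified by } b : \mathrm{Proof}(B)\ [x : \mathrm{Proof}(A)]\\
  &(3)\ A \Rightarrow B \ \text{holds}
      &&\text{verified by } f : \mathrm{Proof}(A) \to \mathrm{Proof}(B)\\
  &(4)\ A \ \text{is true. Therefore: } B \ \text{is true}
      &&\text{valid when evidence is transmitted from premise to conclusion}
\end{align*}
```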
Fact 5 A completeness theorem for an interpreted formal language would state: all truths (and, in the case of Gentzen's system, all sequents that hold) are derivable by means of these rules. For Gödelian reasons, interesting systems with theorems of the form [A is true] are not complete. 18 When we now consider how one would establish that (1) to (4) obtain, we see that the implication A⊃B of (1) is established by forming the course-of-value λ(A, B, [x]b), whereas the conditional (2) is already established by the hypothetical, dependent proof-object in question. Finally, forming the function [x]b : Proof(A) → Proof(B) by means of "lambda" abstraction [ ] (Curry's notation!) on the hypothetical proof establishes that the closed consequence ("sequent") (3) holds. Blind Judgement and Inference Under the Bolzano reduction, when the proofs ("verification objects") work also in all variations, then classically one says that the inference (4) is valid. However, the Bolzano reduction validates what we may, in the excellent terminology of Brentano, call blind judgement and inference. 19 The epistemic link to the judging reasoner has here been severed, whereas I am concerned to preserve this link. Consequence preserves truth from antecedent propositions to consequent proposition, and logical consequence does so "under all variations". The demonstration of the Prime Number Theorem (PNT) by De la Vallée-Poussin and Hadamard in 1896 certainly could be formalized within NBG, the set theory of Von Neumann, Bernays and Gödel. 20 Since this theory is finitely axiomatized, we may conjoin its axioms into one proposition VNBG and then consider the inference (*): VNBG is true. Therefore: PNT is true. The inference (*), certainly, is truth-preserving, in the light of the formalized demonstration offered and the Soundness Theorem for the Predicate Calculus: every time an NBG axiom is used in the predicate logic derivation we replace it by the proposition VNBG and then apply conjunction elimination. Hence we get a formal derivation of PNT from VNBG, whence the Soundness Theorem guarantees truth-preservation. So under the Bolzano reduction this is a valid inference, because truth-preserving under all variations, but it provides no epistemic insight at all. Epistemic Assumptions Instead, validity of inference, rather than (logical) holding of consequence, involves preservation, or transmission, of epistemic matters from premises to conclusion, and it is here that epistemic assumptions that judgements are known (or granted) become helpful. In order to validate the inference I one makes the assumption that one knows the premise-judgements, or that they are being given as evident, and under this epistemic assumption one has to make clear that also the conclusion can be made evident. 21 The difference between the two types of assumptions is especially clear when we consider Gentzen derivations in Natural Deduction. An ordinary assumption A of Natural Deduction corresponds to an alethic, ontological assumption that proposition A is true.
From such an assumption we may, for instance, obtain the conclusion that B is true, when we have already established the conditional judgement ($): B is true, on the hypothesis that A is true. Furthermore, if we wish to do so, from this we readily obtain also the outright assertion that the implication A⊃B is true, by implication introduction, or, for that matter, if we so wish, but now with the aid of functional abstraction on the dependent proof-object that warrants ($), we may also conclude that the sequent [A → B] holds. An epistemic assumption that a judgement [A is true] is known, or perhaps better granted, corresponds for Natural Deduction derivations to the hypothesis that we have been provided with a closed derivation of the proposition A. This is patently a different kind of assumption from the ordinary Natural Deduction assumption of the wff A. Brouwer did not accept hypothetical proofs - I hesitate to call them proof-objects in his case. His proofs are all epistemic demonstrations: an assumption that a proposition is true amounts to an assumption that the assumed proposition is known to be true, for instance in his demonstration of the Bar Theorem. 22 Gentzen's Two Frameworks for Natural Deduction and Epistemic Assumptions Over the past decades I have had a discussion with Dag Prawitz about the status of the proofs in the BHK explanation: I have claimed that they are not demonstrations with epistemic power, but that they are mathematical witnesses, corresponding to truthmakers in currently popular theories of grounding. Prawitz, on the other hand, has held that they are epistemically binding. 23 With my present terminology I can formulate my principal objection thus: the distinction between epistemic and alethic assumptions collapses if proofs are held to be epistemically binding. There will be no difference between assuming that proposition A is true and assuming that one knows that A is true. In type theory the difference between the two kinds of assumption comes out in different treatments of proof-objects. An ordinary assumption has the form x : Proof(A): assume that x is a proof of A. An epistemic assumption with respect to the same proposition takes a closed proof-object as given: assume that I am given a closed proof a : Proof(A). Against the background of these distinctions we can now explain the difference between the two Gentzen frameworks for Natural Deduction. The 1932 format from the dissertation is the usual one, with assumption formulae as top nodes in derivations. The 1936 format, on the other hand, is an axiomatic calculus for deriving consequences of the form A1, ..., Ak → C, in which the assumption formulae are listed in the antecedent. 1936 derivations are best seen as demonstrations of judgements of the form: the sequent A1, ..., Ak → C holds. Derivations in the 1932 format, on the other hand, are to my mind best seen, not as epistemic demonstrations, but as dependent proof-objects Π of the form Π : Proof(C) (x1 : Proof(A1), ..., xk : Proof(Ak)), that is, Π is a proof of C under the assumptions that x1 ... xk are proofs of A1 ... Ak, respectively. 24 Epistemic Assumptions and Analytic Validation of Inferences In recent work, Per Martin-Löf has given an interesting dialogical twist to epistemic assumptions. 25 Already in his first 1946 paper on performatives, etc., John Austin wrote: If I say "S is P" when I don't even believe it, I am lying: if I say it when I believe it but am not sure of it, I may be misleading but I am not exactly lying. ……… When I say "I know", I give others my word: I give others my authority for saying that "S is P". 26
26 Assertions contain implicit, first-person knowledge claims (recall G. E. Moore and asserting that it is raining, but that one does not believe it!), so assertions grant authority. When I first read Austin in 2009 I was led to formulate an Inference Criterion of the same kind: When I say "Therefore" I give others my authority for asserting the conclusion, given theirs for asserting the premisses. Martin-Löf has now noted that one does not need to know that the premises are evident for the validation of an inference: what one must be prepared to undertake is to make the conclusion known or evident under the assumption that someone else grants the premises as evident. In order to undertake that responsibility it is enough if I possess a chain of immediately evidence-preserving steps (in terms of meaning-explanations) that link premises to conclusion. 27 Here the introduction rules of Gentzen may be seen as immediate and meaning explanatory, whereas the elimination rules are immediate, but not meaning explanatory. In Kantian terms, both the introduction and elimination rules are analytically valid, but only the introduction rules are explicitly analytic, or "identical", whereas the analyticity of the elimination rules is implicit, and might need to be made explicit in terms of the meaning explanations offered by the introduction rules, in analogy with: All rational animals are rational is an explicitly analytic (identical) judgement, whereas All humans are rational is also an analytic judgement, but only implicitly so, and one resolution-step, replacing the term human by its definition rational animal, is needed to bring this judgement to explicitly analytic form. 28 In order to complete the comparison, we consider the question: Why is the &-elimination rule valid? We are then, in an epistemic assumption, given as evident the premise-judgement (i) c:Proof(A&B) for an application of &-elimination. Under this epistemic assumption we have to make evident the conclusion (ii) p(c):Proof(A). Since c is a proof of A&B, it executes (evaluates, is definitionally equal) to a canonical proof of A&B, which accordingly has the form (iii) <a,b>:Proof(A&B) and c = <a,b>:Proof(A&B), where we know that (iv) a:Proof(A) and b:Proof(B). But granted this, it is a meaning stipulation for the ordered-pair and projection operators that (v) p(<a,b>) = a:Proof(A); and, since c = <a,b>:Proof(A&B), we also get p(c) = p(<a,b>) = a:Proof(A), whence we are done. Note that these deliberations are all pursuant to the relevant meaning explanations for the notions Proof, &, < >, and p. The step from (i) to (iii) and (iv) matches the resolution-step that replaces human by rational animal. Axiom and Lemma from an Epistemic Point of View Finally, what does this mean for axioms in the traditional sense? Such axioms were self-evident judgements, and known as such. The work of Pasch and Hilbert in geometry initiated a change that led to a hypothetico-deductive conception, which replaced the epistemic notion of inference from self-evident axioms with the model-theoretic notion of logical consequence "under all variations" or "in all models". Natural Deduction added one more feature here to the dethroning of axioms: they now become ordinary assumptions among other ordinary assumptions, but as such they are privileged, because they need never be discharged, and may be discounted, when standing in antecedent position in consequences. 
Nevertheless, contrary to axioms in the old-fashioned sense, they are not known, nor are they asserted whenever they occur. An axiom in the old sense was not an assumption: it was asserted, whereas now that epistemic status is gone, and instead axioms are unasserted assumptions among other assumptions, with the privilege of not carrying the onus of discharge on them. In conclusion then let me just note that epistemic assumptions are well known in mathematical practice when one draws upon a lemma, the demonstration of which is left out until the main demonstration has been completed. Nevertheless, within the main demonstration, the lemma does not work as an additional assumption, but avails itself of assertoric force, even though proper grounding by means of a demonstration is as yet absent. A very clear case here is the so-called Zorn's Lemma, whose epistemic status is highly debatable from the point of view of constructivism, but classically is granted axiomatic status.
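As a side illustration of step (v) in the &-elimination argument above, the following minimal Lean sketch is my own gloss, not the author's or Martin-Löf's notation: it identifies Proof(A) with the proposition A itself, takes the anonymous constructor as the canonical pair <a,b>, and lets the left projection play the role of p; the meaning stipulation p(<a,b>) = a then holds by definitional equality.

```lean
-- Sketch only: Proof(A) is modelled by A itself, the canonical proof of
-- A & B by the pair ⟨a, b⟩, and the projection p by And.left. That the
-- projection computes on the canonical pair (step (v) above) is witnessed
-- by rfl, i.e. by definitional equality.
example (A B : Prop) (a : A) (b : B) : (⟨a, b⟩ : A ∧ B).left = a := rfl
```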
6,825
2018-06-04T00:00:00.000
[ "Philosophy" ]
Schistosoma haematobium infection is associated with lower serum cholesterol levels and improved lipid profile in overweight/obese individuals Infection with parasitic helminths has been reported to improve insulin sensitivity and glucose homeostasis, lowering the risk for type 2 diabetes. However, little is known about its impact on whole-body lipid homeostasis, especially in obese individuals. To address this, a cross-sectional study was carried out in lean and overweight/obese adults residing in the Lambaréné region of Gabon, an area endemic for Schistosoma haematobium. Helminth infection status, peripheral blood immune cell counts, and serum metabolic and lipid/lipoprotein levels were analyzed. We found that urine S. haematobium egg-positive individuals exhibited lower serum total cholesterol (TC; 4.42 vs 4.01 mmol/L, adjusted mean difference [95%CI] -0.30 [-0.68,-0.06]; P = 0.109), high-density lipoprotein (HDL)-C (1.44 vs 1.12 mmol/L, -0.24 [-0.43,-0.06]; P = 0.009) and triglyceride (TG; 0.93 vs 0.72 mmol/L, -0.20 [-0.39,-0.03]; P = 0.022) levels than egg-negative individuals. However, when stratified according to body mass index, these effects were only observed in overweight/obese infected individuals. Similarly, significant negative correlations between the intensity of infection, assessed by serum circulating anodic antigen (CAA) concentrations, and TC (r = -0.555; P<0.001), HDL-C (r = -0.327; P = 0.068), LDL-C (r = -0.396; P = 0.025) and TG (r = -0.381; P = 0.032) levels were found in overweight/obese individuals but not in lean subjects. Quantitative lipidomic analysis showed that circulating levels of some lipid species associated with cholesterol-rich lipoprotein particles were also significantly reduced in overweight/obese infected individuals in an intensity-dependent manner. In conclusion, we report that infection with S. haematobium is associated with an improved lipid profile in overweight/obese individuals, a feature that might contribute to reducing the risk of cardiometabolic diseases in this population. The authors do not mention how the sample size was calculated. Please include this information in the Methods section. One of the main limitations of this study is the sample size. This information is crucial to determine if the sample size is appropriate, and therefore the conclusions can be supported. Authors' reply: We agree with the reviewer that one of the limitations of our study, as also underlined in the discussion section (page 15), is its rather small sample size. To roughly determine the sample size, we used the average value for total cholesterol levels in a previous small cohort study performed in Lambaréné. For this primary outcome, we aimed to be able to detect a mean difference of ~12.5% between the Sh- and Sh+ groups, with alpha = 0.05 and a power of 80%. The number of volunteers to be recruited was calculated to be 33 per group. Taking into account the infection prevalence in the study area, a compliance rate of ~80% at screening, and a 5-10% drop-out rate after inclusion (e.g. P. falciparum infection), ~110 individuals were intended to be screened, among which 71 were finally included. This information has been added to the Methods section (line 97-99, Page 7). Were Sh+ individuals negative for STH and Plasmodium? It is not clear in the Methods whether co-infections were excluded. Authors' reply: All the individuals found to be positive for Plasmodium falciparum, in both Sh- and Sh+ groups, were excluded (see Figure S1). 
We agree that this was not crystal clear in the method section, so we have adjusted the sentence accordingly (line 108-111, Page 7). Concerning the co-infection with STH, we have deliberately decided not to exclude the 7 Sh+ individuals found to be infected with other helminths (see response to Reviewer 2 above). As underlined, the impact of infection with other helminths is negligible and removing them from our analyses only reduces the statistical power but does not affect our conclusions. Line 112: for treatment of Sh+ individuals, were parasitological (presence of eggs in urine) and CAA results considered? Authors' reply: For antihelminthic treatment of Sh+ individuals, the presence of urine eggs was used as the readout for infection (CAA detection was not done during the field study but months later on the whole sample collection, together with other serum parameters). We have now added this information (line 115-116, page 8). I strongly recommend including the reference values of the biochemical parameters for your study population as supplemental material. Authors' reply: Reference values for biochemical parameters at the whole population level are usually well established in Westernized countries but are not always easily available for African individuals. We provided to the reviewer (see Table 2 for reviewers below) some of the available information obtained from the local/national Gabonese health care system, but we think it would not be of crucial interest to add them as supplementary data. Of note, the average values for all the biochemical parameters were within the 'normal' physiological ranges for the different groups. Quantitative insulin sensitivity check index (QUICKI) = 1 / (log(fasting insulin μU/mL) + log(fasting glucose mg/dL)) should be calculated and included as an additional measurement of insulin resistance. Authors' reply: We agree that calculating the QUICKI index might be an alternative to HOMA-IR for assessing whole-body insulin resistance. However, we do not find any differences between the two methods, whatever the conditions (see Table 3 for reviewers as an example below). Taking into account that HOMA-IR is by far the most common index used and that it has also been previously used in the few publications investigating the impact of helminths on metabolic homeostasis in humans, we decided not to include this redundant calculated parameter in our already large tables. Reviewer #1: Results are clearly and completely presented. One aspect not addressed is the relationship of the immune response (as determined by eosinophil numbers and/or %) to the lipid profile for each individual. The authors do mention in the discussion that the possible effect of IL4/IL13 on hepatocyte function may be one of the mechanisms for lowering TG. Eosinophil levels do reflect the immune response to the parasite by an individual, and although the authors are correct to use CAA levels to measure the intensity of parasite infection, it would also be worthwhile, I thought, to relate the intensity of the immune response to Sh to the lipid levels. Authors' reply: We do agree with the reviewer that his/her suggested analysis would have made sense and nicely complemented the one done using CAA. Unfortunately, as acknowledged, part of the eosinophil data are missing due to lost samples during field analysis and, as such, we do have a reduced statistical power (especially when the data are stratified according to BMI), preventing us from drawing reliable and firm conclusions. 
Of note, when doing this analysis for the whole population (see Figure 1 for reviewers below), some trends similar to the ones observed with CAA are still seen, but none of them reached statistical significance due to the low sample size. Figure 1 for reviewers: Associations between intensity of S. haematobium infection assessed by blood eosinophil levels and serum lipid parameters in the whole population. The 'Statistical Analysis' paragraph is well written. These are a good choice of tests, consideration of confounding factors, and multiple testing corrections. Tables S1 & S2 are where they show the raw Odds Ratio and confounder-adjusted OR for egg and serum levels. Tables S1, S2 are very informative, and are well presented by showing both raw and adjusted results (that consider multiple important confounding factors). Tables S3, S4, S5 are mislabelled. Authors' reply: The labels of Tables S3-5 have been corrected (see new Tables). Table S3 summarises factors stratified by CAA range; the eosinophil response appears to show incredibly strong correlation to stratification level. It is unfortunate they lost some measurements for eosinophils as stated, but their choice of statistical test is well suited to different sample sizes. Table S4 shows the disparity in gender between the BMI >/< 25 groups, showing the importance of presenting adjusted ORs. It is important that the presentation of raw data is included. Fig. 1: It is interesting that in the whole population the HDL-C has a significant p-value, but when you split it into lean & obese groups there is no significant p-value in either, despite the same trend. The overall trend is clear, however, that serum CAA levels are associated with many cholesterol measurements in the obese category. Fig. 2a and b are skillfully plotted, and a statement is needed with respect to what p-values are associated to (#/*/#*) on the heatmap. Authors' reply: The definition of the p-value labels has been added to the legend of Figure 2 (Page 23). Reviewer #2: The results are well presented and clear. Authors' reply: We fully agree with the reviewer and, although we speculated in the discussion on some possible underlying mechanism(s), further studies are definitely required for improving our understanding. Reviewer #3: I recommend that the authors include a short conclusion paragraph at the end of the discussion. Paragraph 273-292: The article by Cortes-Selva D et al. (Frontiers in Immunology 12;9:2580) should be included since it reinforces the hypothesis that Schistosoma-induced Th2 response confers protection from hyperlipidemia, atherosclerosis, and glucose intolerance. Authors' reply: Together with a short sentence, we have added the suggested publication in the discussion section (line 297-298, Page 15). It might indeed support changes in tissue-resident immune cell lipid/cholesterol metabolism in response to helminth infection. 
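For reference, the two insulin-resistance indices discussed in the replies above can be written down as a small, purely illustrative Python sketch; the formulas are the standard published definitions (HOMA-IR = fasting glucose [mmol/L] x fasting insulin [uU/mL] / 22.5, and QUICKI as quoted by the reviewer, taking log as the common logarithm), and the example values below are invented rather than study data.

```python
# Illustrative sketch of HOMA-IR and QUICKI; not the study's analysis code.
import math

def homa_ir(glucose_mmol_l: float, insulin_uU_ml: float) -> float:
    """HOMA-IR = (fasting glucose [mmol/L] * fasting insulin [uU/mL]) / 22.5."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def quicki(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """QUICKI = 1 / (log10(fasting insulin [uU/mL]) + log10(fasting glucose [mg/dL]))."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

if __name__ == "__main__":
    glucose_mmol = 5.0                 # invented fasting glucose, mmol/L
    glucose_mg = glucose_mmol * 18.0   # approximate conversion to mg/dL
    insulin = 8.0                      # invented fasting insulin, uU/mL
    print(f"HOMA-IR : {homa_ir(glucose_mmol, insulin):.2f}")
    print(f"QUICKI  : {quicki(glucose_mg, insulin):.3f}")
```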
2,454.2
2020-07-01T00:00:00.000
[ "Medicine", "Environmental Science", "Biology" ]
The mechanism of a high-affinity allosteric inhibitor of the serotonin transporter The serotonin transporter (SERT) terminates serotonin signaling by rapid presynaptic reuptake. SERT activity is modulated by antidepressants, e.g., S-citalopram and imipramine, to alleviate symptoms of depression and anxiety. SERT crystal structures reveal two S-citalopram binding pockets in the central binding (S1) site and the extracellular vestibule (S2 site). In this study, our combined in vitro and in silico analysis indicates that the bound S-citalopram or imipramine in S1 is allosterically coupled to the ligand binding to S2 through altering protein conformations. Remarkably, SERT inhibitor Lu AF60097, the first high-affinity S2-ligand reported and characterized here, allosterically couples the ligand binding to S1 through a similar mechanism. The SERT inhibition by Lu AF60097 is demonstrated by the potentiated imipramine binding and increased hippocampal serotonin level in rats. Together, we reveal a S1-S2 coupling mechanism that will facilitate rational design of high-affinity SERT allosteric inhibitors. Serotonin (5-HT) transmission is involved in many basic brain functions, such as regulation of mood, sleep, appetite, and sexual drive 1 . The serotonin transporter (SERT) is embedded in the presynaptic membrane and terminates 5-HT transmission by rapid reuptake of released 5-HT. SERT belongs, together with the transporters for the neurotransmitters dopamine, norepinephrine, γ-aminobutyric acid, and glycine, to the family of neurotransmitter:sodium symporters (NSSs) that all exploit the electrochemical potential stored in the Na+ gradient to translocate substrates against their concentration gradients [2][3][4] . Pharmacological inhibition of SERT is known to alleviate symptoms of depression and anxiety, the two most prevalent psychiatric disorders ranking among the top five leading causes of disability worldwide 5,6 . Specifically, the selective serotonin reuptake inhibitors (SSRIs), such as S-citalopram (S-CIT) (Fig. 1a), sertraline and paroxetine, are currently used to treat depression, anxiety, obsessive-compulsive disorder (OCD), and post-traumatic stress disorder (PTSD) among others 7 . The tricyclic antidepressants, such as imipramine (IMI) (Fig. 1a) and amitriptyline, and the multimodal antidepressants vilazodone and vortioxetine 8 , also target SERT. In addition, SERT is in part targeted by (illicit) psychostimulants such as MDMA (ecstasy), ibogaine, cocaine, and amphetamine 9,10 . The structures of human SERT 11 , the Drosophila dopamine transporter 12 and two bacterial NSSs, LeuT 13 and MhsT 14 , have revealed a conserved structural fold for NSSs with a primary ligand binding (S1) site located in the center of the transmembrane domain. Interestingly, the existence of an allosteric binding site in SERT was reported more than three decades ago 15 . The key observation was that certain SERT inhibitors could impede the dissociation of a pre-bound radiolabeled ligand [16][17][18][19] . The most potent impedance, that of S-CIT on [ 3 H]S-CIT dissociation, however, displays only a low potency (IC 50 is ~5 µM), and in spite of intensive investigations 20,21 no other compound has been shown to possess higher potency 20,22 . Based on computational modeling and experimental binding studies, we previously located the low-affinity second binding site for S-CIT and clomipramine to the extracellular vestibule (EV), the entry pathway toward the S1 site 21 . 
Interestingly, in one of the recent hSERT crystal structures (PDB 5I73), two S-CIT molecules are bound to the protein: one in the S1 site (denoted as S1:S-CIT) and another bound to a binding site in the EV-the S2 site (denoted as S2:S-CIT)~13 Å above the S1 site 11,23 . This is consistent with previous findings that the EV of LeuT harbors a S2 site capable of binding ligands [24][25][26] . The comparison of the 5I73 structure to the structure bound with only one S-CIT in S1 (PDB 5I71) shows that they have identical conformations in the EV, and it is not clear whether ligand bindings in the S1 and S2 sites allosterically interact with each other through modulation of any specific structural motif 27,28 , which may result in conformational changes. Allosteric modulators can potentially possess higher selectivity due to the divergence of the binding sites among homologous proteins 29,30 resulting in fewer side effects. In addition, compared with competitive inhibitors to the endogenous ligand, they may retain some of the functions of the target proteins. Indeed, several well-known allosteric modulators possess novel pharmacologic properties such as use-dependency (e.g., lidocaine), or activity modulation of the endogenous ligand (e.g., benzodiazepines), or to perform asymmetric signaling as in metabotropic glutamate receptor complexes 31 , all providing advantageous therapeutic potentials over orthosteric modulators. For S-CIT, it has been proposed that its allosteric binding in SERT contributes to its higher efficacy and faster onset observed in clinical trials as compared with racemic citalopram [32][33][34][35][36] . However, the low affinity of S-CIT to S2 relative to S1 hampers the possibility to reveal the specific therapeutic potentials in targeting this site. Thus, a high affinity and selective S2-bound ligand would facilitate not only a thorough mechanistic understanding of allosteric communications between the S2 and S1 sites, but also a proper evaluation of the therapeutic potential of allosteric modulation in SERT. Here, we provide mechanistic evidence for allosteric modulations between the S1 and S2 sites in SERT by showing that the effects of ligand binding to the S1 site allosterically propagate through altered conformation of a structural motif between the S1 and S2 sites. In the context of these findings, we report the identification and characterizations of a high-affinity S2-bound inhibitor for SERT, Lu AF60097 ((S)-1-(4-fluorophenyl)-1-(3-(4-(2-oxo-1,2-dihydroquinolin-7-yl)piperidin-1-yl)propyl)-1,3-dihydroisobenzofuran-5-carboxamide), in vitro, in silico, and in vivo. Results S1-binding is allosterically connected to S2-binding. We have previously shown that the binding of either S-CIT or clomipramine to the S2 site inhibits [ 3 H]S-CIT dissociation from the S1 site 21 . To examine whether ligand binding to S1 would induce conformational changes of SERT that allosterically affect ligand binding to S2 37 , we first evaluated whether the inhibitory potency of a S2-bound ligand can be differentially affected by the identity of the S1-bound ligand. For S-CIT and IMI, we specifically assumed that they would bind exclusively to S1 at nanomolar concentrations (denoted as S1:S-CIT and S1:IMI). 
Accordingly, we added 25 nM of either [ 3 H]S-CIT or [ 3 H]IMI to membranes of COS-7 cells transiently expressing hSERT wild type (WT), and then measured the dissociation rates of the two ligands in the presence of increasing concentrations of S-or R-CIT (0.4 µM-1 mM) that should occupy the S2 site at high concentrations (S2:S-CIT and S2:R-CIT). The increase in S2-occupancy by either S-or R-CIT results in a dose-dependent inhibition of dissociation by the S1-bound radioligand. The concentration of a S2-bound ligand causing 50% decrease in the dissociation of a S1-bound radioligand (IC 50 ) is used as a measure to reflect the inhibitory (allosteric) potency for S2 binding (see "Methods"). We found that the allosteric potency of S2:S-CIT was 29-fold higher in the presence of S1:[ 3 H]S-CIT than in the presence of S1:[ 3 H]IMI (Fig. 1b, Table 1). In contrast, the allosteric potency of S2:R-CIT was reversed, i.e., lower in the presence of S1:[ 3 H]S-CIT relative to S1:[ 3 H]IMI (Fig. 1c, Table 1). The results indicate that the allosteric potency of a S2-bound ligand is sensitive to the identity of the S1-bound ligand. Thus, assuming no direct interaction between the S1-and S2-bound ligands (see below), these results support the idea that SERT conformational changes induced by ligand binding to S1 modulate ligand binding to S2. To probe the mechanistic details of a possible allosteric interaction between ligands bound to S1 and S2, we performed extensive molecular dynamics (MD) simulations at microsecond scale using the ts3 and WT SERT models (see "Methods") in complexes with different combinations of S1-and S2-bound ligands (Fig. 2, Supplementary Table 1). We first compared the resulting conformations of the simulations in the presence of S1: S-CIT without any ligand bound in S2 (denoted as S1:S-CIT/S2: apo) to that of the 5I71 structure. When comparing the two S-CIT bound SERT crystal structures, 5I71 and 5I73, we found that the EV space occupied by the S2:S-CIT in 5I73 is filled by a dodecane in 5I71 (likely part of a lipid molecule used in the crystallization process) ( Supplementary Fig. 1). Interestingly, in the absence of any ligand in S2, our simulations in both ts3 and WT constructs showed that the side chains of Phe334 and Phe335 of TM6 move toward the center of the EV and form an aromatic cluster with Phe556 that rotates inward ( Supplementary Fig. 1). Consequently, Phe335 and Phe556 occupy the space that overlaps with dodecane and S2:S-CIT in the crystal structures (Supplementary Fig. 2). On the other hand, the resulting EV conformations of the S1:S-CIT/S2:S-CIT simulations were similar to that of the 5I73 structure. Thus, the conformational differences observed between S1:S-CIT/S2:apo and S1:S-CIT/S2:S-CIT conditions suggest that the binding of S-CIT in S2 is associated with robust conformational rearrangements. We then compared the SERT conformations in the S1:S-CIT/ S2:apo and S1:IMI/S2:apo conditions (Fig. 2b, d). We found that different moieties of these two S1 ligands that face TM10, i.e., the cyano group of S-CIT and the aromatic ring of IMI, have significantly different impacts on the conformation of the bulge helical turn in TM10 (Leu492 to Thr497). In particular, we found a remarkable difference in the χ 1 dihedral angle of Thr497 depending on the docked compound. The cyano group of S1:S-CIT favors the χ 1 rotamer of Thr497 to be in gauche−, while this rotamer is more likely in gauche+ in the presence of S1:IMI (Fig. 2g). 
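As a rough sketch of the kind of χ1 rotamer analysis reported here (the Methods later in this paper name MDTraj together with in-house Python scripts), the snippet below computes the χ1 dihedral of a single residue across a trajectory and classifies it into gauche−/gauche+/trans using simple 120° bins; the file names, the residue selection, and the bin convention are illustrative assumptions, not the authors' actual inputs.

```python
# Illustrative chi1 rotamer classification with MDTraj; placeholder inputs.
import numpy as np
import mdtraj as md

traj = md.load("sert_run.xtc", top="sert_model.pdb")   # hypothetical files

# chi1 for a threonine is the N-CA-CB-OG1 dihedral; the selection below
# relies on the standard PDB atom ordering (verify for your topology).
thr_atoms = traj.topology.select("resSeq 497 and name N CA CB OG1")
chi1 = md.compute_dihedrals(traj, [thr_atoms])[:, 0]   # radians per frame
chi1_deg = np.degrees(chi1)

def rotamer(angle_deg: float) -> str:
    """Coarse 120-degree bins: ~+60 gauche+, ~180 trans, ~-60 gauche-."""
    a = (angle_deg + 360.0) % 360.0
    if a < 120.0:
        return "gauche+"
    elif a < 240.0:
        return "trans"
    return "gauche-"

states = np.array([rotamer(a) for a in chi1_deg])
counts = {s: int(np.sum(states == s)) for s in ("gauche-", "gauche+", "trans")}
transitions = int(np.sum(states[1:] != states[:-1]))
print(counts)
print("rotamer transitions along the trajectory:", transitions)
```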
Next, we built and equilibrated SERT WT models in the S1:S-CIT/S2:S-CIT and S1:IMI/S2:S-CIT conditions (see "Methods", Fig. 2c, e). The analysis of the MD simulations of these conditions showed that the χ 1 rotamer of Thr497 in the presence of S1:S-CIT is further stabilized in gauche− by the addition of S2:S-CIT, whereas the S2:S-CIT in the same pose is not stable in the presence of S1:IMI, forcing Thr497 in the latter condition to rotate from gauche+ to the gauche− rotamer (Fig. 2f). When Thr497 is in gauche−, we found that the S1-gating residue, Phe335, cannot form a stable interaction with the benzofuran moiety of S2:S-CIT and transitions between gauche− and trans rotamer in the presence of S1:IMI, whereas this interaction is stable in the presence of S1:S-CIT. To quantify this difference, we counted the numbers of transitions in each condition, and found that Phe335 transitions between the gauche− and trans rotamers at a rate of 145.4/µs in S1:IMI/S2:S-CIT, while only 1.1/µs in S1:S-CIT/S2:S-CIT. Thr497 and Phe335 are situated in between the S1 and S2 sites. Their varied configurations in the S1:S-CIT/S2:S-CIT and S1:IMI/S2:S-CIT conditions correlate with the conformation of Glu494, which shows a higher propensity to form a salt bridge with the charged N of S2:S-CIT in the presence of S1:S-CIT compared with in the presence of S1:IMI (Fig. 2h), resulting in a more stable pose in the former condition (Fig. 2i). [Fig. 1 caption: Experimental evidence for allosteric binding between the S1 and S2 sites in SERT. a Chemical structure of the tested drugs. Left: imipramine (IMI). Right: R(−)- and S(+)-citalopram (R-CIT and S-CIT, respectively).] Thus, we hypothesized that the observed S2:S-CIT affinity difference in these two conditions (Fig. 1b) likely resulted from the different impacts of S1-bound ligands on the interaction between S2:S-CIT and Glu494, which are mediated by the Thr497-Phe335 motif. To experimentally test this hypothesis, we removed the negative charge of Glu494 by the E494Q mutation and measured the allosteric potency of R- and S-CIT in the presence of S1:[ 3 H]S-CIT or [ 3 H]IMI (Fig. 1d). Remarkably, compared with WT, in hSERT E494Q the allosteric potency of S2:S-CIT was significantly reduced for S1:[ 3 H]S-CIT but did not change for [ 3 H]IMI, and the two potencies became virtually the same. The same was observed for S2:R-CIT (Fig. 1d, Supplementary Table 2). As our simulation results suggest that Thr497 and Phe335 are sterically crowded in the S1:IMI/S2:S-CIT condition, we further hypothesized that by mutating Thr497 to a residue with a smaller sidechain, the space in between S1 and S2 would be less crowded, which might facilitate the S2:S-CIT binding. Indeed, in the presence of [ 3 H]IMI, the allosteric potency of S2:S-CIT was increased 17-fold in SERT T497A relative to SERT WT. In contrast, the allosteric potency of S2:S-CIT or S2:R-CIT in the presence of [ 3 H]S-CIT was not affected by T497A (Fig. 1e, Supplementary Table 2). Taken together, we propose that Thr497-Phe335 represents a structural motif mediating allosteric communication between the S1 and S2 sites. [Fig. 2 caption, beginning lost: … of the allosteric interaction between SERT S1 and S2 sites. In the presence of S1:S-CIT the Thr497 χ1 dihedral is mostly shifted towards gauche−, whereas in the presence of S1:IMI, it is in gauche+, which in turn affects the salt bridge interaction between S2:S-CIT and Glu494. In all panels, the S1:S-CIT conditions are colored in salmon, whereas the S1:IMI conditions are in purple. a A zoomed-out view of the 5I73 structure showing the S1 and S2 sites. b A zoomed-in view of the equilibrated model of WT S1:S-CIT/S2:apo, c S1:S-CIT/S2:S-CIT, d S1:IMI/S2:apo, and e S1:IMI/S2:S-CIT. g Distribution of the Thr497 χ1 rotamer for S1:S-CIT/S2:apo, S1:IMI/S2:apo (dotted lines), and S1:S-CIT/S2:S-CIT and S1:IMI/S2:S-CIT (solid lines) conditions. h Distribution of the Glu494/S2:S-CIT distance (minimum distance between the charged N of S2:S-CIT and the two carboxyl oxygens of Glu494) for S1:S-CIT/S2:S-CIT and S1:IMI/S2:S-CIT conditions. i S2:S-CIT is more stable in the presence of S1:S-CIT (salmon) than in the presence of S1:IMI (purple) measured by pairwise ligand RMSDs (see "Methods").] Identification of a high affinity S2 inhibitor. 
Based on these findings, we hypothesized that the binding of S2-ligands having higher allosteric potency in the presence of S1:IMI must not result in steric crowdedness near the Thr497-Phe335 motif, while forming a favored interaction with Glu494. To further characterize the potential of the S2 site as a druggable allosteric site 28,31 with the ultimate goal of developing therapeutic agents against it, we screened a compound library of citalopram analogs and assessed their allosteric potency using both S1:[ 3 H]S-CIT and S1:[ 3 H]IMI as described above. Strikingly, we found three S-CIT analogues possessing very potent (30-200 nM) inhibition of the S1:[ 3 H]IMI dissociation, but low (6-30 µM) allosteric potency on inhibiting the S1:[ 3 H]S-CIT dissociation (Table 1). Interestingly, these three compounds all have a carboxamide instead of the cyano group on the benzofuran moiety of the S-CIT scaffold. They differ, however, in the presence and position of double bonds in the bicyclic N-substituents (Fig. 3a). In particular, the allosteric potency of Lu AF60097 in the presence of S1:IMI was 31 nM (Table 1), which is more than a 150-fold increase compared with citalopram. To understand the molecular mechanism of the selective high allosteric potency of Lu AF60097, we characterized and compared the binding modes of S2:Lu AF60097 in the presence of S1:S-CIT versus S1:IMI by MD simulations. We first docked Lu AF60097 unbiasedly into the extracellular vestibule of our equilibrated S1:IMI/S2:apo model, and identified three poses (denoted as pose "I", "II", and "core") to be further relaxed and evaluated by MD simulations (see "Methods" and Supplementary Table 1). We then carried out molecular mechanics/generalized Born surface area (MM/GBSA) calculations to evaluate the binding energy of the equilibrated poses, and found that the pose "core", in which the S-CIT scaffold of Lu AF60097 adopts a similar orientation as S2:S-CIT in the S2 site, has the most favored energy (pose "I", −59.7 kcal/mol; pose "II", −73.7 kcal/mol; pose "core", −77.6 kcal/mol). Consistent with the predicted binding free energy, the S-CIT core of Lu AF60097 in pose "I" protrudes out of the EV and is not fully engaged with hSERT, resulting in drastically weaker binding (Supplementary Fig. 3a). Whereas pose "core" forms the ionic interaction with Glu494 and is in proximity to Lys490, pose "II" does not form interactions with either of these residues but a polar interaction with Asp328 (Supplementary Fig. 3b, c). Therefore, we chose the pose "core" for further analysis (Fig. 3d, e). Our mutagenesis results of Asp328, Lys490, and Glu494 indeed support pose "core" but not pose "II" (see below). 
Our MD simulations show that the cyano-to-carboxamide substitution orients the benzofuran moiety of Lu AF60097 to move slightly away from the Thr497-Phe335 motif compared with S2:S-CIT: whereas the cyano group of S2:S-CIT points to a polar cavity under Gln332, the carbamoyl group of Lu AF60097 forms a H-bond to the sidechain of Gln332 (Supplementary Fig. 4). Such a rearrangement allows the sidechain of Thr497 to be in the preferred gauche+ χ 1 rotamer in the presence of S1:IMI, while S2:Lu AF60097 forms a salt bridge with Glu494 through its charged N (Fig. 3f, g). In addition, the quinolinone moiety of Lu AF60097 protrudes into a sub-pocket near the tip of the extracellular loop 4b (EL4b), with the 2-oxo modification forming … [Fig. 3 caption, fragment: … e S1:S-CIT (in salmon)/S2:Lu AF60097 (in green) conditions. f Distribution of the Thr497 χ1 rotamer for S1:S-CIT/S2:Lu AF60097 (salmon) and S1:IMI/S2:Lu AF60097 (purple) conditions. g Distribution of the Glu494/S2:Lu AF60097 distance (minimum distance between the charged N of Lu AF60097 and the two carboxyl oxygens of Glu494) for S1:S-CIT/S2:Lu AF60097 (salmon) and S1:IMI/S2:Lu AF60097 (purple) conditions. Experiments in b and c are performed essentially as in Fig. 1.] Interestingly, when similar MD simulations were carried out for a S1:S-CIT/S2:Lu AF60097 model (Fig. 3e), in which the S-CIT scaffold of Lu AF60097 is aligned to S2:S-CIT as in the pose "core", S2:Lu AF60097 could not form the salt bridge with Glu494 through its charged N (Fig. 3g), while its quinolinone moiety could not get into the sub-pocket near EL4b. These divergent poses of the S-CIT scaffolds of S2:Lu AF60097 versus S2:S-CIT are likely associated with the replacement of the cyano group with a carboxamide in the benzofuran moiety. Consequently, the ionic interaction of S2:Lu AF60097 to Glu494 is much weaker than that of S2:S-CIT in the presence of S1:S-CIT (Figs. 2h and 3g). Moreover, by comparing the MD frames from both the S1:IMI/S2:Lu AF60097 and S1:S-CIT/S2:Lu AF60097 conditions when the quinolinone moiety of Lu AF60097 did not protrude into the sub-pocket near EL4b, we found that the moiety is in different orientations and dynamics in the EV (Supplementary Fig. 5). Together, our computational results indicate that the allosteric interactions between the different S1:ligands and S2:Lu AF60097 result in markedly different poses of S2:Lu AF60097, which could account for the observed 200-fold loss in IC 50 in the presence of S1:[ 3 H]S-CIT compared with that in the presence of S1:[ 3 H]IMI (Table 1). In the [ 3 H]IMI dissociation assay, we found in agreement with the MD simulations that most mutations decreased the allosteric potency of Lu AF60097 more than tenfold (Fig. 4b, Table 2). The F556R mutant caused the most significant change with a complete ablation of the allosteric potency within the concentration range of the applied Lu AF60097. According to our MD simulations, Phe556 has a stable aromatic interaction with the fluorophenyl ring of the S-CIT scaffold in Lu AF60097, and therefore the F556R mutation would expectedly cause a drastic change in the IC 50 . Ala331 interacts with both of the aromatic rings in the S-CIT scaffold of Lu AF60097 in the MD simulations, and the substitution with the negatively charged Asp residue does indeed cause a ~300-fold decrease in allosteric potency, suggesting a critical position of this residue in the binding pocket for S2:Lu AF60097. 
In addition, mutations L406E in EL4b and I179F in neighboring TM3 are expected to disrupt the structural integrity of EL4b, a segment predicted to accommodate the quinolinone moiety of Lu AF60097. Indeed, these mutations resulted in 21- and 14-fold decreases in the allosteric potency of S2:Lu AF60097, respectively. Taken together, most of the mutations based on the predicted pose "core" from our MD simulations have detrimental effects on Lu AF60097 binding in vitro, suggesting that the compound does bind to the predicted S2 site. Of note, E494Q only resulted in a ~5-fold decrease in allosteric potency. This is in contrast to the S1:S-CIT/S2:S-CIT condition in which the mutation caused a ~12-fold decrease. The difference is likely due to a reduced contribution from the salt bridge to binding affinity for the larger Lu AF60097 (39 heavy atoms, compared with 24 for S-CIT). In our equilibrated S1:IMI/S2:Lu AF60097 model from the MD simulations, Lu AF60097 in pose "core" interacts with the backbone but not the sidechain of Asp328. Thus, the only minor (~2-fold) decrease in allosteric potency caused by the D328N mutation was expected. This result also argues that Lu AF60097 is less likely to be in pose "II", which forms a polar interaction with the sidechain of Asp328 (Supplementary Fig. 3b). Competitive and non-competitive inhibition of 5-HT transport. Because of the low potency of S2:Lu AF60097 in inhibiting the dissociation of S1:[ 3 H]S-CIT, we reasoned that it was possible to assess the affinity of Lu AF60097 at the S1 site using displacement of [ 3 H]S-CIT equilibrium binding (Supplementary Fig. 6). Whereas we cannot rule out that [ 3 H]S-CIT binding is partially modulated by the allosteric interaction, the results suggest that the S2 (in the presence of S1:IMI) over S1 selectivity for Lu AF60097 is at least eightfold. For S-CIT, the selectivity is ~1000-fold in favor of the S1 site. In comparison, when we performed a binding experiment under similar conditions but using [ 3 H]IMI as the radioligand, [ 3 H]IMI was not displaced by Lu AF60097, possibly because Lu AF60097 has a high S2 affinity in the presence of S1:IMI and locks [ 3 H]IMI in the S1 site (Supplementary Fig. 6). This finding further substantiates the ligand-dependent differences in the allosteric interaction between the S1- and S2-bound ligands. Next, we investigated the capability of Lu AF60097 to inhibit the transport of [ 3 H]5-HT by hSERT expressed in COS-7 cells (Fig. 5a). This IC 50 is different from its measured allosteric potency but similar to the equilibrium binding affinity for Lu AF60097 when displacing [ 3 H]S-CIT, suggesting a S1 component of Lu AF60097's inhibition of the uptake. To further substantiate the possibility of a S1-binding component, we also performed [ 3 H]5-HT uptake inhibition in the F556R mutant, which only showed very minimal allosteric binding by Lu AF60097 (Fig. 4). Indeed, the uptake inhibition by Lu AF60097 in this mutant was not different from WT (K i = 220 [190; 240] and 260 [210; 310] nM, n = 15 and 3, for WT and F556R, respectively, Fig. 5a). To examine whether the inhibition of 5-HT uptake is due to a blockade by a competitive or a non-competitive action by Lu AF60097, we performed [ 3 H]5-HT saturation uptake with increasing concentrations of Lu AF60097 (Fig. 5b, Supplementary Table 4). The results show that Lu AF60097 inhibits 5-HT uptake mainly by changing the K M of [ 3 H]5-HT transport at low concentrations, indicative of a competitive action. 
At high concentrations Lu AF60097 also reduces the maximal uptake velocity (V MAX ), suggesting a combined competitive and noncompetitive mechanism. Thus, together with the [ 3 H]S-CIT equilibrium binding results, we found that Lu AF60097 possesses a S1-binding-based competitive component as well, when the S1 site is not occupied by IMI. In our prolonged MD simulations of the potential binding pose of Lu AF60097 in the S1 site, we found that its S-CIT scaffold adopts a similar pose as that of the S1-bound S-CIT, while the quinolinone moiety protrudes out of the S1 site. However, its positively charged tertiary amine moiety cannot form any ionic interaction with either the sidechain carboxyl group of Asp98 or the backbone carbonyl group of Tyr95 (Supplementary Fig. 7) 38 . This less favored binding mode of Lu AF60097 is consistent with its significantly reduced affinity at S1 compared with S1:S-CIT (Table 1). Lu AF60097 and imipramine can block SERT synergistically. Since the Lu AF60097 affinity is markedly increased in the presence of S1:IMI, we predicted that they would inhibit 5-HT uptake synergistically, i.e., the inhibitory effect of applying them together would be more potent than combining the effects of applying them individually. Thus, we studied the inhibition of [ 3 H]5-HT uptake by low concentrations of either IMI or Lu AF60097 alone or in combination. As shown in Fig. 5c, IMI (4 nM) or Lu AF60097 (27 nM) alone only resulted in a modest decrease in [ 3 H]5-HT uptake (10.7 ± 0.8% and 5.0 ± 1.8% inhibition, respectively, relative to control, mean ± S.E., n = 5-7). In contrast, the two compounds together acted synergistically and caused a significant 36.2 ± 0.3% decrease of the 5-HT uptake (mean ± S.E., n = 5). The results support that the binding of one ligand facilitates the binding of the other ligand. Hippocampal 5-HT levels are increased by Lu AF60097. To investigate whether Lu AF60097 administration has any effect on 5-HT homeostasis in an in vivo setting, we performed microdialysis in rat hippocampus with local administration of Lu AF60097. Two microdialysis probes were inserted, one in each hemisphere, into rat hippocampus. In close vicinity of each probe, an injection needle was placed, and 5-HT levels were measured through the microdialysis probe on the freely moving rats. When 5-HT levels were stabilized, we injected 1 µl of 250 nM Lu AF60097 in one hemisphere while saline (artificial cerebrospinal fluid, aCSF) was injected in the other for comparison. Lu AF60097 increased 5-HT levels, reaching significance after 80 min and about 6-fold above saline after 2 h (Fig. 6a). The results suggest that Lu AF60097 is capable of targeting and blocking SERT function in vivo at a concentration no higher than 250 nM. We further assessed whether it was possible to mimic the synergistic effect on 5-HT levels we observed in cell lines when co-administering IMI and Lu AF60097. To match the concentrations from the in vitro experiment, we first performed an in vitro recovery experiment to determine the fraction of the compounds that would perfuse through the dialysis probe. We found that 18.6 ± 1.1% and 3.22 ± 0.7% (means ± SEM, n = 3) of IMI and Lu AF60097, respectively, would cross the probe membrane. Based on these experiments, we found that the perfusion of either 0.36 µM IMI or 1.9 µM Lu AF60097 in the microdialysis probe had no effect on hippocampal 5-HT levels relative to saline within the duration of the experiment (Fig. 6b). 
In contrast, when the two compounds were administered together, the 5-HT levels rose to significantly higher levels either 40 or 60 min after perfusion start, relative to IMI and Lu AF60097 alone, respectively. The increased 5-HT levels remained increased during the remaining of the 160 min test period. These data suggests that Lu AF60097 is also able to potentiate the effect of IMI on extracellular 5-HT levels in an in vivo setting. Discussion The presence of an allosteric site in SERT has been known for more than three decades 39 . Structurally diverse compounds such as sertraline 17 , paroxetine 19 , clomipramine 21 , and citalopram 21 have been shown to possess allosteric activity as they can impair dissociation of a pre-bound high affinity radioligand to the transporter. However, the allosteric potencies of these compounds are all in the micromolar range, while they bind to the orthosteric site with low-nanomolar affinity. Accordingly, it has not been possible to isolate their specific allosteric impact on SERT function. Mutagenesis studies 21 and x-ray crystallography 11 have located the allosteric site to the EV of the transporter. In addition to SERT, allosteric sites at similar locations have been found in NET 18 and LeuT 26 . Further, it has been proposed that S2 occupancy by a substrate in LeuT is required for substrate to be released from S1 24 , while the substrate 5-HT has been shown to have an allosteric effect on [ 3 H]IMI dissociation in SERT, though with low potency 39 . Here, we show at atomistic detail that the S1 and S2 binding sites in SERT are allosterically coupled to each other. By combining extensive (~125 µs) MD simulations of various conditions and site-directed mutagenesis, we show that the impact of ligand binding to the S1 site propagates through the Thr497-Phe335 motif to alter the configuration of the S2 site. In particular, the propensity of Glu494 in the S2 site to form a salt bridge with the S2-bound ligands is differentially affected by S1:S-CIT and S1: IMI. Consistent with this prediction, removal of the negative charge of Glu494 (E494Q) results in similar S2 affinities for S-CIT in the presence of either S1:IMI or S1:S-CIT, while the same impact on the S2 affinities of R-CIT were observed as well (Fig. 1). Moreover, mutating Thr497 to a residue with a smaller sidechain (T497A) improves S2:S-CIT binding in the presence of S1:[ 3 H] IMI, to the same extent as when [ 3 H]S-CIT is bound to S1. Together, the results suggest that the configuration of the Thr497-Phe335 motif is sensitive to the identity of the S1-bound ligand and plays a critical role in the allosteric communication between the S1 and S2 sites. We further report on a compound with nanomolar affinity at the S2 site of hSERT. Lu AF60097 has a >100-fold gain in allosteric potency relative to any other reported S2-bound hSERT ligand 22 . Our MD simulations suggest that Lu AF60097 can stably bind in the EV with its S-CIT scaffold in a similar pose as that of S2:S-CIT revealed by the hSERT crystal structure 11 . Interestingly, the potency of Lu AF60097 in the S2 site shows a reversed trend compared with S2:S-CIT, i.e. higher allosteric potency with S1:[ 3 H]IMI than with S1:[ 3 H]S-CIT (Fig. 3b, c). 
Based on our simulation results, we propose that, compared with S2:S-CIT, the carboxamide substituent of Lu AF60097 alters the polar interactions near Gln332 and shifts the S-CIT scaffold slightly away from the Thr497-Phe335 motif thus relieving the steric crowdedness between the S1 and S2 sites in the presence of S1:IMI. In addition, our simulation results indicate that the selectively improved S2:Lu AF60097 affinity in the presence S1: IMI may also come from the specific binding of the quinolinone substituent of Lu AF60097 in a sub-pocket near EL4, which is not formed in the presence of S1:S-CIT. Mutations of selected residues show detrimental effects on Lu AF60097 allosteric potency, supporting its predicted binding pose. Taken together, we conclude that, in the presence of S1:IMI, Lu AF60097 binds with high affinity to the EV of SERT. In addition to high S2 affinity, high S2 specificity is also necessary to isolate the allosteric impact of a S2-bound ligand on SERT function. Thus, we investigated whether Lu AF60097 has any S1 binding component and found that it binds at S1 with an IC 50 of~265 nM (Supplementary Fig. 6). This value is~9-fold higher than that of its allosteric potency and is promising for isolating the allosteric impact. Supported by the [ 3 H]5-HT saturation uptake experiment (Fig. 5b) and the F556R mutation, which virtually eliminates S2:Lu AF60097 binding, but has no impact on its inhibition of the 5-HT uptake, we conclude that Lu AF60097 has a S1 binding component. Whereas we present the first lead compound with high-affinity allosteric association to SERT in vitro, an immediate question is whether this translates into an in vivo setting. We showed that a small amount of Lu AF60097 (1 µl 250 nM) is able to elicit a marked increase of extracellular 5-HT levels in the microdialysis analysis, suggesting that the compound is also capable of targeting SERT in vivo. This we further substantiated by showing that co-administration of IMI and Lu AF60097 in vitro (Fig. 5c) and in vivo (Fig. 6b) does have a potent effect on the inhibition of 5-HT uptake, compared with administrating either of these two compounds alone in the same concentrations. This opens the door for further in vivo analysis of Lu AF60097 or similar next generation compounds to adequately assess these potentials. We propose that a clinical potential of the allosteric inhibitor property of Lu AF60097 lies in its potentiation of IMI binding. This might make it possible to lower the therapeutic dose of IMI by coadministering an allosteric binder such as Lu AF60097, preserving the positive effects of tricyclic antidepressants on major depressions while reducing the detrimental side effects such as cardiac arrhythmias and -arrest 40 and interferences with autonomic control 41 . Indeed, the results herein open the possibility of performing additional in vivo experiments with SERT allosteric inhibitors in combination with current effective orthosteric inhibitors, to probe for improved therapeutic effects using behavioral paradigms for depression or anxiety. Methods Site directed mutagenesis. The human SERT was cloned into the pUbi1z vector using the NotI and XbaI. Mutations herein were generated using the QuickChange method (adapted from Stratagene, La Jolla, CA) or ordered through GeneArt. All mutations were confirmed by DNA sequencing. 
The primers used were:
L406E: CGCAGGTCCCAGCCTCGAGTTCATCACGTATGCAG
A486E: CTTTTGGAGGGGAGTACGTGGTGAAG
E494K: GAAGCTGCTGGAGAAGTACGCCACGGGG
F556L: CATTTGCAGTTTACTCATGAGCCCGCCAC
F556R: CATCATTTGCAGTAGACTGATGAGCCCG
Only the sense primers are shown. The complementary antisense primers were also used. … (25–30 Ci/mmol). COS-7 cells, transfected with SERT WT or SERT mutants, were seeded in 24-well dishes (10 5 cells/well) coated with polyornithine. The seeded cell number was adjusted to achieve an uptake level of maximally 10% of total added [ 3 H]5-HT. The uptake assays were carried out 2 days after transfection. Prior to the experiments the cells were washed once in 800 µL uptake buffer (120 mM NaCl, 5 mM KCl, 1.2 mM MgSO 4 , 1.2 mM CaCl 2 , 10 mM glucose, 25 mM HEPES, pH 7.4) at room temperature. Drugs tested for inhibition of uptake were added to cells in the indicated concentrations 30 min prior to addition of 0.3 µCi [ 3 H]5-HT. After incubating 3 min (WT) or 5 min (mutants) the cells were washed twice with 500 µL ice-cold uptake buffer, lysed in 250 µL of 1% SDS and left for 1 h at 37°C. All samples were transferred to 24-well counting plates, and 500 µL Opti-phase Hi Safe 3 scintillation fluid was added followed by counting in a Wallac Tri-Lux beta-scintillation counter. Nonspecific uptake was determined in the presence of 1 µM paroxetine. All determinations were performed in triplicate. Molecular docking. We used the crystal structures of hSERT bound with S-citalopram in the S1 site only (PDB ID 5I71), and bound in both S1 and S2 sites (PDB ID 5I73), in the ts3 construct, as the starting points for our modeling studies. The binding site ions missing in the crystal structures were added. For the WT hSERT models, the three thermostabilizing mutations that were introduced in the ts3 construct were mutated back to the WT residues. Imipramine was docked into the S1 site using the induced-fit docking (IFD) protocol 42 implemented in the Schrodinger suite (release 2016-4). The best-scoring pose was selected, which is consistent with the previously deduced imipramine binding pose 43 . Lu AF60097 was docked into the S2 site using the IFD protocol. The RMSD values of the ensemble of IFD poses were calculated and clustered using the centroid linkage method as implemented in the Conformer Cluster module of Maestro software (release 2016-4, Schrodinger Inc., New York, NY). The three largest clusters of Lu AF60097 poses were: pose I, where the dihydroisobenzofuran moiety is close to Phe556 and the fluorophenyl is close to TM1b and TM6; pose II, where the fluorophenyl is close to Phe556 and the dihydroisobenzofuran moiety is close to TM1b and TM6; and pose "core", where the benzodioxo and fluorophenyl moieties of the S-CIT scaffold adopt a similar orientation as S2:S-CIT (PDB ID 5I73) and the quinolinone moiety of Lu AF60097 protrudes toward the extracellular milieu. These three poses were selected as initial starting points for further relaxation by MD simulations (Supplementary Table 1) and evaluation by MM/GBSA for their binding energy (see below). MM/GBSA analysis. The molecular mechanics/generalized Born surface area (MM/GBSA) analysis of the S1:IMI/S2:Lu AF60097 binding poses was carried out on the last 300 ns of each trajectory using the thermal_mmgbsa.py script from the Schrodinger suite (release 2017-2), which calculates Prime MM/GBSA (version 4.8) for every frame in a trajectory using the OPLS3 force field with the VSGB2.1 solvation model 44 . 
The values reported in the results section are averages for all trajectories of each of the three S1:IMI/S2:Lu AF60097 poses. MD simulations. hSERT models were placed into explicit 1-palmitoyl-2-oleoyl-snglycero-3-phosphocholine lipid bilayer (POPC) using the orientation of the 5I73 structure from the Orientation of Proteins in Membranes database 45 . Simple point charge (SPC) water model 46 was used to solvate the system, charges were neutralized, and 0.15 M NaCl was added. The total system size was ∼135,000 atoms. Desmond MD systems (D. E. Shaw Research, New York, NY) with OPLS3 force field 47 were used for the MD simulations. The system was initially minimized and equilibrated with restraints on the ligand heavy atoms and protein backbone atoms, followed by production runs at 310 K with all atoms unrestrained. The NPγT ensemble was used with constant temperature (310 K) maintained with Langevin dynamics, 1 atm constant pressure achieved with the hybrid Nose-Hoover Langevin piston method 48 on an anisotropic flexible periodic cell, and a constant surface tension (x-y plane). Overall, 74 trajectories of with a total simulation time of 125.28 μs were collected (Supplementary Table 1). Conformational analysis. Distances and dihedral angles were calculated with MDTraj (version 1.7.2 49 ), in combination with in-house Python scripts. Data sets for conformational analyses were assembled as follows. We first combined data from individual trajectories into a common pool for each simulated condition. Then, for the histograms in Figs. 2g, h and 3f, g, we extracted 500 bootstrapped samples of 5000 random frames each for a given simulated condition and plotted averages and standard deviations of frequency distributions for those 500 samples. For the ligand pairwise RMSD calculations (Fig. 2i) we carried out ten bootstrap samplings, and extracted 500 frames for each sampling for each condition. For each of the 500-frame bootstraps, all the frames are aligned pairwise for the RMSD calculations, yielding a 500 × 500 matrix. The averages and standard deviations of the ten bootstrapped samples are reported. For Supplementary Fig. 5, 50,000 random frames were extracted from the S1:S-CIT/S2:Lu AF60097 and S1:IMI/S2: Lu AF60097 conditions (for the latter, we selected and used the frames that have the distance between the 2-oxo modification of the quinolinone moiety of AF60097 and the side chain oxygen of Ser404 greater than 4.5 Å). Microdialysis experiments. Sprague Dawley rats weighing 300-400 g is anesthetized with Hypnorm/Midazolam (2 ml/kg) and intracerebral guide cannulas are stereotaxically implanted into the brain, aiming to position the guide cannula tip in the ventral hippocampus (co-ordinates: −5.6 mm posterior to bregma, lateral −4.8 mm, −4.0 mm ventral to dura) according to Paxinos and Watson. Anchor screws and dental cement are used for fixation of the guide cannulas. The body temperature of the animals is maintained at 37°C using a homoeothermic blanket. Rats recovered from surgery for 2-3 days, single-housed. On the day of the experiment a microdialysis probe (CMA/12, 0.5 mm diameter, 3 mm membrane length, non-metal) is inserted through the guide cannula. The probes are connected via a dual swivel to a microinjection pump. Perfusion of the microdialysis probe with filtered Ringer solution (145 mm NaCl, 3 mM KCl 1 mM MgCl 2 , 1.2 mM CaCl 2 ) begin shortly before insertion of the probe into the brain and continued for the duration of the experiment at a constant flow rate. 
Microdialysis experiments. Sprague Dawley rats weighing 300–400 g were anesthetized with Hypnorm/Midazolam (2 ml/kg), and intracerebral guide cannulas were stereotaxically implanted into the brain, aiming to position the guide cannula tip in the ventral hippocampus (coordinates: −5.6 mm posterior to bregma, −4.8 mm lateral, −4.0 mm ventral to dura) according to Paxinos and Watson. Anchor screws and dental cement were used for fixation of the guide cannulas. The body temperature of the animals was maintained at 37°C using a homeothermic blanket. Rats recovered from surgery for 2–3 days, single-housed. On the day of the experiment, a microdialysis probe (CMA/12, 0.5 mm diameter, 3 mm membrane length, non-metal) was inserted through the guide cannula. The probes were connected via a dual swivel to a microinjection pump. Perfusion of the microdialysis probe with filtered Ringer solution (145 mM NaCl, 3 mM KCl, 1 mM MgCl2, 1.2 mM CaCl2) began shortly before insertion of the probe into the brain and continued for the duration of the experiment at a constant flow rate. After stabilization, the experiments were initiated. The experimental design consisted of the collection of brain dialysate samples in 20-min fractions. Prior to the first sample collection, the probes had been perfused for 180 min. A total of 12 fractions were sampled (four basal fractions and eight fractions after perfusion start), and the dialysate 5-HT content was analyzed using HPLC detection. Perfusion solutions contained Lu AF60097, imipramine, a combination of the two, or vehicle (artificial cerebrospinal fluid (aCSF)). After the experiments, the animals were euthanized. The concentration of 5-HT in the dialysates was determined by means of HPLC with electrochemical detection. The monoamines were separated by reverse-phase liquid chromatography (ODS 160 × 3.0 mm column) and analyzed using a mobile phase consisting of 150 mM NaH2PO4, 4.8 mM citric acid monohydrate, 3 mM dodecyl sulfate, 50 μM EDTA, 8 mM NaCl, 11.3% methanol and 16.7% acetonitrile (pH 5.6), at a flow rate of 0.4 ml/min. Electrochemical detection was accomplished using a coulometric detector and a SenCell (Antec); the potential was set at E1 = 500 mV (Coulochem III, ESA). All experiments involving research animals were performed in accordance with guidelines from the Danish Animal Experimentation Inspectorate and approved by the local ethical committee. Data calculation. The allosteric potency was calculated as previously described 21 . The dissociation rate constants (k[drug]) at the indicated unlabeled ligand concentrations were calculated and expressed relative to the dissociation rate constant in the absence of unlabeled ligand (kbuffer). The allosteric potency was determined as the drug concentration that slows the dissociation rate by 50% compared with dissociation in buffer. IC50 values were calculated from concentration-effect curves of the normalized dissociation ratio (k[drug]/kbuffer) versus log[drug] and are shown as mean values calculated from the means of pIC50, with the SE interval derived from pIC50 ± S.E. All data were analyzed by linear or nonlinear regression analysis using Prism 7.0 (GraphPad Software Inc., San Diego, CA). Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability. Data supporting the findings of this manuscript are available from the corresponding authors upon reasonable request. A reporting summary for this article is available as a Supplementary Information file. The source data underlying Figs. 1b-e, 3b-c, 4b, 5a, 6 and Supplementary Fig. 6 are provided as a Source Data File.
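As a sketch of the data-calculation step above, the normalized dissociation ratio can be fitted against log[drug] with a standard logistic model; the fragment below uses SciPy rather than Prism, and the function and variable names, the three-parameter form, and the initial guesses are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Three-parameter logistic for the normalized dissociation ratio
# k[drug]/k[buffer], which falls from 1 toward `bottom` as [drug] increases.
def dissociation_ratio(log_c, log_ic50, hill, bottom):
    return bottom + (1.0 - bottom) / (1.0 + 10.0 ** (hill * (log_c - log_ic50)))

def fit_ic50(log_conc, ratio):
    """Fit the curve and return IC50 (same units as the concentrations)."""
    p0 = (np.median(log_conc), 1.0, 0.0)  # initial guess (assumption)
    popt, pcov = curve_fit(dissociation_ratio, log_conc, ratio, p0=p0)
    return 10.0 ** popt[0], popt, pcov

# Example with hypothetical data:
# ic50, params, cov = fit_ic50(np.log10(conc_M), k_drug / k_buffer)
```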
9,724.4
2020-03-20T00:00:00.000
[ "Biology", "Psychology", "Chemistry" ]
Quantitative Real-Time PCR Assay for the Detection of Pectobacterium parmentieri, a Causal Agent of Potato Soft Rot Pectobacterium parmentieri is a plant-pathogenic bacterium, recently attributed as a separate species, which infects potatoes, causing soft rot in tubers. The distribution of P. parmentieri seems to be global, although the bacterium tends to be accommodated to moderate climates. Fast and accurate detection systems for this pathogen are needed to study its biology and to identify latent infection in potatoes and other plant hosts. The current paper reports on the development and evaluation of a specific and sensitive detection protocol for P. parmentieri based on real-time PCR with a TaqMan probe. In sensitivity assays, the detection threshold of this protocol was 10^2 cfu/mL on pure bacterial cultures and 10^2–10^3 cfu/mL on plant material. The specificity of the protocol was evaluated against P. parmentieri and more than 100 strains of potato-associated species of Pectobacterium and Dickeya. No cross-reaction with the non-target bacterial species, or loss of sensitivity, was observed. This specific and sensitive diagnostic tool may reveal a wider distribution and host range for P. parmentieri and will expand knowledge of the life cycle and environmental preferences of this pathogen. Introduction The potato (Solanum tuberosum) is one of the most important crops in the world. The world market for potato production exceeds 388 million tons per year (https://www.potatopro.com/world/potato-statistics (accessed on 7 April 2021)) and per capita consumption in Russia exceeds 110 kg (https://www.potatopro.com/russian-federation/potato-statistics (accessed on 7 April 2021)). Therefore, research related to optimising potato production, increasing yields and reducing losses associated with plant diseases and other factors is essential and urgent. Among the challenges faced by potato growers is the spoilage of potatoes as a result of bacterial infections. In particular, the development of rot on tubers during storage and transportation can lead to severe losses of up to half of the harvest [1]. The leading cause of blackleg and soft rot in potatoes is the bacteria of the Pectobacteriaceae family, namely the group of Soft Rot Pectobacteriaceae (SRP), comprising phytopathogens of the genera Pectobacterium and Dickeya [2]. One of the representatives of this group is P. parmentieri. P. parmentieri (Ppa) was first described by Khayi et al. in 2016. It is a species closely related to the previously known pathogen of Japanese horseradish, P. wasabiae (Pwa). Several Pwa strains, isolated from potatoes and causing soft rot, have been scrutinised. Figure 1. Phylogenetic tree based on the concatenated nucleotide sequences of 92 conservative genes, including the genes of ribosomal proteins and the proteins essential for the transcription and translation processes. Bootstrap support values are shown above their branch as a percentage of 1000 replicates. The scale bar shows 0.01 estimated substitutions per site, and the tree was rooted to Samsonia erythrinae DSM 16730. Average nucleotide identity (ANI) values compared to the P. parmentieri RNA 08-42-1A type strain are shown to the right of the organism name and coloured according to a heat-map scale, where green corresponds to the highest value and red corresponds to the lowest value. Search for Species-Specific Primers The search for species-specific sequences was carried out using the workflow described in a previous study [20].
Briefly, this workflow splits the genome of the type Ppa strain into short sections; each section is then compared with a negative database of "non-target" genomes and a positive database of "target" genomes and, as a result, regions are identified that occur in all Ppa genomes and are not found in the genomes of other species. Using this search, a set of unique Ppa species-specific sites was obtained. Regions belonging to areas of the genome encoding no genes were manually rejected. Next, several potentially suitable sites within the housekeeping genes were selected for further preliminary testing in the conventional PCR mode (Section 2.3), and a further selection of the most appropriate sequence for qPCR analysis development was made (Section 2.4). Primers and probes were designed for these sites. Table 1 shows the sequences of the primers, probe and amplicon for detection based on the ankyrin repeat domain-containing protein sequence that showed the best results and was therefore selected for further study. Table 1. Primers for amplification of a species-specific region of P. parmentieri and the amplicon of the ankyrin repeat domain-containing protein. The selected species-specific sequence belongs to an ankyrin repeat domain-containing protein that is located adjacent to the components of a type VI secretion system. Interestingly, an avirulence factor was located several genes upstream of the locus shown in Figure 2. A type VI secretion system is important for plant-associated bacteria, including the Pectobacterium species. It contributes to virulence and grants fitness and colonisation advantages in planta [21]. It might be suggested that the gene containing the species-specific sequence is important for the bacterium. The sequence search conducted with BLAST using an nr/nt database confirmed that the chosen amplicon did not have close homologues in other organisms.
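A minimal sketch of the genome-splitting screen described above, assuming preformatted local BLAST databases for the target (Ppa) and non-target genome sets; the window length, identity threshold, database names and the hit-count shortcut are illustrative assumptions (the original study used the workflow of [20]).

```python
import subprocess, tempfile

N_TARGET_GENOMES = 48  # size of the positive ("target") set; illustrative

def blastn_hits(seq: str, db: str, min_identity: float = 90.0) -> int:
    """Count blastn hit lines for `seq` against a preformatted database `db`."""
    with tempfile.NamedTemporaryFile("w", suffix=".fa", delete=False) as fh:
        fh.write(">window\n" + seq + "\n")
        query = fh.name
    out = subprocess.run(
        ["blastn", "-query", query, "-db", db,
         "-outfmt", "6", "-perc_identity", str(min_identity)],
        capture_output=True, text=True, check=True)
    return sum(1 for line in out.stdout.splitlines() if line.strip())

def candidate_sites(genome: str, window: int = 300):
    """Yield windows found in all target genomes and absent from non-targets.

    Counting hit lines is a simplification: a real screen should count
    distinct subject genomes rather than individual HSPs.
    """
    for start in range(0, len(genome) - window + 1, window):
        seq = genome[start:start + window]
        if (blastn_hits(seq, "ppa_targets") >= N_TARGET_GENOMES
                and blastn_hits(seq, "nontarget_genomes") == 0):
            yield start, seq
```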
Primary Analysis by Conventional PCR For the initial assessment of the applicability of the primers obtained for the purpose of species-specific PCR detection, a conventional PCR test was carried out on a limited set of strains. The strains marked F… are a part of the local collection of bacterial pathogens associated with potato soft rot. The collection includes comprehensively described type strains, strains with appropriate genomic characterisation and loosely characterised local isolates. The information on the strains used is provided in Supplementary Table S1. The primary testing strain set included several representatives of different Pectobacteriaceae species belonging to the genus Pectobacterium. Figure 3 shows the results of such an analysis for the amplification of the ankyrin repeat domain-containing protein, as a result of which significant amplification was demonstrated only with the target strains (marked in the boxes), in the absence of false-positive results with all other strains. This enabled the assumption of this site's suitability for amplification in qPCR mode, and made it possible to proceed to validation using an extended range of strains. qPCR Analysis on an Extended Set of Strains This study involved seven strains previously attributed to being Ppa or Pwa on the basis of genomic sequencing or 16S rRNA gene sequencing. Two more strains had previously been identified as Pwa using the diagnostic primer set PhF 5′-GGTTCAGTGCGTCAGGAGAG and PhR 5′-GCGGAGAGGAAGCGGTGAAG [18], which does not distinguish between Pwa and closely related Ppa (№ 1-9, Supplementary Table S1).
A test was also conducted for 67 isolates (№ 10-77) of other Pectobacteriaceae species and 32 strains (№ 78-109) related to other species associated with crop rot. These strains were isolated from potato rots and passed through MacConkey's medium, to exclude Salmonella and Gram-positive isolates, and CVP medium, to ensure the presence of pectolytic activity. As shown in Supplementary Table S1, all Ppa strains demonstrated a positive PCR signal. Among the strains with alternative Ppa/Pwa attribution (F035 and F178), F035 showed amplification and therefore can be more accurately classified as Ppa, while F178, revealing no positive signal, may be categorised as Pwa. The historical Pwa strain F007 used in the study did not show any false-positive amplification. No positive results were obtained for other isolates with pectolytic activity, whether Pectobacteriaceae or unrelated. Additionally, in silico analysis using the nt database did not predict any amplification of plant genomic DNA with the designed primers, and no amplification was observed in the PCR reaction in vitro using potato DNA as a template. Thus, the authors are confident that the possibility of cross-amplification with potato DNA was excluded. Sensitivity Serially diluted plasmid and genomic DNA were used in qPCR reactions for a sensitivity test. Based on the threshold cycles (Cq) obtained for each concentration of copies in the sample (Table 2), standard curves were plotted. The resulting curves were linear (Figure 4). The correlation coefficient (R^2) was 0.99 for both curves, with slopes of −3.34 and −3.33 for plasmid and genomic DNA, respectively, corresponding to PCR efficiencies of 98.9% and 99.62%.
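As a quick check of the slope-to-efficiency conversion used here, amplification efficiency follows from the standard-curve slope via E = 10^(−1/slope) − 1; the snippet below reproduces values close to those reported (small differences reflect rounding of the fitted slopes).

```python
# PCR efficiency from a standard-curve slope: E = 10**(-1/slope) - 1.
for slope in (-3.34, -3.33):
    eff = 10 ** (-1.0 / slope) - 1.0
    print(f"slope {slope}: efficiency {eff:.1%}")
# slope -3.34 gives ~99.2%, slope -3.33 gives ~99.7%, consistent with the
# reported 98.9% and 99.62% given rounding of the fitted slopes.
```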
The limit of detection (LoD) was nearly 16 copies per reaction, corresponding to 4 × 10^2 copies/mL. Figure 5 shows the amplification curves for the sensitivity test and the good flare-up of the probe during the reaction, even at high dilutions. Assays of Plant Samples To conduct an experiment simulating a pathogen's detection in infected plants, tubers of the "Gala" variety were used; this is one of the most widespread varieties in Russia, and one which is moderately resistant to bacterial diseases. The potatoes were soaked in a 10^6 cfu/mL suspension of the pathogen for infection and then incubated at 28 °C until the development of soft rot symptoms. On days 3, 4 and 5, a ~100 mg piece of peel was taken from the tubers and total DNA was isolated. qPCR was then performed on the DNA obtained, in the same way as in the previous experiments. Control tubers were soaked in a sterile LB medium. As shown in Table 3, the pathogen was successfully detected in all cases, confirming the possibility of using the analysis to assess contaminated material. With an increase in the duration of incubation, the titre of bacteria increased proportionally. Amplification was also recorded for the control tuber, indicating a trace presence of the pathogen, which did not lead to noticeable symptoms of rotting. Discussion According to the species definition, Ppa differs from Pwa by its ability to produce acid from melibiose, raffinose, lactose and D-galactose [3]. This feature was used to differentiate Ppa strains isolated from potato in Southern Europe [4]. However, the biochemical tests made precise diagnostics more laborious and, thus, raised questions about the value of such fine analysis. Besides the obvious purpose of monitoring the causal agents of plant diseases, in order to develop adapted prevention actions in particular countries, regions or climate areas, some fundamental arguments exist.
Information on the role of Ppa in the bacterial pathogenesis of potatoes worldwide is contradictory [22]. According to national monitoring surveys, Ppa occurrence ranges from single, moderate cases [6] to severe outbreaks [10]. While wet weather throughout the year is preferred for the development of the pathogen (https://www.cabi.org/isc/datasheet/48069201 (accessed on 17 May 2021)), a broad range of conditions is tolerated. The aggressiveness of Ppa is also debatable. As for other SRP, their pathogenesis relies on the production and secretion of plant cell wall-degrading enzymes, which cause the typical symptoms of soft rot. Enzyme synthesis depends on suitable environmental conditions [23]. Generally, the virulence of Ppa is considered to be moderate. However, a number of studies [24,25] have demonstrated that some strains of P. parmentieri can cause fast and severe maceration of tubers and plants, comparable with P. atrosepticum and P. brasiliense, which are considered to be the most aggressive among Pectobacterium. It is worth noting that the bacterial community in rotting potato tissues is very complex [26] and may include several different pathogenic species. SRP pathogens may interact antagonistically [27] or synergistically [28] with respect to one another. Therefore, the study of the impact of a particular pathogen on the development of the disease requires quantitative differential identification of the SRP species, particularly of Ppa. Currently, no effective control agents have been developed to prevent or to treat SRP infections [29,30]. A promising approach is the use of bacteriophages (phages), which are bacterial viruses that infect pathogenic bacteria. A number of successful applications of phage control of plant pathogens, including SRP, have been reported (reviewed in [31,32]). Some phages infecting Ppa have been isolated and investigated [33,34]. An important feature of phage therapy is the very selective host range of bacteriophages, usually limited to a bacterial species or even a group of strains within a species. This may be considered an advantage, because phage treatment does not affect the commensal and endosymbiotic microflora of the plant, attacking pathogenic bacteria only. However, scientifically sound use of therapeutic bacteriophages requires fine and precise diagnostics of the causative agent of the disease. Existing assays are often too general for efficient phage application, and more focused methods of discriminating SRP are needed. Besides pectolytic enzymes, a number of other proteinaceous and carbohydrate factors and signal pathways have been found to participate in bacterial adhesion, the colonisation of plant tissue and enhancement of the disease (reviewed in [23]).
Essential intracellular effectors are secreted into the plant cell via type III (T3SS), type IV (T4SS) and type VI (T6SS) secretion systems [35]. An important feature of Ppa/Pwa is the absence of a number of essential genes encoding T3SS in the genome [36,37]. This absence may explain the limited host range of P. parmentieri. In such conditions, the role of T6SS and other secretion systems becomes more important [38]. The genomic sequence unique to Ppa that was identified here is located adjacent to the T6SS apparatus, and its conservation within the species may indicate a unique role in the functioning of the system. This sequence does not belong to any known mobile elements and, thus, may serve as a hallmark of Ppa genomes. Another important area where qPCR detection of SRP is needed is the establishment of the threshold bacterial population necessary for the development of disease symptoms. While the occurrence of SRP-related blackleg, wilting and aerial rot of vegetating potato depends on numerous environmental factors (reviewed in [39]), the development of soft rot in stored ware and seed potato is a consequence of a latent infection of the tuber surface. The incidence of soft rot, as a minimum, correlates with the population of SRP as revealed by laboratory testing. Most in vitro experiments described in the literature use an application of 10^6–10^7 cfu/mL aliquots of SRP suspensions to unprotected potato tissue (tuber slices) to establish the stable development of soft rot symptoms. This work reports that, starting from almost negligible values, the population of Ppa grew fast at room temperature and reached ~10^6 cfu/mL, resulting in tissue rotting in a few days. On the other hand, undamaged potato tubers with a latent SRP population of 10^4–10^6 cfu/mL on the skin revealed no signs of soft rot when stored in proper warehouse conditions (4–7 °C) [40]. Therefore, monitoring the bacterial contamination of the tubers may help to estimate the risk of soft rot development in the stored tubers and to reveal the dangerous threshold for each particular SRP species. The designed assay has been shown to be sensitive enough to detect Ppa within the range of natural latent infection levels (10^2–10^5 cfu). Thus, this analysis is suitable for assessing the quality of potatoes and diagnosing the likely development of rot. The reported protocol, based on the genomic analysis of an ample amount of recent GenBank data, was successfully tested and demonstrated high sensitivity and suitability for in vivo testing. The species-specific sequence revealed is not only unique to Pectobacterium parmentieri, but is also a part of a functional gene which can be important for the pathogenic lifestyle of this economically important plant pathogen. The high specificity of the developed assay is particularly important for efficient phage application in the biocontrol of plant diseases caused by SRP bacteria. Phylogenetic Analysis Bacterial genomes were downloaded from the NCBI GenBank bacterial database (ftp://ftp.ncbi.nlm.nih.gov/genbank (accessed on 27 March 2021)). A phylogenetic tree was generated using a UBCG pipeline, based on 92 core genes including 43 ribosomal proteins, nine genes of aminoacyl-tRNA synthetases, DNA processing and translation proteins and other conservative genes. Bootstrap phylogenetic analysis was conducted on the aligned concatenated sequences of the 92 core genes produced by UBCG, using MAFFT (FFT-NS-x1000, 200 PAM/k = 2).
Then, bootstrap trees were constructed using the RAxML program (maximum likelihood method, GTR Gamma I DNA substitution model). The robustness of the trees was assessed by fast bootstrapping (1000) [41]. Search for Species-Specific Sequences and Primer Design To search for species-specific sequences, custom databases were constructed using BLAST (https://blast.ncbi.nlm.nih.gov/Blast.cgi (accessed on 25 February 2021)). The search for species-specific regions for amplification was carried out using the workflow presented in the previous study [20]. Primers and probes were generated with Primer3Plus (https://primer3.ut.ee/ (accessed on 15 March 2021)) and manually checked for the consistency of melting temperatures and for the absence of hairpin and dimer formation using the functions of Geneious Prime and Premier Biosoft NetPrimer (http://www.premierbiosoft.com/NetPrimer/AnalyzePrimerServlet (accessed on 20 March 2021)). Bacterial Strains, Media and Culture Conditions A complete list of the bacterial strains engaged in this study, with an indication of their species, year and location of isolation, is shown in Supplementary Table S1. Strains were obtained from the Laboratory of Molecular Bioengineering, IBCh RAS. Pectolytic bacteria were cultivated at 28 °C on 1.5% LB agar. CVP medium was used to assess pectinolytic activity. The E. coli NovaBlue strain was used for transformation during the preparation of a plasmid. E. coli was cultivated at 37 °C on LB agar medium with the addition of ampicillin. Genomic DNA Isolation Genomic DNA was isolated from overnight bacterial cultures using a GeneJET Genomic DNA Purification Kit (ThermoScientific, Waltham, MA, USA), according to the manufacturer's protocol. Potato DNA was extracted using a CTAB-based protocol. For this purpose, a 100 mg piece of peel was mechanically homogenised with a 0.1% sodium pyrophosphate solution. The resulting homogenate was transferred into 1.5 mL tubes and centrifuged. 40 µL of lysozyme solution (100 µg/mL) and 60 µL of 10% SDS solution were added to the sediment, resuspended and incubated at 37 °C for 30 min. Then, 650 µL of 2% CTAB was added to the mixture and incubated for another 30 min at 65 °C. The mixture was then cooled, and 700 µL of chloroform was added, vortexed and centrifuged at 12,000 rpm. The supernatant was mixed in a new tube with 600 µL of isopropanol. After subsequent centrifugation, the precipitate was washed twice with 75% ethanol and dried until the volatile solvents completely evaporated, and the resulting DNA was dissolved in water. The concentration and quality of the extracted DNA were estimated using a NanoPhotometer N60 (Implen, Munich, Germany). After extraction, DNA concentrations were diluted to a single value of 10 ng/µL. PCR Conditions The conventional PCR was carried out in a volume of 25 µL containing 5 µL of Evrogen ScreenMix (Evrogen, Moscow, Russia), 0.35 µM of forward and reverse primers and 60 ng of template DNA. Amplification was performed using a T100 Thermal Cycler (Bio-Rad, Hercules, CA, USA) in the following conditions: 94 °C for 300 s, then 45 cycles of 94 °C for 10 s, 62 °C for 10 s and 72 °C for 10 s. The resulting PCR products were separated by electrophoresis in a 1.5% agarose/TAE buffer gel and visualised by ethidium bromide staining. The size of the bands was estimated using a 1 kb DNA Ladder marker (Evrogen).
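A compact, machine-readable encoding of the conventional PCR program described above can be useful for documentation or for driving a programmable cycler; the field names below are illustrative, while the values are taken from the text.

```python
# Hypothetical encoding of the conventional PCR protocol described above.
PCR_PROGRAM = {
    "initial_denaturation": {"temp_C": 94, "seconds": 300},
    "n_cycles": 45,
    "cycle": [
        {"step": "denaturation", "temp_C": 94, "seconds": 10},
        {"step": "annealing",    "temp_C": 62, "seconds": 10},
        {"step": "extension",    "temp_C": 72, "seconds": 10},
    ],
    "reaction_uL": 25,
    "primer_uM": 0.35,
    "template_ng": 60,
}
```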
Plasmid Construction for Sensitivity Assay For a precise evaluation of PCR sensitivity, we constructed a plasmid containing an insert of the target sequence amplified from the Ppa F149 strain. For this purpose, the product of PCR amplification was purified using an ISOLATE II PCR and Gel Kit (Bioline, St. Petersburg, Russia) and cloned into the pAL2-T vector using a QuickTA kit (Evrogen). Plasmid DNA used as the standard was purified with a QIAprep Spin Miniprep Kit (Qiagen, Hilden, Germany), according to the manufacturer's instructions. Sanger sequencing of the corresponding region in the resulting plasmid confirmed the correctness of the insert. qPCR The qPCR was carried out in a LightCycler 96 (Roche, Basel, Switzerland). Each 35 µL reaction contained 200 µM of each dNTP, 0.2 µM of probe, 0.35 µM of forward and reverse primers and 60 ng of template DNA. The optimised amplification conditions were as listed in Section 4.5. Each reaction was carried out in four replicates. Water was used as a negative control. A plasmid-based internal control was used to exclude false-negative results, as described earlier [43]. The processing of the amplification curves obtained and the calculation of the threshold cycles were carried out using software supplied by Roche. A sensitivity analysis was carried out on three series of ten-fold dilutions of the test plasmid and genomic DNA of strain F149. The resulting samples were analysed by qPCR. For each defined threshold cycle, the mean and standard deviation were calculated using Roche software. To construct the standard curve, the threshold cycles' mean values were plotted against the concentration of copies of the target sequence in each reaction. For all values, the standard deviation was calculated. Testing the Detection System on Artificially Infected Tubers For the experiment, potato tubers of the most widespread variety, "Gala", were obtained from a market. They were washed and soaked in a bacterial suspension to infect the tubers, following the same protocol as in a previous study [20]. Then, the tubers were incubated at 28 °C. On days three, four, five and six, DNA was extracted from 100 mg of the infected tuber's peel, as described in Section 4.4, and analysed by qPCR.
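The standard-curve construction described above reduces to a linear regression of mean Cq against log10 copy number; a minimal NumPy sketch with hypothetical dilution data (the copy numbers and Cq values below are illustrative, not the study's measurements):

```python
import numpy as np

# Hypothetical dilution series: copies per reaction and mean threshold cycles.
copies = np.array([1.6e6, 1.6e5, 1.6e4, 1.6e3, 1.6e2, 16])
cq     = np.array([13.1, 16.4, 19.8, 23.1, 26.5, 29.8])  # illustrative values

x = np.log10(copies)
slope, intercept = np.polyfit(x, cq, 1)          # fit Cq = slope*log10(N) + b
r2 = np.corrcoef(x, cq)[0, 1] ** 2
efficiency = 10 ** (-1.0 / slope) - 1.0          # slope-derived PCR efficiency

print(f"slope={slope:.2f}, intercept={intercept:.1f}, "
      f"R^2={r2:.3f}, efficiency={efficiency:.1%}")
```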
6,515.6
2021-09-01T00:00:00.000
[ "Biology" ]
Reflection Positivity and Levin-Wen Models The reflection positivity property has played a central role in both mathematics and physics, as well as providing a crucial link between the two subjects. In a previous paper we gave a new geometric approach to understanding reflection positivity in terms of pictures. Here we give a transparent algebraic formulation of our pictorial approach. We use insights from this translation to establish the reflection positivity property for the fashionable Levin-Wen models with respect both to vacuum and to bulk excitations. We believe these methods will be useful for understanding a variety of other problems. Introduction In an earlier paper [JL17], we gave a new proof of the reflection positivity (RP) property for Hamiltonians, see Definition 2.1. We presented that proof within the framework of a picture language [JL18]. Our language includes a geometric transformation F_s, that we call the string Fourier transform (SFT). The SFT acts on pictures by rotation, and it generalizes the usual Fourier transform that acts on functions, see [JL17]. The picture approach has a great advantage: we find it very intuitive, illustrating the generality and geometric nature of RP. But it also has a disadvantage, especially for readers unfamiliar with picture language: it could appear to the uninitiated as a difficult proof to understand. In this paper we elaborate our previous work in two ways. Firstly we translate our picture proof in [JL17] into an algebraic proof. We begin with an algebraic formulation of F_s in Definition 2.2. In the remainder of §2 we prove a general theorem about RP. We hope that this exercise makes our pictorial proof accessible for any reader who compares the two methods. Moreover we believe that it should make clear why we find our pictorial method of proof both attractive and transparent. Secondly we take advantage of the generality of our pictorial method to analyze some other pictures that occur in the theoretical physics literature. Levin and Wen introduced a set of lattice models to study topological order [LW05]. These models generalize the Z_2 toric code of Kitaev [K06]; for background see Kitaev and Kong [KK12]. Levin and Wen showed that ground states of their models correspond to topological quantum field theories in the sense of Turaev and Viro [TV92]. In their paper, Kitaev and Kong give an interesting dictionary to translate between these two sets of concepts. In §3 we study Levin-Wen models for graphs on surfaces, using the data of unitary fusion categories. We then use our new methods to establish Theorem 3.2, the main new result in this paper: Levin-Wen Hamiltonians have the RP property. Although we do not analyze it in detail, our method also proves the RP property for higher-dimensional pictorial models, such as the Walker-Wang models [WW12]. A novel aspect of the proof of RP in [JL17] was our observation that the positivity of the string Fourier transform F_s(−H) of H ensures the RP property. In fact when H is reflection-invariant, the positivity of F_s(−H_0) is sufficient to ensure RP for H, where H_0 denotes the part of H that maps across the reflection mirror. In §2, we present algebraic definitions of F_s, of the convolution product *, and of the RP property. While this may appear somewhat different from the standard definitions, one can recover the results in [JL17] by a proper choice of the Hilbert space and the Hamiltonian.
We do not pursue this comparison in this paper. We attempt to make minimal assumptions in our statements, so that the methods here could be applied in a wide variety of circumstances. 1.2. Our Example. In §3 we consider the Levin-Wen model on a surface which has a reflection mirror. The Hamiltonian is an action on the Hilbert space: it is the sum of contributions from Wilson loops on plaquettes and actions on sites. The terms in H arising from the actions on sites do not contribute to H_0. In the Levin-Wen model, H_0 is the sum of the actions on plaquettes that cross the reflection mirror. When the plaquette p crosses the mirror P, we decompose the Wilson loop as a half circle and its mirror image. The action of F_s on a picture is to rotate the picture by 90°. Pictorially we can consider the actions of the two half circles after rotation as the product of a half circle and its adjoint, namely its vertical reflection. So F_s(H_p) should be positive. The sticking point is that the actions of the two half circles are not independent, as they share boundary conditions on the mirror. So H_p is not simply a tensor product of operators on the two sides of the mirror. Technically we need to take care of the boundary condition in the decomposition of H_0. Adding the boundary condition to the decomposition, we prove that F_s(−H_0) is positive. Combining this work with the statements in §2, we obtain our main result. We remark that RP of the Hamiltonian H in the Levin-Wen model on a torus works not only for the expectation in the vacuum state, but also for the expectation in bulk excitations (objects in the Drinfeld center). Each bulk excitation defines its own one-dimensional-lower quantized theory, topologically entangled on the two boundary circles. We expect this realization to be useful in the study of the anomaly theory on the boundary. Algebraic Reflection Positivity In this section we look again at results that we proved in [L16, JL17] using pictorial methods in the general framework of subfactor planar para algebras. Here we give purely algebraic definitions and proofs, in order to ensure that the ideas and the exposition are accessible to readers who are not familiar with picture language. Suppose H_+ is a finite-dimensional Hilbert space and H_− is its dual space. Let ⟨·,·⟩_{H_±} be the inner products of the Hilbert spaces H_±, and let θ be the Riesz representation map from H_± to H_∓. Then for any x, x′ ∈ H_+, their inner product is given by ⟨x, x′⟩_{H_+} = ⟨θ(x′), θ(x)⟩_{H_−}. Definition 2.2 (SFT). The string Fourier transform F_s : hom(H_−+) → hom(H_+−) is a map such that for T ∈ hom(H_−+), and for arbitrary x, x′ ∈ H_+ and y, y′ ∈ H_−, its matrix elements between the corresponding tensor-product vectors are prescribed. Remark (A Key Identity). Definition 2.2, with T = e^{−βH}, x = x′, and y = y′ = θ(x), yields (1). Thus the RP property for H is equivalent to positivity of the expectation of F_s(e^{−βH}) in vectors that are tensor products. The map θ defines a map from hom(H_−+) to hom(H_+−); we extend the definition of θ as an anti-linear map on H_−+. A more detailed condition on H that yields the RP property depends (as in past studies) on properties of the part of H mapping between H_+ and H_−. For H ∈ hom(H_−+), let θ(H) := θHθ ∈ hom(H_−+). Theorem 2.4 (Second RP Statement). Suppose H is reflection invariant, θ(H) = H, and F_s(−H_0) ≥ 0, where H_0 is the part of H that maps across the mirror; then H has the RP property. 2.1. Algebraic Properties of the SFT. In this section we establish algebraic properties of F_s. We use them in the next section to prove Theorem 2.3 and Theorem 2.4. Proposition 2.5. The SFT of the identity is non-negative, F_s(I) ≥ 0. Proof.
Let {x_i} denote an orthonormal basis for H_+ and {y_j} an orthonormal basis for H_−; a direct computation then shows that an arbitrary expectation of F_s(I) is ≥ 0. Remark. The RP property of Definition 2.1, for the case H = 0, is a special example of an expectation of F_s(I). Proposition 2.6. For T ∈ hom(H_−+), one has F_s(θ(T)) = F_s(T)*. Proof. For any x, x′ ∈ H_+ and y, y′ ∈ H_−, one computes the matrix elements of F_s(θ(T)); thus the matrix elements agree as claimed. Corollary 2.7. A Hamiltonian H ∈ hom(H_−+) is reflection invariant iff its SFT is hermitian on H_+−. In other words, θ(H) = H iff F_s(H) = F_s(H)*. Remark. Pictorially we represent θ in [JL17] as a horizontal reflection, * as a vertical reflection, and F_s as a clockwise 90° rotation. Lemma 2.9. Let B be an orthonormal basis of H_+. Then for any x ∈ H_+ and y ∈ H_−, an expansion over B holds. Proof. For any x, x_1, x_2 ∈ H_+ and y, y_1, y_2 ∈ H_−, one computes directly; note that the β form an orthonormal basis for H_+, so the sum in parentheses equals dim(H_+). The convolution is associative, as a consequence of Lemma 2.9. Remark. Let B be an orthonormal basis for H_+ and θ(B) a corresponding basis for H_−. Then for i, j ∈ B, the vectors i ⊗ θ(j) are an orthonormal basis for H_+−. A matrix unit E_{ii′jj′} ∈ hom(H_+−) is zero except on i′ ⊗ θ(j′) and maps that vector to the vector i ⊗ θ(j). The transformations A, B ∈ hom(H_+−) can be written in terms of these matrix units, and one can compare the matrix elements of AB with those of A * B. In particular, on H_+−, one has I = Σ_{i,j} E_{iijj} and I * I = dim(H_+) I. (3) In [JL17] we represent A and B pictorially as "two-box" pictures. The multiplication AB is given by vertical composition of the two-box pictures, while the multiplication A * B is given by the corresponding horizontal composition of the same pictures. Theorem 2.11. For S, T ∈ hom(H_−+), F_s(ST) = F_s(S) * F_s(T). Proof. Let x_1, x_2 ∈ H_+ and y_1, y_2 ∈ H_−. By Definition 2.2 the matrix elements agree, where we infer the last three equalities from Lemma 2.9 and Definition 2.2. Therefore, the operators agree as claimed. Theorem 2.12. If S ≥ 0 and T ≥ 0, then S * T ≥ 0. Proof. Let √S and √T denote the positive square roots of S and T. By Definition 2.10, S * T factors as a product of an operator built from √S and √T with its adjoint, hence S * T ≥ 0. Corollary 2.13 (Exponentials and Products). If F_s(S) ≥ 0, then F_s(e^S) ≥ 0. If F_s(S) ≥ 0 and F_s(T) ≥ 0, then F_s(ST) ≥ 0. Proof. From Theorem 2.11, F_s(ST) = F_s(S) * F_s(T). We then infer F_s(ST) ≥ 0 from Theorem 2.12. Likewise F_s(S) ≥ 0 ensures F_s(S^n) ≥ 0 for any natural number n. Since F_s is a linear transformation, and the exponential power series has positive coefficients, F_s(e^S − I) ≥ 0. But from Proposition 2.5 we know F_s(I) ≥ 0, hence F_s(e^S) ≥ 0.
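In display form, the positivity argument behind Corollary 2.13 reads as follows (a restatement of the text's reasoning, using the convolution notation of Theorem 2.11):

```latex
% F_s(S^n) equals the n-fold convolution power of F_s(S) by Theorem 2.11,
% and convolution preserves positivity by Theorem 2.12, so term by term:
\[
  \mathcal{F}_s\!\left(e^{S}\right)
  \;=\; \sum_{n\ge 0} \frac{1}{n!}\,\mathcal{F}_s\!\left(S^{\,n}\right)
  \;=\; \sum_{n\ge 0} \frac{1}{n!}\,\mathcal{F}_s(S)^{\ast n}
  \;\ge\; 0
  \qquad\text{whenever } \mathcal{F}_s(S)\ge 0 ,
\]
% with the n = 0 term, F_s(I) >= 0, supplied by Proposition 2.5.
```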
Proposition 2.14 (A Positivity Property). If T_+ ∈ hom(H_+), then F_s(θ(T_+) ⊗ T_+) ≥ 0. Proof. Let {x_i} be an orthonormal basis for H_+ and {y_j} an orthonormal basis for H_−. Let s_{ij} = ⟨x_i, T_+ θ(y_j)⟩_{H_+}. A vector a ∈ H_+− has the form a = Σ_{i,j} a_{ij} x_i ⊗ y_j. According to Definition 2.2, the matrix elements of F_s(θ(T_+) ⊗ T_+) on H_+− in the basis x_i ⊗ y_j can be computed directly, and positivity follows. Here T_+ = H_+ − s^{−1} I. As H_+ = I_− ⊗ H_+ acts on H_+, we infer that T_+ satisfies the hypotheses of Proposition 2.14. Hence F_s(θ(T_+) ⊗ T_+) ≥ 0, and consequently F_s(−H(s)) ≥ 0. We then conclude from Theorem 2.3 that H(s) has the RP property. Adding a constant to H(s) does not affect RP, so H(s) + (λ + s^{−1})I = H − s θ(H_+)H_+ also has the RP property: the RP inequality holds for all x, x′ ∈ H_+ and all β ≥ 0. This representation is continuous in s, also at s = 0; so let s → 0+ to ensure the RP property for H. Levin-Wen models In this section, we define the Levin-Wen model for graphs in surfaces using the data of unitary fusion categories. Our main result is proving reflection positivity for the Hamiltonian in the Levin-Wen model. Graphs in surfaces. Let M_+ be a surface in the half space, and let E_0 be the set of edges that go across the plane P. Then for any e ∈ E_0, its positive half is an edge in E_+ and its negative half is an edge in E_−; we identify the three edges as the same edge. Let s, t : E → V be the source function and the target function. For any edge e ∈ E, the end points of e are ∂e = {s(e), t(e)}. Since the orientation is reversed by θ_P, we have s(θ_P(e)) = θ_P(t(e)). For any vertex v ∈ V, we define the set of adjacent edges E(v) = {e ∈ E | v ∈ ∂e}. The cardinality of E(v) is called the degree of the vertex v, denoted |v|. Let κ_v be a bijection from {1, 2, . . . , |v|} to E(v), so that the numbers go from 1 to |v| anti-clockwise around the vertex. The order κ_v is determined by the choice of the edge κ_v(1). Define ε_v(e) = + if s(e) = v; ε_v(e) = − if t(e) = v. Unitary fusion categories. Suppose C is a unitary fusion category (corresponding to a unitary tensor category in [KK12]). Let Irr be the set of irreducible objects (i.e., simple objects) of C, and let 1 ∈ Irr be the trivial object. Take A = ⊕_{X∈Irr} X and A^n := ⊗_{k=1}^{n} A. For any object X, let ONB(X) denote an orthonormal basis of hom_C(1, X). Let d(X) be the quantum dimension of X. Let 1_X be the identity map in hom_C(X, X). Define X^+ = X and X^− to be the dual object of X. For any objects X, Y, Z in C, let θ_C : hom_C(X ⊗ Y, Z) → hom_C(Y^− ⊗ X^−, Z^−) be the modular conjugation on C; pictorially θ_C is a horizontal reflection. Let ∩_A be the co-evaluation map from 1 to A^2 and ∪_A be the evaluation map from A^2 to 1. For any y, z ∈ hom_C(A^2, A), define C_{y,z} : hom_C(1, A^n) → hom_C(1, A^n), for any x ∈ hom_C(1, A^n), n ≥ 2, by an algebraic expression with a corresponding pictorial representation. 3.3. Configuration spaces. For every edge e ∈ E, we define H_e = L^2(Irr); the delta functions δ_j, j ∈ Irr, form an ONB of L^2(Irr). For every vertex v ∈ V, we define H_v = hom_C(1, A^{|v|}). Definition 3.1 (LW Hilbert spaces). Define the Hilbert spaces for the Levin-Wen model accordingly; the two Hilbert spaces H_− and H_+ are dual to each other with respect to the Riesz representation θ. Define the embedding map ι as a multilinear extension of the map on an ONB, for any β_v ∈ ONB(H_v) and any j(e) ∈ Irr, and extend the reflection θ_P to an anti-unitary θ : H_+ → H_−. Define P_{v,j} to be the projection from hom_C(1, A^{|v|}) onto hom_C(1, j) at the vertex v, and P_{e,j} to be the projection from L^2(Irr) onto Cδ_j at the edge e. For any v ∈ V, the action on the vertex is given by the operator H_v on H. One calls each connected component of M \ Γ a plaquette. Let P be the set of plaquettes. For any p ∈ P, denote the vertices and edges on ∂p by v_1, e_1, v_2, e_2, . . . , v_m, e_m, clockwise. For any j ∈ Irr, the action on the plaquette is given by the operator H_{p,j} on H, where y_0 = y_n and ρ_{v_k}, C_{v_k, y_k, θ(y_{k−1})} are the actions of ρ and C_{y_k, θ(y_{k−1})} at the vertex v_k, respectively. Here μ = Σ_{j∈Irr} d(j)^2 is the global dimension of C. It is known that H_p, for p ∈ P, and H_v, for v ∈ V, are mutually commuting projections [LW05, KK12]. In the Levin-Wen model, the Hamiltonian H on H is built from these projections for some λ_P ≥ 0 and λ_V ≥ 0; the standard form is recalled in the display below. Pictorially, the action of H_{p,j} is to contract a loop labelled by j in the plaquette p with the morphisms in C on ∂p. The contraction is induced from the relation; see [LW05, KK12] for more details.
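The displayed formulas for the plaquette operator and the Hamiltonian did not survive extraction; for orientation, the standard Levin-Wen form, consistent with the text's definition of the global dimension μ and with [LW05, KK12], reads as follows (a reconstruction, with the normalization stated as an assumption):

```latex
\[
  H_p \;=\; \frac{1}{\mu}\sum_{j\in \mathrm{Irr}} d(j)\, H_{p,j},
  \qquad \mu \;=\; \sum_{j\in \mathrm{Irr}} d(j)^2 ,
\]
\[
  H \;=\; \lambda_P \sum_{p\in\mathcal{P}} \bigl(I - H_p\bigr)
      \;+\; \lambda_V \sum_{v\in V} \bigl(I - H_v\bigr),
  \qquad \lambda_P,\ \lambda_V \;\ge\; 0 .
\]
```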
Pictorially, this relation changes the shape of a pair of lines labelled by e_i and j. Then around each vertex v_i, the shape of the picture looks like (4). The definition of H_{p,j} is independent of the choice of the starting vertex v_1. It is also independent of the order κ_v. When we change the orientation of an edge e in the oriented graph Γ, we replace P_{e,X} by P_{e,X^−}; the operators H_v and H_{p,j} are then unchanged. So the operators are essentially independent of the orientation of the graph Γ. 3.5. Reflection Positivity. The main new result of this paper is the following: Theorem 3.2 (RP Property for Levin-Wen Models). The Hamiltonian H in (6), acting on the Hilbert space H of Definition 3.1, has the RP property for any h_+, Ω_+ ∈ H_+ and β ≥ 0. Lemma 3.3. For any plaquette p across the plane P, namely p ∩ P ≠ ∅, we have F_s(−ιH_{p,j}ι*) ≥ 0. 3.6. An Interpretation. Let us explain an elementary example: let M_+ be isotopic to a cylinder, so M is a torus. Take the graph Γ to be a square lattice in M. For the Levin-Wen model on a torus M, it is known that the excitations in the bulk are objects of the Drinfeld center Z(C). If Ω_+ is the vacuum vector in H_+, then ι*(θ(Ω_+) ⊗ Ω_+) is the vacuum vector in H, namely all objects and morphisms are trivial. We can consider the expectation on the vacuum, namely on ι*(θ(Ω_+) ⊗ Ω_+), as a path integral over configurations, where the Hamiltonian acts diagonally. These configurations can be identified as closed string nets on the dual lattice through the modular self-duality proved in [LX16], when C is a unitary modular tensor category. The RP condition for the path integral in the bulk induces a one-dimensional-lower quantum theory on the boundary of M_+, which is a union of two circles. If Ω_+ is an open string with end points on the two boundary circles of M_+, then ι*(θ(Ω_+) ⊗ Ω_+) is a closed string in M, corresponding to a bulk excitation. We can still consider the expectation on ι*(θ(Ω_+) ⊗ Ω_+) as a non-local path integral. The RP condition for the path integral in the bulk induces a quantum theory topologically entangled on the two boundary circles. As mentioned in the introduction, we expect this realization to be useful in the study of the anomaly theory on the boundary. Acknowledgement This research in the Mathematical Picture Language Project was supported by the Templeton Religion Trust under grant TRT 0159.
4,790.8
2019-01-30T00:00:00.000
[ "Mathematics" ]
Prediction Model of HBsAg Seroclearance in Patients with Chronic HBV Infection Background Prediction of HBsAg seroclearance, defined as the loss of circulating HBsAg with or without development of antibodies against HBsAg in patients with chronic hepatitis B (CHB), is highly difficult and challenging due to its low incidence. This study is aimed at developing and validating a nomogram for prediction of HBsAg loss in CHB patients. Methods We analyzed a total of 1398 patients with CHB. Two-thirds of the patients were randomly assigned to the training set (n = 918), and one-third were assigned to the validation set (n = 480). Univariate and multivariate analysis by Cox regression was performed using the training set, and the nomogram was constructed. Discrimination and calibration were performed using the training set and validation set. Results On multivariate analysis of the training set, independent factors for HBsAg loss, including BMI, HBeAg status, HBsAg titer (quantitative HBsAg), and baseline hepatitis B virus (HBV) DNA level, were incorporated into the nomogram. The HBsAg seroclearance calibration curve showed an optimal agreement between nomogram predictions and actual observations. The concordance index (C-index) of the nomogram was 0.913, with confirmation in the validation set, where the C-index was 0.886. Conclusions We established and validated a novel nomogram that can individually predict HBsAg seroclearance and non-seroclearance for CHB patients, which is clinically unprecedented. This practical prognostic model may help clinicians in decision-making and in the design of clinical studies. Introduction HBV infection continues to be a global health problem. Worldwide, around 2 billion people have evidence of past or present infection with HBV and an estimated 257 million are chronically infected [1]. Almost half of the world's population resides in areas of high HBV endemicity, with the highest prevalence in Africa and East Asia. In addition, in China, approximately 300 million people suffer from hepatopathy, having a major impact on the global burden of liver diseases [2]. Patients with chronic HBV infection have an increased risk of developing sequelae such as cirrhosis and hepatocellular carcinoma (HCC). Chronic infection is characterized by the persistence of hepatitis B virus surface antigen (HBsAg) for at least 6 months (with or without concurrent hepatitis B virus e-antigen (HBeAg)). Persistence of HBsAg is a surrogate marker for the risk of developing chronic liver disease and HCC. Recent studies have focused on the role of HBsAg quantification in HBsAg seroclearance, which usually indicates that HBV infection has been cured [3]. Nomograms are widely used as prognostic devices in medicine, especially for individualized estimation of cancer survival. With the ability to generate the probability of a clinical event by integrating diverse prognostic and determinant variables, nomograms can meet our need for biologically and clinically integrated models and fulfill our drive towards personalized medicine [4]. Current evidence suggests that the occurrence of HBsAg seroclearance in patients with chronic HBV infection is a rare event that occurs at 1% to 2% per year, usually after a long duration of sustained biochemical remission [5], and methods for forecasting its probability are scarce.
In this study, we sought to develop a clinical nomogram for predicting the rate of HBsAg loss in patients with CHB. Study Design and Patients. From January 2009 to June 2018, a total of 3220 patients were diagnosed with CHB and followed up every 3 to 6 months by the Infectious Diseases Department of The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China. All patients had been HBsAg positive for more than 6 months; patients were excluded for at least one of the following conditions: loss to follow-up for over 12 months, presence of comorbidities (hepatitis A/C/E virus coinfection, autoimmune liver diseases, other malignant tumors, renal insufficiency, hepatolenticular degeneration, and alcoholic liver disease), receipt of immunosuppressive (transplantation) therapy, and cases of data loss. All enrolled patients signed informed consent. Variables and Data Collection. We recorded the following data for each patient: gender, age, body mass index (BMI), alcohol history, family history, diagnosis, treatment, and other laboratory indexes, such as alanine aminotransferase (ALT), aspartate aminotransferase (AST), total bilirubin, and albumin (ALB). The reference ranges of the biochemical indexes are as follows: ALT: 3-35 U/l; AST: 15-40 U/l; total bilirubin: 4.0-23.9 μmol/l; and ALB: 36.0-51.0 g/l. Furthermore, we also collected the following serological and virological markers: hepatitis B virus surface antigen (HBsAg), hepatitis B virus surface antibody (HBsAb), hepatitis B virus e-antigen (HBeAg), hepatitis B virus e antibody (HBeAb), and hepatitis B virus core antibody (HBcAb), measured by a chemiluminescence immunoassay. HBsAg loss was defined as two consecutive HBsAg titers < 0.05 IU/ml, measured with the Elecsys HBsAg II Quant kits (Germany). The baseline HBV load was measured by nucleic acid fluorescent quantitative polymerase chain reaction (PCR), and the reagents were purchased from DAAN GENE (Guangzhou, China). The lower limit of HBV DNA detection was 100 IU/ml. Quantitative HBsAg values and baseline HBV DNA values were log-transformed. Definitions. Chronic hepatitis B was defined as follows: (1) HBsAg present for ≥6 months, (2) serum HBV DNA varying from undetectable to several billion IU/ml, and (3) normal or elevated ALT and/or AST levels [6]. HCC was diagnosed by at least two imaging studies (i.e., hepatic ultrasound together with CT, MRI, or both), and most cases were histopathologically confirmed according to the AASLD (2018) guidelines [6]. Statistical Analyses. Frequency and percentage (%) were used to describe categorical variables; median and interquartile range were used for non-parametric continuous variables. The Mann-Whitney U test and χ2 test were used for intergroup differences. Cox regression analysis was used to estimate both univariate- and multivariate-adjusted rate ratios (with 95% confidence intervals) of HBsAg loss. Variables significant in univariate analyses were included in multivariate analyses. The statistical analysis was carried out using IBM SPSS 25.0; p values were two-sided, and p < 0.05 indicated a statistically significant difference. The nomogram was built based on the results of multivariate analyses of BMI, quantitative HBsAg and HBeAg status in the primary cohort, and baseline HBV DNA values. SAS 9.4 was used to randomly divide both the HBsAg seroclearance and non-seroclearance groups in a 2:1 ratio: two-thirds of cases were used as the training set and the remaining one-third as the validation set.
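The per-group 2:1 randomization was done in SAS; an equivalent stratified split is easy to sketch in Python (the data-frame and column names are hypothetical; stratifying on the seroclearance label mirrors the described per-group randomization):

```python
from sklearn.model_selection import train_test_split

# df: one row per patient; df["hbsag_loss"] is 1 for seroclearance, 0 otherwise.
train_df, valid_df = train_test_split(
    df,
    test_size=1/3,                # one-third validation, two-thirds training
    stratify=df["hbsag_loss"],    # keep the seroclearance ratio in both sets
    random_state=42,
)
```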
R 3.5.1 (http://www.r-project.org/) was used for constructing the nomogram, with the survival, rms, grid, and ggplot2 packages. Nomogram validation consisted of discrimination and calibration using the validation set. Discrimination and predictive performance of the nomogram were evaluated using a concordance index (C-index). C-index values range from 0.5 to 1.0, with 0.5 indicating random chance and 1.0 indicating a perfect ability to correctly discriminate the outcome using the nomogram [7]. A p value < 0.05 was considered statistically significant. The calibration curve was derived based on regression analysis. Clinical Data Characteristics. Finally, 1398 CHB patients were enrolled in this study (Figure 1), and the cohort was divided into two sets: the training set (n = 918, HBsAg loss cases: 35) and the validation set (n = 480, HBsAg loss cases: 22). After comparison between the two sets of data, there was no statistical difference regarding the index variables (all p values were >0.05). The long-term follow-up cohorts for the training and validation sets showed male predominance (72.77%; 77.08%); the median age was 34 and 35 years, the median BMI 21.80 and 22.04, the reported familial history of HBV 60.57% and 61.25%, and the median baseline HBV DNA 4.750 and 4.197 log10 IU/ml, respectively. AST was within the normal range (median value: 36 U/l) while ALT was mildly elevated (median value: 38 U/l) in both sets. The median follow-up time was 75 months in both sets (Table 1). In addition, the clinical features of patients with HBsAg loss are shown in Supporting Information Table S-1. Prognostic Nomogram for HBsAg Loss of CHB. A nomogram that incorporated the above-mentioned significant prognostic factors and baseline HBV DNA load, another important factor reported in the literature, was established (Figure 2). The prognostic nomogram for HBsAg loss of CHB patients showed that the variable "quantitative HBsAg" substantially contributed to prognosis, followed by BMI, HBeAg status, and baseline HBV DNA load. Harrell's C-index for HBsAg loss prediction was 0.913 (95% CI, 0.868 to 0.958). Each subtype within the above-mentioned variables was assigned a score on the point scale. By summing up the total score and locating it on the total point scale, it was easy to draw a straight line down to determine the estimated probability of HBsAg non-seroclearance at each score point. Discussion Despite the introduction of an effective HBV vaccine decades ago, the burden of chronic HBV infection remains a public health concern, particularly in endemic regions of Asia and sub-Saharan Africa [8]. HBsAg loss is regarded as a positive achievement in the natural history, particularly if it occurs before the accrual of significant liver disease, and is deemed a functional "cure." However, HBsAg is infrequently cleared in CHB patients [9]. Chu and Liaw [5] reported that the predictive factors for HBsAg seroclearance can be divided into host factors, including age, gender, and normal alanine aminotransferase levels, and viral factors, including HBeAg negativity at baseline, HBV DNA negativity by hybridization at baseline, genotype, and hepatitis C virus superinfection. A systematic review and meta-analysis of HBsAg clearance rates and predictors of clearance by Yeo et al. [10] showed that favorable factors for HBsAg loss included HBeAg seronegativity, low quantitative HBsAg values, and low HBV load at baseline. In our study, the independent risk factors included BMI, consistent with previous reports [11,12].
Based on such findings, it can be inferred that BMI may be a contributing factor for HBsAg seroclearance. A nomogram is a simple graphical representation of a prediction model that generates a numerical probability of a clinical event; it can be an important component of modern medical decision-making if carefully constructed to answer a focused question and appropriately interpreted and applied [4]. It is a powerful tool that can be harnessed to predict individual outcomes, in this case seroclearance. In the field of hepatopathies, nomograms are widely used in HCC [13] and acute-on-chronic liver failure (ACLF) [14]. Based on the factors mentioned above, the regression model can calculate the probability of target events within a certain period of time, such as 3, 5, and 10 years. At present, to our knowledge, there is no literature regarding any validated model that can reliably predict HBsAg loss. We therefore envisaged the possibility of constructing a nomogram predicting HBsAg seroclearance as our clinical endpoint, incorporating factors proven to be independent by multivariate Cox regression analysis. Antiviral therapy for CHB patients is a long process that often leads to poor patient compliance, and it has become a major conundrum for clinicians to estimate the probability of HBsAg loss.

Nonetheless, our nomogram is limited by the retrospective nature of data collection and other shortcomings of our study design that affect its robustness and reliability. Firstly, HBsAg seroclearance is a low-probability event, and the number of patients with HBsAg loss enrolled in our study was relatively small. Secondly, both the training and validation sets came from our own follow-up cohort; an external validation group would have improved the clinical value of our nomogram. Lastly, differences in antiviral therapy regimens and the HBV genotype, which were not analyzed here, may also have affected the results.

Figure 2: Nomogram predicting the probability of HBsAg non-seroclearance at 3, 5, and 10 years, using BMI, quantitative HBsAg (qHBsAg), HBeAg status, and baseline HBV DNA. To use the nomogram, an individual patient's value is located on each variable axis, and a line is drawn upward to determine the number of points received for each variable value. The sum of these numbers is located on the total points axis, and a line is drawn downward to the non-seroclearance axes to determine the likelihood of 3-year, 5-year, and 10-year non-seroclearance. BMI: body mass index; qHBsAg: quantitative hepatitis B virus surface antigen; HBeAg: hepatitis B virus e-antigen (0 and 1 represent negative and positive status, respectively); baseline HBV DNA: baseline hepatitis B viral deoxyribonucleic acid load.

Figure 3: The calibration curves for predicting HBsAg seroclearance for the training cohort (a) and the validation cohort (b). Nomogram-predicted probability of HBsAg seroclearance is plotted on the x-axis; actual HBsAg seroclearance is plotted on the y-axis.
Efficient, Quantitative Numerical Methods for Statistical Image Deconvolution and Denoising

We review the development of efficient numerical methods for statistical multi-resolution estimation of optical imaging experiments. In principle, this involves constrained linear deconvolution and denoising, and so these types of problems can be formulated as convex constrained, or even unconstrained, optimization. We address two main challenges: the first of these is to quantify convergence of iterative algorithms; the second challenge is to develop efficient methods for these large-scale problems without sacrificing the quantification of convergence. We review the state of the art for these challenges.

Introduction

In this chapter we review progress towards addressing two main challenges in scientific image processing. The first of these is to quantify convergence of iterative algorithms for image processing to solutions (as opposed to optimal values) of the underlying variational problem. The second challenge is to develop efficient methods for these large-scale problems without sacrificing the quantification of convergence. The techniques surveyed here were studied in [1][2][3]. We present only the main results from these studies, in the context that hindsight provides.

Scientific images are often processed with software that accomplishes a number of tasks like registration, denoising and deblurring. Implicit in the processing is that some systematic error is being corrected to bring the image closer to the truth. This presumption is more complicated for denoising and deblurring. These are often accomplished by filtering or by solving some variational problem, such as minimizing the variance of an image. For applications requiring speedy processing, such as audio and video communication, this is sufficient. But the recent development of nanoscale photonic imaging modalities such as STED and RESOLFT, featured in Chaps. 1, 7 and 9, has shifted the focus of image denoising and deconvolution from qualitative to quantitative models. Quantitative approaches to image processing are the subject of Chap. 11, where statistical multiscale estimation is discussed (see Sect. 11.3). Here, the recovered image comes with statistical statements about how far the processed image is, in some statistical sense, from the truth. The estimators are almost exclusively variational, that is, they can be characterized as the solution to an optimization problem. It is important to emphasize that the value of the optimization problem is meaningless. This stands in stark contrast to many conventional applications in economics and operations research, where the value of the optimal solution is related to profit or cost, and so is of principal interest. The focus on optimal solutions rather than optimal values places heavy demands on the structure of model formulations and the algorithms for solving them. Unless the numerical method allows one to state how far a computed iterate is from the solution of the underlying variational problem, the scientific significance of the iterate is lost. The leading computational approaches for solving imaging problems with multi-resolution statistical estimation criteria are based on iterated proximal operators. Most of the analysis for first-order iterative proximal methods is limited to statements about rates of convergence of function values, if rates are discussed at all (see for instance [4][5][6][7]). First-order methods have slow convergence in the worst-case scenario.
A common assumption to guarantee linear convergence of the iterates is strong convexity, but this is far more than is necessary, and in particular it is not satisfied for the Huber function (12.35). It was shown in [8] that metric subregularity is necessary for local linear convergence. Aspelmeier, Charitha and Luke [1] showed that the popular alternating directions method of multipliers algorithm (ADMM) applied to optimization problems with piecewise linear-quadratic objective functions (e.g. the Huber function), together with linear inequality constraints, generically satisfies metric subregularity at isolated critical points; hence linear convergence of the iterates for this algorithm can be expected without further ado. More recently, in [3] it was shown that the primal iterates of a modification of the PAPC algorithm (Algorithm 2) converge R-linearly for any quadratically supportable objective function (for instance, the Huber function). Conventional results without metric subregularity obtain a convergence rate of O(1/k) with respect to the function values. In settings like qualitative image processing or machine learning such results are acceptable, but in the setting of statistical image processing these statements do not contain any scientific content. We present in this chapter efficient iterative first-order methods that offer some hope of quantitative guarantees about the distance of the iterates to optimal solutions.

Problem Formulation

We limit our scope to the real vector space Rⁿ with the norm generated from the inner product. The closed unit ball centered on the point y ∈ Rⁿ is denoted by B(y). The positive orthant (resp. negative orthant) in Rⁿ is denoted by Rⁿ₊ (resp. Rⁿ₋). The domain of an extended real-valued function ϕ : Rⁿ → (−∞, +∞] is dom ϕ := {x ∈ Rⁿ | ϕ(x) < +∞}. The set of symmetric n × n positive (semi-)definite matrices is denoted by Sⁿ₊₊ (Sⁿ₊). The notation A ≻ 0 (A ⪰ 0) denotes a positive (semi)definite matrix A. For any z ∈ Rⁿ and any A ∈ Sⁿ₊, we denote the semi-norm ‖z‖²_A := ⟨z, Az⟩. The operator norm is defined by ‖A‖ = max{‖Au‖ : u ∈ Rⁿ, ‖u‖ = 1} and coincides with the spectral radius of A whenever A is symmetric. If A ≠ 0, σ_min(A) denotes its smallest nonzero singular value. For a sequence {zᵏ}_{k∈N} converging to z*, we say the convergence is Q-linear if there is a constant c ∈ (0, 1) with ‖zᵏ⁺¹ − z*‖ ≤ c‖zᵏ − z*‖ for all k, and R-linear if ‖zᵏ − z*‖ ≤ εₖ for some sequence (εₖ) converging Q-linearly to 0.

We limit our discussion to proper (nowhere equal to −∞ and finite at some point), lower semi-continuous (lsc), extended-valued (can take the value +∞) functions. We will, in fact, limit our discussion to convex functions, but convexity is not the central property governing quantitative convergence estimates. By the subdifferential of a function ϕ, denoted ∂ϕ, we mean the collection of all subgradients that can be written as limits of sequences of Fréchet subgradients at nearby points; a vector v is a (Fréchet) subgradient of ϕ at y, written v ∈ ∂̂ϕ(y), if lim inf_{x→y, x≠y} [ϕ(x) − ϕ(y) − ⟨v, x − y⟩]/‖x − y‖ ≥ 0. The functions of interest for us are subdifferentially regular on their domains, that is, the epigraphs of the functions are Clarke regular at points where they are finite [10, Definition 7.25]. For our purposes it suffices to note that, for a function ϕ that is subdifferentially regular at a point y, the subdifferential is nonempty and all subgradients are Fréchet subgradients, that is, ∂ϕ(y) = ∂̂ϕ(y) ≠ ∅.
Convex functions, in particular, are subdifferentially regular on their domains, and the subdifferential has the particularly simple representation as the set of all vectors v satisfying ϕ(x) ≥ ϕ(y) + ⟨v, x − y⟩ for all x ∈ Rⁿ. A mapping Φ : Rⁿ ⇒ Rⁿ is said to be β-inverse strongly monotone [10, Corollary 12.55] if ⟨v − w, x − y⟩ ≥ β‖v − w‖² whenever v ∈ Φ(x) and w ∈ Φ(y). The mapping Φ is said to be polyhedral (or piecewise polyhedral [10]) if its graph is the union of finitely many sets that are polyhedral convex in Rⁿ × Rⁿ [11]. Polyhedral mappings are generated by the subdifferentials of piecewise linear-quadratic functions (see Proposition 12.9). A function f : Rⁿ → (−∞, +∞] is called piecewise linear-quadratic if dom f can be represented as the union of finitely many polyhedral sets, relative to each of which f(x) is given by an expression of the form (1/2)⟨x, Ax⟩ + ⟨a, x⟩ + α for some scalar α ∈ R, vector a ∈ Rⁿ, and symmetric matrix A ∈ Rⁿˣⁿ. Closely related to plq functions are the quadratically supportable functions.

Definition 12.2 (pointwise quadratically supportable (pqs)) A proper, extended-valued function ϕ : Rⁿ → R ∪ {+∞} is said to be pointwise quadratically supportable at y if it is subdifferentially regular there and there exists a neighborhood V of y and a constant μ > 0 such that

ϕ(x) ≥ ϕ(y) + ⟨v, x − y⟩ + (μ/2)‖x − y‖² for all x ∈ V and all v ∈ ∂ϕ(y). (12.4)

If for each bounded neighborhood V of y there exists a constant μ > 0 such that (12.4) holds, then the function ϕ is said to be pointwise quadratically supportable at y on bounded sets. If (12.4) holds with one and the same constant μ > 0 on all neighborhoods V, then ϕ is said to be uniformly pointwise quadratically supportable at y. For more on the relationship between pointwise quadratic supportability, coercivity, strong monotonicity and strong convexity see [3].

We denote the resolvent of Φ by J_Φ ≡ (Id + Φ)⁻¹, where Id denotes the identity mapping and the inverse is defined by Φ⁻¹(y) := {x ∈ Rⁿ | y ∈ Φ(x)}. The corresponding reflector is defined by R_{ηΦ} ≡ 2J_{ηΦ} − Id. One of the more prevalent examples of resolvents is the proximal map. For ϕ : Rⁿ → (−∞, ∞] a proper, lsc and convex function and for any u ∈ Rⁿ and Q ∈ Sⁿ₊₊, the proximal map associated with ϕ with respect to the weighted Euclidean norm is uniquely defined by

prox_{Q,ϕ}(u) := argmin_{x∈Rⁿ} { ϕ(x) + (1/2)‖x − u‖²_{Q⁻¹} }.

When Q = c⁻¹ Id, c > 0, we simply use the notation prox_{c,ϕ}(u). We also recall the fundamental Moreau proximal identity [12], that is, for any z ∈ Rⁿ,

z = prox_{Q,ϕ}(z) + Q prox_{Q⁻¹,ϕ*}(Q⁻¹ z), (12.6)

where Q⁻¹ is the inverse of Q ∈ Sⁿ₊₊.
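The Moreau identity (12.6) can be verified numerically for a simple prox-friendly function. The sketch below — illustrative only — takes Q = c·Id and ϕ the 1-norm, whose prox is soft-thresholding and whose conjugate is the indicator of the unit max-norm ball (its prox being a projection/clipping).

```python
# Numerical check of the Moreau identity (12.6) with Q = c*Id and phi the
# l1-norm: prox is soft-thresholding, and the conjugate is the indicator of
# [-1,1]^n, whose prox is clipping. Illustrative sketch, not code from the
# surveyed papers.
import numpy as np

def prox_l1(z, c):
    """Componentwise soft-thresholding by c (prox of the l1-norm)."""
    return np.sign(z) * np.maximum(np.abs(z) - c, 0.0)

def prox_l1_conj(w):
    """Prox of the conjugate (indicator of [-1,1]^n): projection/clipping.
    Indicator functions are invariant under scaling, so no step size enters."""
    return np.clip(w, -1.0, 1.0)

rng = np.random.default_rng(0)
z, c = rng.normal(size=5), 0.7
lhs = z
rhs = prox_l1(z, c) + c * prox_l1_conj(z / c)   # right side of (12.6)
print(np.allclose(lhs, rhs))                     # True
```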
Notions of continuity of set-valued mappings have been thoroughly developed over the last 40 years. Readers are referred to the monographs [10,11,13] for basic results. A key property of set-valued mappings that we will rely on is metric subregularity, which can be understood as the property corresponding to a Lipschitz-like continuity of the inverse mapping relative to a specific point. It is a weaker property than metric regularity which, in the case of an n × m matrix for instance, is equivalent to surjectivity. Our definition follows the characterization of this property given in [11, Exercise 3H.4].

Definition 12.3 (metric subregularity) The mapping Φ : Rⁿ ⇒ Rᵐ is called metrically subregular at x for y relative to W ⊂ Rⁿ if (x, y) ∈ gph Φ and there is a constant c > 0 and a neighborhood O of x such that

d(x′, Φ⁻¹(y) ∩ W) ≤ c d(y, Φ(x′)) for all x′ ∈ O ∩ W.

The constant c measures the stability under perturbations of the inclusion y ∈ Φ(x). An important instance where metric subregularity comes for free is for polyhedral mappings. A notion related to metric regularity is that of weak-sharp solutions. This will be used in the development of error bounds (Theorem 12.6).

Definition 12.5 (weak sharp minimum [14]) The solution set S_f := argmin{ f(x) | x ∈ Ω } for a nonempty closed convex set Ω is weakly sharp if, for p = inf_Ω f, there exists a positive number α (sharpness constant) such that

f(x) ≥ p + α d(x, S_f) for all x ∈ Ω.

Similarly, the solution set S_f is weakly sharp of order ν > 0 if there exists a positive number α (sharpness constant) such that, for each x ∈ Ω,

f(x) ≥ p + α d(x, S_f)^ν.

Abstract Problem

The generic problem in which we are interested is

minimize f(x) over x ∈ Rⁿ subject to Aᵢx ∈ Ωᵢ, i = 1, 2, …, M. (P₀)

The following blanket assumptions on the problem's data hold throughout:

Assumption 1 (i) The function f is proper, lsc and convex, and the solution set of (P₀) is nonempty. (ii) The sets Ωᵢ, i = 1, …, M, are nonempty, closed and convex. (iii) The mappings Aᵢ are linear and jointly injective.

Assumption (i) implies that the optimal value of (P₀) is finite. Assumption (ii) implies that the constraint structure is convex. Assumption (iii) implies that the mapping A : Rⁿ → Rᵐ is linear and full rank, where A := [A₁ᵀ, …, A_Mᵀ]ᵀ. The challenge of statistical multi-resolution estimation lies in the feature that the dimension of the constraint structure, m, is much greater than the dimension of the unknowns, n, and grows superlinearly with respect to the number of unknowns. The above constrained optimization problem is often formulated as an unconstrained-looking problem via the introduction of a (nonsmooth) penalty term enforcing the constraints:

minimize f(x) + g(Ax) over x ∈ Rⁿ, (P)

where g is built from a penalty function θ for the constraints. The requirements on the function θ align this penalty term with exact penalization [15], that is, a relaxation of the constraints where, for all parameters ρ large enough, the constraints are exactly satisfied. The following assumptions are used to guarantee the exact correspondence between solutions to (P₀) and (P). In (P₀) and (P) the function f is often smooth, but not prox friendly. In applications it is most often a smooth regularization or a fidelity term. For the ADMM/DR method reviewed in Sect. 12.3 smoothness is not required.

Assumption 2 It is assumed that the functions gᵢ (i = 1, 2, …, M) are prox friendly and that they enjoy some structure that makes g also prox friendly. For instance, if the constraints are separable, then the function

g(y) = Σ_{i=1}^M gᵢ(yᵢ) (12.11)

is also prox-friendly, as is the function

g(Ax) = Σ_{i=1}^M gᵢ(Aᵢx). (12.12)

The functions gᵢ ∘ Aᵢ can be regularizing functions (like total variation) or hard inequality constraints. For example, hard inequality constraints are modeled by the use of indicator functions for gᵢ in (P₀): gᵢ = ι_{Ωᵢ}, the indicator function of Ωᵢ.

Saddle Point and Dual Formulations

The saddle point formulation is derived by viewing the function g in (P) as the image of a function g* under Fenchel conjugation, that is, g(x) = (g*)*. Writing this explicitly into (P) yields

minimize_x sup_{y∈Rᵐ} { f(x) + ⟨Ax, y⟩ − g*(y) }. (S)

The bifunction in the saddle point formulation is

L(x, y) := f(x) + ⟨Ax, y⟩ − g*(y). (12.13)

Contrast this with the Lagrangian for the extended problem

minimize f(x) + g(z) over (x, z) ∈ Rⁿ × Rᵐ subject to Ax = z. (P_L)

The Lagrangian is

L(x, z; y) = f(x) + g(z) + ⟨y, Ax − z⟩, (12.14)

and the augmented Lagrangian L_η for (P_L) is given by

L_η(x, z; y) = f(x) + g(z) + ⟨y, Ax − z⟩ + (η/2)‖Ax − z‖²,

where y ∈ Rᵐ and η > 0 is a fixed penalty parameter. Assumption 1(i) guarantees that the mapping L(·, ·) has a saddle point, that is, a pair (x̂, ŷ) with

L(x̂, y) ≤ L(x̂, ŷ) ≤ L(x, ŷ) for all (x, y) ∈ Rⁿ × Rᵐ.

The existence of a saddle point corresponds to zero duality gap for the induced optimization problems

p(x) := sup_{y∈Rᵐ} L(x, y) and q(y) := inf_{x∈Rⁿ} L(x, y).

By weak duality, we have inf_{x∈Rⁿ} p(x) ≥ sup_{y∈Rᵐ} q(y). This can be viewed as a partial dual to problem (P). The full dual problem involves the Fenchel conjugate of the entire objective function. For (P) the dual problem is

sup_{y∈Rᵐ} { −f*(−Aᵀy) − g*(y) }.

Instead of working with this dual, it is more convenient to work with the equivalent minimization formulation

inf_{y∈Rᵐ} f*(−Aᵀy) + g*(y). (D)

Under standard constraint qualifications (e.g., [16, Theorem 2.3.4]), (x̂, ŷ) is a saddle point of L if and only if x̂ is an optimal solution of the primal problem (P₀), and ŷ is an optimal solution of the dual problem (D).
The following two inclusions characterize the solutions of the problems (P₀) and (D), respectively:

0 ∈ ∂f(x) + Aᵀ∂g(Ax) and 0 ∈ −A ∂f*(−Aᵀy) + ∂g*(y).

In both cases, one has to solve an inclusion of the form

0 ∈ B(z) + D(z) (12.16)

for general set-valued mappings B and D.

Statistical Multi-resolution Estimation

Statistical multi-resolution estimation (SMRE), discussed in Sect. 11.2.7 of Chap. 11, is specialized here for the case of imaging systems with Gaussian noise. The variational model for statistical multi-resolution estimation with Gaussian noise takes the form

minimize f(x) over x ∈ Rⁿ subject to | Σ_{j∈Vᵢ} wᵢ(j) (Ax − b)ⱼ | ≤ γᵢ, i = 1, 2, …, M. (P_SMRE)

Here f : Rⁿ → R is a regularization functional, which incorporates a priori knowledge about the unknown signal x such as smoothness, wᵢ is a weighting function for the grid points in the subset Vᵢ, and A : Rⁿ → Rⁿ is the linear imaging operator that models the experiment. The constant γᵢ has an interpretation in terms of the quantile of the estimator. In the context of the general model (P₀), the mapping Aᵢ collects the weighted averages Σ_{j∈Vᵢ} wᵢ(j)(Ax)ⱼ; this affine mapping is an averaging operator that accounts for sampling at different resolutions of the image. Note that the observation b need not be in the range of the imaging operator A; all that is assumed is that this mapping is injective, not surjective. This means that, in applications, practitioners need to be careful not to make the constraint γᵢ too small, otherwise the optimization problem might be infeasible. If the algorithms presented below appear to be diverging for a particular instance of (P_SMRE), it is because the problem is infeasible; increasing the constants γᵢ should solve the problem.

Alternating Directions Method of Multipliers and Douglas-Rachford

In this section we survey the main results (without proofs) from [1]. For proofs of the statements, readers are referred to that article. A starting point for most of the main approaches to solving (P₀) is the alternating directions method of multipliers (ADMM) (primary sources include [17][18][19][20][21]). This method is one of many splitting methods, which are the principal approach to handling the computational burden of large-scale, separable problems [22]. ADMM belongs to a class of augmented Lagrangian methods whose original motivation was to regularize Lagrangian formulations of constrained optimization problems. The ADMM algorithm for solving (P_L) follows.

Algorithm 1: ADMM for (P_L). Initialization: choose η > 0, z⁰ ∈ Rᵐ and y⁰ ∈ Rᵐ. For k = 1, 2, …:
1. xᵏ ∈ argmin_x L_η(x, zᵏ⁻¹; yᵏ⁻¹);
2. zᵏ ∈ argmin_z L_η(xᵏ, z; yᵏ⁻¹);
3. yᵏ = yᵏ⁻¹ + η(Axᵏ − zᵏ).

The penalty parameter η need not be a constant, and indeed evidence indicates that the choice of η can greatly impact the complexity of the algorithm. For simplicity we keep this parameter fixed. We do not specify how the argmin in Algorithm 1 should be calculated, and indeed, the analysis that follows assumes that these can be computed exactly. One problem that should be immediately apparent is that this algorithm operates on a space of dimension n + 2m. Since one of the two challenges we address is high dimension, this expansion in the dimension of the problem formulation should be troubling. Nevertheless, we show with this algorithm how the first challenge, namely quantification of convergence, is achieved. The connection between the ADMM algorithm and the Douglas-Rachford algorithm introduced in Chap. 6, (6.30), was first discovered by Gabay [19] (see also the thesis of Eckstein [17]). For any η > 0, the Douglas-Rachford algorithm [23,24] for solving (12.16) is built from the resolvents J_{ηD} ≡ (Id + ηD)⁻¹ and J_{ηB} ≡ (Id + ηB)⁻¹ of ηD and ηB, respectively. Given z⁰ and y⁰ ∈ Dz⁰, following [25], define the new variable ξ⁰ ≡ z⁰ + ηy⁰, so that z⁰ = J_{ηD}ξ⁰.
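The change of variables just introduced leads to the reflected fixed-point iteration derived next. The following toy sketch — an illustration of the mechanics rather than code from [1] — runs that iteration for D the gradient of a quadratic and B the normal cone of a box, where both resolvents are explicit.

```python
# Toy sketch of the Douglas-Rachford iteration in the reflected form derived
# below, xi+ = 1/2 (R_B R_D + Id) xi with z = J_D xi, for 0 in B(z) + D(z).
# Here D = grad of 0.5||z - a||^2 and B = normal cone of a box, so J_D has a
# closed form and J_B is a projection. Illustration only; data are invented.
import numpy as np

a = np.array([2.0, -3.0, 0.5])
lo, hi = -1.0, 1.0            # box constraint C = [lo, hi]^3
eta = 1.0

J_D = lambda x: (x + eta * a) / (1.0 + eta)   # resolvent of eta*grad f
J_B = lambda x: np.clip(x, lo, hi)            # resolvent of eta*N_C
R_D = lambda x: 2 * J_D(x) - x                # reflectors
R_B = lambda x: 2 * J_B(x) - x

xi = np.zeros(3)
for _ in range(200):
    xi = 0.5 * (R_B(R_D(xi)) + xi)            # fixed-point step xi+ = T xi

z = J_D(xi)
print(z)   # -> projection of a onto the box: [1.0, -1.0, 0.5]
```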
We thus arrive at an alternative formulation of the Douglas-Rachford algorithm: for k = 0, 1, 2, …,

ξᵏ⁺¹ = Tξᵏ with T := (1/2)(R_{ηB}R_{ηD} + Id), (12.20)-(12.21)

where R_{ηD} and R_{ηB} are the reflectors of the respective resolvents. This is the form of Douglas-Rachford considered in [26]. Specializing this to our application yields

B = ∂g* and D = ∂( f* ∘ (−Aᵀ) ), (12.22)

and so the resolvent mappings are the proximal mappings of the convex functions f* ∘ (−Aᵀ) and g*, respectively; hence the resolvent mappings and the corresponding fixed point operator T are single-valued [12].

Proposition 12.7 Consider (12.16) with B and D given by (12.22). For fixed η > 0, given any initial points ξ⁰ and (z⁰, y⁰) ∈ gph D such that ξ⁰ = z⁰ + ηy⁰, the sequences (zᵏ)_{k∈N}, (ξᵏ)_{k∈N} and (yᵏ)_{k∈N} defined respectively by zᵏ ≡ J_{ηD}ξᵏ, (12.20) and yᵏ ≡ (1/η)(ξᵏ − zᵏ) converge to points z ∈ J_{ηD}(Fix T), ξ ∈ Fix T and y ∈ D(z). The point z = J_{ηD}ξ is a solution to (D), and y = (1/η)(ξ − z) ∈ Dz. If, in addition, A has full column rank, then the sequence (yᵏ, zᵏ)_{k∈N} corresponds exactly to the sequence of points generated in Steps 2-3 of Algorithm 1, and the sequence (ξᵏ⁺¹)_{k∈N} converges to ξ, yielding a solution to (P₀).

The correspondence between Douglas-Rachford and ADMM in the proposition above means that if quantitative convergence can be established for one of the algorithms, it is automatically established for the other. Linear convergence of Douglas-Rachford under the assumption of strong convexity and Lipschitz continuity of f was already established by Lions and Mercier [26]. Recent published work in this direction includes [27,28]. Local linear convergence of the iterates to a solution was established in [29] for linear and quadratic programs using spectral analysis. In Theorem 12.8, two conditions are given that guarantee linear convergence of the ADMM iterates to a solution. The first condition is classical and follows Lions and Mercier [26]. The second condition, based on [30], is much more prevalent in applications and generalizes the results of [29].

Theorem 12.8 Consider (12.16) with B and D given by (12.22), where A : Rⁿ → Rᵐ is an injective linear mapping. Let ξ ∈ Fix T for T defined by (12.21). For fixed η > 0 and any given triplet of points (ξ⁰, y⁰, z⁰) satisfying ξ⁰ ≡ z⁰ + ηy⁰ with y⁰ ∈ Dz⁰, generate the sequence (yᵏ, zᵏ)_{k∈N} by Steps 2-3 of Algorithm 1 and the sequence (ξᵏ)_{k∈N} by (12.20).

(i) Let O ⊂ Rⁿ be a neighborhood of ξ on which g is strongly convex with constant μ and ∂g is β-inverse strongly monotone for some β > 0. Then the iterates converge linearly on O, with a rate determined explicitly by μ, β and η (see [1]).

(ii) Suppose that T − Id is metrically subregular at ξ for 0 relative to an affine subspace W with T : W → W, and that Fix T ∩ W = {ξ}. Then the sequences (ξᵏ)_{k∈N} and (yᵏ, zᵏ)_{k∈N} converge linearly to the respective points ξ ∈ Fix T ∩ W and (y, z), with rate bounded above by a constant in (0, 1) determined by the modulus of metric subregularity.

In either case, the limit point z = J_{ηD}ξ is a solution to (D), y ∈ Dz, and the sequence (xᵏ)_{k∈N} of Step 1 of Algorithm 1 converges to x, a solution of (P₀).

The strong convexity assumption (i) of Theorem 12.8 fails in many applications of interest, and in particular for feasibility problems (minimizing sums of indicator functions). By [31, Theorem 2.2], case (ii) of Theorem 12.8, in contrast, holds in general for mappings T for which T − Id is metrically subregular and the fixed point sets are isolated points with respect to an affine subspace to which the iterates are confined. The restriction to the affine subspace W is a natural generalization for the Douglas-Rachford algorithm, where the iterates are known to stay confined to affine subspaces orthogonal to the fixed point set [32,33]. We show that metric subregularity with respect to this affine subspace holds in many applications.
Proposition 12.10 Suppose, in the setting of (12.22), that T : W → W for W some affine subspace of Rᵐ and that Fix T ∩ W is an isolated point {ξ}. Then there is a neighborhood O of ξ such that, for all starting points (ξ⁰, y⁰, z⁰) with ξ⁰ ≡ z⁰ + ηy⁰ ∈ O ∩ W for y⁰ ∈ D(z⁰), so that J_{ηD}ξ⁰ = z⁰, the sequence (ξᵏ)_{k∈N} generated by (12.20) converges linearly to ξ.

ADMM for Statistical Multi-resolution Estimation of STED Images

The theoretical results above are demonstrated with an image b ∈ Rⁿ (Fig. 12.1) generated from a Stimulated Emission Depletion (STED) microscopy experiment [34,35] conducted at the Laser-Laboratorium Göttingen examining tubulin, represented as the "object" x ∈ Rⁿ. The imaging model is simple linear convolution. The measurement b, shown in Fig. 12.1, is noisy or otherwise inexact, and thus an exact solution is not desirable. Although the noise in such images is usually modeled by a Poisson distribution, we work with the Gaussian model (P_SMRE), with the quadratic objective f given by (12.24) and the exact penalty g(Ax) given by (12.12) with gᵢ given by (12.17). For an image size of n = 64 × 64 with three resolution levels, the resulting number of constraints is m = 12161 (that is, 64² constraints at the finest resolution, 4·32² constraints at the next resolution and 9·21² constraints at the lowest resolution). The constant α = 0.01 in (12.24) is used to balance the contributions of the individual terms to make the most of limited numerical accuracy (double precision). The constant γᵢ is chosen so that the model solution would be no more than 3 standard deviations from the noisy data on each interval of each scale. Since this is experimental data, there is no "truth" for comparison; the constraint, together with the error bounds on the numerical solution relative to the model solution, provides statistical guarantees on the numerical reconstruction [36]. In Fig. 12.2b the iteration is shown with the value of ρ = 4096, for which the constraints are exactly satisfied (to within machine precision), indicating the correspondence of the computed solution of problem (P) to a solution of the exact model problem (P₀). The only assumption from Proposition 12.10 that cannot be verified for this implementation is the assumption that the algorithm fixed point is a singleton; all other assumptions are satisfied automatically by the problem structure. We observe, however, starting from around iteration 1500 in Fig. 12.2b, behavior that is consistent with (i.e. does not contradict) linear convergence. From this, the observed convergence rate is c = 0.9997, which yields an a posteriori upper estimate of the pixel-wise error of about 8.9062e−4, or 3 digits of accuracy at each pixel.

Primal-Dual Methods

The ADMM method presented above suffers from the extreme computational cost of computing the prox-operator in Step 1. The results of the previous section required several days of CPU time on a 2016-era laptop. In this section we present a method studied in [3] that can achieve results in about 30 s on the same computer architecture. We survey the main results (without proofs) from [3]. There is one subtle difference in the present survey over [3] that has major implications for the application and implementation of the main Algorithm 2. In this section we consider exclusively functions g in problem (P) of the form (12.11). The algorithm we revisit is the proximal alternating predictor-corrector (PAPC) algorithm proposed in [37] for solving (S). It consists of a predictor-corrector gradient step for handling the smooth part of L in (12.13) and a proximal step for handling the nonsmooth part.

Algorithm 2: Extended Proximal Alternating Predictor-Corrector (EPAPC) for (S).
Parameters: Set η > 0 and choose the parameters τ and σ to satisfy τ ≤ 1/L and τσ‖AAᵀ‖ ≤ 1 (cf. [37] and [3] for the precise conditions).

At each iteration the algorithm computes one gradient and a prox-mapping corresponding to the nonsmooth function, both of which are assumed to be efficiently implementable. We suppose these can be evaluated exactly, though this does not take into account finite precision arithmetic. The dimension of the iterates of the EPAPC algorithm is of the same order of magnitude as with the ADMM/Douglas-Rachford method, but the individual steps can be run in parallel and, with the exception of the projection in Step 6, are much less computationally intensive to execute. For quantitative convergence guarantees of primal-dual methods, additional assumptions are required.

Assumption 3 (i) The function f : Rⁿ → R is convex and continuously differentiable with Lipschitz continuous gradient (with constant L). (ii) The function f : Rⁿ → R is pointwise quadratically supportable (Definition 12.2) at each x̂ in the solution set S*. (iii) There exists a σ̄ > 0 controlling the decrease of g* in the direction of the kernel of Aᵀ (see [3] for the precise inequality).

The assumption of Lipschitz continuous gradients, Assumption 3(i), is standard, but stronger than one might desire in general. The assumption is included mainly to guarantee boundedness of the iterates; Lipschitz continuity of the gradients is enough for our purposes, however. By the standing Assumption 1 the mapping A is injective, and when m ≤ n, A also has full row rank, so that AAᵀ is invertible. When m > n, A is still injective but AAᵀ has a nontrivial kernel, and care must be taken that the conjugate function g* does not decrease too fast in the direction of the kernel of Aᵀ. This is assured by Assumption 3(iii). This assumption comes into play in Lemma 12.1.

Step (3) of Algorithm 2 can be written more compactly when g(w) := g(w₁, …, w_M) is separable. In this case, the convex conjugate of a separable sum of functions is the sum of the individual conjugates: g*(w) = Σ_{i=1}^M gᵢ*(wᵢ). Defining the matrix S = σ⁻¹Iₘ, we immediately get that, for any points ζᵢ ∈ R^{mᵢ}, i = 1, …, M,

prox_{S,g}(ζ₁, …, ζ_M) = ( prox_{σ⁻¹I,g₁}(ζ₁), …, prox_{σ⁻¹I,g_M}(ζ_M) ).

Thus Step (3) of Algorithm 2 can be written in vector notation as wᵏ = prox_{S,g}(yᵏ⁻¹ + σApᵏ). It is possible to use different proximal step constants σᵢ, i = 1, …, M; see details in [37]. The choice σᵢ = σ for i = 1, …, M is purely for simplicity of exposition. The projection onto (ker Aᵀ)⊥ in Step (6) is carried out by applying the pseudo-inverse:

P_{(ker Aᵀ)⊥} = A(AᵀA)⁻¹Aᵀ.

When m ≤ n and Aᵢ is full rank for all i = 1, 2, …, M, then ker Aᵀ = {0} and the above operation is not needed. But an unavoidable feature of multi-resolution analysis, our motivating application, is that m > n, so some thought must be given to efficient computation of (AᵀA)⁻¹.

The next technical result, which is new, establishes a crucial upper bound on the growth of the Lagrangian with respect to the primal variables.

Lemma 12.1 Let Assumption 3 hold and let (pᵏ, yᵏ, xᵏ)_{k∈N} be the sequence generated by the EPAPC algorithm. Then for every k ∈ N and every (x, y) ∈ Rⁿ × Rᵐ, the growth of the Lagrangian is bounded above in terms of the weighted distances of the iterates to (x, y), with weight matrix

G := σ⁻¹Iₘ − τAAᵀ. (12.28)

Note that for the choice of τ given in the parameter initialization of Algorithm 2, G ≻ 0. The constant μ in Proposition 12.11 depends on the choice of (x⁰, y⁰) and so depends implicitly on the distance of the initial guess to the point in the set of saddle point solutions. Convergence of the primal-dual sequence is with respect to a weighted norm on the primal-dual product space built on G in (12.28). We can then define an associated norm using the positive definite matrix H,

‖u‖²_H := (1/τ)‖x‖² + ‖y‖²_G,

where, by the assumptions on the choice of τ given in Algorithm 2, G ≻ 0.
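To make the structure of Algorithm 2 concrete, here is a minimal sketch of a PAPC-type predictor-corrector iteration on an invented toy instance: f a smooth quadratic and g the indicator of a box composed with A, so the dual prox is soft-thresholding. The kernel projection of Step 6 is omitted, exactly as in the preliminary experiments reported below; this is an illustration under these assumptions, not the chapter's implementation.

```python
# Minimal PAPC-type sketch (not the chapter's code) for
#   min_x 0.5||x - b||^2  s.t.  |(Ax)_i| <= gamma,
# i.e. smooth f composed with a box constraint on Ax. The conjugate g* is a
# weighted l1-norm, so the dual prox is soft-thresholding. Toy data.
import numpy as np

rng = np.random.default_rng(1)
n, m, gamma = 8, 12, 0.3
A = rng.normal(size=(m, n))
b = rng.normal(size=n)

L = 1.0                                       # Lipschitz constant of grad f
tau = 1.0 / L
sigma = 1.0 / (tau * np.linalg.norm(A, 2) ** 2)   # sigma = 1/(tau*||AA^T||)

grad_f = lambda x: x - b
prox_g_conj = lambda w: np.sign(w) * np.maximum(np.abs(w) - sigma * gamma, 0.0)

x, y = np.zeros(n), np.zeros(m)
for _ in range(2000):
    p = x - tau * (grad_f(x) + A.T @ y)       # predictor (gradient) step
    y = prox_g_conj(y + sigma * (A @ p))      # dual proximal step
    x = x - tau * (grad_f(x) + A.T @ y)       # corrector step

# Settles approximately at or below gamma as the iterates approach the
# constrained solution.
print("max |Ax| =", float(np.max(np.abs(A @ x))), " gamma =", gamma)
```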
We are now ready to state the main result and corollaries, whose proofs can be found in [3].

Theorem 12.4.1 Let (pᵏ, xᵏ, yᵏ)_{k∈N} be the sequence generated by the EPAPC algorithm, and let λ_min+(AAᵀ) denote the smallest nonzero eigenvalue of AAᵀ. If Assumptions 1 and 3 are satisfied, then there exists a saddle point solution for L(·,·), the pair û = (x̂, ŷ) with ŷ ∈ (ker Aᵀ)⊥, such that for any α > 1 and for all k ≥ 1 the sequence uᵏ = (xᵏ, yᵏ)_{k∈N} contracts toward û in the H-norm at a rate 1 − δ, where δ ∈ (0, 1) is positive and μ > 0 is the constant of pointwise quadratic supportability of f at x̂, depending on the distance of the initial guess to the point (x̂, ŷ) in the solution set S*. In particular, (xᵏ, yᵏ)_{k∈N} is Q-linearly convergent with respect to the H-norm to a saddle-point solution.

An unexpected corollary of the result above is that the set of saddle points is a singleton. The above theorem yields the following estimate on the number of iterations required to achieve a specified distance to a saddle point.

Corollary 12.13 Under Assumptions 1 and 3, let ū = (x̄, ȳ) be the limit point of the sequence generated by the EPAPC algorithm. In order to reach a prescribed distance ε to ū in the sense of (12.33), it suffices to compute k iterations, with k of the order of ln(1/ε)/δ, where δ is given in (12.32).

EPAPC for Statistical Multi-resolution Estimation of STED Images

An efficient computational strategy for evaluating, or at least approximating, the projection P_{(ker Aᵀ)⊥} in Step 6 of Algorithm 2 has not yet been established. We report here preliminary computational results of Algorithm 2 without computing Step 6. Our results show that the method is promising, though error bounds to the solution of (S) are not justified without computation of P_{(ker Aᵀ)⊥}. In our numerical experiments, the constraint penalty in (S) takes the separable form (12.11) with gᵢ given by (12.17). This is an exact penalty function, and so solutions to (S) correspond to solutions to (P₀). Using Moreau's identity (12.6), the prox-mapping in Step (3) is evaluated explicitly for each constraint. The proximal parameter is a function of τ, given by σ = 1/(τ‖AAᵀ‖₂); more details can be found in [37, Sect. 4.1].

Here, we also consider a smooth approximation of the L1-norm as the qualitative objective. The L1-norm is non-smooth at the origin; thus, in order to make derivative-based methods possible, we consider a smoothed approximation known as the Huber approximation. The Huber loss function is defined as follows:

φ_α(t) = t²/(2α) if |t| ≤ α, φ_α(t) = |t| − α/2 if |t| > α, (12.35)

where α > 0 is a small parameter defining the trade-off between quadratic regularization (for small values) and L1 regularization (for larger values). The function φ_α is smooth with (1/α)-Lipschitz continuous derivative, and its derivative is given by

φ′_α(t) = t/α if |t| ≤ α, φ′_α(t) = sign(t) if |t| > α. (12.36)

Pointwise quadratic supportability of this function at solutions is not unreasonable, but still must be assumed. We demonstrate our reconstruction of the image inset shown in Fig. 12.1, of size n = 64², with the same SMRE model as the demonstration in Sect. 12.3.1. The confidence level γᵢ was set to 0.25·i at each resolution level (i = 1, 2, 3). Figure 12.3 (bottom) shows the step size of the primal-dual pair for each of these regularized problems as a function of iteration. The model with quadratic regularization achieves a better average rate of convergence, but for both objective functions the algorithm appears to exhibit R-linear convergence (not Q-linear). What is not evident from these experiments is the computational effort required per iteration.
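The Huber approximation (12.35) and its derivative (12.36) are straightforward to implement; the following sketch reproduces the two formulas just given (vectorized, for illustration).

```python
# The Huber approximation (12.35) and its derivative (12.36) as just defined;
# alpha trades off the quadratic zone (small |t|) against the l1 zone.
import numpy as np

def huber(t, alpha=0.25):
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= alpha,
                    t ** 2 / (2 * alpha),     # quadratic zone
                    np.abs(t) - alpha / 2)    # linear (l1) zone

def huber_grad(t, alpha=0.25):
    t = np.asarray(t, dtype=float)
    # derivative is 1/alpha-Lipschitz, matching the statement in the text
    return np.where(np.abs(t) <= alpha, t / alpha, np.sign(t))

print(huber([0.1, 1.0]), huber_grad([0.1, 1.0]))
```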
Without computation of the pseudo-inverse in Step 6, the EPAPC algorithm computes these results in about 30 s on a 2018-era laptop, compared to several days for the results shown in Fig. 12.2.

Randomized Block-Coordinate Primal-Dual Methods

The previous sections reviewed numerical strategies and structures that yield quantitative estimates of the distance of an iterate to the solution of the underlying variational problem. In this section we examine implementation strategies for dealing with high-dimensional problems. These are implementation strategies because they do not involve changing the optimization model. Instead, we select at random a smaller subset of variables or constraints in the computation of an update in the full-dimensional iterative procedure. This is the principal strategy for handling problems that, due to their size, must be distributed across many processing and storage units (see for instance [26,[38][39][40] and references therein). We survey here a randomized primal-dual technique proposed and analyzed in [2]. The main theoretical question to resolve with such approaches is whether, and in what sense, iterates converge to a solution of the original problem. We can determine whether the iterates converge, but obtaining an estimate of the distance to the solution remains an open problem. The algorithm below is a primal-dual method like the algorithms reviewed above, with the exception that it solves an extension of the dual problem (D), denoted (D̃). The main prox operation is computed on the dual objective in (D), that is, f*(x) + g*(y), with respect to the variables (x, y) ∈ Rⁿ × Rᵐ. The dimension of the basic operations is unchanged from the previous approaches, but the structure of the sum of functions allows for efficient evaluation of the prox mapping. Implicit in this is that the function f is prox friendly. In the algorithm description below it is convenient to use the convention f ≡ g₀, A₀ ≡ Id. The algorithm is based in part on [39]. Notice that each iteration of Algorithm 3 requires only two small matrix-vector multiplications: Aᵢ(·) and Aᵢᵀ(·). The methods of the previous sections, in contrast, worked with the full matrix A = [A₁ᵀ, …, A_Mᵀ]ᵀ. This means that all their iterations involve full vector operations. For some applications this might not be feasible, at least on standard desktop computers, due to the size of the problems. Algorithm 3 uses only the blocks Aᵢ of A; therefore each iteration requires fewer floating point operations, at the cost of having less information available for choosing the next step. This reduction in the effectiveness of the step is compensated for through larger blockwise steps. Computation of the step size is particularly simple and follows the same hybrid step-length approach developed in [41] for the nonlinear problem of blind ptychography. In particular, we use step sizes adjusted to each block Aᵢ, with σᵢ determined by ‖Aᵢ‖ so that the condition of Proposition 12.14 below is satisfied.

Proposition 12.14 (Theorem 1 of [2]) Suppose Assumption 1 holds and let τσᵢ‖Aᵢ‖² < 1. Then (xᵏ, yᵏ) generated by Algorithm 3 converges to a solution of (D̃). In particular, the sequence (xᵏ) converges almost surely to a solution of (P).

The statement above concerns part (i) of Theorem 1 of [2]. No smoothness is required of the qualitative regularization f; instead, it is assumed that this function is prox-friendly. This opens up the possibility of using the 1-norm as a regularizer, promoting, in some sense, sparsity in the image.
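Algorithm 3 itself is not reproduced here, but its defining access pattern — touching a single randomly chosen block Aᵢ per iteration through Aᵢx and Aᵢᵀ(·) — can be illustrated with a minimal randomized block method. The sketch below uses randomized block Kaczmarz on a consistent toy system as a stand-in; it is not the update rule of [2].

```python
# Not Algorithm 3 itself, but a minimal randomized block method with the same
# access pattern: each iteration touches one block A_i via A_i x and A_i^T(.),
# here block Kaczmarz for a consistent system. It shows why per-iteration cost
# drops while progress is still made in expectation.
import numpy as np

rng = np.random.default_rng(2)
n = 6
x_true = rng.normal(size=n)
blocks = [rng.normal(size=(m_i, n)) for m_i in (4, 3, 5)]  # A_1..A_M
rhs = [Ai @ x_true for Ai in blocks]                        # consistent data

x = np.zeros(n)
for k in range(400):
    i = rng.integers(len(blocks))                 # sample one block
    Ai, bi = blocks[i], rhs[i]
    r = bi - Ai @ x                               # block residual
    # project onto the affine set {x : A_i x = b_i}
    x = x + Ai.T @ np.linalg.lstsq(Ai @ Ai.T, r, rcond=None)[0]

print(np.allclose(x, x_true, atol=1e-6))  # True for consistent systems
```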
No claim is made in [2] on the rate of convergence, though the numerical experiments below indicate that, for regular enough functions f, convergence might be locally linear. This remains to be proved.

RBPD for Statistical Multi-resolution Estimation of STED Images

Despite many open questions regarding convergence, randomized methods offer a way to handle extremely large problems. To make a comparison with the deterministic approaches above, cycles of the RBPD Algorithm 3 are counted in terms of epochs. An epoch is the number of passes through Steps 1-6 of Algorithm 3 required before each block has been chosen at least once. After k epochs, therefore, the i-th coordinate of x will have been updated, on average, the same number of times in the randomized algorithm as in the deterministic methods. In other words, an epoch of a randomized block-wise method is comparable to an iteration of a deterministic method. As RBPD updates only one block per iteration, each iteration is less computationally intensive than in the deterministic counterparts. However, in our case this efficient iteration still requires one to evaluate two (possibly) expensive convolution products (embedded in Aᵢx and Aᵢᵀy). Thus, if these operations are relatively expensive, the efficiency gain will be marginal. Nevertheless, because of the ability to operate on smaller blocks, the randomized method requires, per epoch, approximately half the time required per iteration of the deterministic methods. Although the quantitative convergence analysis remains open, our numerical experiments indicate that the method achieves a step-residual comparable to that of the EPAPC Algorithm 2 after the same number of epochs/iterations.

As with the experiments in the previous sections, we use three resolutions, which results in one block at the highest resolution, four blocks at the next resolution (four possible shifts of 2 × 2 pixels), and nine blocks at the third resolution (nine different shifts of 3 × 3 pixels). We applied Algorithm 3 with different regularizations f in (P₀): the 1-norm f(x) = ‖x‖₁, the Huber function f(x) = ‖x‖_{1,α} given by (12.35) (α = 0.25), and the squared Euclidean norm. As with the EPAPC experiments, the function g is given by (12.11) with gᵢ given by (12.17) for the parameter γᵢ = 0.25·i, i = 1, 2, 3. All of these functions are prox-friendly and have closed-form Fenchel conjugates. The gain in efficiency over the deterministic EPAPC method proposed above (without computation of the pseudo-inverse) is a factor of 2. Figure 12.4a-c shows the reconstructions on the same 64 × 64 image data used in the previous sections. The numerical performance of the algorithm is shown in Fig. 12.4(d). What the more efficient randomization strategy enables is for the full 976 × 976 pixel image to be processed. The result for regularization with the 1-norm is shown in Fig. 12.5.
The fate of the Littlest Higgs Model with T-parity under 13 TeV LHC Data

We exploit all available LHC Run 1 and Run 2 data at center-of-mass energies of 8 and 13 TeV for searches for physics beyond the Standard Model. We scrutinize the allowed parameter space of Little Higgs models with the concrete symmetry of T-parity by providing comprehensive analyses of all relevant production channels of heavy vectors, top partners, heavy quarks and heavy leptons and all phenomenologically relevant decay channels. Constraints on the model are derived from the signatures of jets plus missing energy or leptons plus missing energy. Besides the symmetric case, we also study the case of T-parity violation. Furthermore, we give an extrapolation to the LHC high-luminosity phase at 14 TeV as well.

… and no further degrees of freedom in the range up to a TeV. Besides the EWPO from the pre-LHC era and flavor physics observables, both direct searches and the ever more precise measurements of the properties of the Higgs boson (as well as the top quark and weak gauge bosons) are the tools to search for physics beyond the Standard Model (BSM) at the LHC. These are used to constrain any type of BSM model. In this paper we study the Littlest Higgs Model with T-parity (LHT). This is an attractive representative of Little Higgs models [3,4], since fine-tuning problems in the Higgs potential can be avoided via a discrete global Z₂ symmetry. Little Higgs models in general regard a naturally light Higgs boson as a pseudo-Nambu-Goldstone boson (pNGB) arising from a (new) global symmetry at a high scale, see e.g. refs. [5,6]. However, such a mechanism would require new strong interactions to tie the constituents of the Higgs boson together, which unavoidably would show up in electroweak precision observables. In order to avoid such strong constraints from EWPO, the mechanism of so-called collective symmetry breaking has been applied, i.e. interweaving several global symmetries which all have to be broken in order to give mass to the pNGBs charged under them. This means that the Higgs mass acquires only a logarithmic sensitivity to the cutoff scale at one loop, while a quadratic sensitivity only arises at the two-loop level, thereby shifting the strongly-interacting UV completion scale from the multi-TeV to the multi-10-TeV region. Note, however, that Little Higgs models are effective field theories (with new degrees of freedom beyond the SM like heavy vectors, scalars and quarks) that do not necessarily have a direct strongly coupled UV completion, but could also have weakly-coupled sectors at the next scale [7]. In this paper we consider the LHT model just as such an effective (low-energy) field theory consisting of the SM degrees of freedom augmented by (T-odd) heavy vector bosons, heavy quarks (and leptons) as well as additional heavy pNGBs (which turn out to be irrelevant for the phenomenology of that model). All of these particles have just the SM gauge interactions as well as generalizations of the SM Yukawa couplings, which reflect the implementation of both the Little Higgs collective symmetries and T-parity. We consider all phenomenologically relevant production mechanisms for the heavy new particles, including all relevant decays, in order to compare the predictions within the LHT model with the LHC 13 TeV data from Run 2.
The main objective of this paper is to study how far the exclusion limit for the Little Higgs symmetry breaking scale f is pushed compared to the 7 and 8 TeV data, and to investigate how this bound depends on the spectrum and the rest of the parameter space. In addition, we reproduce the constraints from the EWPO. For completeness, we review the status from the 8 TeV Run 1 data. Because of the possibility of T-parity breaking in a strongly coupled UV completion of the LHT, as well as tensions from dark matter (DM) constraints, we also take signatures and limits from a scenario with T-parity breaking into account, which is different from the Littlest Higgs model without T-parity. We also give prospects for the upcoming high-luminosity runs at the LHC at 14 TeV.

The outline of the paper is as follows: in order to make the paper self-contained, in section 2 we briefly summarize the model-building setup of the Littlest Higgs model (with T-parity) needed to understand the phenomenological analyses later on. In section 3 we review the existing limits from EWPO on the LHT model. In the next section, section 4, we discuss the tool chain for generating events and recasting the LHC analyses. We then collect the relevant collider topologies along with cross sections and branching ratios for different regions of parameter space in section 5. Our main collider results are collected in section 6, and compared to the sensitivity from electroweak precision data in section 7. Finally, we give a summary and outlook in section 8.

Little Higgs models with T-parity

The Littlest Higgs model [8] is based on a non-linear sigma model with a single field Σ parameterizing an SU(5)/SO(5) symmetry breaking structure. (For different implementations of Little Higgs models in terms of product group and simple group models and a way to distinguish them, cf. e.g. [9][10][11].) The vacuum expectation value (vev) causing the breaking from SU(5) to SO(5), Σ₀, can be cast into the form of the 5 × 5 matrix

Σ₀ = ( 0 0 1₂ ; 0 1 0 ; 1₂ 0 0 ), (2.1)

where 1₂ denotes the 2 × 2 unit matrix. The gauge group of the Littlest Higgs is G₁ × G₂ = (SU(2)₁ × U(1)₁) × (SU(2)₂ × U(1)₂), embedded in SU(5) as a subgroup such that the vev in eq. (2.1) above breaks it down to the diagonal subgroup SU(2)_L × U(1)_Y, which is identified with the SM electroweak group. The kinetic term for the non-linear sigma model field is

L_Σ = (f²/8) Tr[ (D_μΣ)(D^μΣ)† ], (2.2)

where f is the Nambu-Goldstone boson (NGB) decay constant of the model. At this scale the symmetry breakings SU(5) → SO(5) and G₁ × G₂ → SU(2)_L × U(1)_Y take place. The covariant derivative in eq. (2.2) is given by

D_μΣ = ∂_μΣ − i Σ_{j=1,2} [ g_j W^a_{j,μ} (Q^a_j Σ + Σ Q^{aT}_j) + g′_j B_{j,μ} (Y_j Σ + Σ Y_j) ],

with the generators

Q^a_1 = diag(σ^a/2, 0, 0₂), Q^a_2 = diag(0₂, 0, −σ^{a*}/2), Y₁ = diag(3, 3, −2, −2, −2)/10, Y₂ = diag(2, 2, 2, −3, −3)/10,

where σ^a are Pauli matrices. The SU(5) → SO(5) symmetry breaking generates a total of 14 NGBs Π^a, which decompose under the unbroken EW group SU(2)_L × U(1)_Y as 1₀ ⊕ 3₀ ⊕ 2_{±1/2} ⊕ 3_{±1}. Four of these NGBs are eaten by the extra gauge bosons Z_H, W_H and A_H, which get masses of order f. The remaining ten physical (p)NGBs decompose into the complex Higgs doublet and a hypercharge-one complex triplet. The latter is phenomenologically irrelevant, as the production cross section for these particles is negligibly small, cf. ref. [12]. Like many other BSM models, the Littlest Higgs model suffers from constraints by electroweak precision observables, particularly as the mass of the heavy hypercharge boson, A_H, has an accidentally small prefactor, cf. the right-hand side of eq. (2.10).
To alleviate these constraints, a discrete symmetry, TeV-parity or short T-parity, has been added [13,14], which phenomenologically plays a similar role as R-parity in supersymmetry (SUSY). T-parity is an inner automorphism that exchanges the sets of the two different gauge algebras G₁ and G₂, or alternatively their gauge bosons:

W^a_1 ↔ W^a_2, B₁ ↔ B₂.

This fixes the gauge coupling constants of the two different SU(2)₁,₂ and U(1)₁,₂ to be equal:

g₁ = g₂ = √2 g, g′₁ = g′₂ = √2 g′.

The mass eigenstates are then just the (normalized) sums and differences of the two gauge fields, respectively, with mixing angles of π/4. This results in the mass terms of the heavy gauge bosons

M_{W_H} = M_{Z_H} ≈ g f (1 − v²/(8f²)), (2.10a)
M_{A_H} ≈ (g′ f/√5) (1 − 5v²/(8f²)). (2.10b)

In order to implement collective symmetry breaking in the fermion sector, a partner state to the third-generation quark doublet has to be introduced, forming an incomplete SU(5) multiplet Ψ and its T-parity partner Ψ′,

Ψ = (q_L, t₁, 0)ᵀ, Ψ′ = (0, t₂, q′_L)ᵀ, (2.11)

which are related via T-parity as Ψ ↔ −Σ₀Ψ′. Here, q_L denotes the quark doublet of the SM, following the conventions in [8], while q′_L and t₂ are the T-parity partner fermions needed to reconcile both T-parity and the collective symmetry breaking mechanism. The T-parity invariant Lagrangian then reads

L_t = −(λ₁ f/(2√2)) ε_{ijk} ε_{xy} [ (Ψ̄)_i Σ_{jx} Σ_{ky} − (Ψ̄′Σ₀)_i Σ̃_{jx} Σ̃_{ky} ] t_R − λ₂ f ( t̄_{1L} t_{1R} + t̄_{2L} t_{2R} ) + h.c.,

where λ₁,₂ denote the top-quark Yukawa couplings. The T-parity eigenstates are now the (normalized) differences (even states) and sums (odd states) of the primed and unprimed fermion fields, t₊ = (t_{1L,+}, t_R), t₋ = (t_{1L,−}, t_{1R,−}), T₋ = (t_{2L,−}, t_{2R,−}) and T₊ = (t_{2L,+}, t_{2R,+}). Diagonalizing the left-handed T-even fermions yields the (SM) top quark and the heavy T-even top quark, T₊. The t₋ gets a mass with the help of the so-called mirror fermions, cf. below for the first- and second-generation fermions, while the masses for the SM top quark and the other top partners are given by

m_t ≈ (λ₁λ₂/√(λ₁² + λ₂²)) v, m_{T₊} ≈ (f/v) m_t (1 + R²)/R, m_{T₋} ≈ (f/v) m_t √(1 + R²)/R. (2.14)

R is defined as the ratio between the Yukawa coefficients of the two different possible terms, R = λ₁/λ₂, and is one of the parameters used for investigating the parameter space in this paper. Up-type quarks of the first and second generations have a Lagrangian similar to that of the top quark, except for the vector-like quark, which is not present, as there is no need to cancel the contribution from light quarks to the Higgs self-energy. The SU(2)₁,₂ singlet X with U(1)₁,₂ charges (Y₁, Y₂) = (1/10, −1/10) renders the term gauge invariant. There are two different embeddings of X as the (3,3) component of the NGB multiplet, labeled Case A and Case B in ref. [12]. These cases do not differ in the context of BSM collider phenomenology, which is why we choose Case A in this study. Differences only arise in the discussion of constraints from the Higgs sector and electroweak precision observables; more details can be found in ref. [12]. To give rise to mass terms for the T-odd fermions without introducing any anomalies, another SO(5) multiplet Ψ_c is introduced, whose doublet components q_c are called mirror fermions. The T-parity invariant Lagrangian for the light fermions is

L_κ = −κ f ( Ψ̄′ ξ + Ψ̄ Σ₀ Ω ξ† Ω ) Ψ_c + h.c., with ξ = e^{iΠ/f} and Ω = diag(1, 1, −1, 1, 1).

This Lagrangian not only adds the T-odd mass terms but also imposes new interactions between the Higgs boson and the up-type partners. The parameter κ characterizing the coupling between the Higgs and the T-odd fermions is another degree of freedom of the model parameter space we investigated. We will distinguish between κ_q for the light quarks and κ_l for the leptons. The mass spectrum for the heavy T-odd fermions is given (at order O(v²/f²)) by

m_{d₋} = √2 κ_q f, m_{u₋} = √2 κ_q f (1 − v²/(8f²)), (2.22)

and analogously with κ_l for the heavy leptons.

T-parity violation

For the phenomenology of the LHT model, we will also consider T-parity violation.
There are two reasons for that: first, in the context of strongly interacting UV completions, T-parity violation can naturally occur via an anomalous Wess-Zumino-Witten term [15,16]; secondly, there is a certain tension, for the case that the lightest T-odd particle, the heavy photon A_H, is absolutely stable, from relic density calculations and direct detection dark matter experiments [17,18]. In order to avoid any constraints from dark matter bounds, one can assume that the A_H has only a microscopic lifetime and that dark matter instead is made up of an axion-like particle in the strongly interacting UV completion of the Little Higgs model, or more generically comes from a completely different sector. As has been studied in [15,19], T-parity violation generates decays of the heavy photon partner A_H into the electroweak gauge bosons, W W and Z Z, similar to the decay of the pion into two photons. Above the kinematic threshold for these A_H decays, the partial widths are given by eq. (2.26).

Table 1: Coefficients for the A_H TPV decays, cf. eq. (2.26). The indices a, b refer to the color of the respective quarks and we use N̄ = N/(48π²), c_W = cos θ_W, t_W = tan θ_W.

For f ∼ 1-10 TeV and N = O(1), the total A_H width Γ_{A_H} ranges between 0.01-1 eV, which corresponds to a lifetime of order 10⁻¹⁷ s. This excludes A_H from being a viable dark matter candidate. On the other hand, it leads to a mean free path of approximately 10 nm, resulting in nearly prompt decays which do not produce observable displaced vertices in the LHC detectors.

Naturalness and fine tuning

Together with the model setup, we discuss in this section the definition of fine tuning that is sometimes used as a guideline for the naturalness of a model or of certain regions of parameter space. Naturalness is generally tied to the radiative corrections to the scalar potential in quantum field theories. In order for a model to be considered natural, those corrections should be of the same order as the scalar mass term arising from the mechanism that originally created it (the explicit breaking of the global symmetries in Little Higgs models). A fine-tuning measure usually compares the size of the radiative corrections to this bare mass term. In the absence of a special cancellation mechanism, this measure depends quadratically on the typical scale of these corrections; cancellation by means of a symmetry turns this into a logarithmic dependence. In Little Higgs models, the cancellation comes from SM partner particles of like statistics by means of nonlinearly realized global symmetries. The most severe SM radiative corrections, from the top quark, are cancelled by the T-odd and T-even top partners, T_±, followed by the cancellation of the EW gauge boson contributions due to the heavy new gauge bosons, A_H, Z_H, and W_H. In this paper, we adopt the fine-tuning measure defined in [8], which only accounts for the top partners and neglects the contributions from the gauge boson partners as well as from the heavy pNGBs and the light fermion partners. The fine tuning is then defined as the ratio of the experimentally measured Higgs mass squared and the absolute value of the radiative corrections from the top partners to the Higgs quadratic operator:

F = m_h²/|δm_h²|, with δm_h² = −(3λ_t²/(8π²)) M_T² log(Λ²/M_T²).

Here Λ = 4πf is the cut-off scale of the LHT model, i.e. the equivalent of Λ_QCD in a strongly-interacting embedding of the LHT, λ_t is the SM top Yukawa coupling, and M_T is a generic mass scale of the top partner sector.
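As a small illustration of this measure, the sketch below evaluates it with the one-loop top-partner expression reconstructed above; the numerical inputs are indicative only and the helper is not part of the paper's tool chain.

```python
# Illustration of the fine-tuning measure F = m_h^2 / |delta m_h^2| with the
# one-loop top-partner correction as reconstructed above. A sketch with
# indicative inputs (all masses in TeV), not a result of the paper.
import math

def fine_tuning(f_TeV, M_T_TeV, m_h=0.125, lam_t=1.0):
    Lam = 4 * math.pi * f_TeV                      # cut-off Lambda = 4*pi*f
    dm2 = (3 * lam_t**2 / (8 * math.pi**2)) * M_T_TeV**2 \
          * math.log(Lam**2 / M_T_TeV**2)
    return m_h**2 / abs(dm2)                       # smaller = more finely tuned

for f in (0.7, 1.0, 1.5):
    # assume M_T ~ f for the scan; print fine tuning in per cent
    print(f, "TeV:", round(100 * fine_tuning(f, M_T_TeV=f), 2), "%")
```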
Note that this definition of the fine-tuning measure means that smaller values of the measure (usually quoted in per cent) correspond to a higher amount of fine tuning, hence a more finely tuned point of parameter space. While the LHC Run 1 datasets at 7 and 8 TeV, together with electroweak precision observables, still allowed parameter space with fine tuning of O(1%) [12], we will see in this paper that the fine tuning including LHC Run 2 data is now everywhere around one per cent or even in the sub-per-cent regime. This is still comparable with, or better than, the amount of fine tuning in generic parameter regions of the minimal supersymmetric SM (MSSM), and it is generically (much) better than the fine tuning in Composite Higgs models.

Electroweak precision constraints

Even before the start of data taking at the LHC, Little Higgs models were already strongly constrained by comparing their predictions to precise measurements in the electroweak sector, the so-called electroweak precision observables (EWPO) [20][21][22][23]. Additional constraints come from flavor data (in the K, D and B sectors), as well as, for models with T-parity and stable massive particles, from dark matter searches. We will not discuss the first point here, as it has been studied elsewhere [24,25], and the second point has been addressed in the last section. The EWPO mainly comprise a list of measurements from e⁺e⁻ colliders like LEP1, LEP2, SLC, and TRISTAN, and a few selected measurements from hadron colliders where the precision has superseded that from lepton colliders, like the W mass, or was only possible there, like the Higgs mass and couplings. In refs. [12,26,27], both the EWPO as well as the latest Higgs data have been scrutinized in order to give the then best constraints on the parameter space of the LHT model. We will not repeat the complete table of the EWPO fit of the LHT model from [12] here, but just recall that the two main observables with the highest pull in the fit, giving the strongest constraints, are the total hadronic cross section at the Z pole as well as the left-right asymmetry of the b quarks, A_LR^(b). Higgs observables in general do not give any further constraints beyond that, as the EWPO already drive the Little Higgs scale f into a region where the deviations of the Higgs couplings are well within the LHC experimental uncertainties. The only exception to this statement arises when the decay H → A_H A_H is kinematically open, which is ruled out by the LHC limits on the Higgs invisible branching ratio and excludes m_{A_H} < 62.5 GeV, i.e. f < 480 GeV [12]. The first EWPO constraints that were applied to Little Higgs models came from oblique corrections, the so-called Peskin-Takeuchi ΔS, ΔT and ΔU parameters [28,29]. These parameterize corrections to the self-energies of the EW gauge bosons, which are measured in two- (and four-) fermion processes at lepton colliders. T-parity was specifically introduced to minimize the contributions from Little Higgs heavy particles to the oblique parameters as far as possible, as no T-odd particle can contribute to them at tree level. However, at loop level there are contributions from T-odd heavy quarks, the T-even top quark, the mirror fermions and the heavy gauge bosons. These have been calculated in [30,31].
One interesting feature derived in [12,26] from the contribution of the heavy top partners to the ∆T parameter is the exclusion limit from EWPO as a function of the parameter R, the ratio of the two different Yukawa couplings λ 1 and λ 2 in the top sector. There is an accidental cancellation in the EWPO contributions at the value R = 1. This gives only a relatively weak exclusion limit of f ≳ 405 GeV at 95% confidence level from EWPO alone. For values of R away from 1 this bound goes up to roughly 750 GeV, while for large R ∼ 3 the bound from EWPO goes up to 1.3 TeV. For our discussion in this paper, and as a motivation for which regions of parameter space to look at, even more relevant are the contributions from the mirror fermions, given in eq. (3.1). These expressions come from box diagrams contributing to the four-fermion operators of eq. (3.2), with heavy quark and lepton mirror fermions running in the loop; here, ψ and ψ′ are any combinations of different SM fermions. These four-fermion operators can be reinterpreted in terms of a contribution to the oblique ∆T parameter. The peculiar feature about them is that they increase with the mass of the mirror fermions for fixed scale f : raising the mirror fermion masses at fixed f requires a larger Yukawa-type coupling in the box diagrams, which in turn leads to a larger contribution from these diagrams. The coupling κ is usually assumed to be a diagonal matrix in flavor space or even proportional to the unit matrix. In this paper, we do not lift the degeneracy in generation space; however, we investigate different values of the κ couplings for mirror quarks and mirror leptons. As was shown in [12,26,27], the end of LHC Run 1 marked a turning point where limits from direct searches for heavy particles in Little Higgs models started to become competitive with EWPO, and with Run 2 they have now even superseded them. As the only relevant EWPO results are eq. (3.2) and the R dependence of the top partner contributions to the ∆T parameter, we do not discuss EWPO any further here, and take eq. (3.2) as a motivation to look into different scenarios with combinations of all-light degenerate mirror fermions, heavy mirror quarks, as well as split scenarios with light mirror leptons and out-of-LHC-reach heavy quarks.

Tool framework and scan setup

The main goal of this paper is to derive limits on the LHT model from all available LHC Run 2 data. In this section we describe the framework that we used in order to derive the current LHC bounds on the LHT model numerically.

Used software

To be able to generate Monte Carlo events for our model, we make use of the FeynRules implementation of the LHT model as in refs. [12,26,27]. We slightly extended the model definition such that the heavy fermion Yukawa couplings κ are promoted to independent coupling constants κ ℓ and κ q . We then exported the LHT model to the event generators MG5 aMC@NLO [32] and WHIZARD [33][34][35][36] via the UFO file format. The collider phenomenology of the LHT model studied in this paper depends on the mass scale f , the two Yukawa coupling parameters κ ℓ and κ q , as well as the ratio of top Yukawa couplings R. For these four parameters we derive the corresponding masses according to eqs. (2.10), (2.16) and (2.22) and store these in a spectrum file which follows the definitions of the UFO model. The branching ratios and corresponding decay tables for all LHT particles are calculated analytically using the formulae in the above-mentioned model file. These include all 2-body decays for all relevant particles.
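As a rough illustration of this spectrum-file step, the sketch below maps a point (f, κ q , κ ℓ , R) to heavy-particle masses using the commonly quoted leading-order LHT relations (mirror fermions at √2 κ f, heavy gauge bosons at g f and g′ f/√5, top partners expressed through the top Yukawa and R). These relations are stand-ins for eqs. (2.10), (2.16) and (2.22) of the model file, which in addition contain O(v²/f²) corrections, so the numbers below can differ from the exact spectrum at the ten-per-cent level.

```python
import math

# Approximate SM gauge and Yukawa couplings (assumed values for illustration)
G_W = 0.65        # SU(2)_L coupling g
G_Y = 0.35        # hypercharge coupling g'
LAMBDA_TOP = 1.0  # SM top Yukawa lambda_t, close to 1


def lht_spectrum(f, kappa_q, kappa_l, R):
    """Leading-order mass estimates (GeV) for the heavy LHT particles at scale f (GeV).

    Stand-ins for eqs. (2.10), (2.16), (2.22): O(v^2/f^2) corrections are neglected.
    """
    return {
        "q_H": math.sqrt(2.0) * kappa_q * f,                # heavy mirror quarks
        "l_H": math.sqrt(2.0) * kappa_l * f,                # heavy mirror leptons
        "W_H/Z_H": G_W * f,                                 # heavy W/Z partners, degenerate at LO
        "A_H": G_Y * f / math.sqrt(5.0),                    # heavy photon partner
        "T+": LAMBDA_TOP * f * (1.0 + R**2) / R,            # T-even top partner
        "T-": LAMBDA_TOP * f * math.sqrt(1.0 + R**2) / R,   # T-odd top partner (always lighter)
    }


if __name__ == "__main__":
    # one illustrative scan point
    for name, mass in lht_spectrum(f=1000.0, kappa_q=1.0, kappa_l=1.0, R=1.0).items():
        print(f"m({name:7s}) ~ {mass:7.1f} GeV")
```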
Note that within the parameter space that we analyze, no 3-body decays need to be considered as there is always a dominating 2-body final state. The only exception is the anomaly-mediated decay of the A H in the case of T-parity violation, see section 2.1. For this, we use the branching ratios as functions of f taken from ref. [19], which will be shown later in this work. For decays into gauge bosons, we assume that for m(A H ) > 185 GeV, i.e. for f ≳ 1080 GeV, A H decays via 2-body decays into W W and ZZ. For smaller masses, we formulate 3-body decays for the decay table as follows: we consider all possible decay modes of the W or Z, replace one of the final state gauge bosons with the corresponding decay products and multiply the branching ratio accordingly (a schematic sketch of this bookkeeping is shown at the end of the branching ratio discussion below). For the main tasks of this numerical study, we make use of the collider analysis tool CheckMATE [43][44][45]. This program tests a given BSM model against a large set of LHC analyses in an automated way. It again makes use of the aforementioned generator MG5 aMC@NLO to simulate partonic events. By making use of the UFO model description file format, MG5 aMC@NLO or WHIZARD are able to simulate partonic events for a given BSM model which was implemented in a model building framework like FeynRules [46,47] or SARAH [48], e.g. via the WHIZARD-FeynRules interface [49]. The showering and hadronization of these events is subsequently performed by Pythia8 [50], followed by the fast detector simulation Delphes [51], which accounts for the effect of measurement uncertainties, finite reconstruction efficiencies and the jet clustering of the observed final state objects. These detector-level events are then confronted with various analyses from both ATLAS and CMS at center-of-mass energies of 8 and 13 TeV (more details below). Events are categorized into different signal regions and CheckMATE determines which signal region provides the strongest expected limit. If the input model predicts more signal events than are allowed by the observed limit of that signal region, CheckMATE concludes that the model is excluded at the 95% confidence level; otherwise the model is allowed. For more details on the inner functionality of CheckMATE, we refer to the manual papers in refs. [43][44][45].

Details on event generation

For the event generation, we consider the production of all relevant two-body final states. Furthermore, we distinguish models in which a) T-parity is exactly conserved from models in which b) gauge anomalies introduce the T-parity violating couplings discussed in section 2.1. In order to reduce the number of free parameters we focus on particular benchmark scenarios with different theoretical and/or phenomenological motivation and with different assumptions on the fermion sector, the heavy top sector and the validity of T-parity. These scenarios result in 3 × 2 × 2 = 12 different benchmark cases, summarized in table 2.

Heavy fermion sector: we first discuss the different assumptions on the heavy fermion sector. In the Fermion Universality model we set the two coefficients κ q = κ ℓ equal and hence obtain a mass degeneracy in the heavy fermion sector. Due to their color charge, the production cross sections for processes involving heavy quarks are significantly higher than the respective cross sections for final states with color-neutral heavy fermions. Hence, we do not consider process 3 of our list in 4.1.
The masses of the heavy fermions have two important consequences for the phenomenology: they affect their production cross sections and they change the branching ratios of the heavy gauge bosons V H → ( * ) H . To get an understanding which role this plays when setting bounds on the model we choose two further benchmark cases, each taking into account one of these effects. In the Heavy q H model we decouple the heavy quarks from the model by fixing κ q = 3.0. This raises the heavy quark masses to the multi-TeV scale and hence makes them experimentally inaccessible. Therefore, we do not consider production modes which involve q H , i.e. processes 1 and 2 of 4.1, but take into account H pair production, process 3, instead. The results of this benchmark scenario should give insight to which degree the LHC sensitivity relies on the presence of the color-charged objects and which limits can be determined from searches looking for color-neutral particles only. The Light H benchmark is also designed to lift the degeneracy of the color-charged and color-neutral objects. Here, by fixing κ to a small value of 0.2, the latter are light enough for the heavy gauge bosons to decay into them. We are interested to see how this change in the expected decay patterns affects the bounds compared to the Fermion Universality model. Note that even though the H are light we do not take into account the bounds from H production as we are interested in how only a change in the decay pattern affects the resulting bounds. The bounds resulting from direct H production are determined in the previously discussed Heavy q H benchmark. The results of these three benchmark cases should be sufficient to qualitatively determine the resulting bounds for other κ q − κ combinations and to avoid a full 3D parameter scan in the f − κ q − κ plane. Heavy top partner sector: the main phenomenological difference between the heavy top partners T ± and the other heavy fermions q H is that their mass depends on R instead of κ. We choose two benchmark values for this parameter in such a way that one results in experimentally accessible top partners (R = 1.0) while the other (R = 0.2) does not. The value R = 1.0 also corresponds to a case where minimal fine-tuning can be achieved, see [12,26], and thus this benchmark case tests the natural regions of parameter space of the LHT model. In the Heavy T ± scenario we ignore any processes which involve these particles as they are too heavy to result in an LHC exclusion. The comparison of the two bounds at R = 1.0 and R = 0.2 gives insight to which degree the masses of the particles in this sector are relevant for the overall sensitivity. JHEP05(2018)049 T -parity violation: as discussed in section 2.1, gauge anomalies in the heavy sector can result in anomalous T -parity violating A H − W − W and A H − Z − Z couplings. The presence of these operators may drastically change the expected collider phenomenology as the final state not necessarily contains an invisible particle any more. Supersymmetry motivated searches are however still expected to be sensitive as the leptonic decays of the W and the invisible decays of the Z boson can still produce a significant amount of missing energy. We are interested to see by how much the bounds derived for the T -parity conserving case are changed due to these anomaly-mediated decays. For that reason we analyze each of the above discussed benchmark scenarios once with a stable A H and once with enabling A H → V V decays. 
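The benchmark grid defined above is then simply the Cartesian product of the three choices; the snippet below enumerates the 3 × 2 × 2 = 12 cases (the labels are shorthand for the scenario names used in the text, not identifiers taken from table 2).

```python
from itertools import product

# The three axes of the benchmark grid described above
heavy_fermion_sector = [
    "Fermion Universality",   # kappa_q = kappa_l, degenerate heavy fermions
    "Heavy qH",               # kappa_q = 3.0, heavy quarks decoupled
    "Light lH",               # kappa_l = 0.2, light heavy leptons
]
heavy_top_sector = ["Light T+- (R = 1.0)", "Heavy T+- (R = 0.2)"]
t_parity = ["TPC (A_H stable)", "TPV (A_H -> VV enabled)"]

benchmarks = list(product(heavy_fermion_sector, heavy_top_sector, t_parity))
assert len(benchmarks) == 12  # 3 x 2 x 2 cases, cf. table 2

for i, (fermions, tops, parity) in enumerate(benchmarks, start=1):
    print(f"{i:2d}: {fermions:22s} | {tops:20s} | {parity}")
```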
Collider topologies For the discussion of the LHC results, it is useful to understand both the values of the production cross sections for all the processes we listed in the last section and the dominant branching ratios of the relevant final state BSM particles. Collider bounds are expected to be set by processes with a large production cross section times a decay topology with only a small Standard Model contamination. In this section we review the parameter dependence of these observables in order to determine the theoretically expected collider topologies of our LHT benchmark scenarios. Many of them are relevant for the discussion of the exclusion bounds that we determine with CheckMATE in the upcoming section. Cross sections We start with a discussion of the production cross sections for all the process sets listed in section 4.2. In figures 1, 2 we show the cross sections for √ s = 13 TeV as a function of the symmetry breaking parameter f with fixed κ and vice versa. As the benchmark case Light H does not affect any production mode, the cross sections are identical to those in the Fermion Universality benchmark. In all cases we show the results in the Light T ± subscenario for which the T ± are kinematically accessible and the cross sections would nearly vanish in the case of Heavy T ± . Note that κ refers to κ q = κ in the Fermion Universality case and to κ q ( = κ ) in the Light H scenario. T -parity violation does not play a role in the discussion of LHT particle production which is why we do not distinguish TPC and TPV here. Results for center-of-mass energies of 8 and 14 TeV are provided in appendix A. Since the mass of all heavy sector particles increases with f , the cross sections for all processes drop with increasing f . 4 Similarly, since the mass of the heavy fermions depends linearly on κ, the cross sections for producing these particles become smaller for larger values of this parameter. As both mass and couplings of the T ± only depend on f and the fixed parameter R, no dependence on κ can be seen. Interestingly, even though the mass of the vector bosons V H also depends on f only, their production cross sections show a small κ-dependence in the Fermion Universality 4 Small fluctuations in the f -dependent qH qH production cross section are caused by numerical noise. scenario. This is due to contributions of t-channel q H which interfere destructively with the s-channel vector-boson diagrams. Since all masses scale linearly with f , this effect appears nearly independently of f at the position κ ≈ 0.5. As a result, the cross section for V H pair production is roughly a factor 5 smaller for small κ ≈ 0.5 than for large values κ 4 when the heavy fermions are decoupled. As the q H are by construction decoupled in the Heavy q H benchmark scenario, the κ dependence of the V H V H production cross section vanishes in the resulting distribution shown in figure 2. The production cross sections can reach values up to 10 3 fb and we thus expect the √ s = 13 TeV LHC to be sensitive to large regions of the parameter space we considered. Even for values of f ≈ 3 TeV, cross sections of order 10 −1 fb and thus detectable event rates can be expected which improves results from LHC Run 1 which were insensitive to values of the symmetry breaking scale above 2 TeV [12,26]. Comparing the results of both the f -σ and the κ-σ planes, it becomes clear that there is no dominant process with a universally largest cross section. 
The cross sections have very different dependencies on κ and f and thus different regions in parameter space are expected to have different dominating final states. Generally, regions with small values of κ and thus with light q H , H predict a large rate of produced heavy fermions. As expected for a hadron collider, the q H production is about two to three orders of magnitude larger than the production of heavy leptons H and the latter appear only to be relevant for small values f 1 TeV, κ 0.5. In regions with larger values of κ, the production of heavy vector bosons becomes more important as their mass is independent of κ. If heavy top partners T ± are accessible, they are produced with comparable abundance as the heavy vector bosons. 5 Since the T − is always lighter than the T + , the production of the latter appears to be negligible in comparison. Branching ratios We now continue with a discussion of the branching ratios for the relevant partner particles within the given benchmark cases. Note that we combine phenomenologically similar branching ratios which involve q := u, d, c, s, (so we particularly do not distinguish heavy up-and down-type quarks here) = e, µ, τ , ν := ν e , ν µ , ν τ and their respective heavy partner fermions. 6 Also, we only discuss those decays with a branching ratio of at least 1 % 5 Note that this statement in general depends on the specific value of the additional parameter R which we fixed to 1.0 in our benchmark scenario. 6 It is only in this section where we distinguish between the charged heavy fermion H and the neutral particle νH . In the rest of this work, H refers to both heavy charged and heavy neutral leptons. anywhere in the discussed parameter space. Though we do not show it in the plots, we analytically calculated all decay widths and considered all kinematically allowed 2-body final states in the decay tables used in our scans in order to get correct values for the branching ratios. We mainly discuss results for the Fermion Universality and the Light H scenarios as the Heavy q H scenario does not show any differences in the observable decay patternexcept for one difference which we mention along the way. Obviously, it is only the decay of the A H which shows different behavior in the benchmark cases TPV and TPC. These two benchmarks are hence not distinguished in the discussion regarding the decays for the other particles. Within the parameter ranges that we focus on, the particles T − , H and ν H each only have one decay mode in some scenarios: JHEP05(2018)049 For other particles and/or other scenarios there is more than one decay mode and the branching ratios depend on the values of f and/or κ. As these always show asymptotic behavior for large values of κ or f , we focus on the behavior visible at lower parameter ranges than analysed in our collider study. The behavior at larger values can easily be extrapolated from the shown results. In figures 3, 4 we show the dominating branching ratios of the heavy quark partners d H , u H , respectively, in the Fermion Universality/Light H models which show identical results in this regard. As before, we show curves as functions of both κ and f . For both up-and down-type heavy quark partners, the decay into a heavy W H boson and a quark is the most important decay with a branching ratio of nearly 60 % -whenever it is kinematically allowed. They are followed by decays into Z H q of order 30 % and to A H q of order 10 %. 
A small variation with f becomes visible which is caused by a subdominant dependence of the respective coupling constants on v/f (see e.g. [30]). This dependence differs between up-and down-type quarks and thus the variation with f differs for these two flavors. Note that very small values of κ q 0.5 lead to m(q H ) < m(W H ), m(Z H ) and thus forbids decays q H → (W/Z) H + X. All q H therefore decay to the light A H in this region of parameter space. Note that due to the overall mass degeneracy and the identical quantum numbers within the Fermion Universality model, the decay signatures of all other heavy fermions, except for the T ± , are identical after replacing the corresponding up-and down-type components of the respective SU(2) doublets. For example, the branching ratio for ν eH → W H e is identical to the branching ratio u H → W H d, see figures 3-5. Next, we discuss the decays of the heavy gauge bosons W H and Z H for the Fermion Universality model in figure 7 and for the Light H model in figure 8. We only show results depending on κ as there is no f dependence for the two standard benchmark values κ = 1.0, 2.0 which we considered. In case of Fermion Universality, the decay V H → f H f into a heavy fermion partner is only allowed for κ 0.5 and in this region decays into heavy quarks dominate. For larger values of κ, the only available decays are W H → W A H and Z H → hA H . In the Light H scenario, this picture changes by construction: the H are fixed to light masses and thus for κ q 0.5 both heavy gauge bosons decay to 50 % into H and ν H ν. Again, for smaller values of κ q decays into q H are kinematically accessible and have a dominant branching ratio. The branching ratio curve for the benchmark scenario Heavy q H corresponds to the one for Fermion Universality with the only exception that the decay V H → q H q disappears for κ < 0.5 and the branching ratios for the other modes scale up accordingly. In figure 9 we show the branching ratios of the heavy top partner T + (note that T − always decays to tA H as listed above) in the Light Top benchmark, i.e. for R = 1.0. As T + is a T -parity even particle it must decay into pairs of T -odd particles or purely into SM particles. This results in four main decay scenarios. The SM decays follow mainly the pattern of a SU(2) L singlet top partner (cf. e.g. [54]) of 50 % branching ratio into bW + and equally a quarter into th and tZ. This is only slightly modified by the only accessible T -odd particle decay, namely roughly 15 % branching ratio into T − A H . This changes the top-like decay into bW + into nearly 45 % branching ratio, while th and tZ have roughly 20 % branching ratio each. These branching ratios have no dependence on κ and only little dependence on f which originates from the f -dependence of the T ± and A H masses. We finish the discussion with the branching ratios of the A H in the TPV scenario shown in figure 10, which only depend on f . The information shown in this figure has been taken from a detailed calculation performed in ref. [19]. One observes that for f > 1200 GeV, decays into on-shell Standard Model gauge boson pairs dominate. For smaller values of f , the A H mass drops below 180 GeV, the partial decay widths into gauge bosons decrease due to kinematic suppression and the loop-induced decays into Standard Model leptons become equally relevant. For f 900 GeV, A H decays predominantly into SM quark pairs. 
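This below-threshold region is exactly where the 3-body decay-table construction described in the event-generation setup applies: one of the two gauge bosons in A H → V V is replaced by its SM decay products and the branching ratios are multiplied accordingly. The sketch below illustrates that bookkeeping; the W and Z branching fractions are rounded values, and the split between the W W* and ZZ* modes is a placeholder input, not the result of ref. [19].

```python
# Rounded W and Z branching fractions, used only for illustration
W_DECAYS = {"l nu": 0.33, "q q'": 0.67}
Z_DECAYS = {"l l": 0.10, "nu nu": 0.20, "q q": 0.70}


def three_body_table(br_AH_WW, br_AH_ZZ):
    """Expand A_H -> W W* / Z Z* into 3-body entries of the form 'V + decay products'."""
    table = {}
    for boson, br_2body, decays in (("W", br_AH_WW, W_DECAYS),
                                    ("Z", br_AH_ZZ, Z_DECAYS)):
        for products, br_v in decays.items():
            # replace one of the two gauge bosons by its decay products
            # and multiply the branching ratios accordingly
            table[f"{boson} {products}"] = br_2body * br_v
    return table


if __name__ == "__main__":
    # placeholder split between the WW* and ZZ* modes, NOT taken from ref. [19]
    for channel, br in three_body_table(br_AH_WW=0.7, br_AH_ZZ=0.3).items():
        print(f"A_H -> {channel:8s}  BR = {br:.2f}")
```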
Expected final state topologies and correspondence to supersymmetric searches In this section we combine the information of the preceding one with the list of dominant production processes given in section 4.2 in order to find the following expected final state signatures. Comparing them to the specialized analyses of the experimental collaborations for supersymmetry, we can make the following classification of the signatures and their applicability to the LHT model: • In general -if T -parity is conserved -all T -odd particles produce decay chains with a stable A H as the lightest T -odd particle at the end. This particle is experimentally invisible and thus produces missing transverse momentum / E T in the event. This is in close analogy to R-parity conserving supersymmetry which produces decay chains with the lightest neutralino at the end which similarly produces / E T . Therefore, many searches looking for R-parity conserving supersymmetry require / E T in the event and thus are sensitive to our model. In the Light Leptons model, the heavy gauge bosons almost always decay into a lepton and the corresponding heavy lepton partner which itself always decays into a lepton and A H . This behavior corresponds to a supersymmetry model with very light scalar leptons for which there exist specific signal regions in experimental searches for electroweakinos. • Final states with heavy q H always produce quarks and A H in their decays and hence result in final states with jets and missing transverse momentum. In most cases these decays produce further heavy gauge bosons V H which, as explained above, add more leptons, b-jets or normal jets to the event. This topology is very similar to supersymmetric scalar quark production with either direct decays into the lightest -20 - • Final states with the T -odd T − produce final states with SM tops and missing transverse momentum, a typical signature of natural supersymmetry with a light scalar top. • Final states with the T -even T + not necessarily produce missing transverse momentum but instead decay top-like into bW + , hence are expected to affect SM top measurements, or decay into top + Higgs/gauge boson final states which is a typical feature of models with an extended quark sector. Since processes involving T + have a reduced production cross section, see our earlier discussion, and since our searches mostly focus on SUSY-like final states, we do not expect these particles to be of great relevance for our results. • If T -parity is violated by small couplings, we still expect the same production and decay topologies as in the T -parity conserving case which typically produce 2 A H and the same hard final state objects which we listed in the previous discussion. However, as now each of these decays into pairs of Standard Model particles, many more final state topologies appear. Especially if f 1.2 TeV we expect four Standard Model vector bosons in the final state and as each of these can decay hadronically or leptonically, a plethora of possible final state exists with various combinations of additional jets and leptons. These can be covered by analyses which target very large final state multiplicities for which the Standard Model background is very small. 
Furthermore, as both Z and W have sizable decay rates into final states with neutrinos, the final states may even have a significant amount of missing transverse momentum and thus may still be covered by the same supersymmetry-based analysis strategies as mentioned for the T-parity conserving case. All in all, we expect various final states which are very similar to those expected in typical supersymmetric models, and we expect that this model can be strongly constrained by applying LHC searches originally designed to find supersymmetric particles. Even though theoretically expected, some of these topologies will not necessarily result in a large enough signal event rate to produce a sensible bound, and/or various topologies appear simultaneously and it is difficult to say a priori which of them is expected to result in the strongest sensitivity. Fortunately, as many of these searches are implemented in the tool CheckMATE, we expect this tool to perform very well in our scenarios and to conveniently determine the respectively strongest bounds for each benchmark case. In table 3, we summarize our above discussion of the expected topologies and their SUSY analogues in cases where an analysis exists in CheckMATE which, according to its respective target signatures, should be sensitive to one or more production times decay patterns of the given topology. Note that even though each analysis only appears once in the table, some analyses may cover more than one topology. For example, an analysis focussing on final states with hard jets and missing transverse momentum may cover both q H and T ± initiated topologies.

Table 4. Small summary of all √ s = 13 TeV analyses which appear in the discussion of our results. More details, also on other tested analyses, are given in table 6 in the appendix.

As can be seen, the number of possible topologies is very large, especially in T-parity violating scenarios with all their possible combinations of the two final state A H decaying into pairs of leptonically or hadronically decaying vector bosons. Though most of the main decay scenarios are covered by the list of analyses discussed above, a few topologies remain uncovered. Within the scenarios that we analyze, we do not expect these missing topologies to yield stronger bounds than the ones we derive, as either the production cross section (cases 2 and 3) or the branching ratio (case 1) is significantly smaller than those of the topologies we take into account.

Collider results from CheckMATE

We now discuss the results of our collider analysis performed with CheckMATE. Exclusion lines in the κ-f plane for all 3 × 2 × 2 scenarios are shown in figures 11-22. For each case, we choose two ways to present our results. On the respective plots in the left column we show the total exclusion line determined by CheckMATE from LHC analyses at 8 TeV and 13 TeV, respectively. The 8 TeV results allow a direct comparison to earlier studies, e.g. in [12,26]. Drawing them in the same plot with the updated 13 TeV results illustrates how the increased energy and the higher integrated luminosities significantly improve the sensitivity to the Little Higgs Model with fully or nearly conserved T-parity. In the discussion in the main text of this section we focus on the updated results at √ s = 13 TeV and will not discuss the outdated results at 8 TeV center-of-mass energy. In the same set of plots we also show mass contours of the most relevant particles to understand the bounds.
These are • the heavy gauge boson mass Z H (= W H ), • the heavy quarks q H for all models except Heavy q H , • the heavy leptons H for the model Heavy q H , • the T -odd heavy top partner T − mass for Light T ± benchmarks and • the heavy photon mass A H for TPV models. To keep the plots readable we do not show all contours in all plots. With the exception of T ± whose mass values are only meaningful in the Light T ± scenario, all plots with same heavy fermion sector scenario (see table 2) have the same particle spectrum and therefore, each iso-mass contour can be understood to appear in all other plots of the same main benchmark scenario. Alongside the above results we show a second plot each for all benchmark scenarios where we focus on the experimental signature(s) which lead to the overall bound. For each benchmark study, we show the respective CheckMATE analyses which cover the excluded region at √ s = 13 TeV. The names in the legend correspond to the CheckMATE analysis identifiers and we provide a small summary of their respective covered topologies in table 4 for convenience. Note that regions with small κ and small f are typically covered by many more LHC analyses but we only show the minimal set of analyses sufficient to cover the entire excluded region. A full list of all CheckMATE analyses that we considered for this study can be found in table 6 in the appendix B. Fermion universality We start with a discussion of the Fermion Universality model in which the heavy fermion Yukawa couplings are set to be equal, κ q = κ , and thus features a degenerate spectrum of heavy quarks and heavy leptons. T -parity conserved and heavy T ± . In figure 11 we start with the subscenario of conserved T -parity and with the heavy top sector decoupled. The excluded parameter spaces can be separated into two main regions: • For large f ≥ 1 TeV, the exclusion line depends both on κ and f and runs nearly parallel to the iso-mass contours of the heavy quarks. It thus nearly follows the inequality f × κ < f κ max with f κ max ≈ 1.5 TeV at √ s = 8 TeV and ≈ 2 TeV at √ s = 13 TeV. The most sensitive analysis looks for at least two hard jets and a large amount of missing transverse momentum, a topology which in this region appears through heavy quark pair production with each heavy quark decaying into a quark, an invisible heavy photon and possible additional particles via more complicated casscades in the decay, q h q H → qqA H A H + X. The expected event rate for this QCD-induced process mainly depends on the mass of the heavy quarks and thus explains why the bound runs nearly parallel to the q H iso-mass contours. Still, tchannel heavy vector bosons also have a small effect on the production cross section and thus the bound drops slightly faster with higher f , i.e. with larger m(V H ), than the m(q H ) iso-mass contour. The results translate into a bound on m q H of ≥ 3 TeV for f ≈ 1 TeV which decreases to m q H > 2 TeV for f 3 TeV. • For smaller values of f , the bound becomes nearly independent of the specific values of f or κ q and absolutely excludes f > 900 GeV. For large enough values of κ, the heavy quarks are not created abundantly enough and hence we are only sensitive to the electroweak production of heavy gauge bosons V H whose mass is indepedent of κ q . The given limit can then be interpreted as an absolute mass bound m Z H = m W H 600 GeV. Even though their mass is κ-independent, the bound still becomes stronger for increasing value of κ q . 
This is -see our discussion in section 5.1 -due to κ q affecting the mass of the heavy quarks who in turn interfere destructively with their contribution to the total V H V H production cross section. Thus the weakest bound f > 800 GeV appears for κ q ≈ 2.5 and improves to f 950 GeV for κ 5.0. Interestingly, even though the main production channel has changed, the most sensitive study is the same multijet analysis as before. The required topology is created from hadronically decaying W -bosons in W H → W A H and from b-jets in the decay to a Higgs boson of Z H → hA H . T -parity conserved and light T ± . To see how the sensitivity to the heavy top partners compares to the previous bound, we show below in figure 12 the results of the same model, but now with R = 1.0 and thus including processes which involve the production of heavy T ± . Note that for fixed R, the mass of the T ± only depends on f which is why for large f these particles are not experimentally accessible. Thus, we get the same bound on (f κ max ) as explained for the previous benchmark. However, if the T ± are kinematically accessible they play an important role for the overall bound. For our special case with R = 1.0, we observe that the absolute bound on f increases to f ≥ 1.3 TeV and becomes entirely κ independent as the T ± production modes, as opposed to the V H modes discussed before, do not depend on the heavy quark sector. Again, we observe the search for multijets plus missing transverse momentum to be most sensitive for the bound. 7 Clearly, the precise value of the lower limit on f depends on the mass of the heavy top partner particles which implicitly depends on the value of R. We emphasize here that the choice R = 1.0 just serves as a benchmark case and any other R value would directly affect the bound, see eq. (2.16), in either direction. We chose R = 1 here for the reason that it JHEP05(2018)049 is rather special as it minimizes the LHT contributions to the EWPO, cf. section 3. Our more general conclusion from this benchmark study is thus that searches for V H and for T ± can yield competitive absolute lower bounds on f , and while the bound derived from V H production is nearly independent of the chosen benchmark, the presence of light top partners may put further constraints on the model. T -parity violated. In figures 13, 14 we show the results in case we include the anomalymediated decays of the heavy photon A H into vector boson or lepton pairs, both without (figure 13) and including ( figure 14) the heavy top sector. We again split the discussion into the two main parameter regions already discussed before: • We again observe a κ-dependent bound for large values of f which follows the iso-mass contour of the heavy quarks. However, compared to the T -parity conserving case the bound is now slightly weaker, m q h ≥ 2.5 TeV for f ≈ 1 TeV and m q h ≥ 1.5 TeV for f ≈ 3 TeV. There are two analyses with nearly identical sensitivity in this region, namely the already discussed zero-lepton-multijet plus / E T analysis and the related multijet analysis which requires one lepton in the final state. The fact that their sensitivity is fairly similar can be qualitatively understood from the fact that we expect many additional final state gauge bosons which produce additional leptons and/or jets. Thus, both multijet studies with and without leptons become sensitive and we get an overall similar signal event rate in the respective signal regions of these two studies. 
In fact, as the branching ratio to W W increases for smaller f , see figure 10, and as W -bosons produce on average more charged leptons than Z-bosons, we expect analyses which require a final state lepton to become slightly more sensitive for smaller f -a feature which we exactly observe in our results in figure 13, on the right hand side. At first, it appears unexpected that the bound is not significantly weakened, even though the originally invisible A H now decays into Standard Model particles and thus appears to remove crucial missing transverse momentum from the event. However, one should bear in mind that we expect four additional boosted gauge bosons, two from each A H , in the final state. Thus we expect to pass the / E T constraints if at least one of these decays into neutrinos. Even though on average the branching ratio V → ν+X is only around 25 %, as we have four gauge bosons the probability of having an A H A H pair decaying into at least one neutrino and thus producing / E T is above 70 %. This reduces the / E T cut acceptance slightly but not drastically compared to the T -parity conserving case. Furthermore, we get the same visible final state objects as in the T -parity conserving case, together with additional boosted particles from the gauge boson decays which may even improve the final state acceptance. It thus can be understood why the sensitivity does not drop significantly if T -parity violation is considered. • Similarly to before, for a symmetry breaking scale f of the order 1 TeV we observe a κ independent bound. Interestingly, the bound has even improved after turning on T -parity violation and excludes f 1 TeV for κ ≈ 1.5 and f 1100 GeV for κ ≈ 4.0. To understand why the limit becomes stronger one needs to look at the -27 - JHEP05(2018)049 analysis coverage map on the right of figure 13. We see that the bound derived from the multijet analysis, which was most sensitive in the T -parity conserving case, slightly weakened. This can be understood with the same arguments as given before for the large-f region. However, we also observe that the sensitivity is now dominated by electroweakino-motivated searches, more specifically by analyses which look for final state leptons and missing transverse momentum. A more detailed look in the results of that analysis reveals that it is in fact the signal region SR-Slep-e which produces the bound. This signal region requires 3 high-p T charged leptons which do not originate from a leptonically decaying W -Z-pair and a significant amount of missing transverse momentum. Interestingly, such a signature could not be reached in the previous T -parity conserving benchmark case, because the most important topology pp → W H W H → W W A H A H only produces two leptons. Including Tparity violation, we can get a third, highly energetic lepton if one of the four final state gauge bosons is a leptonically decaying W . Furthermore, since this signal region has no constraints on the final state jet multiplicity, the decays of the other three gauge bosons is irrelevant. As such, a large signal event rate is expected for this analysis if T -parity is violated. If the top partners are kinematically accessible, see figure 14, the absolute bound on f only increases slightly by about 100 GeV. The electroweak search stays the most sensitive analysis for this model. The resulting bounds increase as more events from the topology pp > T −T − → (bW )(bW )W W V V are expected. 
Again, the impact on the bound depends on the precise value of R and we only show one example here which illustrates that the details of the heavy top partner sector are relevant for the overall LHC limit. Interestingly, the multijet analysis does not seem to get a significant contribution from the presence of the T ± even though it did in the previous case when T -parity was conserved, cf. figures 15, 16. To understand this behavior one needs to consider the details of the experimental search: this analysis tries to cover various hierarchies and decay topologies that can appear in the supersymmetric squark-gluinog,q sector and defines many signal regions which target different jet multiplicities. Different mass scales in the supersymmetric sector are taken into account by gradually increasing the requirements on the sum of jet p T in the event as well as the total amount of / E T , more specifically by using cuts which require minimum values for the ratio / E T / (jet p T ). In supersymmetry, jet multiplicity, total hadronic energy and missing transverse momentum increase simultaneously as heavier particles on average produce longer decay chains and give more momentum to the visible jets and the invisible neutralino and thus a cut on / E T / (jet p T ) has a good signal acceptance in supersymmetry. However, such a cut is disadvantageous for our most important topology T ± → tA H if A H decays via TPV: the additional decay of A H into gauge bosons is expected to produce a significantly larger amout of jets and hadronic energy while reducing the amount of missing transverse momentum, resulting in a large drop in the signal acceptance. Therefore adding the T ± to the experimentally accessible spectrum hardly increases the amount of signal events in this case and the bound only improves little. Heavy q H We continue with the discussion of the results for the Heavy q H scenario which fixes κ q to 3.0 and thus effectively decouples the q H from the experimental reach. The results for all subscenarios (with/without T -parity violation and ex-/including the heavy top partner sector) are shown in figures 15-18. The plots show the same information as in the previous section 6.1, however note that the ordinate is now chosen to be the free parameter κ and the iso-mass contours are given for the H instead of the q H now. 8 To understand how the bounds change compared to the previous benchmark scenario, it is worth repeating the two main phenomenological consequences of this benchmark case: 1. q H → qV H topologies are replaced by H → V H . Multijet final states are thus replaced by multilepton final states. As the production cross section for H H is 2 to 3 orders of magnitude smaller than the corresponding cross section for q H q H , we expect a far weaker sensitivity in the heavy fermion dominated region (i.e. large f , small κ). 2. σ(pp → V H V H ) was dependent on κ but is independent of κ as no contributions from t-channel q H exist in this benchmark case. Thus we expect the bounds produced from V H pair production to be entirely κ independent and very similar to the case κ = 3.0 of the previous benchmark. 
With these pieces of information in mind, the results in figures 15-18 compare straightforwardly to the bounds of the earlier benchmark scenario in figures 11-14: • For f ≈ 1 TeV, vector boson production and potential heavy top partner production are the most sensitive channels and they produce κ independent bounds of f 950 GeV (TPC, no T ± ), f 1350 GeV (TPC, with T ± ), f 1100 GeV (TPV, no T ± ) and f 1200 GeV (TPC, with T ± ). The bounds correspond to those for the previous benchmark for large values of κ 4.0. The most dominant topologies also do not change: we observe multijet final states to be the most sensitive ones in case Tparity is conserved while multilepton final states become more important if T -parity is violated. • For κ 0.5, the mass of the H drops below the mass of the heavy vector bosons and thus decays of type V H → H can happen, see figure 7. The boosted final-state leptons of this decay can be observed via a multilepton analysis as can be seen in the right of figure 15. This significantly improves the sensitivity and improves the bound on f to up to 1.9 TeV. As the branching ratios depend on κ, this bound is now slighly dependent on κ. • The "f κ max "-bound which we were able to set in the previous benchmark almost disappears for this scenario where the q H are decoupled. The expected event rates from H H pair production are so small that no feasible bound can be set from this 8 As the mass of the H and qH are identical for κq = κ , see eq. (2.22), the iso-mass contours for H appear at the same position as those for qH in the previous benchmark. It is only in the case of T -parity violation that we can observe an exclusion for very small values of κ which follows the m( H ) = 1 TeV mass contour, caused by a slight increase of the expected multilepton event rates from leptonic gauge boson decays, see our discussion above. All in all we observe that the presence or absence of the q H partner particles plays a very important role for determining the LHC limits in the low κ region, i.e. for κ 1.5. However, the heavy gauge boson sector also puts very important constraints on f and as the collider phenomenology of this sector is almost, but not completely, independent of the heavy fermion sector, the absolute bounds on f are very robust against choices for the heavy quark sector. In fact, they tend to become stronger as the presence of light q H decreases the V H V H production cross section. Light H In our third main benchmark scenario we again scan κ q and thereby the mass of the heavy quarks. However, the degeneracy with the heavy lepton sector is now lifted by fixing κ = 0.2. Since we know from the results of the previous benchmark that no bound can be set if we search for the direct production of H alone, we only consider the effects of the light H with respect to the branching ratio of the heavy gauge bosons. Note again from our results in section 5.3 that while in the Fermion Universality model we dominantly expect bosonic decays Z H → hA H , W H → W A H , the Light Leptons benchmark mainly produces leptonic decays In figures 19-22 we show the results of this benchmark, again for all four subscenarios. • As in the Fermion Universality scenario, we observe two main regions of exclusion which intersect at f ≈ 1.6 TeV and κ q ≈ 1.2. • For small κ q and large f , we again observe a q H dominated bound similar to the one seen in the Fermion Universality scenario. The analysis coverage map reveals that for κ q > 0.5, the bound is set by a multilepton analysis. 
The already mentioned 3 signal region is very sensitive to the final state topology q H q H → qqW H W H → qq A H A H with one of the leptons not being identified and the other three leptons being highly boosted due to the large q H − W H mass splitting. For κ q < 0.5, the heavy vector bosons start predominantly decaying into hadronic final states -see figure 8 -in which case multijet final states start becoming more sensitive and reproduce the same bound as in the Fermion Universality scenario. • The q H dominated bound is again insensitive to the presence of the heavy top partners. Furthermore, it again slightly weakens in the presence of T -parity violation as the most sensitive final state stays identical but the / E T cut efficiency drops due to the A H decaying. • For larger values of κ q , we again observe a nearly κ q -independent absolute bound on f . This bound is again produced from direct production of heavy vector bosons and shows a small κ q dependence due to the cross section dependence of this parameter, see our discussion before. Compared to the Fermion Universality scenario, the limit has become tremendously stronger due to the presence of light H and improves to f 1.6 TeV for κ q ≈ 1.5 and to f 2 TeV for κ q 5.0. As the analysis coverage map on the right of figure 19 shows, the vector-boson dominated region is now tested by the multilepton analysis which identifies the boosted leptons from the V H → /νA H decays. As this final state has small Standard Model background contamination -most importantly since the leptons do not originate from W or Z decays -it produces a very clean signal and thus leads to a very strong exclusion. • In this scenario, the presence of the heavy top partners does not improve the bound derived from heavy vector boson production at all: the bound derived from T − production -see the Fermion Universality benchmark discussion -is only sensitive to scales f 1350 GeV and thus cannot compete with the much stronger bound set from the vector boson sector. Furthermore, the multilepton final state produced from the V H decays do not get any contributions from any of the expected T − decays. The limit is therefore unaffected. • As the final state leptons from the V H decays already produce a very clean signal, a possible decay of the A H induced by T -parity violation only results in a smaller / E T cut efficiency as explained before. Thus, we only observe that the bound is slightly weakened in models with T -parity violation. To summarize the results of this benchmark, we observe that a lighter H sector changes the decay patterns of the heavy vector bosons and this globally leads to a significant improvement on the bounds. This improvement even overcomes possible contributions from the heavy top partner sector and is only slightly weakened by the presence of Tparity violation. Therefore we again conclude that the lower limits on f derived in the Fermion Universality benchmark from searches for heavy vector bosons are very robust regarding changes in the heavy fermion sector. Note that for this benchmark we chose a specific value of κ and thus in fact only analyzed the impact of light H for a particular assumption for their masses. It is thus worthwhile discussing how changing κ would affect our results: • In our benchmark, the branching ratio V H → H was nearly 100 %. Clearly, the partial decay width V H → H depends on the H mass and thus the leptonic branching ratio may drop if we increase the heavy lepton mass. 
The resulting bounds would then gradually shift from those derived in the Light H to those in the Fermion Universality benchmark. Table 5. Small summary of all √ s = 14 TeV analyses which appear in the discussion of our results. More details, also on other tested analyses, are given in table 6 in the appendix. Prospects for √ s = 14 TeV As we observed in our results, the update from a center-of-mass energy of √ s = 8 TeV to √ s = 13 TeV and the increase of integrated luminosity between LHC Run 1 and Run 2 yielded significantly stronger bounds for all of the considered benchmark scenarios. In that context, the interesting question arises to which extent the sensitivity is expected to further improve at a high luminosity LHC running at √ s = 14 TeV. For that purpose, we used the ATLAS high luminosity studies implemented in CheckMATE to determine the expected bounds at very high statistics, L = 3000 fb −1 . This gives a rough estimate for the overall sensitivity range of the Large Hadron Collider to the Littlest Higgs Model in general. The corresponding cross sections are shown in figures 31, 32 in the appendix A. Again, all analyses which have been used by this study are listed in table 6 in the appendix and we provide a shortened version in table 5 which only lists those analyses which appear in our discussion of the most sensitive analyses. As one can see in the full table in table 6, at this stage the list of high luminosity analyses is very limited as only few official experimental and some phenomenological high performance studies have been implemented so far. These cover the most important topologies, i.e. missing transverse momentum with either a monojet, multijet or multileptons final state, however these old experimental studies use far fewer, less optimized signal regions compared to their counterparts at lower center-of-mass energies. Hence, our results should only be understood as rough approximations and much more sophisticated studies, especially on the experimental side, would be required to get results which are qualitatively at the same level as our earlier, detailed re-interpretation of existing experimental data. Since the number of tested topologies is fairly small and is not expected to cover all the various final states we discussed before, we do not consider the full set of benchmark models introduced previously at this stage. Instead, we concentrate on the results for TPC × Heavy T ± for the three scenarios Fermion Universality, Heavy q H and Light H . These give a good overview to the general expected sensitivity at high statistics. As can be seen from the results discussed above, the macroscopic structure of the excluded parameter areas are very similar for cases with and without T -parity violation and with the heavy top partners included or not. Hence, one can apply the phenomenological discussions of the previous sections to appoximately determine the excluded areas for the other benchmark cases which we do not explicitly discuss in the following. The JHEP05(2018)049 In general, the structure of the bounds is kept, i.e. there is a (nearly) κ-independent bound for small f and larger values of κ while there is a bound which follows the iso-mass contours for large values of f . • In the Fermion Universality scenario, the q H mass bound for large values of f increases by 1 to 1.5 TeV and excludes heavy quarks with masses m(q H ) 4 TeV for f ≈ 2 TeV and m(q H ) 3 TeV for f ≈ 4 TeV. 
As before, this bound originates from the high luminosity version of a multijet plus / E T search designed to find heavy squarks or gluinos in supersymmetry. The V H dominated bound for large values of κ probes heavy vector boson masses of order 1 TeV. Compared to the previous result determined at 13 TeV, the most sensitive analysis is now quoted to be the multilepton instead of the multijet final state. To reduce the contamination from pileup which is expected to become an important issue for the high luminosity LHC, the multijet final states require the scalar sum of the transverse momenta of all reconstructed objects to exceed 3 TeV. In the V H dominated region, the expected signal V H → A H V, V → hadrons with m(V H ) ≈ 1 TeV typically does not pass this constraint and for example requires a boosted final state due to a high p T jet from initial-state radiation (ISR) whose requirement significantly reduces the expected event rate. • The Heavy q H scenario at √ s = 14 TeV does not significantly improve the H -induced bound for small values of κ. We expect a weak bound which follows the H mass contour and excludes masses of order m( H ) ≈ 1 − 1.5 TeV. This bound originates from an extrapolated search for dilepton final states. This is however only a minor improvement to the bound which can be set already from today's result. As in the previous benchmark scenario, the V H produces a κ-independent bound of m(V H ) 1 TeV. • Lastly, the bound in the Light H scenario only improves little compared to the current 13 TeV results. In the large f region, the most sensitive analysis channel at LHC Run 2 is a multijet final state with one additional lepton which has a particularly small Standard Model contamination. Unfortunately, we do not have a high luminosity version of this analysis available and can only consider final states with many jets but no final state lepton. As the characteristic feature of the Light H scenario is the appearance of at least one lepton in all relevant final state decay chains, we lose sensitivity due to our restricted amount of available analyses. For larger values of κ, the bound on m(V H ) only increases by about 100 GeV, determined from a search which requires two leptons in the final state. This analysis is designed to target either of the two supersymmetric topologies˜ ˜ → χχ orχ +χ− → W Wχχ followed by leptonic W decays. Though some of the final states produced by our benchmark scenario pass the constraints set for these particular topologies, none of the signal regions are specifically designed for our topology. Thus, again our bound does not represent the full sensitivity which can be expected from the high luminosity LHC but significant aditional effort would be required to determine the necessary experimental predictions for our desired topologies. Comparison of LHC limits with bounds from electroweak precision observables In the previous section we discussed the bounds which can be put on various benchmark scenarios of the Littlest Higgs Model with T -parity (and its possible violation). As explained in section 2, an appealing property of this model is its considerably small amount of fine tuning in the Higgs sector. Moreover, not only do the null results of searches for these new T -odd particles set bounds on this model but also, see section 3, electroweak JHEP05(2018)049 precision observables (EWPO) put tight constraints on f and κ. 
In the following we want to combine these three pieces of information, putting a particular focus on the relevance of the newest LHC results for the total combined bound on the model. Note that bounds derived from the 4-fermion operators summarized here as being part of the EWPO cannot be considered as stringent as those from direct LHC searches. There could be different operators depending on the details of the UV completion (partially) cancelling each other, or the operators could have accidentally small Wilson coefficients making the bounds derived on them marginal. In figures 26-28 we show compilations of bounds from electroweak precision observables, see section 3, the amount of fine tuning in the Higgs sector according to eq. (2.27) and the 8 and 13 TeV LHC bounds discussed in section 6. We only show results for the case of T -parity conservation as electroweak precision observables are not affected by the presence of T -parity violating operators and the respective TPV collider bounds are very similar, see our results of the previous section. In each figure we show the results for Heavy T ± scenario, i.e. R = 0.2, and Light T ± scenario, i.e. R = 1.0. Note that the choice of this parameter has an important impact on the fine tuning measure ∆. In general, we observe that LHC results produce an absolute lower bound on f for large κ and a lower bound which approximately follows f · κ for small κ. Electroweak precision data, however, tend to produce upper bounds which approximately follow the ratio f /κ. Therefore, we have two very complementary bounds which together exclude a considerably large region of parameter space. This complementarity mostly originates from the opposite dependence of the respective bounds on κ and R: the collider data produce stronger bounds for lighter particles and therefore show their largest sensitivity for small values of κ and/or R = 1.0. Loop corrections to precision observables, however, increase if the corresponding coupling constants increase and therefore show their strictest bounds for large values of κ and R = 0.2. 10 We now move the general discussion to some indiviual results of particular benchmark models: • In the case of Lepton Universality, we observe that the updated collider results from 13 TeV are only relevant in the regions dominated by q H and T ± production. Most importantly, bounds derived from V H V H production only cover the region f < 1 TeV, κ > 2 and are not competitive with the limits from electroweak precision data which cover the same region in the Light T ± scenario and an even much larger region f < 1.3 TeV, κ > 1.5 in the case of Heavy T ± . In the case of Heavy T ± , the combined bound from electroweak precision observables and q H production excludes symmetry breaking scales f below 1.3 TeV, independent of κ, and by that requires a fine tuning below 0.5 %. If the heavy top partners T ± are lighter, the EWPO bounds weaken. However, at the same time the collider bounds increase, resulting in approximately the same bound of f > 1.3 TeV as before which however corresponds to a slightly smaller fine tuning of approximately 0.6 %. JHEP05(2018)049 Judging from the two benchmark scenarios for T ± , we conclude that the combination of electroweak precision data and newest LHC results does not allow for values of f < 1.3 TeV for values of R ∈ [0.2, 1.0]. 
As the EWPO bounds become stronger for heavier T± and the collider result becomes stricter for lighter T±, the lower bound on f should become even stricter for any value of R outside this range.
• The combined results of the Heavy q_H scenario show a similar complementarity effect to the previous model: whilst the LHC results are significantly weakened if the heavy quarks are decoupled, the bounds from electroweak precision observables become even stricter due to their dependence on κ², see section 3, and thus become stronger if κ_q = 3.0 is fixed. Here, the bounds implicitly depend on the value of R and exclude values of f below 1.5 TeV for R = 1.0 (Light T±), and values below 2 TeV for R = 0.2 (Heavy T±). Even in the case of Light T±, the LHC result cannot compete. Still, the bounds are already very close to the EWPO limit, such that we again conclude that any other value of R should not produce a significantly weaker, but potentially an even stronger, bound on f if the mass of the T± is chosen even lighter. Note that for very small values of κ_ℓ, the LHC bound derived from V_H → ℓ_H ℓ pushes the lower bound on f up by a few hundred GeV, but not considerably. The minimal allowed fine tuning is around 0.5 % for the Light T± scenario and reduces to approximately 0.25 % for the Heavy T± scenario.
• For the Light ℓ_H scenario, the complementarity between LHC and EWPO results appears in the opposite direction as before: due to the small value of κ_ℓ, electroweak precision observables are slightly weaker than in the previous benchmark cases. However, at the same time the collider bounds improve significantly due to the very distinctive decay topology which produces several hard leptons, see our discussion in the previous section. In this benchmark, the lower bound f > 1.7 TeV originates solely from the collider result and is independent of the details of the heavy top partner sector. It is only the region with large values of f ≳ 1.8 TeV, κ ≳ 2.5 where the EWPO bound may become more relevant, depending on the chosen value of R. The minimal allowed fine tuning is around 0.35 % in the Heavy T± and 0.4 % in the Light T± scenario, respectively.
All in all, we observe that without taking the LHC data into account, fine tuning above 1 % would still be allowed in regions with light q_H and light T±. These regions, however, are nowadays testable at collider experiments, and results from the first LHC run at 8 TeV already pushed the fine tuning to the sub-percent level. Using the updated results acquired during the √s = 13 TeV period, limits derived from the Large Hadron Collider become more and more severe. Though the precise position of the total bound depends on the details of the heavy fermion sector, the heavy top partner masses, and the presence or absence of T-parity violation, we observe that due to their complementary behavior regarding the EWPO bounds, values of f below 1.3 TeV and fine tuning above 0.6 % seem to be excluded by now. Within our considered benchmark scenarios we observe that Fermion Universality is the most weakly constrained model. However, the newest 13 TeV results show a significant improvement already when put in comparison with the earlier 8 TeV bounds. Furthermore, our approximate future sensitivity study in section 6.4 gives us reason to expect an even further improvement by LHC results in the near and far future, putting the Littlest Higgs Model with T-parity more and more to the test.
Summary In this study we reinterpreted null results from LHC searches for physics beyond the Standard Model in the context of the Littlest Higgs Model with conserved and broken T-parity. This model is an elegant implementation of global collective symmetry breaking combined with a discrete symmetry to explain the natural lightness of the Higgs boson as a (pseudo-)Nambu-Goldstone boson. Bounds on the symmetry-breaking scale f from data until 2013 were still as low as roughly 600 GeV. Using the degrees of freedom of the full theory, we defined a set of benchmark scenarios which make different assumptions about the mass hierarchies in the heavy fermion sector, the masses of the heavy top partners, and the possible presence of small T-parity violating operators. By making use of the collider phenomenology tool CheckMATE, we systematically analyzed all relevant topologies at the LHC and derived bounds for all benchmark scenarios, excluding those regions which would have predicted a signal in any of the many considered search channels. We also give rough estimates for the bounds expected from a high luminosity LHC running with √s = 14 TeV and 3 ab⁻¹ of integrated luminosity. Our results show that q_H pair production, V_H pair production and T− pair production, respectively, produce strong bounds in the model parameter space due to null results in searches dedicated to squarks and electroweakinos in supersymmetry. Most importantly, searches which require a large number of hard jets and a significant amount of missing transverse momentum produce the strongest results in regions where q_H and T− production is important, whilst searches for final states with multiple leptons and missing energy become more relevant as soon as heavy vector boson production is the dominant channel. Color-neutral heavy leptons are mostly irrelevant for the LHC, unless they are light enough to appear in decay topologies like V_H → ℓ_H ℓ, in which case they are again largely constrained by searches for multileptons and missing energy. Allowing for a small amount of T-parity violation surprisingly only has a minor impact on the result compared to the case where T-parity is exactly conserved. This can be explained by the fact that in the case of T-parity violation via anomalous WZW terms, A_H decays predominantly into Standard Model gauge bosons whose leptonic decays can produce the required missing energy plus additional hard particles which improve the signal-to-background ratio. As the masses of the particles q_H, ℓ_H, V_H and T± depend differently on the Yukawa-like parameters κ_q, κ_ℓ and R, precise LHC bounds depend on the particular values of these three parameters. On the other hand, all particle masses grow linearly with the symmetry breaking scale f, and we conclude that LHC results from the √s = 13 TeV run exclude any value of f below 950 GeV at 95% confidence level. The weakest bound appears in a scenario where only the heavy gauge bosons are kinematically accessible and all Yukawa parameters are such that the other particles are too heavy for LHC observability. Altogether, this constitutes an improvement of almost 400 GeV compared to the LHC run 1 data as a constraint on the symmetry breaking scale f in the Littlest Higgs model with T-parity. Even stronger bounds are possible if more details about the heavy fermion sector are known, and these limits can easily be derived from our exhaustive set of results for the various benchmark scenarios.
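The linear scaling of the heavy spectrum with f mentioned above can be made concrete with a small numerical sketch. The relations used below are the standard leading-order expressions quoted in the general Littlest Higgs with T-parity literature, not formulas taken from this paper: heavy gauge bosons m(W_H/Z_H) ≈ g f and m(A_H) ≈ g' f/√5, mirror fermions m(q_H) ≈ √2 κ_q f and m(ℓ_H) ≈ √2 κ_ℓ f; v/f corrections and the T± sector are deliberately omitted, and the coupling values are approximate.

```python
import math

# Leading-order mass relations of the Littlest Higgs model with T-parity
# (standard literature expressions; v/f corrections and the T+/T- sector omitted).
G = 0.65        # approximate SU(2) gauge coupling g
G_PRIME = 0.36  # approximate U(1) gauge coupling g'

def heavy_spectrum(f_gev, kappa_q, kappa_l):
    """Return approximate heavy-particle masses (GeV) for scale f and Yukawa-like couplings."""
    return {
        "W_H/Z_H": G * f_gev,                    # heavy SU(2) gauge bosons
        "A_H": G_PRIME * f_gev / math.sqrt(5),   # heavy hypercharge partner
        "q_H": math.sqrt(2) * kappa_q * f_gev,   # mirror quarks
        "l_H": math.sqrt(2) * kappa_l * f_gev,   # mirror leptons
    }

# Because every mass grows linearly with f, a mass exclusion translates directly
# into a lower bound on f (e.g. the f > 950 GeV limit quoted in the summary).
for f in (950.0, 1300.0, 1700.0):
    masses = heavy_spectrum(f, kappa_q=1.0, kappa_l=1.0)
    print(f"f = {f:6.0f} GeV ->", {name: round(m) for name, m in masses.items()})
```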
For parameter regions where either heavy quarks or heavy leptons are accessible, the symmetry breaking scale f must be larger than 1.3 TeV and the fine tuning cannot be better than 0.4 %. The LHC direct search limits are complementary to those derived from four-fermion contact interactions, as the former only constrain light particles with small Yukawa couplings while the latter put limits on sizable contributions from large Yukawa couplings. The bounds from these operators are, however, not as tight as those from direct LHC searches, as they depend on the details of the UV completion, i.e. there could be cancellations among different operators or accidentally small Wilson coefficients. Though the Littlest Higgs model with T-parity has been constrained much more strongly by LHC run 2 data, it is still a rather natural solution to the shortcomings of the electroweak and scalar sector, and we will need full high-luminosity data from the LHC to decide whether naturalness is actually an issue of the electroweak sector or not. A qualitative improvement of all bounds on the model, particularly in the Higgs sector and the heavy lepton sector, might need the running of a high-energy lepton collider (or a hadron collider at much higher energy).
Table 6. Full list of all CheckMATE analyses used for this study. The column labelled #SR gives the number of signal regions. Entries for the integrated luminosities L_int are given in fb⁻¹.
Table 6 gives the full list of used CheckMATE analyses. The first column shows the CheckMATE identifier, the second the purpose for which the analysis was designed. The last three columns show the number of signal regions in the corresponding analysis (marked #SR), the integrated luminosity for that analysis, and the reference to the publication or conference notes from the experimental collaborations. More details on the respective analyses and corresponding validation material can be found at http://checkmate.hepforge.org. High luminosity analyses marked with * do not correspond to official experimental studies but have been implemented by the CheckMATE collaboration. More information can be found in the respective references. Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
21,056
2018-01-19T00:00:00.000
[ "Physics" ]
Monitoring of Interfractional Proton Range Verification and Dosimetric Impact Based on Daily CBCT for Pediatric Patients with Pelvic Tumors Simple Summary The research highlights the application of synthetic CT images, which are created by deforming planning CT scans to match daily CBCT anatomy in interfractional proton therapy for pediatric patients with pelvic tumors. The objective of the study is to identify changes in the proton path length and determine the impact of anatomical changes on the treatment plan’s quality. The findings reveal that the water equivalent path length method can effectively estimate proton range deviations on synthetic-CT images. The daily synthetic CT images can also be utilized as a surrogate to calculate dose and predict dosimetric changes in the plan of the day. This approach eliminates the need for frequent rescanning, thereby making the adaptive therapy process more streamlined and less burdensome for young patients. The results have the potential to improve the precision of proton therapy, hence paving the way for more effective treatments. Abstract (1) Background: Synthetic CT images of the pelvis were generated from daily CBCT images to monitor changes in water equivalent path length (WEPL) and determine the dosimetric impact of anatomy changes along the proton beam’s path; (2) Methods: Ten pediatric patients with pelvic tumors treated using proton therapy with daily CBCT were included. The original planning CT was deformed to the same-day CBCT to generate synthetic CT images for WEPL comparison and dosimetric evaluation; (3) Results: WEPL changes of 20 proton fields at the distal edge of the CTV ranged from 0.1 to 12 mm with a median of 2.5 mm, and 75th percentile of 5.1 mm for (the original CT—rescanned CT) and ranged from 0.3 to 10.1 mm with a median of 2.45 mm and 75th percentile of 4.8 mm for (the original CT—synthetic CT). The dosimetric impact was due to proton range pullback or overshoot, which led to reduced coverage in CTV Dmin averaging 12.1% and 11.3% in the rescanned and synthetic CT verification plans, respectively; (4) Conclusions: The study demonstrated that synthetic CT generated by deforming the original planning CT to daily CBCT can be used to quantify proton range changes and predict adverse dosimetric scenarios without the need for excessive rescanned CT scans during large interfractional variations in adaptive proton therapy of pediatric pelvic tumors. Introduction Intensity-modulated proton therapy (IMPT) is currently the most advanced proton delivery technique to create a highly conformal dose distribution to tumors while sparing adjacent normal tissues; however, interfractional anatomical variations along the proton beam path can adversely impact dosimetry leading to an insufficient dose to the tumor and an unplanned dose to normal tissue [1]. To address this challenge, water equivalent path length (WEPL) analysis has been developed to accurately determine the path length of the proton beam through different tissues. WEPL is defined as the distance a proton would travel through water to reach a given point in a tissue. By using WEPL analysis, the energy of the proton beam can be adjusted to ensure that the tumor receives the appropriate dose of radiation while minimizing damage to surrounding healthy tissues [2]. Several techniques are used to measure WEPL, including range measurements and diode detectors [3]. Monte Carlo simulations are also used to model the behavior of protons as they interact with different tissues [4]. 
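For reference, the WEPL of a point P along a beam can be written compactly as the line integral of the stopping power relative to water along the beam path; the notation below is ours and is only a restatement of the definition given above.

```latex
\mathrm{WEPL}(P) \;=\; \int_{\text{body surface}}^{P} \mathrm{RSP}\big(\mathbf{r}(l)\big)\,\mathrm{d}l ,
```

where RSP is the proton stopping power of tissue relative to water and l is the geometric path length along the beam direction.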
Image-Guided Radiation Therapy (IGRT) has emerged as a cornerstone in modern radiation oncology, enabling clinicians to adapt treatment plans to the dynamic anatomical changes that patients undergo during the course of radiotherapy [5]. However, the dependence on daily imaging for precise targeting introduces challenges in terms of both resource utilization and patient comfort. One innovative solution to address these challenges is the creation of synthetic CT images from daily IGRT images [6]. Synthetic CT is a computational technique that leverages deformable image registration methods to generate high-fidelity CT-like images from different imaging modalities, such as CBCT images. These synthetic CT images offer a representation of the patient's anatomy at each treatment session without the need for repeated CT scans, minimizing patient exposure to ionizing radiation and reducing the burden on imaging resources. Furthermore, while IGRT has been used as a standard practice to address interfractional variations by repositioning the patient based on CBCT images acquired immediately before radiation therapy (RT) delivery [7], it is important to note that IGRT repositioning techniques do not adequately address interfractional uncertainties such as body circumference changes, relative geometric changes between targets and organs at risk (OAR), and anatomical changes (e.g., deformations) throughout the course of proton therapy; if left uncorrected, these may reduce the probability of tumor control [8]. In pediatric patients, changes in superficial or deep soft tissues may occur due to weight gain or loss, systemic steroid administration, or IV hydration for chemotherapy, resulting in a significant impact on plan quality during radiation therapy [9]. WEPL analysis could be an ideal and practical verification method for changes due to weight gain or loss, owing to the homogeneous expansion or contraction of the external body contour. By allowing for early detection and correction of changes in body contour and tissue distribution, WEPL analysis can contribute to improved treatment efficacy and patient outcomes. In this study, we employed the WEPL method as a surrogate measure for proton range verification and examined the impact on the plan quality of pediatric pelvic treatments. To date, no studies have been published assessing interfractional WEPL in pediatric patients receiving pencil beam scanning proton therapy to pelvic disease sites.
Materials and Methods In the methods section, the design of the WEPL calculation, synthetic CT generation, and phantom validation is explained in Section 2.1. Patient selection and image data are described in Section 2.2. Treatment planning aspects are outlined in Section 2.3.
WEPL Calculation, Synthetic-CT Generation and Phantom Validation The WEPL was calculated up to the distal surface of the clinical target volume (CTV) via the linear integration of the relative proton stopping power (RPSP) using a stoichiometric calibration curve [10]. The RPSP values were linearly integrated per voxel along the beam's eye view (BEV) direction from the patient's external body surface to the distal surface of the CTV [11,12]. The generated 2D map at the distal surface of the CTV was called the WEPL map. Phantom validation was conducted to verify the accuracy of the WEPL calculation.
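As a rough illustration of the per-voxel integration just described, the sketch below accumulates RSP values along a single beam's-eye-view ray through an RSP volume, from the external body surface to the most distal CTV voxel on that ray. It is a simplification under stated assumptions: the RSP volume is assumed to have been precomputed from CT numbers via the stoichiometric calibration, the ray is marched on a regular millimeter grid, and the array names, masks, and step size are illustrative placeholders rather than the clinical implementation.

```python
import numpy as np

def wepl_to_distal_ctv(rsp, body_mask, ctv_mask, entry_mm, direction,
                       step_mm=1.0, voxel_mm=(1.0, 1.0, 1.0)):
    """Integrate relative stopping power along one beam's-eye-view ray and return
    the WEPL (mm) accumulated up to the most distal CTV voxel on that ray."""
    pos = np.asarray(entry_mm, dtype=float)           # ray start, in mm
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)                          # unit beam direction
    voxel_mm = np.asarray(voxel_mm, dtype=float)

    wepl = 0.0
    distal_wepl = None
    for _ in range(int(2000 / step_mm)):               # hard stop: at most ~2 m of ray
        idx = tuple(np.floor(pos / voxel_mm).astype(int))
        if any(i < 0 or i >= n for i, n in zip(idx, rsp.shape)):
            break                                      # ray left the image volume
        if body_mask[idx]:
            wepl += float(rsp[idx]) * step_mm          # RSP x geometric step = water-equivalent step
        if ctv_mask[idx]:
            distal_wepl = wepl                         # keep updating until the last (distal) CTV voxel
        pos = pos + d * step_mm
    return distal_wepl
```

Repeating this for every ray of a field yields the 2D WEPL map at the distal CTV surface; subtracting two such maps (e.g., original CT versus synthetic CT) gives the per-pixel range deviation analyzed in this study.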
An anthropomorphic end-to-end verification 'STEEV' phantom (CIRS Inc., Norfolk, VA, USA) was used to simulate a scenario of 1 cm of weight gain by applying a bolus of 1 cm thickness to the surface of the phantom's external body. The STEEV phantom was first imaged with diagnostic CT using a Philips IQon Spectral CT (Philips Healthcare, Cleveland, OH, USA) with and without bolus. Secondly, the STEEV phantom was scanned with bolus using an in-room robotic CBCT device (Hitachi, Ltd., Tokyo, Japan) for image-guided radiation therapy purposes. The commercial image manipulation software 'MIM' version 7.3.2 (MIM Software Inc., Cleveland, OH, USA) was used to deform the CT images without bolus, generating synthetic-CT images based on the CBCT anatomy with bolus. WEPL maps were generated to compare range differences between the original CT without bolus and the synthetic CT with bolus, and also between the rescanned CT with bolus and the synthetic CT with bolus, as shown in Figure 1.
Figure 1. The 2D distal projection of the CTV was calculated on the original CT without bolus (A) and the synthetic CT with bolus based on CBCT anatomy (B); the difference between the two was found to be 10.1 ± 1.1 mm (mean ± SD) in terms of WEPL (C). The 2D distal projection of the same CTV was calculated on the rescanned CT with bolus (D) and the synthetic CT with bolus based on CBCT anatomy (E); the difference between the two was found to be 0.1 ± 0.2 mm (mean ± SD) in terms of WEPL (F).
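The synthetic CT generation itself was performed with the commercial MIM deformable image registration. Purely as a conceptual stand-in, the sketch below shows how a planning CT could be resampled through a given per-voxel displacement field with SciPy; the displacement field is assumed to come from whatever deformable registration algorithm is in use, and the array names are illustrative only.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_ct(planning_ct, displacement_vox):
    """Warp a planning CT (HU values) through a per-voxel displacement field.

    planning_ct      : 3D array of CT numbers
    displacement_vox : array of shape (3, *planning_ct.shape), displacement in voxels,
                       e.g. produced by deforming the planning CT to the daily CBCT anatomy
    """
    grid = np.indices(planning_ct.shape).astype(float)   # identity sampling grid
    sample_coords = grid + displacement_vox              # pull-back coordinates per voxel
    # Linear interpolation of HU values at the deformed coordinates yields the synthetic CT.
    synthetic_ct = map_coordinates(planning_ct, sample_coords, order=1, mode="nearest")
    return synthetic_ct
```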
The Patient Selection and Image Data Ten pelvis patients who had previously undergone a course of pediatric proton therapy and had been CT-simulated more than once due to external tissue discrepancies were selected for this retrospective study. Institutional Review Board (IRB) approval was obtained prior to analysis. The imaging data of the original and rescanned CTs were acquired for the 10 patients using a Philips IQon spectral CT with the clinical helical pelvis scan protocol (120 kVp, auto collimation, 500 mm field of view, 512 × 512 matrix size, and Dose Right Index of 20). Synthetic CT images were created on the same day as the rescanned CT images to keep the anatomy similar during deformable image registration of the daily CBCT images using a normalized intensity-based algorithm. Table 1 outlines the patients' diagnoses, age, sex, treatment site, fractionation, beam orientation, and the number of CT rescans acquired during the course of pediatric proton therapy in addition to the original CT simulation. The verification of the water equivalent path length deviation from the initial plan was conducted using the WEPL method. This procedure was executed on the original CT, rescanned CT, and synthetic CT images. The reference WEPL value was established from the original CT images, serving as the benchmark. Subsequent WEPL values were derived from the synthetic CT images generated from the daily CBCT images and also from the rescanned CT images. All three types of images (the original CT, rescanned CT, and synthetic CT) were integrated into the study's dose calculations and plan quality assessments. Through this comprehensive approach, the study aimed to investigate the influence of anatomical variations on daily plan quality. A comparative statistical analysis was undertaken to assess the agreement of the two methods for validating WEPL: the original CT versus the rescanned CT, and the original CT versus the synthetic CT. To establish the commensurability of the synthetic CT images with the rescanned CT, a statistical t-test was employed. This method was used to ascertain whether the results produced by these methods are significantly similar or whether there is a noteworthy distinction between the two. The p-value, a key quantity in this context, reflects the likelihood of obtaining test results at least as extreme as the observed outcome, assuming the null hypothesis to be true. When there is no statistically significant difference between the compared groups, the p-value generally exceeds the chosen significance level (α). The significance level, determined prior to the statistical assessment, serves as a threshold for evaluating the outcome of the test. A commonly employed significance level (α) is 0.05 (5%), denoting the threshold at which findings are considered statistically significant.
Treatment Planning Aspects The treatment plans of the 10 patients were designed using pencil beam scanning proton beams from a commercial proton therapy system (PROBEAT-V, Hitachi America, Ltd., Santa Clara, CA, USA). Beam arrangements comprised two posterior-oblique beams to treat lower abdomen and pelvis tumors with a plan goal of CTV D95% = 95%.
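A minimal version of the statistical comparison described above, using SciPy; the per-field WEPL difference arrays here are hypothetical placeholders for illustration only, while the study's actual values are summarized in the Results.

```python
import numpy as np
from scipy import stats

# Per-field WEPL deviations (mm), hypothetical example values:
# one array for (original CT - rescanned CT), one for (original CT - synthetic CT).
d_rescan = np.array([0.3, 1.2, 2.5, 4.8, 10.5, 0.8, 3.1, 6.0, 2.0, 11.9])
d_synth  = np.array([0.4, 1.0, 2.4, 4.6, 10.1, 0.9, 3.0, 5.7, 2.2, 10.0])

r, _ = stats.pearsonr(d_rescan, d_synth)               # agreement between the two verification methods
t_stat, p_value = stats.ttest_rel(d_rescan, d_synth)   # paired t-test on the same fields

alpha = 0.05
print(f"Pearson r = {r:.2f}, paired t-test p = {p_value:.2f}")
print("No significant difference between methods" if p_value > alpha
      else "Methods differ significantly")
```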
In brief, 3% range and 3 mm setup uncertainties were used for robust optimization. When large interfractional deviations such as weight gain or loss were observed, a replan was created based on the rescanned CT using the same planning scheme and beam angles. Based on the treatment planning statistics, the numbers of proton beam energy layers and spots were 956 and 175,113, respectively, as shown in Figure 2. The mean and SD of the energy layers were 121.1 MeV ± 30.3 MeV, which corresponds to a 10.8 mm water equivalent distance. A total of 3% of range error in the planning was expected to compensate for 3 mm.
Results The WEPL deviations determined from the original CT to the rescanned CT scans ranged from 0.1 to 12 mm at the target's distal edge. A total of 5 out of 10 cases showed significant WEPL deviations from the original CT to the rescanned CT and the synthetic CT, averaging 10.4 mm and 9.8 mm, respectively. For these dramatic changes, the protons' spread-out Bragg peak (SOBP) regions were shifted and caused significant reductions in the dosimetric coverage of CTV Dmin, averaging 23.1% and 20.7% for the rescanned and synthetic CT plans, respectively. Statistical analysis of the WEPL differences for all 10 cases between (original − rescanned CT) and (original − synthetic CT) revealed that the two methods were strongly correlated (r = 0.93) and showed no statistically significant difference (p = 0.81, α = 0.05), with a mean difference of −0.09 ± 1.65 mm. Table 2 summarizes the patient cohort and outlines the WEPL differences at the distal edge of the CTV between the original CT and the rescanned CT and between the original CT and the synthetic CT. It also reveals the impact on plan quality for CTV target coverage in terms of the D95%, Dmin, and Dmax parameters from the treatment planning. Figure 3 demonstrates the WEPL differences per field at the distal edge of the CTV between the plans of the original CT and the rescanned CT and of the original CT and the synthetic CT, with the annotation of the maximum Dmin differences for the CTV. It was determined that within a ±3 mm water equivalent difference, the maximum Dmin deviations from the original plans were less than 5%, whereas beyond a 3 mm WEPL difference, the CTV coverage dropped significantly for Dmin when compared to the original plans.
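The coverage metrics quoted above (D95%, Dmin, and Dmax of the CTV) can be extracted from any dose grid and structure mask; a minimal sketch, assuming the dose array is expressed as a percentage of the prescription and shares the mask's grid:

```python
import numpy as np

def ctv_coverage(dose, ctv_mask):
    """Return D95%, Dmin and Dmax of the CTV from a dose grid (same shape as the mask)."""
    ctv_dose = dose[ctv_mask]
    return {
        "D95%": float(np.percentile(ctv_dose, 5)),  # dose received by at least 95% of the CTV volume
        "Dmin": float(ctv_dose.min()),
        "Dmax": float(ctv_dose.max()),
    }

# Comparing these metrics for the original plan and for a verification plan recalculated on the
# rescanned or synthetic CT directly quantifies the coverage loss caused by WEPL changes.
```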
For fields #1 and #2 of the first patient, Figure 4 shows the transverse plane of the disease site, where the rescanned CT was registered via bony anatomy and overlaid on top of the original CT to emphasize the significant tissue discrepancy of 12 mm. Due to the weight gain, the proton SOBP was pulled back more posteriorly and caused catastrophic dose degradations, revealed in the DVH curves of the CTV. The analysis of the synthetic CT confirmed a 10.1 mm WEPL deviation at the distal edge of the CTV from the original CT plan.
Table 2. 20 fields belonging to 10 patients were analyzed for the WEPL differences between the original CT and the rescanned CT and between the original CT and the synthetic CT, with the impact on plan quality in terms of CTV D95%, Dmin and Dmax.
Discussion Adaptive radiation therapy offers the promise of tailoring radiation treatment plans through a "plan of the day" approach, utilizing daily imaging from IGRT to ensure accurate delivery. However, several substantial barriers must be navigated for successful clinical integration of this approach. The complexity of the workflow, involving daily imaging, plan adaptation, and quality assurance, can strain resources and necessitates efficient coordination between clinical teams. Generating treatment plans daily can lead to time and resource constraints, impacting schedules and increasing the workload for clinical staff. Rigorous quality assurance processes are vital to ensure the safety and accuracy of newly adapted plans. Dose accumulation and delivery verification methods must be developed to accurately assess the delivered dose against the planned dose over the course of treatment. One study [13] highlights the critical importance of fully characterizing the inherent errors and uncertainties within proton dose calculations and emphasizes the need to establish planning methods that robustly account for these factors.
Efficient data management and IT infrastructure are also essential to handle the substantial amount of imaging and planning data generated. Decision-making algorithms that automatically determine when plan adaptation is necessary based on daily imaging data are among the challenges for a clinic that pursues adaptive radiation therapy. Over the past decade, proton therapy facilities have transitioned from traditional 2D kV imaging to volumetric imaging systems such as CBCT or CT-on-rails for the purpose of image-guided proton therapy. This shift can be attributed to heightened commercial attention and the enhanced accessibility of volumetric imaging systems, in addition to the transition from passively scattered proton therapy to intensity-modulated proton therapy [14]. Given the relatively recent introduction of in-room volumetric imaging within the context of proton therapy, the frequency of its utilization varies across treatment centers. Illustratively, one group [7] presents a practice wherein daily CBCT imaging is employed for verifying the daily positioning of pediatric patients, including bony anatomy and patient volume changes. Conversely, another group [15] proposes a more limited employment of CBCT imaging, utilizing daily CBCT imaging for the initial five fractions and then reducing the frequency to twice weekly for their pediatric craniospinal irradiation patients. The variability in these approaches might be attributed to concerns about the additional imaging dose incurred by patients when transitioning from conventional 2D kV orthogonal imaging to 3D volumetric CBCT imaging. Consequently, it is imperative to carefully weigh the supplementary cost against the potential benefits when determining the optimal frequency of volumetric imaging. When considering adaptive proton therapy, the significance of 3D volumetric CBCT imaging becomes apparent, as it plays a vital role in quantifying anatomical changes within the body through synthetic CT images. These synthetic CT images can serve as surrogates to calculate the daily proton dose. Nevertheless, the frequency of CBCT imaging can be fine-tuned according to the volatility of tumor volumes and the extent of interfractional variations at the disease site. In this study, we applied proton interfractional range verification utilizing daily CBCT for pediatric patients with pelvic tumors. The daily proton range monitoring revealed considerable water equivalent path length variations that could potentially affect the clinical target volume coverage, highlighting the importance of proton range verification in daily clinical practice. Changes in the patient's anatomy during the course of treatment, such as weight gain or loss, tumor progression or regression, or changes due to surgery or other treatments, can affect WEPL estimations. WEPL estimation can be particularly useful in monitoring body circumference changes such as weight gain and loss in pediatric patients undergoing proton therapy, owing to the homogeneous tissue expansion or contraction from the initial anatomy. Estimating WEPL can be challenging when protons pass through various tissues with different densities and compositions, such as the transition from soft tissue to bone or air-filled cavities like the lungs or sinuses [16]. This heterogeneity can lead to uncertainties in WEPL estimation and could affect the accuracy of the proton range.
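One practical use of the daily WEPL monitoring described above is as input to a simple adaptation trigger. The sketch below flags a field for plan review when its distal-edge WEPL deviation exceeds an action level; the ±3 mm level is motivated by the observation in the Results that Dmin degradation stayed below 5% within that band, but the threshold and workflow here are illustrative, not a clinical protocol.

```python
def needs_plan_review(wepl_dev_per_field_mm, action_level_mm=3.0):
    """Flag fields whose daily distal-edge WEPL deviation exceeds the action level.

    wepl_dev_per_field_mm : dict mapping field name -> WEPL deviation (mm) versus the original plan
    Returns the list of flagged fields; an empty list means no adaptation review is triggered.
    """
    return [field for field, dev in wepl_dev_per_field_mm.items()
            if abs(dev) > action_level_mm]

# Example with hypothetical numbers: the second field would be flagged for physics/physician review.
print(needs_plan_review({"PA-left": 1.8, "PA-right": 5.2}))
```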
The CBCT imaging modality may sometimes produce artifacts, which are inconsistencies or distortions in the images that do not correspond to actual anatomical structures [2]. Therefore, phantom validation is crucial prior to using WEPL and synthetic CT algorithms to estimate daily WEPL variations.
Conclusions This study underscores the critical role of proton interfractional range verification based on daily CBCT in pediatric patients with pelvic tumors. By continually monitoring the WEPL, clinicians can adapt to dynamic anatomical changes and ensure optimal dose delivery. This is particularly crucial in a pediatric setting, where physiological changes due to growth and development can significantly influence the proton range. Challenges and concerns remain: the additional radiation exposure from the rescanned CT images warrants careful consideration given the lifetime impact of radiation exposure in pediatric patients. Technological advancements and improved protocols are needed to minimize exposure, and CBCT-based WEPL monitoring may play an important role in tracking daily variations in anatomy. These findings highlight the need for continuous improvement in WEPL estimation techniques and further optimization of daily CBCT-based proton range verification protocols. Balancing the benefits of enhanced treatment accuracy with the necessity to minimize treatment time and radiation exposure remains a pivotal concern in advancing pediatric proton therapy. Informed Consent Statement: Patient consent was waived due to the retrospective nature of the study. Data Availability Statement: The data that support the findings of this study are available from the corresponding author upon reasonable request.
5,689.6
2023-08-22T00:00:00.000
[ "Medicine", "Physics" ]