Measurement-induced entanglement and teleportation on a noisy quantum processor

Google Quantum AI and Collaborators*

Measurement has a special role in quantum theory1: by collapsing the wavefunction, it can enable phenomena such as teleportation2 and thereby alter the 'arrow of time' that constrains unitary evolution. When integrated in many-body dynamics, measurements can lead to emergent patterns of quantum information in space–time3–10 that go beyond the established paradigms for characterizing phases, either in or out of equilibrium11–13. For present-day noisy intermediate-scale quantum (NISQ) processors14, the experimental realization of such physics can be problematic because of hardware limitations and the stochastic nature of quantum measurement. Here we address these experimental challenges and study measurement-induced quantum information phases on up to 70 superconducting qubits. By leveraging the interchangeability of space and time, we use a duality mapping9,15–17 to avoid mid-circuit measurement and access different manifestations of the underlying phases, from entanglement scaling3,4 to measurement-induced teleportation18. We obtain finite-sized signatures of a phase transition with a decoding protocol that correlates the experimental measurement with classical simulation data. The phases display remarkably different sensitivity to noise, and we use this disparity to turn an inherent hardware limitation into a useful diagnostic. Our work demonstrates an approach to realizing measurement-induced physics at scales that are at the limits of current NISQ processors.
The stochastic, non-unitary nature of measurement is a foundational principle in quantum theory and stands in stark contrast to the deterministic, unitary evolution prescribed by Schrödinger's equation1. Because of these unique properties, measurement is key to some fundamental protocols in quantum information science, such as teleportation2, error correction19 and measurement-based computation20. All these protocols use quantum measurements, and classical processing of their outcomes, to build particular structures of quantum information in space-time. Remarkably, such structures may also emerge spontaneously from random sequences of unitary interactions and measurements. In particular, 'monitored' circuits, comprising both unitary gates and controlled projective measurements (Fig. 1a), were predicted to give rise to distinct non-equilibrium phases characterized by the structure of their entanglement3,4,21-23, either 'volume law'24 (extensive) or 'area law'25 (limited), depending on the rate or strength of measurement.

In principle, quantum processors allow full control of both unitary evolution and projective measurements (Fig. 1a). However, despite their importance in quantum information science, the experimental study of measurement-induced entanglement phenomena26,27 has been limited to small system sizes or efficiently simulatable Clifford gates. The stochastic nature of measurement means that the detection of such phenomena requires either the exponentially costly post-selection of measurement outcomes or more sophisticated data-processing techniques. This is because the phenomena are visible only in the properties of quantum trajectories; a naive averaging of experimental repetitions incoherently mixes trajectories with different measurement outcomes and fully washes out the non-trivial physics. Furthermore, implementing the model in Fig. 1a requires mid-circuit measurements that are often problematic on superconducting processors because the time needed to perform a measurement is a much larger fraction of the typical coherence time than it is for two-qubit unitary operations. Here we use space-time duality mappings to avoid mid-circuit measurements, and we develop a diagnostic of the phases on the basis of a hybrid quantum-classical order parameter (similar to the cross-entropy benchmark in ref. 28) to overcome the problem of post-selection.

The stability of these quantum information phases to noise is a matter of practical importance. Although relatively little is known about the effect of noise on monitored systems29-31, noise is generally expected to destabilize measurement-induced non-equilibrium phases. Nonetheless, we show that noise serves as an independent probe of the phases at accessible system sizes. Leveraging these insights allows us to realize and diagnose measurement-induced phases of quantum information on system sizes of up to 70 qubits.

The space-time duality approach9,15-17 enables more experimentally convenient implementations of monitored circuits by leveraging the absence of causality in such dynamics. When conditioning on measurement outcomes, the arrow of time loses its unique role and becomes interchangeable with spatial dimensions, giving rise to a network of quantum information in space-time32 that can be analysed in multiple ways. For example, we can map one-dimensional (1D) monitored circuits (Fig. 1a) to 2D shallow unitary circuits with measurements only at the final step17 (Fig. 1b and Supplementary Information section 5), thereby addressing the experimental issue of mid-circuit measurement.
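As a concrete illustration of the monitored-circuit model of Fig. 1a, the following toy simulation applies a brickwork of random two-qubit gates interleaved with projective Z measurements performed on each qubit with probability p. This is a minimal statevector sketch for small qubit numbers, not the hardware protocol used in the experiment; the function names and the choice of Haar-random gates are illustrative assumptions.

```python
import numpy as np

def haar_two_qubit():
    """Haar-random 4x4 unitary via QR decomposition with phase fix."""
    z = (np.random.randn(4, 4) + 1j * np.random.randn(4, 4)) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_two_qubit(psi, u, q1, q2, n):
    """Apply a 4x4 unitary u to qubits q1 < q2 of an n-qubit statevector."""
    psi = psi.reshape([2] * n)
    psi = np.moveaxis(psi, (q1, q2), (0, 1)).reshape(4, -1)
    psi = u @ psi
    psi = np.moveaxis(psi.reshape([2, 2] + [2] * (n - 2)), (0, 1), (q1, q2))
    return psi.reshape(-1)

def measure(psi, q, n):
    """Projectively measure qubit q in the Z basis; return outcome and collapsed state."""
    psi = psi.reshape([2] * n)
    p0 = np.sum(np.abs(np.take(psi, 0, axis=q)) ** 2)
    outcome = 0 if np.random.rand() < p0 else 1
    proj = np.zeros_like(psi)
    idx = [slice(None)] * n
    idx[q] = outcome
    proj[tuple(idx)] = psi[tuple(idx)]
    return outcome, (proj / np.linalg.norm(proj)).reshape(-1)

def monitored_circuit(n=8, depth=10, p=0.2):
    """One quantum trajectory of a 1D brickwork monitored circuit (toy model)."""
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0
    record = []
    for t in range(depth):
        for q1, q2 in [(i, i + 1) for i in range(t % 2, n - 1, 2)]:  # brickwork layer
            psi = apply_two_qubit(psi, haar_two_qubit(), q1, q2, n)
        for q in range(n):                                           # measure each qubit w.p. p
            if np.random.rand() < p:
                m, psi = measure(psi, q, n)
                record.append((t, q, m))
    return psi, record
```

Sweeping p in such a toy model and computing subsystem entropies of the final state reproduces, at small sizes, the volume-law to area-law crossover that the experiment probes on hardware.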
We began by focusing on a special class of 1D monitored circuits that can be mapped by space-time duality to 1D unitary circuits. These models are theoretically well understood15,16 and are convenient to implement experimentally. For families of operations that are dual to unitary gates (Supplementary Information), the standard model of monitored dynamics3,4 based on a brickwork circuit of unitary gates and measurements (Fig. 2a) can be equivalently implemented as a unitary circuit when the space and time directions are exchanged (Fig. 2b), leaving measurements only at the end. The desired output state |Ψm⟩ is prepared on a temporal subsystem (in a fixed position at different times)33. It can be accessed without mid-circuit measurements by using ancillary qubits initialized in Bell pairs (Q′1…Q′12 in Fig. 2c) and SWAP gates, which teleport |Ψm⟩ to the ancillary qubits at the end of the circuit (Fig. 2c). The resulting circuit still features post-selected measurements, but their reduced number (relative to a generic model; Fig. 2a) makes it possible to obtain the entropy of larger systems, up to all 12 qubits (Q1…Q12).

Previous studies15,16 predicted distinct entanglement phases for |Ψm⟩ as a function of the choice of unitary gates in the dual circuit: volume-law entanglement if the gates induce an ergodic evolution, and logarithmic entanglement if they induce a localized evolution. We implemented unitary circuits that are representative of the two regimes, built from two-qubit fermionic simulation (fSim) unitary gates34 with swap angle θ and phase angle ϕ = 2θ, followed by random single-qubit Z rotations. We chose angles θ = 2π/5 and θ = π/10 because these are dual to non-unitary operations with different measurement strengths (Fig. 2d and Supplementary Information).
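For reference, the fSim(θ, ϕ) gate used above has a simple 4 × 4 matrix form. The sketch below constructs it with NumPy, assuming the standard convention of cirq.FSimGate; the random single-qubit Z rotations that follow each gate in the experiment are omitted here.

```python
import numpy as np

def fsim(theta: float, phi: float) -> np.ndarray:
    """fSim(theta, phi) in the |00>, |01>, |10>, |11> basis (cirq.FSimGate convention)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [1, 0,       0,       0],
        [0, c,       -1j * s, 0],
        [0, -1j * s, c,       0],
        [0, 0,       0,       np.exp(-1j * phi)],
    ])

# The two regimes studied in the dual circuits: theta = 2*pi/5 (ergodic dual)
# and theta = pi/10 (localized dual), each with phi = 2*theta.
U_ergodic = fsim(2 * np.pi / 5, 4 * np.pi / 5)
U_localized = fsim(np.pi / 10, np.pi / 5)
```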
To measure the second Rényi entropy for qubits composing |Ψm⟩, randomized measurements35,36 were performed. Figure 2e shows the entanglement entropy as a function of subsystem size. The first gate set gives rise to a Page-like curve24, with entanglement entropy growing linearly with subsystem size up to half the system and then ramping down. The second gate set, by contrast, shows a weak, sublinear dependence of entanglement on subsystem size. These findings are consistent with the theoretical expectation of distinct entanglement phases (volume-law and logarithmic, respectively) in monitored circuits that are space-time dual to ergodic and localized unitary circuits15,16. A phase transition between the two can be achieved by tuning the (θ, ϕ) fSim gate angles.

We next moved beyond this specific class of circuits with operations restricted to be dual to unitary gates, and instead investigated quantum information structures arising under more general conditions. Generic monitored circuits in 1D can be mapped onto shallow circuits in 2D, with final measurements on all but a 1D subsystem17. The effective measurement rate, p, is set by the depth of the shallow circuit, T, and the number of measured qubits, M. Heuristically, p = M/[(M + L)T] (the number of measurements per unitary gate), where L is the length of the chain of unmeasured qubits hosting the final state whose entanglement structure is being investigated. Thus, for large M, a measurement-induced transition can be tuned by varying T.

We ran 2D random quantum circuits28 composed of iSWAP-like and random single-qubit rotation unitaries on a grid of 19 qubits (Fig. 3a), with T varying from 1 to 8. For each depth, we post-selected on measurement outcomes of M = 12 qubits and left behind a 1D chain of L = 7 qubits; the entanglement entropy was then measured for contiguous subsystems A by using randomized measurements. We observed two distinct behaviours over a range of T values (Fig. 3b). For T < 4, the entropy scaling is subextensive with the size of the subsystem, whereas for T ≥ 4, we observe an approximately linear scaling.
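The second Rényi entropies reported here are obtained from randomized-measurement data without full state tomography. The sketch below illustrates the estimator of refs. 35,36 (subsystem purity from statistical correlations of bitstring probabilities under random single-qubit rotations). The data layout and function name are assumptions; using the same empirical probabilities for both factors of P(s)P(s') carries a finite-shot bias that the published unbiased estimators correct.

```python
import numpy as np

def renyi2_from_randomized_measurements(prob_per_unitary, n_qubits):
    """Estimate S^(2) = -log2 Tr[rho_A^2] of an n_qubits subsystem.

    prob_per_unitary: list with one entry per random single-qubit unitary
    setting; each entry maps a measured bitstring (tuple of 0/1 on the
    subsystem) to its empirical probability under that setting.
    """
    d = 2 ** n_qubits
    purities = []
    for probs in prob_per_unitary:
        x = 0.0
        for s, ps in probs.items():
            for t, pt in probs.items():
                hamming = sum(a != b for a, b in zip(s, t))
                x += (-2.0) ** (-hamming) * ps * pt   # (-2)^(-D[s,t]) weight
        purities.append(d * x)                        # purity estimate for this setting
    purity = np.mean(purities)                        # average over random unitaries
    return -np.log2(purity)
```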
The spatial structure of quantum information can be further characterized by its signatures in correlations between disjoint subsystems of qubits: in the area-law phase, entanglement decays rapidly with distance37, whereas in a volume-law phase, sufficiently large subsystems may be entangled arbitrarily far away. We studied the second Rényi mutual information, I_AB^(2) = S_A^(2) + S_B^(2) − S_AB^(2), between two subsystems A and B as a function of depth T, and the distance (the number of qubits) x between them (Fig. 3c). For maximally separated subsystems A and B of two qubits each, I_AB^(2) remains finite for T ≥ 4, but it decays to 0 for T ≤ 3 (Fig. 3d). We also plotted I_AB^(2) for subsystems A and B with different sizes (T = 3 and T = 6) as a function of x (Fig. 3e). For T = 3 we observed a rapid decay of I_AB^(2) with x, indicating that only nearby qubits share information. For T = 6, however, I_AB^(2) does not decay with distance.

The observed structures of entanglement and mutual information provide strong evidence for the realization of measurement-induced area-law ('disentangling') and volume-law ('entangling') phases. Our results indicate that there is a phase transition at critical depth T ≃ 4, which is consistent with previous numerical studies of similar models17,18,38. The same analysis without post-selection on the M qubits (Supplementary Information) shows vanishingly small mutual information, indicating that long-ranged correlations are induced by the measurements.

The approaches we have followed so far are difficult to scale to system sizes greater than 10-20 qubits27, owing to the exponentially increasing sampling complexity of post-selecting measurement outcomes and obtaining entanglement entropy of extensive subsystems of the desired output states. More scalable approaches have been recently proposed39-42 and implemented in efficiently simulatable (Clifford) models26. The key idea is that diagnostics of the entanglement structure must make use of both the readout data from the quantum state |Ψm⟩ and the classical measurement record m in a classical post-processing step (Fig. 1c). Post-selection is the conceptually simplest instance of this idea: whether quantum readout data are accepted or rejected is conditional on m. However, because each instance of the experiment returns a random quantum trajectory43, one of exponentially many possible measurement records (where M is the number of measurements), this approach incurs an exponential sampling cost that limits it to small system sizes. Overcoming this problem will ultimately require more sample-efficient strategies that use classical simulation39,40,42, possibly followed by active feedback39.
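To make the post-selection overhead concrete, a rough counting argument (an illustration consistent with the statement above, not a calculation from the paper): for a typical record m of M measured qubits,

```latex
P(m) \sim 2^{-M}
\;\Longrightarrow\;
\mathbb{E}[\text{shots to post-select } m] \;\approx\; \frac{1}{P(m)} \;\sim\; 2^{M}.
```

Even M ≈ 50 measured qubits would therefore require on the order of 10^15 repetitions, which is why the decoding protocol described next avoids post-selection altogether.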
Here we have developed a decoding protocol that correlates quantum readout and the measurement record to build a hybrid quantum-classical order parameter for the phases that is applicable to generic circuits and does not require active feedback on the quantum processor. A key idea is that the entanglement of a single 'probe' qubit, conditioned on measurement outcomes, can serve as a proxy for the entanglement phase of the entire system39. This immediately eliminates one of the scalability problems: measuring the entropy of extensive subsystems. The other problem, post-selection, is removed by a classical simulation step that allows us to make use of all the experimental shots and is therefore sample efficient.

This protocol is illustrated in Fig. 4a. Each run of the circuit terminates with measurements that return binary outcomes ±1 for the probe qubit, z_p, and the surrounding M qubits, m. The probe qubit is on the same footing as all the others and is chosen at the post-processing stage. For each run, we classically compute the Bloch vector of the probe qubit conditional on the measurement record m, a_m (Supplementary Information). We then define τ_m = sign(a_m · ẑ), which is +1 if a_m points above the equator of the Bloch sphere, and −1 otherwise. The cross-correlator between z_p and τ_m, averaged over many runs of the experiment such that the direction of a_m is randomized, yields an estimate of the length of the Bloch vector, ζ ≃ |a_m|, which can in turn be used to define a proxy for the probe's entropy, S_proxy (the overline in its definition denotes averaging over all the experimental shots and random circuit instances). A maximally entangled probe corresponds to ζ = 0.

In the standard teleportation protocol2, a correcting operation conditional on the measurement outcome must be applied to retrieve the teleported state. In our decoding protocol, τ_m has the role of the correcting operation, restricted to a classical bit-flip, and the cross-correlator describes the teleportation fidelity. In the circuits relevant to our experiment (depth T = 5 on N ≤ 70 qubits), the classical simulation for decoding is tractable. For arbitrarily large circuits, however, the existence of efficient decoders remains an open problem39,41,44. Approximate decoders that work efficiently in only part of the phase diagram, or for special models, also exist39, and we have implemented one such example based on matrix product states (Supplementary Information).

We applied this decoding method to 2D shallow circuits that act on various subsets of a 70-qubit processor, consisting of N = 12, 24, 40, 58 and 70 qubits in approximately square geometries (Supplementary Information). We chose a qubit near the middle of one side as the probe and computed the order parameter ζ by decoding measurement outcomes up to r lattice steps away from that side while tracing out all the others (Fig. 4a). We refer to r as the decoding radius.
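The classical post-processing step of this decoding can be summarised in a few lines. The sketch below is an assumed interface, not the authors' code: it takes the measured probe bits and the classically simulated Bloch vectors a_m and returns the cross-correlator estimate of ζ. The factor of 2 assumes the direction of a_m is isotropically randomized over circuit instances, in which case the average of |a_m · ẑ| equals |a_m|/2.

```python
import numpy as np

def decode_order_parameter(z_probe, bloch_vectors):
    """Hybrid quantum-classical decoding step (illustrative sketch).

    z_probe       : shape (shots,), measured probe outcomes, +1 or -1
    bloch_vectors : shape (shots, 3), simulated Bloch vectors a_m of the probe,
                    each conditioned on that shot's measurement record m
    """
    a_z = bloch_vectors[:, 2]           # component of a_m along z
    tau = np.sign(a_z)                  # classical "correction bit" tau_m
    # Cross-correlator between the measured probe bit and tau_m; the factor 2
    # converts the isotropic average of |a_m . z| into the Bloch-vector length.
    zeta = 2.0 * np.mean(z_probe * tau)
    return zeta
```

A maximally entangled (fully mixed) probe gives a_m = 0 on every trajectory and hence ζ = 0, while a pure, correctly decoded probe gives ζ approaching 1.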
Because of the measurements, the probe may remain entangled even when r extends past its unitary light cone, corresponding to an emergent form of teleportation18.

As seen in Fig. 3, the entanglement transition occurs as a function of depth T, with a critical depth 3 < T_c < 4. Because T is a discrete parameter, it cannot be tuned to finely resolve the transition. To do this, we fix T = 5 and instead tune the density of the gates, so that each iSWAP-like gate acts with probability ρ and is skipped otherwise, setting an 'effective depth' T_eff = ρT; this can be tuned continuously across the transition. Results for ζ(r) at ρ = 1 (Fig. 4b) reveal a decay with system size N of ζ(r_max), where r = r_max corresponds to measuring all the qubits apart from the probe. This decay is purely due to noise in the system.

Remarkably, sensitivity to noise can itself serve as an order parameter for the phase. In the disentangling phase, the probe is affected by noise only within a finite correlation length, whereas in the entangling phase it becomes sensitive to noise anywhere in the system. In Fig. 4c, ζ(r_max) is shown as a function of ρ for several N values, indicating a transition at a critical gate density ρ_c of around 0.6-0.8. At ρ = 0.3, which is well below the transition, ζ(r_max) remains constant as N increases (inset in Fig. 4c). By contrast, at ρ = 1 we fit ζ(r_max) ≈ 0.97^N, indicating an error rate of around 3% per qubit for the entire sequence. This is approximately consistent with our expectations for a depth T = 5 circuit based on individual gate and measurement error rates (Supplementary Information). This response to noise is analogous to the susceptibility of magnetic phases to a symmetry-breaking field7,30,31,45 and therefore sharply distinguishes the phases only in the limit of infinitesimal noise. For finite noise, we expect the N dependence to be cut off at a finite correlation length. We do not see the effects of this cut-off at system sizes accessible to our experiment.

As a complementary approach, the underlying behaviour in the absence of noise may be estimated by noise mitigation. To do this, we define the normalized order parameter ζ̃(r) = ζ(r)/ζ(r_max) and the corresponding noise-mitigated proxy entropy S̃_proxy. The persistence of entanglement with increasing r, corresponding to measurement-induced teleportation18, indicates the entangling phase. Figure 4d shows the noise-mitigated entropy for ρ = 0.3 and ρ = 1, revealing a rapid, N-independent decay in the former and a plateau up to r = r_max − 1 in the latter. At fixed N = 40, S̃_proxy(r) displays a crossover between the two behaviours for intermediate ρ (Fig. 4e).

To resolve this crossover more clearly, we show S̃_proxy(r_max − 1) as a function of ρ for N = 12-58 (Fig. 4f).
The curves for the accessible system sizes approximately cross at ρ_c ≈ 0.9. There is an upward drift of the crossing points with increasing N, confirming the expected instability of the phases to noise in the infinite-system limit. Nonetheless, the signatures of the ideal finite-size crossing (estimated to be ρ_c ≃ 0.72 from the noiseless classical simulation; Supplementary Information) remain recognizable at the sizes and noise rates accessible in our experiment, although they are moved to larger ρ_c. A stable finite-size crossing would mean that the probe qubit remains robustly entangled with qubits on the opposite side of the system, even when N increases. This is a hallmark of the teleporting phase18, in which quantum information (aided by classical communication) travels faster than the limits imposed by the locality and causality of unitary dynamics. Indeed, without measurements, the probe qubit and the remaining unmeasured qubits are causally disconnected, with non-overlapping past light cones46 (pink and grey lines in the inset in Fig. 4f).

Our work focuses on the essence of measurement-induced phases: the emergence of distinct quantum information structures in space-time. We used space-time duality mappings to circumvent mid-circuit measurements, devised scalable decoding schemes based on a local probe of entanglement, and used hardware noise to study these phases on up to 70 superconducting qubits. Our findings highlight the practical limitations of NISQ processors imposed by finite coherence. By identifying exponential suppression of the decoded signal in the number of qubits, our results indicate that increasing the size of qubit arrays may not be beneficial without corresponding reductions in noise rates. At current error rates, extrapolation of our results (at ρ = 1, T = 5) to an N-qubit fidelity of less than 1% indicates that arrays of more than around 150 qubits would become too entangled with their environment for any signatures of the ideal (closed-system) entanglement structure to be detectable in experiments. This indicates that there is an upper limit on qubit array sizes of about 12 × 12 for this type of experiment, beyond which improvements in system coherence are needed.

Online content
Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41586-023-06505-7.

Fig. 1 | Monitored circuits and space-time duality mapping. a, A random (1 + 1)-dimensional monitored quantum circuit composed of both unitary gates and measurements. b, An equivalent dual (1 + 1)-dimensional shallow circuit of size L_x × L_y and depth T with all measurements at the final time, formed from a space-time duality mapping of the circuit in a. Because of the non-unitary nature of measurements, there is freedom as to which dimensions are viewed as 'time' and which as 'space'. In this example, L_y is set by the (1 + 1)D circuit depth and L_x by its spatial size, and T is set by the measurement rate. c, Classical post-processing on a computer of the measurement record (quantum trajectory), and quantum-state readout of a monitored circuit, can be used to diagnose the underlying information structures in the system.
Fig. 2 | Implementation of space-time duality in 1D. a, A quantum circuit composed of non-unitary two-qubit operations in a brickwork pattern on a chain of 12 qubits with 7 time steps. Each two-qubit operation can be a combination of unitary operations and measurement. b, The space-time dual of the circuit shown in a, with the roles of space and time interchanged. The 12-qubit wavefunction |Ψm⟩ is temporally extended along Q7. c, In the experiment on a quantum processor, a set of 12 ancillary qubits Q′1…Q′12 and a network of SWAP gates are used to teleport |Ψm⟩ to the ancillary qubits. d, Illustration of the two-qubit gate composed of an fSim unitary and random Z rotations, with its space-time dual, which is composed of a mixture of unitary and measurement operations. The power h of the Z rotation is random for every qubit and periodic with each cycle of the circuit. e, Second Rényi entropy as a function of the volume of a subsystem A from randomized measurements and post-selection on Q1…Q6. The data shown are noise mitigated by subtracting an entropy density matching the total system entropy; see the Supplementary Information for justification.

Fig. 3 | 1D entanglement phases obtained from 2D shallow quantum circuits. a, Schematic of the 2D grid of qubits. At each cycle (blue boxes) of the circuit, random single-qubit and two-qubit iSWAP-like gates are applied to each qubit in the cycle sequence shown. The random single-qubit gate (SQ, grey) is chosen randomly from the set {X, Y, W, V}^±1, where W = (X + Y)/√2 and V = (X − Y)/√2. At the end of the circuit, the lower M = 12 qubits are measured and post-selected on the most probable bitstring. b, Second Rényi entropy of contiguous subsystems A of the L = 7 edge qubits at various depths. The measurement is noise mitigated in the same way as in Fig. 2. c, Second Rényi mutual information I_AB^(2) between two-qubit subsystems A and B against depth T and distance x (the number of qubits between A and B). d, I_AB^(2) as a function of T for two-qubit subsystems A and B at maximum separation. e, I_AB^(2) versus x for T = 3 and T = 6 for different volumes of A and B.

Fig. 4 | Decoding of local order parameter, measurement-induced teleportation and finite-size analysis. a, Schematic of the processor geometry and decoding procedure. The gate sequence is the same as in Fig. 3 with depth T = 5. The decoding procedure involves classically computing the Bloch vector a_m of the probe qubit (pink) conditional on the experimental measurement record m (yellow). The order parameter ζ is calculated by means of the cross-correlation between the measured probe bit z_p and τ_m = sign(a_m · ẑ), which is +1 if a_m points above the equator of the Bloch sphere and −1 if it points below. b, Decoded order parameter ζ and error-mitigated order parameter ζ̃ = ζ(r)/ζ(r_max) as a function of the decoding radius r for different N and ρ = 1. c, ζ(r_max) as a function of the gate density ρ for different N. The inset shows that for small ρ, ζ(r_max) remains constant as a function of N (disentangling phase), whereas for larger ρ, ζ(r_max) decays exponentially with N, implying sensitivity to noise of arbitrarily distant qubits (entangling phase). d, Error-mitigated proxy entropy S̃_proxy as a function of the decoding radius for ρ = 0.3 (triangles) and ρ = 1 (circles). In the disentangling phase, S̃_proxy decays rapidly to 0, independent of the system size. In the entangling phase, S̃_proxy remains large and finite up to r_max − 1. e, S̃_proxy at N = 40 as a function of r for different ρ, revealing a crossover between the entangling and disentangling phases for intermediate ρ. f, S̃_proxy at r = r_max − 1 as a function of ρ for N = 12, 24, 40 and 58 qubits. The curves for different sizes approximately cross at ρ_c ≈ 0.9. Inset, schematic showing the decoding geometry for the experiment. The pink and grey lines encompass the past light cones (at depth T = 5) of the probe qubit and traced-out qubits at r = r_max − 1, respectively. Data were collected from 2,000 random circuit instances and 1,000 shots each for every value of N and ρ.
Impact of Base-to-Height Ratio on Canopy Height Estimation Accuracy of Hemiboreal Forest Tree Species by Using Satellite and Airborne Stereo Imagery

The present study assessed the large-format airborne (UltraCam) and satellite (GeoEye1 and Pleiades1B) image-based digital surface model (DSM) performance for canopy height estimation in predominantly mature, closed-canopy Latvian hemiboreal forestland. The research performed a direct comparison of calculated image-based DSM models with canopy peak heights extracted from reference LiDAR data. The study confirmed the tendency for canopy height underestimation for all satellite-based models. The obtained accuracy of canopy height estimation for the GeoEye1-based models varied as follows: pine (−1.49 m median error, 1.52 m normalised median absolute deviation (NMAD)), spruce (−0.94 m median, 1.97 m NMAD), birch (−0.26 m median, 1.96 m NMAD), and black alder (−0.31 m median, 1.52 m NMAD). The canopy detection rates (completeness) using GeoEye1 stereo imagery varied from 98% (pine) to >99% for spruce and deciduous tree species. This research has shown that determining the optimum base-to-height (B/H) ratio is critical for canopy height estimation efficiency and completeness using image-based DSMs. This study found that stereo imagery with a B/H ratio range of 0.2-0.3 (or convergence angle range 10-15°) is optimal for image-based DSMs in closed-canopy hemiboreal forest areas.

Introduction

The existence of forests is crucial for the well-being of people and the planet as a whole. Given the role of forests in the global carbon cycle and in providing a wide range of ecosystem services, the ongoing assessment of the quantitative and qualitative state of forests is critical [1]. Therefore, mapping and collecting precise and up-to-date data related to forest structure, biomass, species composition, and corresponding changes have become a mandatory part of forest management, inventories, and monitoring [2]. In Latvia, calculations of forest carbon stock changes and greenhouse gas (GHG) emissions are based on data provided by the National Forest Inventory (NFI) [3]. According to NFI data, forest covers 3.403 million hectares of land in Latvia, or 55% of the country's territory, the fourth-highest forest cover among all European Union (EU) countries. Since 2004, the Latvian NFI database, maintained by the Latvian State Forest Research Institute (LSFRI) "Silava", has included complete information on Latvian forest stand parameters such as tree species, density, stock, forest stand height, biomass, etc., collected at the plot level. However, traditional practices used for collecting this vegetation information are costly and time-consuming, provide low spatial coverage, and require destructive fieldwork. Remote sensing complements traditional field methods through data analysis, enabling precise estimation of various forest inventory attributes across large spatial extents and different scales while avoiding destructive sampling and reducing time and cost from data acquisition to final output [4]. It is well known that canopy height, which correlates with other vegetation attributes, is an essential parameter for predicting regional forest biomass [5]. Thus, carbon accounting programs and research efforts on climate-vegetation interactions have increased the demand for canopy height information.
Worldwide, LiDAR (Light Detection And Ranging) data, combined with up-to-date advanced data processing methods, have proven to be an efficient and precise tool for indirect fine-scale estimation of forest 3D structure parameters (primarily tree height) derived from high-density 3D point clouds [6]. Furthermore, by computing the difference between the canopy surface and the underlying ground, the calculated canopy height model (CHM) accurately reflects the spatial variations of the height of the canopy surface [7]. However, relatively high acquisition costs prevent airborne LiDAR from regularly mapping forest structural state and dynamics. Therefore, considering alternatives to airborne laser scanning (ALS) for continuous wide-area surveys, it is necessary to examine cost-effective approaches that use satellite data. Higher temporal resolution, lower cost with broader area coverage, and spatially more homogeneous image content with multispectral information are the main advantages of satellite data over airborne remote sensing [8]. In the last decade, there has been growing interest in using very high resolution (ground sample distance (GSD) < 0.5 m) satellite-derived stereo imagery (VHRSI) to generate dense digital surface models (DSM) analogous to LiDAR data to support forest inventory and monitoring [9]. Structure from motion (SfM) and photogrammetric matching techniques [10,11] reconstruct the 3D object geometry and detect 3D coordinates by simultaneously matching the same 2D object points located in overlapping stereo airborne and VHRSI imagery. However, while ALS can penetrate the forest canopy and characterise the vertical distribution of vegetation, the VHRSI image-based point clouds only represent the non-transparent outer "canopy envelope" [9] or "canopy blanket" cover of dominant trees.

Most of the earlier studies regarding VHRSI image-based DSM performance used plot-based approaches, deriving the main forest metrics such as the mean and maximum canopy heights and height percentiles. Then, after performing regression against reference data (mostly LiDAR) and obtaining estimation accuracy, the metrics are used as explanatory variables for predictive modelling of forest inventory attributes over certain areas. As an example, Grant D. Pearse et al. (2018) [12] compared point clouds obtained from Pleiades tri-stereo imagery to LiDAR data to predict Pinus radiata forest plot inventory attributes, such as mean height (R² = 0.81; RMSE = 2.1 m) and total stem volume (R² = 0.70; RMSE = 112.6 m³ ha⁻¹). Additionally, L. Piermattei et al. (2019) [13] used Pleiades tri-stereo image-based CHMs to derive forest metrics in the Alpine region, compared to airborne image matching. Based on the applied pixel-wise approach, the forest metric median errors of −0.25 m (H_max), 0.33 m (H_p95), and −0.03 m (H_Std) showed that satellite-based Pleiades CHMs could be an alternative to CHMs derived from airborne images in mountain forests. Based on calculated height metrics in 5-pixel samples, Neigh et al. (2014) [14] found IKONOS stereo imagery to be a useful LiDAR alternative for DSM calculation (R² = 0.84; RMSE = 2.7 to 4.1 m) in dense coniferous and mixed hardwood US forests. St-Onge et al. (2019) [15] successfully measured individual tree heights manually (RMSE = 0.9 m) in stereo mode using WorldView-3 imagery to predict basal area at tree and plot levels in sparse lichen woodlands.
Several recent studies showed successful VHRSI image-based CHM performance for European boreal and hemiboreal forest tree species (Scots pine, spruce and birch). Persson and Perko (2016) [16] reported high correlations between WorldView-2 image-derived height metrics and reference LiDAR, estimating Lorey's mean height with an RMSE of 1.5 m (8.3%). The study identified a tendency toward canopy height underestimation of dominant trees when using image-based CHMs. S. Ullah et al. (2020) [17] performed a plot-wise comparison of airborne, WorldView-2, and TanDEM-X image-based CHMs against field-based Lorey's mean and maximum height in a forest with pure and mixed pines and broadleaf tree species. This research confirmed that airborne stereo is the most accurate option (RMSE = 1.71 m, Lorey's mean height) compared to satellite-based models (RMSE = 2.04 m WorldView-2; RMSE = 2.13 m TanDEM-X). Despite the large offer of VHRSI sensors on the market, image-derived DSM performance for retrieving forest inventory data of different vegetation species in various geographical regions is still not fully understood. Therefore, referring to the results of the remote sensing expert opinion survey performed by Fassnacht et al. (2017), the potential of VHRSI use for estimating forest attributes such as stand height is still unclear [18]. According to this survey, the reasons are the small number of studies and the existing uncertainties associated with canopy height estimation accuracy.

Plot-wise approaches based on forest metrics have some limitations that sometimes restrict a comprehensive quantitative and qualitative performance evaluation of image-derived CHM models. First, most studies lack information related to image-based CHM completeness (percentage of detected canopy). Secondly, the height metrics do not directly estimate the outer "canopy envelope" DSM surface, which in most cases follows dominant treetops. It is also essential to recognise the differences in the DSM height estimates associated with different vegetation species. Thirdly, an accurate terrain layer (DTM) is needed to perform CHM creation. Thus, the main objective of this study was to perform a direct comparison of calculated image-based DSM models with canopy peak heights extracted from reference LiDAR data, without canopy height model (CHM) generation. The present study assessed the airborne and satellite image-based DSM performance for canopy height estimation in predominantly mature, dense, closed-canopy Latvian hemiboreal forestland using forest inventory data. To achieve this objective, the research: (1) evaluated and compared the vertical accuracy and completeness of DSMs derived from stereo imagery of the GeoEye1 and Pleiades1B satellites and the large-format aerial UltraCam against reference LiDAR data; (2) analysed the differences in the DSM height estimates associated with different tree species; (3) examined the effect of sensor-to-target geometry (specifically the base-to-height ratio) on matching performance and canopy height estimation accuracy; (4) investigated the performance of satellite-based DSMs from different spectral bands for canopy height accuracy estimation.

Study Area

The "Taurkalne large forests" forestland area is located 100 km south-east of Rīga (56°30′ N, 25°00′ E), Latvia (Figure 1). The study area covers approximately 350 km², representing a relatively flat region with an elevation range varying between 40 and 80 m above sea level and a mean annual rainfall of 690 mm.
The selected territory represents a typical hemiboreal forestland pattern across the eastern part of Latvia, with predominantly mature, dense, closed-canopy deciduous and evergreen trees and some small open or grassy areas. The forest vegetation of the study site is dominated by evergreen pine (Pinus sylvestris) and spruce (Picea abies) and deciduous birch (Betula) and black alder (Alnus glutinosa) tree species. These tree species were the focus of this study.

Satellite and Airborne Data

In total, three sets of stereo imagery, acquired in the summer of 2020 by various optical satellite and airborne sensors, were used as the initial data in this study. Two in-track satellite stereo pairs, GeoEye-1 (GE1) by Digital Globe (USA) and Pleiades1B by Airbus Intelligence (EU), were obtained over the study area. The main characteristics of the imagery are given in Table 1 and Figure 2. Both imagery sets were provided together with rational polynomial coefficient (RPC) data. The radiometrically (16-bit GeoTIFF) and sensor-corrected GeoEye-1 OrthoReady Stereo (OR2A) processing-level images were delivered. The GE1 imagery was projected to a plane using a Universal Transverse Mercator map projection and had no topographic relief applied, making it suitable for photogrammetric processing. Very high resolution (VHR) optical satellite Pleiades1B single-pair (not tri-stereo) imagery, with 7% cloud cover, was delivered with preserved true relief information (projected to a plane), radiometrically (12-bit JPEG2000) and sensor corrected. The cloud mask was automatically created and manually checked for further use in the given research. The vendor's pan-sharpening of the Pleiades1B imagery resulted in a higher 0.5 m ground sample distance (GSD) spatial resolution of the provided final 4-band (NIR-R-G-B) product. To complete the research, twenty 4-band (NIR-R-G-B) aerial images with 0.25 m GSD resolution, acquired by the Georeal company (Czech Republic) in July 2020 and provided by the Latvian Geospatial Information Agency (LGIA), were used as a third imagery set. The images were taken at a flying height of 4600 m using an UltraCam Eagle Mark 1, a large-format digital frame photogrammetric camera with a frame size of 13,080 × 20,010 pixels and a focal length of 100.5 mm. The imagery formed a rectangular two-strip block with 80% forward overlap and 35% side overlap. While the GE1 and Pleiades1B stereo satellite imagery sets fully covered the study area, the UltraCam airborne stereo imagery block covered only 20% (70 km²) of it (Figure 1).

Reference Data

Airborne LiDAR and forest inventory (FI) data were used as reference data for this study. LiDAR open-access data were acquired over the study area by MGGP Aero (Poland) at the end of May 2017 and provided by the Latvian Geospatial Information Agency (LGIA). The LiDAR data were collected with a Riegl LMS Q680i full-waveform sensor operating at a 400 kHz pulse repetition rate. The average flying height above ground level (AGL) was 800 m, the scan angle 45 degrees, and the flying speed ~230 km/h. The average LiDAR point cloud density was more than 5 points per m². LiDAR data pre-processing, including data geo-referencing and point cloud classification, was performed with the Terrasolid software package by LGIA. As the LiDAR data acquisition took place three years before the stereo satellite data collection, change detection related to forest clearcutting was performed across the study area.
The clearcutting mask was created by using GE1, Pleiades, airborne orthophoto, and image-derived DSM data. The automatically created polygons of the change detection mask were visually checked and manually corrected. Forest inventory (FI) data were provided by the Joint Stock Company "Latvia's State Forests" (LVM) and included complete forest plot metrics, such as dominant and co-dominant tree species, species composition proportion, canopy height, age, density, estimated timber volume, etc. Across the study area, all FI plots were filtered and separated into four main tree species, based on the provided tree species composition coefficient. Plots with a coefficient ≥7, meaning at least 70% of the plot is composed of the corresponding dominant tree class, were selected for this study. Finally, after applying the forest clearcutting mask, the forest plots with mature, dense, closed-canopy forest cover were chosen (Table 2).

Data Processing Overview

The study performed a direct comparison of calculated image-based DSM models with canopy peak heights extracted from reference LiDAR data, without canopy height model (CHM) generation (Figure 3). This was done in order to isolate one source of error: the uncertainty related to the accuracy of the LiDAR DTM, which is generally used for CHM calculation. The co-registration of the satellite imagery sets with LiDAR was performed during sensor orientation using GCPs measured and transferred from the LiDAR data. The main reason for the bias-compensated bundle adjustment using LiDAR GCPs was to minimise the image-based DSM and LiDAR co-registration and geo-location discrepancies. Finally, we performed accuracy assessments of image-derived DSM performance in canopy height detection and estimation in open terrain and forest areas. The software package Photomod v7.0 (Racurs, Moscow, Russia) was used for all photogrammetric image data processing steps, including imagery bundle adjustment and image-matching DSM generation. All work related to LiDAR point cloud handling, such as DSM/DTM calculations and watershed segmentation routines, was carried out using the freeware FUSION/LDV v4.20 [19]. Grid DSM comparison, corresponding grid statistics collection, and GIS-based analysis were performed using the freeware SAGA GIS [20] and QGIS [21].

Sensor Orientation and Data Co-Registration

Image pre-processing started with pan-sharpening, applied to the GE1 imagery. The most robust enhanced principal component analysis pan-sharpening method was used, as it does not require radiometric correction. External sensor orientation was performed with an empirical model based on rational functions with rapid positioning capability (RPC) data, refined by a zero-order polynomial adjustment. In general, this requires just one ground control point (GCP) [22], and 4-5 well-distributed points are recommended for a stereo pair to achieve one-pixel accuracy [23,24] using least-squares bundle adjustment. To achieve the best co-registration of the imagery with LiDAR, eighteen (18) well-identified artificial GCPs (poles, concrete slab corners, road intersections) and well-identified natural objects (e.g., tree stumps) were transferred from the LiDAR data. The GCPs' height coordinates were extracted from the LiDAR, whereas their planar locations were manually identified in an existing orthophoto (0.25 m GSD) provided by LGIA. All GCPs were well distributed across the study area and manually measured using stereo mode in Photomod.
Fewer GCPs were used for geo-registration of the airborne UltraCam stereo imagery due to its partial coverage of the study area. The image geo-referencing accuracy and epipolar geometry of all imagery sets were improved by automatically measured tie points. The point measurements and bundle adjustment were performed once for every sensor (GE1, Pleiades, and UltraCam).

DSM Extraction from VHR Stereo Satellite Imagery and LiDAR Point Cloud

Five GE1 models (PAN, NIR, R, G, B) and five Pleiades1B models (NIR, R, G, B, NIR-G-B) were chosen for 0.5 m resolution grid DSM generation using an SGM matching algorithm [10]. Additionally, two UltraCam airborne imagery grid DSMs (NIR-G-B, 0.25 m resolution), one with the original in-strip overlap of 80% and one with a reduced overlap of 60%, were extracted. The two UltraCam models with different overlap settings were selected to investigate the effect of the base-to-height ratio on generated DSM accuracy. Altogether, twelve grid DSM models were used in the further analysis. After testing various SGM settings, the following slightly modified Photomod SGM default settings were used for image-based DSM calculations: census transform (CT) matching cost function with a pixel cost calculation radius of 3 and eight calculation paths; a decreased penalty value of 4 for parallax changes of one pixel and a reduced penalty value of 80 for parallax changes of more than one pixel. No filters were applied to the generated DSM models, except a median filter with a 7-pixel mask aperture and a 1 m threshold to recalculate low-correlation "noisy" pixels along feature edges (e.g., forest borders), keeping the remaining values unchanged. To fill the gaps (null cells) that appeared on DSMs due to occlusions and poor imagery texture, the SAGA GIS "stepwise resampling" tool was applied using a B-spline interpolation algorithm with a grow factor of 2.

Accuracy Assessment of the Image-Based DSMs in Open Ground Areas

Although the imagery was vertically co-registered with LiDAR during sensor orientation, each extracted DSM had its own vertical bias. Thus, the vertical DSM offsets relative to one another and to the LiDAR surface had to be calculated before their elevation comparison and final accuracy assessment. First, a 1 m resolution grid DTM was created from the LiDAR dense point cloud by assigning the mean elevation of ground-classified returns within each grid cell. The created grid LiDAR DTM served as ground truth for the further vertical accuracy assessment of the image-based DSMs in selected open ground areas. The corresponding open ground areas were chosen manually using visual examination of LiDAR, satellite imagery, and orthophoto maps to avoid altered areas and overgrown grass and shrubs. Altogether, 134 open ground polygons (plots), with a mean size of 0.9 ha and a total area of 120 ha, were manually digitised, well distributed across the study area. Within this created open ground mask, the image-based DSM ground surfaces were aligned to those of the reference LiDAR-based DTM. As the image-based DSMs had higher resolution than the 1 m LiDAR DTM, the mean height value of the DSM pixels falling within each LiDAR DTM cell was used. After the pixel-wise ground surface comparison, the obtained vertical offsets were applied to all image-based DSM values for the further accuracy assessment in forest areas.

Accuracy Assessment of Image-Based DSMs in Forest Areas

To perform the quality and efficiency assessment of the image-based DSMs in selected forest areas, the reference heights of individual canopy peaks were extracted from the LiDAR data.
To do this, a local-maxima approach using watershed segmentation was applied for individual canopy peak detection and extraction from the LiDAR grid DSM. First, the DSM was interpolated from the LiDAR dense point cloud using the "CanopyModel" routine in FUSION. Based on the LiDAR point cloud density, an optimal grid DSM with 0.8 m pixel resolution was generated by assigning the highest return of the LiDAR point cloud within each grid cell. A median convolution smoothing filter with a 3 × 3 window was applied to the generated DSM. The FUSION 'peaks' switch was used to preserve the localised elevation peak (local maxima) values from the filtering. Secondly, the FUSION "TreeSeg" watershed segmentation algorithm was applied to the LiDAR-based DSM to produce segments representing individual canopy peaks. As a result, a high-point list, including the heights and locations of individual canopy peaks, was created in shapefile format. The obtained canopy peak list was filtered using the selected forest inventory study plot polygons (Table 2) and separated into the four main dominant tree species. To compensate for the changes in canopy heights due to tree growth in the time between the LiDAR data (2017) and satellite imagery (2020) acquisitions, the extracted LiDAR canopy peak heights were adjusted based on each tree species' annual growth rate. The annual tree growth rates were obtained from the Latvian State Forest Service and published by LSFRI "Silava" [25]. The canopy peak list was finalised by excluding all height values less than 6 m above ground, using GIS analysis after assigning a ground height attribute from the earlier generated LiDAR grid DTM.

The quality and accuracy of the image-based DSMs in selected forest areas were assessed in two ways: vertical accuracy and completeness. The vertical accuracy assessment was performed by comparing the image-based DSM grid height values with the corresponding individual canopy peaks (height maxima) extracted from the reference LiDAR data. It was conducted by collecting height statistics of image-based DSM pixel values within a 1 m radius surrounding every appropriate LiDAR-based canopy peak (Figure 4). The highest DSM grid height value of the surrounding 13 pixels (within a 1 m radius) per LiDAR canopy height peak was selected and compared. To perform the canopy completeness (detection) and vertical accuracy assessments, all heights of image-based DSM grid cells assigned to LiDAR individual canopy peaks were filtered. First, all image-based DSM heights less than 2 m above ground were marked as non-detected canopies and excluded from the final assessments. This GIS-based filtering of image-based DSM heights was performed by canopy height calculation using ground height values extracted from the corresponding LiDAR grid DTM. Secondly, all measurements with height differences of more than 20 m between the corresponding image-based DSM heights and LiDAR peaks were marked as outliers and excluded from the final assessments. Thus, the final completeness of the image-based DSMs was assessed as the proportion of the number of LiDAR local canopy peaks with assigned image-based DSM heights (H_canopy > 2 m), with outliers removed, to the total number of extracted LiDAR canopy peaks. Finally, descriptive statistics and linear regression were calculated for all compared DSM and LiDAR heights in every model for each tree species.
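The peak-wise growth adjustment, filtering, and completeness logic described above can be summarised in a short script. This is a minimal sketch under assumed inputs (pre-sampled DSM and DTM values at each LiDAR peak location); the function and variable names are illustrative and not taken from the authors' processing chain.

```python
import numpy as np

def assess_canopy_peaks(lidar_peak_h, dsm_h, ground_h, annual_growth=0.0, years=3.0):
    """Peak-wise accuracy and completeness assessment (illustrative sketch).

    lidar_peak_h : LiDAR canopy-peak heights (m a.s.l.), peaks <6 m above
                   ground already excluded beforehand
    dsm_h        : highest image-based DSM value within 1 m of each peak
    ground_h     : LiDAR DTM ground height at each peak location
    annual_growth: species-specific annual height growth used to bring the
                   2017 LiDAR peaks forward to the 2020 imagery epoch
    """
    peak_h = lidar_peak_h + annual_growth * years      # growth adjustment
    canopy_above_ground = dsm_h - ground_h

    detected = canopy_above_ground > 2.0               # DSM canopy heights <2 m = non-detected
    diff = dsm_h - peak_h
    valid = detected & (np.abs(diff) <= 20.0)          # differences >20 m treated as outliers

    completeness = valid.sum() / len(peak_h)           # detected-and-valid peaks / all LiDAR peaks
    return completeness, diff[valid]
```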
For all statistical measurements, the normalised median absolute deviation (NMAD) was used, Equation (1), where Δh_j denotes the difference between the reference (LiDAR) and extracted DSM cell (j) values, and m_Δh is the median of the differences:

NMAD = 1.4826 · median_j(|Δh_j − m_Δh|)    (1)

The NMAD is an accuracy measure more suited for photogrammetry-derived and point cloud-based DEMs, as it is more resilient to outliers than the standard deviation [26].

Accuracy of the Sensor's Stereo-Pair Orientation

The satellite and airborne stereo-pair orientation results, based on rational functions with RPC data and least-squares bundle adjustment, and the root mean square errors (RMSE) on GCPs, are shown in Table 3. Sub-pixel imagery orientation accuracy was achieved for every sensor model. The geo-positioning of the Pleiades imagery was the least accurate, due to radiometric and geometric differences between the Pleiades and GE1 imagery that affected the identification and measurement of non-signalised GCPs. At the same time, the vertical sensor orientation accuracy of the Pleiades imagery was 50% higher than in the GE1 case, owing to the higher Pleiades base-to-height ratio [27].

Accuracy of the Image-Based DSMs in Open Ground Areas

The results obtained from the pixel-wise comparison of the image-derived DSM ground surfaces with the extracted LiDAR DTM are presented in Table 4 and Figure 5. The table provides only the spectral models with the best results (lowest RMSE), as the rest showed almost identical output. Overall, these results indicate that sensor orientation and image-derived DSM co-registration based on transferred LiDAR GCPs were conducted accurately and adequately. The most noticeable finding to emerge from these results is that the Pleiades-based DSM demonstrated the highest accuracy of open ground surface detection. The reason for this is most likely the higher base-to-height ratio of the Pleiades imagery. The most surprising aspect of the results is the lower accuracy (higher RMSE) of the airborne UltraCam DSM ground surfaces compared with the Pleiades. A possible explanation for this might be that the geospatial resolution of the UltraCam imagery is twice that of the satellite data, providing more detailed information in the vertical plane with a higher variance in the vertical error distribution. No significant differences were found in ground detection accuracy between the two UltraCam-derived DSM models based on all (80% overlap) and a reduced number (60% overlap) of images.

Completeness and Vertical Accuracy of Image-Based DSMs in Forest Areas

The vertical accuracy assessment of the image-based DSMs in selected forest areas (Table 2) was based on comparison with the heights of individual canopy peaks (H_canopy > 6 m) of the reference LiDAR data, and is shown in Table 5 and Figure 6. Of the twelve analysed image-based DSMs, only seven are presented in Table 5 and Figure 6: three GeoEye1 DSMs (PAN, best spectral, worst spectral), two Pleiades (best/worst single spectral DSM), and two UltraCam (with 80% overlap and a reduced 60% overlap). The best/worst models were selected based on the median (50%) error and the lowest RMSE/NMAD values. These results indicate an essential connection between the image-derived DSM canopy height accuracy, canopy completeness, and the corresponding vegetation tree species. Figure 7 summarises all previously given results based on only the best model per sensor and two accuracy measures: median error and NMAD.
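As a small worked example of the two accuracy measures used above, the following sketch computes the median error and NMAD from an array of height differences Δh (image-based DSM minus reference LiDAR peak heights); the 1.4826 factor scales the median absolute deviation so that it is comparable with a standard deviation under a normal error distribution. The example values are invented for illustration only.

```python
import numpy as np

def median_error_and_nmad(delta_h):
    """delta_h: per-peak height differences (m), outliers already removed."""
    m = np.median(delta_h)                          # median (50%) error
    nmad = 1.4826 * np.median(np.abs(delta_h - m))  # Equation (1)
    return m, nmad

# Illustrative call on made-up differences; a strongly negative median error
# with a similar-sized NMAD indicates systematic canopy height underestimation.
print(median_error_and_nmad(np.array([-1.2, -1.6, -1.5, -0.9, -2.1])))
```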
All image-based DSMs show underestimation in height detection for all tree species, except for black alder with the airborne UltraCam sensor. Heights of coniferous trees (pine, spruce) are estimated less accurately and with a higher error variance than those of deciduous tree species (birch, black alder). Comparing the image-based DSM results of the two satellites reveals that all GeoEye1 DSMs provided better results in dense forest canopy detection and height accuracy estimation than the Pleiades1B models. The best outcome was achieved by using airborne UltraCam imagery with 60% in-track overlap. The possible reasons for these outcomes are discussed in the following section.

Discussion

Overall, the vertical accuracy and completeness of image-based DSMs are affected by the base-to-height ratio parameter, the vertical structure of the canopy vegetation, species composition, image band radiometry, sensor-to-target and sun-to-sensor geometry, wind, and other minor factors [28]. The ability to identify or distinguish the canopy or its parts based on light scattering and reflection differences is the key to the success of image matching techniques. Sufficient image contrast and brightness between neighbouring object surfaces (e.g., between two crowns, or between crown and ground) can improve the matching performance and, therefore, the crown/canopy detection rate and height accuracy. The current study found that the base-to-height ratio of the stereo imagery geometry was the critical factor influencing image-based DSM performance. In our research, the sun-to-sensor viewing geometry was similar for the GE1 and Pleiades1B satellite sensors. Therefore, the discussion is mainly focused on sensor-to-target geometry and does not provide a detailed understanding of how changes in sun-to-sensor geometry (e.g., sun elevation and azimuth angles) influence canopy surface estimates by image-based DSMs.

Vertical Accuracy of Image-Based DSMs in Open Ground Areas

The results of this study confirmed the previous findings [27,29] that satellite imaging geometry, particularly the base-to-height (B/H) ratio related to the stereo-pair convergence angle, plays a substantial role in the completeness and vertical accuracy of image-derived DSMs. In our study, the Pleiades-based DSMs, with the highest B/H ratio of 0.61, showed the highest performance and accuracy in height estimation of open ground areas. It is somewhat surprising that the Pleiades pan-sharpened imagery (0.5 m GSD) was more efficient in open terrain detection than the airborne UltraCam images with 0.25 m resolution. Only the Pleiades models achieved sub-pixel vertical accuracy, with an RMSE of 0.33 m and NMAD of 0.31 m, and showed a Gaussian error distribution pattern (Figure 5). Further evidence of the high performance of the Pleiades imagery in open ground detection is the sensor's orientation results (Table 3). Despite the poorer planimetric accuracy of Pleiades compared with GE1 and UltraCam, the achieved vertical accuracy based on GCP measurements was 1.5 times higher than for GE1 and almost the same as for UltraCam. Since the GCP measurements for sensor bundle adjustment were carried out manually in stereo mode, this also allows us to recommend using imagery with a high B/H ratio for manual stereo restitution of open terrain areas and artificial objects with continuous and solid surfaces. There was an insignificant discrepancy in performance between the different spectral-band DSM models of the same sensor.
Taken together, the findings from this study suggest that stereo imagery with a B/H ratio > 0.5 (or convergence angles > 30°) should preferably be used for DSM creation in open ground areas with flat terrain. This conclusion agrees with the findings of other studies [27,29,30], in which the per-point vertical accuracy of image-based DSMs in open ground areas directly correlates with increasing B/H ratio or convergence angle.

Completeness and Vertical Accuracy of Image-Based DSMs in Forest Areas

Opposite results were obtained in forest areas with respect to tree height estimation and completeness, where all Pleiades-based DSM models performed worse than GE1 and even more so than UltraCam (Table 5, Figure 6). The completeness of pine (Pinus sylvestris) detection in the Pleiades NIR-based DSM was 25 percentage points lower (73%) than the GE1 DSM performance (98%), with almost half a metre more canopy height underestimation. In addition, all Pleiades DSM models showed a higher error variance (RMSE, NMAD) than the other sensor-based DSMs, noticeably below the first quartile (25%) of the errors. The main reason for this outcome is directly related to the stereo imagery geometry, namely the B/H ratio or stereo-pair convergence angle. Dense closed-canopy forest areas with near-continuous tree cover are characterised by the high surface roughness of different tree shapes. Therefore, the detection efficiency of every part of the canopy depends on the viewing directions of the stereo pair, namely on how accurately and consistently the same part of the rough canopy surface is displayed on both images. In forest areas, crown shape and tree structure, which strongly influence the reflection of sunlight, become the primary factors [31]. For trees with a conical crown shape, a large convergence angle may lead to a situation where the same part of the canopy is wrongly displayed on one image due to poor reflection towards the sensor or invisibility (occlusion). A high B/H ratio creates large parallax for tall crowns (canopies), increasing the areas that cannot be matched correctly because of occlusions. Furthermore, during image matching, the mismatching of the same feature often leads to wrong height calculations and an incomplete DSM [32]. As a result, for trees with conical (tapering) crown shapes, canopy height underestimation is expected and is indeed observed. Depending on the tree species, the canopy height underestimation in dense closed-canopy forests can reach several meters, or 8% of the mean canopy height (e.g., pine in the Pleiades case). Broadleaf deciduous tree species increase the reflectance of sunlight by reducing crown transparency, smoothing the roughness of the canopy top, and increasing the reflection area. As a result, the image-based DSMs of forests dominated by broadleaf species (e.g., birch and black alder) show higher efficiency and accuracy in canopy height estimation and completeness (Figure 7). Based on the results of this study, it can be concluded that the presence and variability of canopy surface types negate the advantage of using a large convergence angle, leading to a decrease in SGM matching performance (the Pleiades case). This finding corroborates the recent research of Rongjun Qin (2019) [33], which suggested that a smaller convergence angle (which can be as small as around 7°) yields better results for dense surface reconstruction and a complete DSM in urban areas.
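To make the link between the B/H ratio and the convergence angle explicit, the sketch below uses a simple symmetric stereo geometry (two views separated by base B at height H over flat terrain, angle = 2·arctan(B/2H)), together with a common photogrammetric rule of thumb for the expected vertical precision. Both the geometric simplification and the 0.5-pixel matching accuracy are assumptions for illustration; the exact angles for pushbroom satellites depend on the actual viewing geometry.

```python
import math

def convergence_angle_deg(b_over_h: float) -> float:
    """Full convergence angle (deg) for a symmetric stereo pair over flat ground."""
    return 2.0 * math.degrees(math.atan(b_over_h / 2.0))

def vertical_precision_m(gsd_m: float, b_over_h: float, matching_acc_px: float = 0.5) -> float:
    """Rule of thumb: sigma_Z ~ (H/B) * image-matching accuracy in ground units."""
    return matching_acc_px * gsd_m / b_over_h

for ratio in (0.2, 0.3, 0.5, 0.61):
    print(f"B/H = {ratio:0.2f}: convergence ~ {convergence_angle_deg(ratio):4.1f} deg, "
          f"sigma_Z ~ {vertical_precision_m(0.5, ratio):0.2f} m at 0.5 m GSD")
```

The printed values illustrate why a high B/H ratio (e.g., 0.61) favours open, solid surfaces, while the 0.2-0.3 range corresponds to the 10-15° convergence angles recommended below for rough forest canopies.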
Summarizing the results of this study, it is possible to construct a relationship between the obtained efficiency (vertical accuracy and completeness) of image-based DSMs and the base-to-height (B/H) ratio of the corresponding imagery in hemiboreal, predominantly mature, dense, closed-canopy forestland (Figure 8). However, this graph must be interpreted with caution, because we omitted the effect of sun-to-sensor geometry, which was almost the same for the satellite-based sensors in this study, and the resolution differences between the satellite and airborne imagery. It should also be borne in mind that the convergence angle in the case of frame aerial imagery varies over the entire overlap of each stereo model. In most cases, depending on the location within a stereo model, the convergence angle is smaller than the nominal values calculated from the B/H ratio (Table 1). Based on the airborne imagery performance, further research should be conducted to investigate the efficiency of satellite stereo imagery with a B/H ratio between 0.15 and 0.25 (e.g., the Pleiades tri-stereo approach) in similar forest areas. Overall, this study suggests that stereo imagery with a B/H ratio in the range of 0.2-0.3 (or a convergence angle range of 10-15°) is optimal for image-based DSMs in closed-canopy hemiboreal forest areas. To improve the performance of semi-global matching, the study also checked the efficiency of SGM settings by aggregating the matching cost along 16 paths instead of the 8 used in this research. The results showed a variable difference in accuracy, with no significant improvements and a substantial increase in calculation time. Moreover, the gains from using 16 paths were only noticeable for dominant stand-alone trees or groups of trees, which were outside the scope of this study.

Vegetation Reflectance and Image-Based DSM Performance

Besides the base-to-height ratio, the canopy detection performance also depends on how a vegetation surface interacts with light (reflectance and scattering), as described by the sun-sensor viewing geometry and accounted for by the bidirectional reflectance distribution function (BRDF) [31]. In this study, it is affected by a complex mixture of variables, including crown/canopy shape and structure, species composition, partial crown transparency, leaf orientation, and shadows. Consequently, the tree species used in this study were arranged in the following order according to the obtained canopy height accuracy (from worst to best): pine, spruce, birch, and black alder (Figure 7). The pine is characterised by a tapering ovoid crown shape with an average crown diameter of around four meters and a trunk that is branchless for most of its length. Furthermore, it does not have dense foliage (the crown is relatively transparent), and the upward-pointing branches at the top of the crown influence light scattering (Figure 9a). Consequently, the matching could not detect the tops of the trees, giving a median canopy height underestimation of −1.5 m with the satellite (GE1) and even −1.2 m with the airborne UltraCam (Figure 7), which amounts to about 6-7% of the average canopy height. Moreover, due to the pines' scattered reflectance and the non-uniformity of the crown tops, together with a high B/H ratio, the Pleiades-based models showed unsatisfactory results in pine canopy detection (73%) and height underestimation (~−2 m). In turn, the Norway spruce (Figure 9b), with its classic conic shape and needle-like leaves that grow around the upward-pointing branches, is non-transparent to sunlight.
As a result, spruce has a very high (>99.5%) detection rate, but due to the sharp and narrow treetop, the canopy height underestimation is still high, ~−1 m, for the satellite (GE1) sensor. Interestingly, the largest difference between the satellite and airborne DSM results among all tree species was obtained for the spruce canopy height estimation. UltraCam showed results twice as good as the best GE1 image-based model (−0.5 m against −1 m for GE1). Following Liang and Matikainen (2007) [34], it can be inferred that for a cone-shaped spruce crown the lower and upper crown parts can fall into the same raster cell. Thus, thanks to the better imagery resolution (0.25 m vs 0.5 m for GE1), the UltraCam model showed a significant improvement in spruce canopy height estimation. Deciduous birch (Betula pendula) trees (Figure 9c) have upward-pointing main branches, with thin pendulous branches forming a "loose" crown, often with multiple peaks and variable crown width and shape. The birch canopy height estimation is much better than for the coniferous pine and spruce but still negative, ~−0.25 m, for all image-based DSM models (except Pleiades, −0.50 m). Mature broadleaf black alder trees (Figure 9d), with one or more trunks, develop an arched, dense, and gently sloping crown that is opaque to light. This species provides the best results in canopy height estimation, close to zero, with the highest (99.9%) canopy detection rate. The crown structure of black alder also contributes to acceptable performance for satellite imagery obtained with a high base-to-height ratio (the Pleiades case).

Spectral Band Performance of Satellite Image-Based DSMs

Overall, the current study found minor differences in image-based DSM performance related to the choice of spectral band among the four tree species used (Figure 6). In the GE1 case, the highest canopy height accuracy was achieved using the PAN and BLUE bands, and the worst when using the NIR or RED ones. In all cases, the spread of the median error between the best and worst model did not exceed 10%, while the error distribution and completeness were almost identical. No significant differences were found between the Pleiades and GE1 image-based DSMs, except in the pine case. The discrepancy between the NIR- and RED-based Pleiades DSMs reached 12% in pine detection (completeness) and showed a 0.5 m shift of the mean error. One significant and unanticipated finding was that the GeoEye1 BLUE-based DSMs showed the best performance in canopy height estimation for all tree species, including the deciduous ones. At the same time, the NIR- and RED-band-based DSMs were the worst, regardless of tree species. This finding was unexpected because vegetation spectra dominated by chlorophyll have the highest reflection in the NIR/RED [35]. The author's recent research [28] in Australian savannas showed that the near-infrared BRDF, which is sensitive to canopy cover and gives a higher contrast between the canopy and the bare ground surface, provides the best efficiency in sparse Eucalypt vegetation detection. It seems that where the ground surface is fully covered by dense closed-canopy forest, the NIR/RED bands yield insufficient local image contrast between the sunlit tops of the crowns and the surrounding shadows for improved canopy detection. These results agree with the findings of other studies [36] that outlined the importance of the BLUE channel for pixel-based forest species classification and coniferous tree species discrimination. Immitzer et al.
(2016) demonstrated the importance of the blue band for vegetation mapping using the Random Forest model, emphasising the weakness of near-infrared spectral information. Unfortunately, no confirmation of this study's finding of high BLUE band performance for canopy height detection using stereo satellite imagery has been found in the literature. Thus, this could be an essential issue for future research. The discrepancy between the best-performing BLUE (GeoEye1) and GREEN (Pleiades) image-based DSM models could be attributed to the pre-defined spectral ranges of the BLUE band in the given satellites: GeoEye1 450-510 nm, Pleiades 430-550 nm. Therefore, based on its spectrum, the Pleiades BLUE channel corresponds more to a BLUE-GREEN range. In summary, considering that the difference in canopy height estimation between the PAN and pan-sharpened spectral models was minimal (GeoEye), this study recommends using the high-resolution stereo PAN band for DSM calculations in closed-canopy hemiboreal forest areas.

Aspects, Limitations and Recommendations for Data Processing by Stereo Satellites

This research has shown that determining the optimum B/H ratio is critical for the efficiency of image-based DSMs in dense, closed-canopy forests. As the B/H ratio increases, the number of pixels comprising the canopy surface decreases due to insufficient reflection and occlusions, and the likelihood of neighbouring pixel similarity also decreases. Due to the conical crown structure of coniferous tree species and their relative transparency (the pine case), which affect the BRDF, a high satellite sensor B/H ratio can lead to relatively poor image-matching results (the Pleiades case). In turn, decreasing the number of potential pixel matches reduces the ability to estimate the object surface or canopy height correctly. The study results should be interpreted cautiously, as the current research has only examined hemiboreal dense, closed-canopy forest areas. The canopy height underestimation of satellite image-based DSMs has to be taken into account when the derived information is used for further calculations of forest inventory parameters. Thus, further research needs to be conducted to validate B/H ratio performance for other vegetation types with varied canopy densities and located in different geographical regions. This research has several practical applications in dense Latvian forests. Although LiDAR data provide higher tree detection rates and more accurate canopy height estimates, their spatial coverage and temporal resolution are limited due to the cost and time needed for data acquisition. This increases the need for a regular flow of optical data acquired by national mapping agencies to support AGB mapping, forest inventory, and monitoring. In Latvia, a three-year cycle of airborne imagery collection (0.25 m GSD) is used to perform complete territory mapping (orthophoto). This study showed that large-format aerial photography (e.g., UltraCam) is the optimal solution for creating the most accurate image-based DSMs in vast and dense forestland. However, even such a short aerial photography cycle is not enough to register and respond quickly to all changes in vegetation. This study confirmed that satellite-based image matching (with an optimal B/H ratio) is an adequate low-cost alternative for detecting canopies in hemiboreal forest areas, with an over 98% canopy detection rate and sufficient canopy height estimation accuracy (NMAD < 2 m).
However, compared to LiDAR, optical sensors are strongly influenced by solar illumination and by the sun-to-sensor and sensor-to-target geometry (i.e., the BRDF). Under Latvian conditions, it is vital to remember that insufficient sunlight during the winter season, and clouds during the summer season, sometimes restrict the use of satellite sensors, making image-based vegetation monitoring problematic. One surprising finding was related to the indirect link between the performance of human stereoscopic vision in manual stereo data restitution and the computer-vision matching technique. In most cases, the better and more accurately the human eye can identify/detect a canopy using stereo vision, the higher the performance of image matching that will be attained. This rule holds for differences in both sensor-to-target and sun-to-sensor geometry. It was especially noticeable during the manual stereo comparison of different spectral GE1 and Pleiades imagery pairs. Thus, it is most likely that an experienced operator using a manual visual stereo check can filter and select suitable stereo imagery pairs for further use in image matching.

Conclusions

In this investigation, the main aim was to assess airborne and VHRSI satellite image-based DSM performance for canopy height estimation in predominantly mature, dense, closed-canopy Latvian hemiboreal forestland. Although the airborne-based DSMs showed the highest efficiency, this study confirmed that commercially available VHRSI imagery can be a suitable and accurate alternative for detecting and estimating canopy height in dense, closed-canopy forests. The canopy detection rates (completeness) using GeoEye1 stereo imagery varied from 98% (pine) to >99% for spruce and the deciduous tree species. After a direct comparison of the calculated image-based DSM models with the reference LiDAR, the study confirmed the tendency towards canopy height underestimation for all satellite-based models. The obtained accuracy of canopy height estimation for the GE1-based models was as follows: pine (−1.49 m median, 1.52 m NMAD), spruce (−0.94 m median, 1.97 m NMAD), birch (−0.26 m median, 1.96 m NMAD), and black alder (−0.31 m median, 1.52 m NMAD). The significant finding was that the base-to-height ratio (convergence angle), as part of the sensor-to-target geometry, is critical for the canopy height estimation efficiency and completeness of image-based DSMs. Thus, this study suggests that stereo imagery with a B/H ratio in the range of 0.2-0.3 (or a convergence angle range of 10-15°) is optimal for image-based DSMs in closed-canopy forest areas. Furthermore, besides the B/H ratio, the study confirmed that the canopy height estimation efficiency is affected by a complex mixture of variables, including crown/canopy shape and structure, species composition, partial crown transparency, leaf orientation, and shadows. Finally, this study found that, in general, the spectral bands of VHRSI imagery have a minor impact on canopy detection rates and canopy height estimation accuracy in dense, closed-canopy hemiboreal forestland. Therefore, in most cases, the study recommends using the high-resolution satellite stereo PAN band for DSM generation.

Funding: The financial support for this work was provided to the Institute of Electronics and Computer Science (Latvia) by the European Regional Development Fund (ERDF) within the funded project "Satellite remote sensing-based forest stock estimation technology" (grant No. 1.1.1.1/18/A/165). Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable.
Data Availability Statement: The study did not report any data.
10,154.2
2021-07-27T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Repertoire-wide gene structure analyses: a case study comparing automatically predicted and manually annotated gene models Background The location and modular structure of eukaryotic protein-coding genes in genomic sequences can be automatically predicted by gene annotation algorithms. These predictions are often used for comparative studies on gene structure, gene repertoires, and genome evolution. However, automatic annotation algorithms do not yet correctly identify all genes within a genome, and manual annotation is often necessary to obtain accurate gene models and gene sets. As manual annotation is time-consuming, only a fraction of the gene models in a genome is typically manually annotated, and this fraction often differs between species. To assess the impact of manual annotation efforts on genome-wide analyses of gene structural properties, we compared the structural properties of protein-coding genes in seven diverse insect species sequenced by the i5k initiative. Results Our results show that the subset of genes chosen for manual annotation by a research community (3.5–7% of gene models) may have structural properties (e.g., lengths and exon counts) that are not necessarily representative for a species’ gene set as a whole. Nonetheless, the structural properties of automatically generated gene models are only altered marginally (if at all) through manual annotation. Major correlative trends, for example a negative correlation between genome size and exonic proportion, can be inferred from either the automatically predicted or manually annotated gene models alike. Vice versa, some previously reported trends did not appear in either the automatic or manually annotated gene sets, pointing towards insect-specific gene structural peculiarities. Conclusions In our analysis of gene structural properties, automatically predicted gene models proved to be sufficiently reliable to recover the same gene-repertoire-wide correlative trends that we found when focusing on manually annotated gene models only. We acknowledge that analyses on the individual gene level clearly benefit from manual curation. However, as genome sequencing and annotation projects often differ in the extent of their manual annotation and curation efforts, our results indicate that comparative studies analyzing gene structural properties in these genomes can nonetheless be justifiable and informative. Electronic supplementary material The online version of this article (10.1186/s12864-019-6064-8) contains supplementary material, which is available to authorized users. Background Eukaryotic protein-coding gene structure is characterized by a modular organization of introns and exons (the latter being composed of coding sequence [CDS] and/or untranslated regions [UTRs]; [1]), which are commonly identified (with the notable exception of UTRs) in genome sequences using automated in silico gene annotation procedures [2]. The configuration of exons and introns -GC content, length, and numbervaries among species, as well as by gene type. A major goal in the field of comparative genomics is to elucidate the factors that explain the variance of gene structures within and between species. It has been hypothesized, for example, that differential GC content of exons and introns within regions of low GC content in the genomes of mammals constitutes a marker for exon recognition during splicing and is thus a factor that stabilizes exonintron boundaries [3,4]. 
As further examples, hypotheses on the evolution of gene structure organization state that introns are generated by the insertion of non-autonomous DNA transposons [5] or, in birds, that selection on intron size is driven by the evolution of powered flight [6]. Such hypotheses and observations are based on the structural description of protein-coding gene repertoires. These repertoires are typically derived from automated annotations, with only a fraction of the gene models having been refined by manual annotation and curation. Since the 1980s, procedures for automated gene structure prediction have been developed and continuously improved (reviewed by, for example, [7][8][9]), but they are still not error-free [10][11][12]. The most commonly encountered errors are false positive and false negative identifications of protein-coding nucleotide sequences [13,14], non-coding nucleotide sequence retention in coding exons [15], wrong exon and gene boundaries [14,16], and fragmented or merged gene models [15,17,18]. With increasing size and structural complexity (i.e., increasing exon count) of genes, annotation errors are increasingly likely to occur and thus impair the accuracy of automated annotations [16,19,20]. Furthermore, gene density can influence annotation results [21]. For example, during the automated annotation of the large, 'gene-sparse' genome of the bug Oncopeltus fasciatus, many genes were wrongly split across multiple models ("the number of genes resulting from a merged CDS action is far greater than the number of gene models resulting from split CDS actions" [19, Supplement p. 27, and references therein]). In contrast, the 'gene-dense' genome of the centipede Strigamia maritima showed "in a significant number of cases, [that] the automated annotation [...] fused adjacent genes, largely on the basis of confounding RNASeq [sic] evidence" [22, Supplement p. 3]. The severity of the aforementioned annotation errors is influenced by assembly quality [2,20,23], which in turn is influenced by genome size and repeat content [24,25]. The results of automated annotation additionally depend on whether or not extrinsic evidence (i.e., alignments of homologous or orthologous sequences from other species) is used for gene sequence delineation. Algorithms that incorporate extrinsic evidence will likely more reliably predict genes with conserved coding sequence [26]. However, genes that do not resemble the provided extrinsic evidence (being, for example, taxon-specific) could be missed during automatic annotation [27]. Thus, annotation results depend on the availability and quality of evidence to support the annotation procedure [28,29]. Despite these caveats, advantages of automated gene annotation include the speed and ease of application to (multiple) genome assemblies as well as reproducibility due to the application of explicit algorithms. With an expected average of 21,500 protein-coding genes in a eukaryotic genome [30], the automated approach is the method of choice to comprehensively annotate genes in a given genome, despite the risk of erroneous models. In comparative analyses, erroneous models have been held responsible for (i) false positive and false negative detection of clade-specific genes [31,32], (ii) inference of incorrect gene copy numbers [13], (iii) biased correlations between biological traits [32], and (iv) misleading functional annotations [33].
Errors in the annotation of protein-coding genes have been shown to mislead the analysis of gene family evolution [13], protein innovation rates [31], and the interpretation of gene function [33]. Automatically generated gene models can be reviewed and corrected individually in a subsequent process termed manual annotation or manual curation. Although often used interchangeably, here we use "manual annotation" to refer to adding or correcting gene model structures, and "manual curation" to imply additionally associating gene models with names, symbols, descriptions, or putative functions through examining experimental data and by considering information from the literature. Note that there are alternative understandings of these terms (e.g., within the i5k community [37]), with "annotation" considered the de novo creation of a model and "curation" encompassing review and editing of an existing model, considering all available structural and functional information. Annotation and curation efforts have proven to be most rewarding. For example, manual annotation helped to annotate nested and overlapping genes in the fruit fly [10], doubled the number of identified ionotropic receptors in two mosquitoes [34], and led to the discovery of elevated non-canonical splice site usage in a copepod [35]. To some extent, these examples represent 'special cases' that required manual annotation: the failure of the automated annotation strategies could be explained by gene structural complexity, high levels of gene sequence divergence, or rare deviations from canonical gene features. Beyond such cases, and beyond individual genes, it remains unclear whether manual annotation impacts genome-wide distributions of gene model structural properties, and if so, how and how much? If manual annotation does have a substantial effect, then comparing genome-wide trends in gene structural properties among different species or lineages would need to control for these effects. On the other hand, if the genome-wide effects of manual annotation are negligible, then comparative analyses can confidently employ automatically inferred gene models to characterize true biological/evolutionary differences in gene structural properties. Our thorough search for published assessments of the extent to which manual annotation affects genome-wide trends of gene structural properties in comparatives analyses revealed only one highly relevant but outdated article [10]. Results of such studies are, however, likely of broader interest, given that gene structural properties of both automatically inferred and manually annotated gene models are frequently compared across species. To address this issue, here we compare automatically inferred and manually annotated gene models with respect to five structural properties, namely transcript, protein, intron, and exon lengths as well as exon count. Our data for these comparisons comprises the proteincoding gene sets of seven insect species that represent taxonomically distant clades (last common ancestor ca. 370 million years ago [36]) and whose genomes differ in size and assembly quality from each other ( ). These genomes were processed in the context of the i5K pilot project for insect and arthropod genome sequencing [37] with an identical set of methodologies [39] (i.e., sequenced, assembled, and protein-coding genes annotated with the MAKER2 pipeline [38]). 
Additionally, substantial subsets of the automatically annotated gene models, hereafter referred to as 'predecessors', were manually annotated in all seven species (3.5-6.9% of the original gene models, > 650 models per species, Table 1). Manual annotation also yielded de novo gene models without predecessors (0.4-2.2% of the OGS, 30-381 models, Table 1). Using the above data, we assessed to what extent the previously mentioned five gene structural properties changed due to manual annotation (relative to the automatically inferred predecessor models). We furthermore studied whether previously reported correlative trends of structural features are detectable when analyzing automatic predictions and manual annotations. Specifically, we tested whether genome size correlates negatively with (i) the coding proportion of the genome (i.e., here total length of all exons relative to genome size; see Methods) [30], and whether genome size correlated positively with (ii) the intronic proportion of the genome [20,30] and (iii) gene count [30]. We also examined whether we are able to confirm a negative correlation between exon/intron count per gene and (iv) exon/intron length and [40] (v) the GC content of the exons/introns [40]. Structural properties of manually annotated gene models and their predecessors We assessed five structural properties of protein-coding genes when comparing automatically generated and manually annotated gene models: (i) unspliced transcript (pre-mRNA) length, (ii) protein length, (iii) exon count per transcript, as well as (iv) median exon and (v) median intron length per transcript. These properties were analyzed in two gene sets: (1) the full set of automatically generated gene models (AUTO) and (2) the full official gene set (OGS; non-redundant merge of gene models that were manually annotated or added and automatically generated models). We additionally studied these gene structural properties in three subsets of gene models: (3) all manually annotated gene models (MAN-SUB), (4) all automatically generated predecessors of the manually annotated gene models (AUTO-SUB), and (5) all manually added de novo gene models (MAN-ADD) (counts of gene models per set and species are given in Table 1). We first asked how well the subsets reflect the structural properties of the full sets. Thus, we compared the gene set AUTO-SUB with the gene set AUTO and the gene set MAN-SUB with the gene set OGS (Additional file 2: Figure SF1). Most distributions and central tendencies of structural properties differ between subsets and full sets (p adj. ≤ 0.05 in 57.1% of AUTO vs. AUTO-SUB comparisons and in 71.4% of OGS vs. MAN-SUB comparisons with Bonferroni-corrected two-sample Kolmogorov-Smirnov [KS-test] and/or two-sample Wilcoxon [W-test] tests, Additional file 1: Table ST3). Furthermore, we employed a jackknife resampling approach to establish confidence intervals of correlation coefficients to assess how well trends observed in our subsets represent those found in the full sets across a total of 28 comparisons (seven species, four correlations: median exon GC content vs. exon count, median exon length vs. exon count, median intron GC content vs. intron count, and median intron length vs. intron count). We found that the correlation coefficient of the AUTO-SUB subset lay outside of the interval established by resampling from the AUTO set in 20 of the 28 analyzed correlations. 
Likewise, we found that the correlation coefficient of the MAN-SUB subset lay outside the interval established by resampling from the OGS set in 18 of the 28 analyzed correlations (Additional file 2: Figure SF5, Additional File 1: Table ST4). These deviations can be interpreted as instances in which the subset does not reflect the respective full set regarding a certain combination of parameters. For example, in A. rosae, the interval established for the correlation coefficient of exon count compared to GC content drawn from the OGS is r = − 0.04-0.18, with the value of the OGS itself meeting the median (r = 0.06), while the value of the MAN-SUB set (r = − 0.15) is lower than the interval minimum (i.e., r = − 0.04) (Additional file 1: Table ST4). This suggests that models chosen for manual annotation are not in themselves a representative subset of all protein-coding gene models (models are not selected randomly, as researchers usually focus on particular gene families of interest, discussed below). Nonetheless, our primary concern was whether the act of manual annotation appreciably alters the structural properties of the chosen models. In fact, in comparing the subset-wide distributions of structural properties of AUTO-SUB and MAN-SUB with each other (comprising 3.5-6.9% of AUTO/OGS in each species), we find significant differences in the analyzed gene structural parameters for only four parameters in three species ( Table ST3). Complementing these statistical tests, when regarding the subset-wide medians of AUTO-SUB and MAN-SUB (Fig. 2a), we distinguish three species groups by assembly size and overall effect direction in terms of how median transcript length and median protein length are affected by manual annotation (Fig. 2a, Table 1; Additional file 1: Table ST3). These are: (i) two species these tendencies with respect to genome size corroborate the reported species-specific assessments noted above on the effect of gene density on automatic model correctness [19,22]. Lastly, we evaluate the de novo models in the minor MAN-ADD subsets, which contain 30-381 gene models per species. Strikingly, more than 80% of the gene structure property distributions of MAN-ADD gene models differ significantly (KS-tests and/or W-tests: p adj. ≤ 0.05) from the property distributions of the gene models in the gene sets AUTO, AUTO-SUB, OGS, and MAN-SUB (Additional file 1: Table ST4, Additional file 2: Figure SF4). To further explore these differences, we exemplarily analyzed MAN-ADD of O. fasciatus, where 70.2% of the subset's gene models specifically code for cuticle proteins and chemoreceptors (primarily gustatory receptors). Thus, property distributions of MAN-ADD are mainly governed by the specific properties of these gene families; however, we do not go into detail here due to small sample sizes (Additional file 3: Note S2, Additional file 1: Table ST7, Additional file 2: Figure SF5). Sets of predecessors and manually annotated gene models agree when analyzing reported correlations Having established that manual annotation does not greatly affect gene structural properties in themselves, we next assessed how the AUTO-SUB and MAN-SUB gene subsets compare for correlations of genome size and GC content with various structural properties. In only 2 of 28 comparisons (seven species and four property combinations, as above) did we observe a directional change in correlation coefficients from AUTO-SUB to MAN-SUB, with absolute differences of 0.05 and 0.08, respectively (Additional file 1: Table ST4b). 
Thus, we find almost no differences between correlational trends when comparing structural parameters of genes in the gene subset AUTO-SUB (Fig. 3, left columns) with those of genes in the subset MAN-SUB (Fig. 3, right columns). Our datasets also provide the opportunity to assess insect species for previously reported correlations of genome size with coding proportion, gene count, and intronic proportion, as described by [20,30]. Note that due to the low sample size of seven species (a necessary constraint for ensuring common methodology across species and a reasonably high proportion of manually annotated gene models), we subsequently present only descriptive statistics when assessing correlative trends. Our results are in agreement with the finding [30] that the coding proportion of the genomes is negatively correlated with the genome size and that the total gene count increases with genome size (Fig. 2b, Table 1). In contrast, reports for other correlations [20,30] are not borne out by our insect data. Specifically, we see no or only a weakly negative correlation between the intronic proportion of a genome and genome size (Fig. 2b, Table 1). We found these trends irrespective of whether we compared the gene set AUTO with the gene set OGS or whether we compare the gene subset MAN-SUB with the gene subset AUTO-SUB (Additional file 2: Figure SF2). In line with previous results [40], we do find a negative correlation between exon/intron count and median GC content of exons/introns in 21 of 28 comparisons (the seven species and four gene sets: AUTO, AUTO-SUB, OGS, and MAN-SUB; Additional file 1: Table ST4b). Notably, complex gene models (> 50 exons) are less variable in the GC content of their introns (ca. 20-45%) than less complex models (ca. 10-60%, Fig. 3a, c); this relationship does not seem to be influenced by genomic transcript length (data not shown). F. occidentalis conspicuously has two classes of complex gene models with low (as in the other species, ca. 20-40%) and high (ca. 0-60%) GC content variability in introns (Fig. 3c). Gene models with more than ten exons appear to be restricted to a certain median exon length class (ca 190 bp); this coincides with a negative correlation of exon count and median exon length (28 of 28 comparisons; Fig. 3b, Additional file 1: Table ST4b), as was also reported by [19,40]. In contrast to the report by Zhu et al. [40], we observe mixed trends (among species, not among sets except within A. glabripennis) regarding the correlation of intron count and median intron length (Fig. 3d): some species exhibit a positive correlation (A. rosae, L. decemlineata, O. fasciatus) while others show a negative correlation (C. lectularius, F. occidentalis, O. abietinus) in the four (sub)sets (Additional file 1: Table ST4). Thus, while certain correlations among genome and gene structural properties appear to also apply in insects, other correlations vary across taxa. Limitations of the present study The quality of automatic and manual annotations is strongly impacted by genome assembly quality and by the availability of extrinsic evidence such as orthologous sequences from closely related species, and RNA-seq data [6,20]. The impact of these factors on the correctness of gene models is beyond the scope of our study. Assessing the biological correctness of gene models remains difficult without a validated benchmark set [16,41] or appropriate quality metrics. 
The BUSCO quality metric [42] indeed makes a distinction between complete and partial orthologs, but this approach is limited to the subset of highly conserved protein-coding genes. However, we ensured comparability between genome assemblies and annotations by a conservative selection of species. The genomes and gene sets of the selected species have been inferred with the same wet lab and bioinformatic approaches [39]. Extending the taxonomic sampling at the time of data collection would have resulted in jeopardizing this methodological consistency and comparability. Thus, we analyzed the largest possible set of i5K species in terms of availability of gene sets before and after manual annotation at the time of data collection. All annotations and the derived statistics are based on de novo assemblies resulting from short-read sequencing paired and mate pair libraries, which are inherently fragmented. It remains to be tested whether the same conclusions can be drawn regarding the suitability of automatically inferred genes sets for comparing gene structural parameters when analyzing the gene sets of genomes assembled to higher quality (as reviewed by [20]). Similarly, our study based on 3.5-7% of protein-coding genes being manually annotated represents an assumed extrapolation whose conclusions could change once all genes would be manually annotated. Repertoire-wide gene structure assessments can rely on automatically predicted gene models The finding that the analyzed subsets (i.e., AUTO-SUB and MAN-SUB) do not fully reflect the property distributions of the respective full sets (AUTO, OGS) may give rise to concern whether generalizations are justified. However, we did not find a bias in either subset towards a certain combination of structural properties. Thus, at least the diversity of gene structures of the full sets appears to be reflected in the subsets. We find that the distributions, gene set-wide medians, and correlative trends of gene structure properties of AUTO-SUB are very similar to that of MAN-SUB (Figs. 1, 2 and 3). The analyses comparing AUTO-SUB and MAN-SUB with the respective full sets were conducted excluding MAN-ADD models, since these are added by curators in the absence of an automatically predicted predecessor. However, the hypothesis that automatically predicted gene models suffice as the basis for comparative analyses of large-scale gene structural properties can only be substantiated if the fraction of missing models is comparatively small. In each of seven species analyzed by us, 2.1% or fewer of the OGS gene models had been added manually (Table 1). However, de novo genes models make up a larger fraction of genes handled by curators (4.3-25.3%; i.e., MAN-ADD as fraction of MAN-SUB + MAN-ADD; Additional file 1: Table ST2). MAN-ADD structural properties differ strongly from the remaining four (sub)sets of gene models. These differences likely reflect the highly biased selection of gene classes for manual annotation based on the research interests of the curators, which we address here for cuticle structural proteins and chemoreceptors as exemplar classes. In particular, chemoreceptor genes are notoriously difficult to automatically predict (rapidly evolving genes with low expression levels of transcripts, (e.g., [19]). Thus, they are frequently added de novo, as found in the annotation of the O. 
fasciatus genome [19] (Additional file 3: Note S2), and gene structural property distributions may be strongly governed by distinct gene families (Additional file 2: Figure SF5). Although de novo gene models appear to be heavily biased in terms of their structure (Additional file 2: Figures SF1 and SF3), we expect that overall trends and distributions are only negligibly affected by them due to their small overall count. Predecessors and manually annotated gene models agree on correlative trends of gene structure Given the general agreement of gene structure properties between AUTO-SUB and MAN-SUB gene models, we tested whether or not we also find an agreement between automatic and manual annotation when investigating large-scale trends. Specifically, we investigated whether we could confirm previously reported gene structure trends in relation to genome size. Our results are in line with previous reports [20,30] regarding the negative correlation between coding proportion and genome size (Fig. 2b). This result is in line with the hypothesis that genome size is mainly driven by repeat content rather than by gene count [24]. On the other hand, we do not recover the previously reported [20,30] positive correlation between intronic proportion and genome size. Since previous studies analyzed data from four [30] and six [20] phyla of Eukaryota with insects being represented by only few species, we might observe an insect-specific pattern. However, further studies are necessary to verify that this trend is not caused by small sample size or genome quality. If a different pattern of intron evolution can be corroborated in insects, assumptions on general genome evolution would have to be re-evaluated. It was indeed recently shown that there is evidence for a positive correlation of genome size and intron count in insects [19] and for highly dynamic intron evolution in a phytoseiid predatory mite [43]. On the other hand, short read sequencing technologies for genome assembly may limit sensitivity for detecting this correlation, as long introns may fail to be fully assembled. A negative correlation of exon count and exon length, as consistently found in our data (Fig. 3b), has been reported not only in the genomes of human and rice [40], but also in that of insects [19]. Furthermore, we find a negative correlation of exon/intron count and respective GC content as well as an apparent constraint of complex gene models to a medium GC content, especially in introns (Fig. 3a, c), as previously reported [40]. However, we recover the reported [40] negative correlation of intron count and length only in three (O. abietinus, C. lectularius, F. occidentalis) of the seven species in all (sub)sets, while in A. glabripennis we see the trend only in the full sets (AUTO and OGS) (Fig. 3d, Additional file 2: Figure SF3). These results could point towards insect-specific and intron-specific peculiarities in the evolution of gene structure [19,43]. The vertebrate-biased taxon sample used by Zhu et al. [40] (nine vertebrates, two plants, one worm, and one insect) does not allow one to draw conclusions with respect to insects. While an amniote-specific positive correlation of intron and genome size has been shown and discussed in relation to avian powered flight [6], it has yet to be determined whether introns evolve in a manner specific to insects and whether it is affected by other constraints than in amniotes. 
Conclusions

Focusing on a diverse sample of insect genomes, we analyzed whether repertoire-wide distributions of gene structural properties change when automated annotations of protein-coding genes are manually revised. Our results suggest that the influence of manual annotation on the distributions of the properties studied by us is comparatively small, even if individual models may have changed substantially in detail. Thus, our study empirically supports the generally accepted, but to date not extensively tested, view that automated gene prediction yields reliable gene models. We further conclude that automatically predicted gene models allow the elucidation of commonalities, differences, and driving forces of gene structure evolution: we consistently (with few exceptions) find the same correlative trends in the analyzed gene structural properties when using either automatically generated or manually annotated models. While manual annotation is fundamentally important to obtain accurate gene models, our results suggest that the insect-specific patterns of gene structure described here can be addressed without the necessity of prior manual annotation when using assemblies and annotations of high quality. Establishing that manual annotation does not substantially impact analyses of genome-wide trends is important for large-scale studies such as those carried out within the i5K project [39], where manual annotation of the included species' gene sets varies from none to extensive.

Gene sets

We prepared two sets and three subsets of data from the annotations produced by the i5k initiative for each species. Firstly, we distinguished the set of all automatic predictions (AUTO) and the final official gene set (OGS), comprising the non-redundant merge of (i) de novo gene models, (ii) manually annotated genes, and (iii) the remaining purely automatic gene models. Secondly, we extracted smaller subsets to analyze certain types of annotation in detail: (i) de novo gene models without automatic predecessors (MAN-ADD), (ii) manually annotated gene models that have an automatically predicted predecessor (MAN-SUB), and (iii) the corresponding automatically predicted predecessors of MAN-SUB (AUTO-SUB) (Additional file 3: Note S1).

Structural property and correlative trend analyses

Structural properties of the predicted protein-coding genes in the respective gene set of each species were inferred with the software COGNATE [49] version 1.01 using the program's default parameters (COGNATE considers only the longest transcript per gene). Throughout this study, we considered all exons of the longest transcript, which also represent the coding sequences, because UTRs were not consistently annotated (thus, exons and CDSs were identical). All COGNATE results generated for this study (except those of F. occidentalis; these are available upon request due to the ongoing publication process) are available from the Dryad repository (datadryad.org): https://doi.org/10.5061/dryad.v50tm7m. Statistical analyses and visualizations were performed in R [50]. A two-sample Kolmogorov-Smirnov test (KS test, R: ks.test) was used to test for significant differences in structural property distributions between all sets and subsets of each species. Results (across all sets and subsets) were corrected for multiple testing (Bonferroni).
In addition, to identify statistically significant differences in central tendencies, each KS test was supplemented by a two-sample (Mann-Whitney) Wilcoxon test (W test, R: wilcox.test), and the results were likewise subjected to multiple-test correction (Bonferroni). Both tests address the similarity of distributions but differ in their sensitivity: the KS test is sensitive to changes in shape, spread, and median between the distributions, while the W test is mostly sensitive to changes in the median. We used a non-parametric approach to test whether the subsets (AUTO-SUB, MAN-SUB, MAN-ADD) can be considered representative of the species-specific sets (AUTO, OGS). To overcome the problem of large size differences between sets and subsets, we used an adaptation of the jackknife method (implemented in a custom script available at GitHub, see below). For this, we repeatedly (1000 times) subsampled without replacement 1000 entries (i.e., the properties of 1000 gene models) from each set (OGS and AUTO) and calculated Spearman's rank correlation coefficients for four property combinations: (i) exon count vs. exon length, (ii) exon count vs. exon GC content, (iii) intron count vs. intron length, and (iv) intron count vs. intron GC content. Additionally, the correlation coefficients of the four combinations were calculated for AUTO, OGS, AUTO-SUB, MAN-SUB, and MAN-ADD (Additional file 1: Table ST4). For each species, the Spearman's rank correlation coefficients of the 1000 subsamples are visualized separately for AUTO and OGS, with the values of the original (sub)sets added in a distinct color (Additional file 2: Figure SF4).

Cuticle proteins and chemoreceptors

Intuitively, we expect that fast-evolving genes (possibly with rare transcripts) make up a large fraction of the genes added de novo during manual annotation. Obvious candidates for such genes are those coding for cuticle proteins (CPs) and chemoreceptors (CRs) [e.g., 5]. The teams of Josh Benoit (Department of Biological Sciences, University of Cincinnati, USA) and Hugh Robertson (Department of Entomology, University of Illinois at Urbana-Champaign, USA) thoroughly manually annotated genes coding for cuticle proteins and chemoreceptors in (at least) A. glabripennis, L. decemlineata, C. lectularius, and O. fasciatus. In a small case study, we focused on O. fasciatus due to time constraints and compared the manually annotated (i.e., with an automatically predicted predecessor) with the added (i.e., de novo) CPs and CRs. For both CPs and CRs, gene lists were extracted from the O. fasciatus OGS v1.1 according to their annotated name, including information on transcript ID, curation status (manually annotated MAKER model or de novo model), and, for CRs, the chemoreceptor class (gustatory [GR], ionotropic [IR], or odorant [OR] receptors) (Additional file 1: Tables ST5 and ST6). According to the transcript IDs, COGNATE measurements were extracted for the longest transcript per gene (from the COGNATE output files 07-10). Property distributions are visualized in Additional file 2: Figure SF5.
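For orientation, the five structural properties analysed throughout can be derived from the exon coordinates of a gene model's longest transcript roughly as follows; the paper itself used COGNATE for this step, and the simple list-of-exons input format and the stop-codon handling below are illustrative assumptions.

```python
from statistics import median

def structural_properties(exons, cds_length=None):
    """Derive per-transcript gene structural properties from exon coordinates.

    exons: list of (start, end) pairs, 1-based inclusive, sorted by start,
           for the longest transcript of one gene model.
    cds_length: optional total CDS length in bp, used to estimate protein length.
    """
    exon_lengths = [end - start + 1 for start, end in exons]
    intron_lengths = [exons[i + 1][0] - exons[i][1] - 1 for i in range(len(exons) - 1)]
    props = {
        "transcript_length": exons[-1][1] - exons[0][0] + 1,   # unspliced pre-mRNA span
        "exon_count": len(exons),
        "median_exon_length": median(exon_lengths),
        "median_intron_length": median(intron_lengths) if intron_lengths else None,
    }
    if cds_length is not None:
        props["protein_length"] = cds_length // 3 - 1          # assumes stop codon included
    return props

# Example: a three-exon transcript with a 603 bp CDS
print(structural_properties([(100, 250), (400, 550), (700, 1000)], cds_length=603))
```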
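The statistical comparisons described above were performed in R; a rough Python equivalent is sketched below for orientation. The function names, the min-max interval for the subsampled coefficients, and the synthetic demo data are illustrative assumptions; the original analysis used ks.test, wilcox.test, and a custom jackknife-style script.

```python
from itertools import combinations
import numpy as np
from scipy.stats import ks_2samp, mannwhitneyu, spearmanr

def pairwise_distribution_tests(samples: dict):
    """Pairwise KS and Mann-Whitney-Wilcoxon tests with Bonferroni correction.

    samples: mapping of (sub)set name -> 1D array of one structural property
             (e.g., exon counts in AUTO, OGS, AUTO-SUB, MAN-SUB)."""
    pairs = list(combinations(samples, 2))
    n_tests = 2 * len(pairs)                                   # two tests per pair
    out = []
    for a, b in pairs:
        p_ks = ks_2samp(samples[a], samples[b]).pvalue
        p_w = mannwhitneyu(samples[a], samples[b], alternative="two-sided").pvalue
        out.append((a, b, min(p_ks * n_tests, 1.0), min(p_w * n_tests, 1.0)))
    return out

def subsample_correlation_interval(x, y, n_draw=1000, n_rep=1000, seed=1):
    """Range of Spearman coefficients from repeated subsampling without replacement,
    used to check whether a subset's coefficient falls inside the full-set range."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x), np.asarray(y)
    coefs = [spearmanr(x[idx], y[idx]).correlation
             for idx in (rng.choice(x.size, size=min(n_draw, x.size), replace=False)
                         for _ in range(n_rep))]
    return min(coefs), max(coefs)

# Toy usage with synthetic exon counts and GC contents
rng = np.random.default_rng(0)
exon_count = rng.integers(1, 60, size=20000)
exon_gc = 0.45 - 0.001 * exon_count + rng.normal(0, 0.05, size=20000)
print(pairwise_distribution_tests({"AUTO": exon_count, "MAN-SUB": exon_count[:800]}))
print(subsample_correlation_interval(exon_count, exon_gc, n_rep=200))
```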
7,086.8
2019-10-17T00:00:00.000
[ "Biology", "Computer Science" ]
An Improved Theoretical Approach to Study Electromagnetic Waves through Fiber Bragg Gratings

We show that, using the theory of finite periodic systems, we obtain an improved approach to calculate transmission coefficients and transmission times of electromagnetic waves propagating through fiber Bragg gratings. We discuss similarities, advantages, and differences between this approach, the well-known but less accurate one-coupled-mode approximation, and the pseudo-Floquet Mathieu functions approach.

Introduction

A complete theoretical description of the motion of electromagnetic waves through fiber Bragg gratings (FBGs) remains to be achieved. Despite numerous numerical calculations and simulations (see, e.g., [1,2]), the full analytical solution of the wave equation in the presence of a periodic modulation of the refractive index is still an open problem. In recent years, a large number of fiber Bragg grating devices have been developed for different types of applications, among others as optical communication devices, tunable wavelength filters, and temperature and pressure sensors [3][4][5][6][7][8]. In the theoretical approaches proposed to study these systems, one also has to deal, besides the spatial modulation of the refractive index, with model limitations and the finiteness of the actual Bragg systems. In the small-modulation-amplitude limit, the wave equation becomes a Mathieu equation, and one has to face divergence problems in the numerical evaluation of these functions. For standard FBG wavelengths, written with ultraviolet lasers, 1 cm of fiber contains thousands of refractive-index modulations. An extensive literature has been published on the coupled-mode approximation [9][10][11][12][13], and a simplified one-dimensional model of two counter-running and synchronous modes was introduced by Kogelnik and Shank [14,15]. This single-mode approximation, also called the one-coupled-mode approximation (OCMA), has been a useful and simplified approach [16]. However, this model works well only for frequencies in the neighborhood of the Bragg frequency. For this reason, an alternative method that makes it possible to study the transmission properties of electromagnetic waves in the whole range of frequencies is very much called for.

In this paper, we present a theoretical approach, based on the theory of finite periodic systems [17][18][19][20][21][22][23] (TFPS), that allows a precise calculation of transmission coefficients and transmission phase times through finite fiber Bragg gratings. We will show that, at the very least, the predictions of the well-established models are fully reproduced.

In the next sections we outline the one-coupled-mode approximation (OCMA), the Mathieu functions approximation, and the theory of finite periodic systems (TFPS). We derive the relevant results using the transfer-matrix method and apply them to study the evolution of longitudinal electromagnetic waves across a fiber Bragg grating. We evaluate transmission coefficients and transmission times of specific fiber Bragg gratings and compare the results of the well-established one-coupled-mode approximation with those of the theory of finite periodic systems (TFPS) and the Mathieu functions approximation. To overcome divergences when using Mathieu functions, we also show results obtained with a combined approach of Mathieu functions and the TFPS.
2. Definition of the Fiber Bragg Grating

Suppose we have a Bragg grating fiber of length L whose refractive index is modulated along the fiber axis z as

n(z) = n_0 + n_a cos(2πz/Λ),  (1)

where Λ is the grating period (Λ = λ_B/2n_0), n_0 is the effective refractive index, λ_B is the vacuum Bragg wavelength, and n_a is the modulation amplitude (the number of unit cells in the grating will be denoted by N, to avoid confusion with the refractive index). This is a typical periodic system. The space-time evolution of electromagnetic fields in these systems is governed by the wave equation

d²E(z)/dz² + (ω²/c²) n²(z) E(z) = 0.  (2)

To study the space-time evolution of electromagnetic waves, we shall first recall the one-coupled-mode approximation and obtain the transmission coefficient and the transmission time as functions of the frequency. We will then obtain the same quantities using the Mathieu functions and, at the end, using the theory of finite periodic systems.

2.1. The FBG in the One Coupled Mode Approximation. In the one-coupled-mode approximation and following Erdogan's notation, the system of equations for the right- and left-moving field amplitudes R(z) and S(z), dominant near the Bragg reflection frequency, is written as

dR/dz = iσR(z) + iκS(z),   dS/dz = −iσS(z) − iκR(z),  (3)

where κ is the coupling constant and σ = n_0(ω − ω_B)/c is the detuning parameter. Written in matrix form, the coefficient matrix of (3) is a combination of Pauli matrices, and its solution, for κ and σ independent of z, is the exponential of this matrix. Expanding the exponential, the transfer matrix of a single Bragg grating of length L = z_2 − z_1 becomes [16] a 2 × 2 matrix whose elements are expressed in terms of κ, σ, and

γ = √(κ² − σ²).  (4)

The frequency δ = ω − ω_B is known as the detuning frequency. It is worth noticing that this matrix lacks the information about the air-BG and BG-air interfaces. To describe transmission through a finite BG, bounded by air or some other medium, one needs in principle to multiply by the corresponding interface matrices on the left- and right-hand sides. For a BG bounded by air their effect on the physical quantities is negligible, and for this reason one can keep the more manageable single-matrix form.

Given the simple relation between the transmission amplitude and the transfer-matrix elements, the transmission coefficient and phase time of an FBG in the OCMA become

T = γ² / (γ² cosh²(γL) + σ² sinh²(γL)),   τ = dφ_t/dω,  (5)

where φ_t is the phase of the transmission amplitude. In the particular case of δ = ω − ω_B = 0 we have

T = sech²(κL).  (6)

These expressions, written here with the notation of [24], are characteristic of the one-coupled-mode approach [16].

In Figures 1(a) and 1(b) we plot the transmission coefficient T and the tunneling time τ, as functions of the detuning frequency δ, for different values of the modulation amplitude n_a. For these graphs we considered L = 8.5 mm, ω_B = 1.261 × 10^15 Hz, and n_0 = 1.452. Increasing the amplitude n_a, the optical gap becomes deeper and wider. At the same time, the transmission time in the gap becomes smaller. In the next sections we will present the Mathieu functions approach and the theory of finite periodic systems and, with the purpose of comparing with the results shown in Figure 1, we will calculate the same physical quantities for a system similar to the one considered here.
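As a numerical illustration of the two-mode expressions above, the short script below evaluates the standard OCMA transmission amplitude on a frequency grid and obtains the phase time by differentiating the transmission phase with respect to frequency; the coupling constant value and the conversion of the detuning to wavenumber units are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

c = 2.998e8            # vacuum speed of light (m/s)
n0 = 1.452             # effective refractive index
L = 8.5e-3             # grating length (m)
kappa = 600.0          # coupling constant (1/m); illustrative value

omega_B = 1.261e15                                   # Bragg frequency (rad/s)
omega = np.linspace(omega_B - 3e11, omega_B + 3e11, 4001)
sigma = (omega - omega_B) * n0 / c                   # detuning in wavenumber units

gamma = np.sqrt(kappa**2 - sigma**2 + 0j)            # complex-safe square root
t = gamma / (gamma * np.cosh(gamma * L) + 1j * sigma * np.sinh(gamma * L))

T = np.abs(t) ** 2                                   # transmission coefficient
tau = np.gradient(np.unwrap(np.angle(t)), omega)     # phase (Wigner) time, d(arg t)/d(omega)

i0 = len(omega) // 2                                 # index of zero detuning
print(f"T at delta = 0: {T[i0]:.3e}  (sech^2(kappa L) = {np.cosh(kappa * L) ** -2:.3e})")
```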
2.2. The FBG Described by Mathieu Functions. If we come back to (2) and write the square of the refractive index, for small modulation amplitude, the wave equation becomes the Mathieu differential equation, whose solutions are the well-known even and odd Mathieu functions [25], with arguments fixed by the grating parameters. Defining the appropriate wave vector, it is possible to show that the transfer matrix connecting the Mathieu functions and their derivatives at any two points of an infinite periodic system satisfies the usual composition relation. The actual Bragg gratings are finite, with thousands of unit cells. In this case, as in the standard approach to infinite periodic systems, we assume a kind of Born-von Kármán approximation, but for the transfer matrix; more precise results will be obtained using the theory of finite periodic systems. To evaluate transmission coefficients and phase times of a BG of finite length, we need to transform this matrix into the transfer matrix that connects the wave vectors of propagating functions at two points just outside the BG. Given the wave number of the propagating functions in vacuum, the relation between these matrices is given in [26]. After multiplying, we obtain the transfer matrix of a single Bragg grating, from which the transmission coefficient and the phase time follow.

In Figure 2 we plot the transmission coefficient for the same parameter values used in Figure 1. The qualitative and quantitative agreement is good. We do not present the phase time of (28) because its evaluation, using Mathematica code, diverges for Bragg-grating lengths of the order of 10. To overcome this difficulty one can combine the Mathieu-functions approach with the theory of finite periodic systems. The results of this combined approach will be shown after discussing the results obtained with the TFPS.

2.3. The FBG in the TFPS. To simplify the calculation we shall assume that the refractive-index modulation is a sectionally constant (square-wave) periodic function. In this case the unit-cell transfer matrix can be written explicitly, with half-cell lengths equal to Λ/2 and refractive indices n_1 = n_0 − Δn/2 and n_2 = n_0 + Δn/2, respectively, where n_0 is the effective refractive index and Δn the modulation amplitude. In the theory of finite periodic systems, the transfer matrix of the whole N-cell Bragg grating is obtained from the unit-cell matrix in closed form through Chebyshev polynomials of the second kind [21]. The transmission coefficient and the phase time of the N-cell Bragg grating can then be evaluated from the resulting matrix elements. In Figure 3 we plot these quantities for the same parameters as in Figure 1. There is also a very good qualitative and quantitative agreement with the results in Figure 1.

2.4. The FBG Combining Mathieu Functions and the TFPS. To avoid divergences when standard codes like Mathematica are used to evaluate Mathieu functions and their derivatives, one can divide the Bragg grating by some natural number ≳ 1000, such that the Mathieu functions are evaluated over a shorter sub-grating whose length is commensurate with Λ. These smaller Bragg gratings can then be taken as the unit cell in the TFPS. Choosing the subdivision such that each sub-grating contains a whole number of grating periods (here a multiple of 689), with a frequency of 1.26187 × 10^15 Hz and a refractive index of 1.452, we obtain the transmission coefficients and phase times shown in Figure 4, for the same values of the modulation amplitude as in Figures 1, 2, and 3. The qualitative and quantitative agreement is very good.
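As a concrete illustration of how a TFPS evaluation proceeds, the sketch below builds a two-layer unit cell and raises it to the N-th power through the Chebyshev identity M^N = U_{N-1}(x) M − U_{N-2}(x) I, with x equal to half the trace of the unit-cell matrix. The layer construction, the air-bounded transmission formula, and all parameter values are assumptions made for illustration, not the exact expressions of [21].

```python
# Minimal sketch of the TFPS idea: the N-cell transfer matrix follows from the
# single unit-cell matrix through Chebyshev polynomials of the second kind,
#     M^N = U_{N-1}(x) * M - U_{N-2}(x) * I,   x = Tr(M)/2.
# The two-layer cell (n1 = n0 - dn/2, n2 = n0 + dn/2, half-cell lengths Lambda/2)
# and every numerical value below are illustrative stand-ins.
import numpy as np

def layer_matrix(n, d, k0):
    """Characteristic matrix of one homogeneous layer at normal incidence."""
    phase = k0 * n * d
    return np.array([[np.cos(phase), 1j * np.sin(phase) / n],
                     [1j * n * np.sin(phase), np.cos(phase)]], dtype=complex)

def chebyshev_u(order, x):
    """U_order(x) for complex x via the three-term recurrence (order >= 0)."""
    u_prev, u = 0.0 + 0j, 1.0 + 0j          # U_{-1}, U_0
    for _ in range(order):
        u_prev, u = u, 2.0 * x * u - u_prev
    return u

def tfps_transmission(omega, n0=1.452, dn=1e-4, n_cells=16500, n_air=1.0):
    c = 2.998e8
    k0 = omega / c
    Lambda = np.pi * c / (1.261e15 * n0)     # period tuned to the quoted frequency
    d = Lambda / 2.0                         # half-cell length (~0.26 um here)
    m_cell = layer_matrix(n0 - dn / 2, d, k0) @ layer_matrix(n0 + dn / 2, d, k0)
    x = np.trace(m_cell) / 2.0
    m_n = chebyshev_u(n_cells - 1, x) * m_cell - chebyshev_u(n_cells - 2, x) * np.eye(2)
    # Standard thin-film formula for transmission from air into air
    b, cfield = m_n @ np.array([1.0, n_air], dtype=complex)
    t = 2.0 * n_air / (n_air * b + cfield)
    return np.abs(t) ** 2

# 16500 cells of period ~0.51 um give a grating of roughly 8.5 mm total length
print(tfps_transmission(1.261e15))
```

The same routine also covers the combined scheme sketched above: if the unit-cell matrix is obtained from Mathieu functions over a short sub-grating instead of from the two constant layers, the Chebyshev step that assembles the full grating is unchanged.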
The theory of finite periodic systems and the Mathieu-functions approach were also applied to study the double Bragg grating, in particular to understand the effect of the separation between the gratings on the tunneling-time behavior and to compare with the predictions of other approaches [27].

2.5. Conclusions. We have shown that the transmission of electromagnetic waves through fiber Bragg gratings can be faithfully studied using the theory of finite periodic systems, either alone or combined with the Mathieu functions.

Figure 2: The transmission coefficient as a function of the detuning frequency, evaluated with (27) and the Mathieu functions, for different values of the modulation amplitude and the same parameter values used in Figure 1. The behavior compares quite well with Figure 1; however, the phase time computed with a code like Mathematica diverges for BG lengths above ≃ 10.

Figure 3: The transmission coefficient (a) and the phase time (b) from the TFPS, as functions of the frequency, for different values of the modulation amplitude, with a frequency of 1.261 × 10^15 Hz, n_0 = 1.452, and a length of 8.5 mm.
2,501.8
2017-01-10T00:00:00.000
[ "Physics" ]
Augmented reality mobile information system for obtaining detailed and non-intrusive information in large environments

Introduction: This article is the result of the augmented reality project "AR position U" for the effective, dynamic visualization of information of interest through augmented reality, carried out in 2022 in the city of Popayán, capital of the department of Cauca. Problem: People who are unfamiliar with a site often have trouble locating specific spaces in very large places, which causes loss of time and frustration, to the point of arriving late at the desired location. This leads to the following research question: what would be the impact of implementing an augmented reality application whose purpose is to provide detailed information to people in a non-intrusive way in large environments? Objective: To propose a mobile information system that allows users to orient themselves and obtain detailed information in a dynamic and non-intrusive way in large environments through augmented reality. Methodology: An experimental methodology was used: a series of tests was defined to validate the information system, in order to obtain a product that follows a guide of good project-management practices and meets the needs of its target population. Results: The result is an augmented-reality-based information system that, thanks to its usability, makes it easy to locate spaces. Conclusion: Based on the experimental methodology, it is concluded that, although augmented reality improves the detailed description of large environments, it is strongly limited by external conditions; nevertheless, the tool meets the proposed objectives. Originality: An augmented reality information system for orientation in large places and for obtaining information in a non-intrusive way, developed under the PMBOK guide and the XP methodology. Limitations: Because of complexity and time constraints, it was not possible to cover all faculties. In addition, the 3D interface is only generated from QR tags, and the correct functioning of the system can be affected by lighting conditions, the distance between the user and the tag, the hardware and software capacity of the smartphone, and the Internet connection.

INTRODUCTION

Augmented reality [1] was once conceived as science fiction, but today it rests on solid scientific foundations. Technological advances and the miniaturization of devices have increased the processing capacity of computer systems while lowering costs, which has made access to this technology possible.
It is important to highlight the varied uses of augmented reality: it has great potential in education, business, and the video game sector, among others. Great benefits have been seen in using augmented reality both in everyday tasks and in the professional field. One example is interaction with interfaces in the real world, which allows architects or the final client to see pre-designed structures in their intended place, giving a more accurate and effective visualization of the final result. In the same sense, this technology has beneficial potential that, when implemented in large complexes such as airports, educational campuses, and factories, allows real-time visualizations, such as showing people where to go or providing virtual advertising of the place without polluting the real environment with visual noise. Consequently, the need arises for an augmented reality tool that helps people find their way around large places, avoiding interference where necessary. For the development of this idea, an experimental methodology is used to test the effectiveness of the system, which consists of a mobile information system that, through the camera, generates elements superimposed on the real world. These elements form a graphical interface that shows information about an environment in a detailed and dynamic way and can interact with the user. As a means of validation, this system was implemented in a controlled scenario at the University Institution to which it belongs, allowing students to organize their schedules and optimize their time, improving coexistence within the facilities.

RELATED WORKS

The search for similar works in which augmented reality was implemented helped clarify the different environments in which it can be applied and the benefits it brings; as references, we have the following:

Medellín, Colombia [2] seeks to be at the forefront of technology and sustainability in order to offer a unique experience to national and foreign tourists. This is done through the Smart Tourism Center, the first in the country, which, through the Medellín Travel mobile application, provides key information about different tourist places in the city and also offers an augmented reality option that displays typical elements of the Paisa culture and old photographs, so visitors can learn a little about its history.

Augmented reality as a didactic resource for the teaching and learning of historical heritage: the building of the Mosaic of the Loves of the archaeological complex of Cástulo (Linares, Jaén). The archaeological discoveries that have been made in different parts of the world and are on display do not provide enough information to convey what is exhibited, even more so when the observer has no knowledge of archaeology. By implementing augmented reality, the visualization of information is made easier for the visitor, and a 3D image of how the site could have looked in the past is provided, generating unique experiences and helping the visitor understand the history behind it [3].

Augmented reality mobile application for locating classrooms on the Porvenir Campus of the University of the Amazon.
Being able to locate classrooms within an educational complex during the first days of class at the beginning of the semester is complicated for a new student, which has the negative effects of arriving late to classrooms, entering classrooms by mistake, or simply interrupting work in progress. At the University of the Amazon, a mobile application (SARA) that uses augmented reality offers an alternative for discovering the different environments of the institution's Porvenir Campus, thus guiding the student more effectively and in a more timely manner [4].

Campus Guide using Augmented Reality Techniques. The purpose of the application is to allow students and teachers to obtain information on: • Classrooms: whether they are available and whether they have an Internet connection. • Administrative offices: office hours and contact information to facilitate communication. In addition, it is practical and interactive for the user, since the information is obtained through augmented reality using the camera of the mobile device, which is of great help for all staff, and even more so for newcomers to the university, locating the user within the educational complex and thus improving arrival times to classrooms [5].

Augmented reality using Vuforia for residential marketing. Through the application, the way in which the sale of real estate is understood and promoted changes, since it is no longer necessary to travel to the physical location to observe a property in detail. This was done by applying augmented reality to generate the property in 3D, allowing the potential buyer to visualize the place interactively and making it easier to obtain more information, so that an appointment with the owner is no longer necessary [6].

An Augmented Reality App for Smart Campus Development - MSKU Campus Prototype. Smart cities allow locals and tourists to improve their interactions and thus guarantee meaningful experiences when visiting a place; the same approach applies to the smart campus. For example, "The University of Exeter [developed] a dynamic AR landscape of flora and fauna. Using Augmented Reality, the campus was transformed into accessible learning materials and resources to support the formal and informal curriculum" [7]. This kind of implementation benefits all staff of an educational institution, and the benefit is obtained through a mobile device that makes interaction between students and the institution more efficient by providing information more readily.

Service architecture for the location of people and objects in closed spaces. This was a thesis submitted for a Master's degree in Computer Science by the authors of [8]. Because of the difficulty of finding one's way when arriving at a shopping center for the first time, where it is hard to orient oneself or find a point of reference without any guide, the authors propose a service architecture that makes it easy, and less complex, to adapt the different modular components of an indoor-localization application, allowing configuration without modifying the services it provides. This is taken as a reference given that its purpose is similar, differing in the technologies used for its construction.

ICT applied to cultural heritage: use, dissemination and accessibility.
The Roman amphitheater of Cartagena. In this project, the authors [9] propose the design and use of a free augmented reality and virtual reality tool to help archaeologists fully digitize the Roman amphitheater of Cartagena and to provide accessibility for the elderly and for people with motor disabilities that prevent them from visiting it. This work is important because it shows the advantages of using this type of emerging technology to explore a place.

Mobile augmented reality prototype for a mass transportation system in the city of Barranquilla. The authors [10], through augmented reality, seek to let public transport users visualize nearby routes and parking lots, with the aim of facilitating movement by providing effective and truthful information, using the Android platform and the OpenGL library. This article contributed the study of platforms, which was important for the implementation of the app proposed as AR position U.

Means of tourist interpretation through the use of augmented reality in the Cuicocha lagoon. The author [11] argues that the use of augmented reality technology has become an important factor in different fields of action. To carry out this work, research methods, techniques, and instruments were specified, as well as the tourist context in which augmented reality was implemented, in order to provide information about the Cuicocha lagoon in an interactive and dynamic way. This project shows the importance of using augmented reality as a means of tourist support for those who visit this place.

Augmented reality and geographic content in educational itineraries: a didactic proposal for its use in the training of primary-education teachers. In this article [12], the authors present a proposal to convey geographical content in a meaningful way through a didactic methodology based on constructivism, in order to create a meaningful teaching-learning process with augmented reality as a complementary instrument.

Traveling to other worlds: virtual reality and augmented reality in cultural spaces. Here, the author [13] discusses the realities of using augmented and virtual reality with respect to cultural dissemination and its commercial exploitation. This article was very important because it supports the justification of the proposed tool.

Use of augmented reality in the teaching and learning of Natural Sciences. In this article [14], the authors propose augmented reality as an instrument for the teaching and learning of Natural Sciences, recreating animations in a dynamic way that helps reinforce the learning of content interactively. This work was referenced for its methodological process model.

It is important to mention the different theoretical bases that support this work and, in turn, to highlight the use of augmented reality in different scenarios:

Augmented reality. There are two technologies that can be confused; therefore, it is important to highlight the differences between virtual reality and augmented reality. Despite their similarities, virtual reality detaches the user from reality, whereas augmented reality reinforces reality with digital projections through devices such as smartphones or tablets [15].

History. In 1901, the writer Frank Baum imagined glasses that allowed information about people to be visualized.
In 1957, Morton Heilig, a cinematographer, implemented augmented reality through his Sensorama, improving the user experience. Later, in the 1990s, the Boeing researcher Thomas Caudell coined the term augmented reality [16].

Definition. Augmented reality is a technology that generates new experiences by combining the digital world and the physical world through an easily accessible device such as a mobile phone [1].

How does it work? Augmented reality is based on superimposing virtual objects, in 2D or 3D, on the real world. Its operation requires the following elements: • Camera: responsible for capturing images of the real world. • Device: allows visualizing the result of the combination of the virtual and the real. • Application: in charge of carrying out the combination. • Marker: a physical image that acts as the trigger for augmented reality, and to which the virtual elements are anchored. It is identified by the device's camera, and the software processes the image to carry out the required actions.

Augmented reality applications. Augmented reality, when applied, facilitates interaction with and understanding of the real world. It is applicable in different areas in which human beings interact:

Games. Games are a clear example of the evolution of this technology. Augmented reality transforms the game environment and allows greater realism, thus generating new experiences. The Pokémon video game was launched in 1996 for the Nintendo Game Boy console and, after 20 years of evolution across different consoles, it reached the screens of mobile devices with "Pokémon Go", which implements augmented reality: following the dynamics of the game, the player uses the camera and GPS of the mobile device to search for and catch the creatures called Pokémon in the player's environment [17].

Teaching [18]. The strategies that teachers use strongly influence how quickly a given subject is learned, even more so when technological tools support this activity by allowing objects to be visualized in 3D, which facilitates better perception and comprehension. One example is architecture students, who can generate elements of a building and in this way speed up the generation of ideas and architectural proposals. Another example is the Google SkyMap application, which superimposes information about stars as the user observes the sky through the mobile device's camera.

Marketing and sales [19]. Marketing is a fundamental part of many companies, because the demand for a product depends on the marketing strategies used to capture the consumer's attention, even more so when they involve innovative sales methods or seek the client's comfort. By implementing augmented reality in sales, customers can try a product without having it physically. Benefits arise when clothing or accessories need to be tried on, or when checking whether furniture fits in a house.

Travel and tour guides. Tourism is an area that has evolved in the way it is promoted in order to attract tourists. In many places, technological means have been implemented that allow tourists to relive the history of their surroundings through a mobile device.
"Latest generation holograms, augmented reality and immersive videos are part of the experiences offered by the center, located in Parques del Río" [20]. It is the first smart park located in the city of Medellín, which focuses on being friendly to the environment using renewable energy and through the Medellín Travel mobile application, the tourist will be able to access images, videos and audio logs regarding the culture of the region. Maintenance processes Complex machinery, such as vehicles or aircraft, require strict maintenance processes. Augmented reality allows for the optimization of the repair or maintenance work times by overlaying information about the parts that are being manipulated. This technology may be used by large companies such as Boeing in the aircraft wiring process, or simply by non-experts looking to carry out simple repairs of equipment or installations at home. Search processes At present, navigation and search processes have been automated from a device, otherwise it is done through a physical map or by asking people. With technological advances, applications have emerged that optimize this process to find places more easily, such as: parking lots, parks, drug stores, hotels, restaurants, etc. In this way, by implementing augmented reality, the user will be able to view, through the camera and the screen of the device, information on places of interest, or with improved coverage, the streets of a city or the corridors of a building. Medicine Medicine is a field with a variety of high-risk procedures, so professionals in this area require a lot of context information. Having the 3-dimensional layout of organs or bones involved in a surgical procedure is very helpful by providing information digitally and easily accessible to the interested party. It should be clarified that in this situation markers will no longer be used for their actions, but wave-based systems to obtain said information. Augmented reality in the field of medicine is not specifically defined by the complexity, costs, technological equipment and other implications that it entails; However, there are prototypes. An example is at the Virgen de Rocío Hospital in Seville, which has software with the ability to give a three-dimensional view of a heart and thus provide greater precision, thus facilitating timely diagnoses and avoiding invasive procedures in the patient [1]. Visual search engines By a different path, but related to augmented reality, are the visual search engines that allow the user, through the camera of his mobile device, to take a picture of the object, plant, animal or text on which they want to obtain more information. Therefore, the environment would become a kind of catalogue, with infinite possibilities. An example is the Google Lens application which has multiple actions such as searching from an image loaded from the device. Activating the camera, it is possible to translate text, identify and copy text, etc., thus speeding up writing text or voice dictation to the search engine [21]. METODOLOGY For the development of this project, it was divided into phases, which in turn were subdivided into tasks allowing control and order in its elaboration. For this reason, for the elaboration of the ARPositionU proposal that is described in this article, The Guide To The Fundamentals Of Project Management Or Guide PMBOK [22] was referenced in order to establish the phases in which the project was divided. 
In this way, and based on the task-breakdown diagram in Figure 1, it was decided that the project would be carried out under an agile development framework, in three phases: • Phase 1. Initiation and planning. • Phase 2. Execution. • Phase 3. Training and closure. Likewise, within the management, a contract and a project charter were drawn up to formally open the proposal and begin each of the planned activities.

Phase 2. Execution: The execution was carried out under an agile development framework; the different existing methodologies for this type of development were considered, and it was decided to implement the XP (Extreme Programming) methodology [23]. Composed of 5 phases, it stands out for delivering a functional product in a very short time; iterations are also carried out continuously to obtain a better-quality product, and it works well for small work teams thanks to its simplicity. This methodology was also selected on the basis of a systematic review of the literature [10], which provided the arguments supporting its choice.

Planning: In this phase of the methodology, the guidelines the application must meet and the plans to be implemented during development were defined. It began with the construction of user stories [24], in which the functionalities the system must have were specified. The iteration plan was also established; in this case there were two iterations. In the first iteration, 8 user stories of moderate difficulty were assigned, and in the second iteration, the remaining 3 user stories (Figure 2). Additionally, a sequence diagram was generated, in which the functionality of the application is shown in an organized way. Eight test cases were defined (see Table 3), of which 8 out of 8 passed; in this way, it was possible to verify the correct operation of the application.

3.3. Phase 3. Training and closure: In the last phase, work was done with a focus group, to whom the main functions of the application were explained according to their roles; they also had the support of the user manual. In this way, it was possible to verify and analyze the user parameters. Finally, the documentation and source code were delivered to close the project.

ANALYSIS AND RESULTS

The study of related works was crucial in consolidating this proposal, as it allowed us to explore the most suitable platform for developing the tool and to determine the methodological approach to its construction. Through the adoption of agile methodologies, we were able to systematically structure each stage of the Extreme Programming (XP) development process. In this regard, the theoretical foundation identified helped us justify the use of augmented reality as an interactive means of conveying information. As a result, this theoretical basis enabled us to consolidate our proposal and validate the use of augmented reality. The platform used (Unity) facilitated the accelerated development of the application thanks to its graphic tools and its integration with technologies such as Vuforia for the implementation of augmented reality, but with the limitations of requiring Android 8.0 or higher and a permanent Internet connection. In the tests carried out, the download and registration processes took longer than expected, owing to limited Internet speed and additional verification steps outside the application.
After this, a user who logged into the application was able to organize some of the subjects in their semester schedule. Finally, these subjects can be represented as augmented reality elements at the user's location, making it easier to find one's way within the facilities. To do so, the user approaches the tag located near the room in order to visualize and obtain detailed information about the event in real time. These users can then spread the benefits of the tool, with which arrival times at a destination are reduced without interfering with the activities in progress in the environment.

CONCLUSIONS

It was possible to verify the correct functioning of the system, which satisfies users' needs when orienting themselves in a place, avoids intrusion, and improves times, resulting in a useful tool accepted by the student and teaching community. By superimposing virtual objects on the real world, augmented reality makes it easy for users to obtain information about a university environment, available through an application on the smart mobile devices that most people have today. In the same way, it is applicable in different fields, such as tourism, education, medicine, and factories, thus promoting the creation of new projects of this type by demonstrating the usefulness it can provide.

ACKNOWLEDGEMENT

I would like to express my gratitude to the Universidad del Cauca, particularly to its GTI research group and the Beta Bit team from the Faculty of Engineering at Colegio Mayor of Cauca University, for their invaluable support in the development of this project.
5,693.8
2023-01-22T00:00:00.000
[ "Computer Science", "Engineering" ]
A Vision-Based Machine Learning Method for Barrier Access Control Using Vehicle License Plate Authentication Automatic vehicle license plate recognition is an essential part of intelligent vehicle access control and monitoring systems. With the increasing number of vehicles, it is important that an effective real-time system for automated license plate recognition is developed. Computer vision techniques are typically used for this task. However, it remains a challenging problem, as both high accuracy and low processing time are required in such a system. Here, we propose a method for license plate recognition that seeks to find a balance between these two requirements. The proposed method consists of two stages: detection and recognition. In the detection stage, the image is processed so that a region of interest is identified. In the recognition stage, features are extracted from the region of interest using the histogram of oriented gradients method. These features are then used to train an artificial neural network to identify characters in the license plate. Experimental results show that the proposed method achieves a high level of accuracy as well as low processing time when compared to existing methods, indicating that it is suitable for real-time applications. Introduction Automatic vehicle license plate recognition (AVLPR) is used in a wide range of applications including automatic vehicle access control, traffic monitoring, and automatic toll and parking payment systems. Implementation of AVLPR systems is challenging due to the complexity of the natural images from which the license plates need to be extracted, and the real-time nature of the application. An AVLPR system depends on the quality of its physical components which acquire images and the algorithms that process the acquired images. In this paper, we focus on the algorithmic aspects of an AVLPR system, which includes the localization of a vehicle license plate, character extraction and character recognition. For the process of license plate localization, researchers have proposed various methods including, connected component analysis (CCA) [1], morphological analysis with edge statistics [2], edge point analysis [3], color processing [4], and deep learning [5]. The rate of accuracy for these localization methods varies from 80.00% to 99.80% [3,6,7]. The methods most commonly used for the recognition stage are optical character recognition (OCR) [8,9], template matching [10,11], feature extraction and classification [12,13], and deep learning based methods [5,14]. In recent years, several countries including the United Kingdom, United States of America, Australia, China, and Canada have successfully used real-time AVLPR in intelligent transport systems [3,15,16]. However, this is not yet widely used in Malaysia. The road transportation department of Malaysia has authorized the use of three types of license plates. The first contains white alphanumeric characters embossed or pasted on a black background. The second type is allocated to vehicles belonging to diplomatic personnel and also contains white alphanumeric characters, but the background on these plates is red. The third type is assigned to taxi cabs and hired vehicles, consisting of black alphanumeric characters on a white plate. There are also some rules the characters must satisfy, for example, there are no leading zeros in a license plate sequence. 
Additionally, the letters "I" and "O" are excluded from the sequences due to their similarities with the numbers "1" and "0" and only military vehicles have the letter "Z" included in their license plates. The objective of this study was to implement a fast and accurate method for automatic recognition of Malaysian license plates. Additionally, this method can be easily applied to similar datasets. The paper is organized as follows. A discussion of the existing literature in the field is given in Section 2. In Section 3, we introduce the proposed method: localizing the license plate based on a deep learning method for object detection, image feature extraction through histogram of oriented gradients (HOG), and character recognition using an artificial neural network (ANN) [17]. In Section 4, we establish the suitability of the method for real-time license plate detection through experiments, including comparisons with similar methods. The paper concludes in Section 5 with a discussion of the findings. Related Work A typical AVLPR system consists of three stages: detection or localization of the region of interest (i.e., the license plate from the image), character extraction from the license plate, and recognition of those characters [18][19][20]. Detection or Localization The precision of an AVLPR system is typically influenced by the license plate detection stage. As such, many researchers have focused on license detection as a priority. For instance, Suryanarayana et al. [21] and Mahini et al. [22] used the Sobel gradient operator, CCA, and morphological operations to extract the license plate region. They reported 95.00% and 96.5% correct localization, respectively. To monitor highway ticketing systems, a hybrid edge-detection-based method for segmenting the vehicle license plate region was introduced by Hongliang and Changping [2], achieving 99.60% detection accuracy. According to Zheng et al. [23], if the vertical edges of the vehicle image are extracted while the edges representing the background and noise are removed, the vehicle license plate can be easily segmented from the resultant image. In their findings, the overall segmentation accuracy reported was approximately 97%. Luo et al. [24] proposed a license plate detection system for Chinese vehicles, where a single-shot multi-box detector was used for the detection method [25] and achieved 96.5% detection accuracy on their database. A wavelet transform based method was applied by Hsieh et al. [26] to detect the license plate region from a complex background. They successfully localized the vehicle license plate in three steps. Firstly, they used Haar scaling [27] as a function for wavelet transformation. Secondly, they roughly localized the vehicle license plate by finding the reference line with the maximum horizontal variation in the transformed image. Finally, they localized the license plate region below the reference line by calculating the total pixel values (the region with the maximum pixel value was considered as the plate region) followed by geometric verification using metrics such as the ratio of length and width of the region. They achieved 92.40% detection accuracy on average. A feature-based hybrid method for vehicle license plate detection was introduced by Niu et al. [28]. Initially, they used color processing (blue-white pairs) for possible localization of the license plate. They then used morphological processing such as open and close operations, followed by CCA. 
They used geometrical features (e.g., size) to remove unnecessary small regions and finally used HOG features in a support vector machine (SVM) to detect the vehicle license plate and achieved 98.06% detection accuracy with their database. License Plate Character Segmentation Correct segmentation of license plate characters is important, as the majority of incorrect recognition is due to incorrect segmentation, as opposed to issues in the recognition process [29]. Several methods have been introduced for character segmentation. For instance, Arafat et al. [9] proposed a license plate character segmentation method based on CCA. The detected license plate region was converted into a binary image and eight connected components were used for character region labeling. They achieved 95.40% character segmentation accuracy. A similar method was also introduced by Tabrizi et al. [13], where they achieved 95.24% character segmentation accuracy. Chai and Zuo [30] used a similar process for segmenting vehicle license plate characters. To remove unnecessary small character regions from the detected license plate, a vertical and horizontal projection method, alongside morphological operations and CCA, was used. They achieved 97.00% character segmentation accuracy. Dhar et al. [31] proposed a vehicle license plate recognition system for Bangladeshi vehicles using edge detection and deep learning. Their method for character segmentation involved a combination of edge detection, morphological operations, and analysis of segmented region properties (e.g., ratio of height and width). Although they did not mention any segmentation results, they achieved 99.6% accuracy for license plate recognition. De Gaetano Ariel et al. [32] introduced an algorithm for Argentinian license plate character segmentation. They used horizontal and vertical edge projection to extract the characters, with a 96.49% accuracy level. Recognition or Classification Some researchers recognized license plates using adaptive boosting in conjunction with Haar-like features and training cascade classifiers on those features [33][34][35]. Several researchers have used template matching to recognize the license plate text [10,11]. Feature extraction based recognition has also proven to be accurate in vehicle license plate recognition [12,13,28]. Samma et al. [12] introduced fuzzy support vector machines (FSVM) with particle swarm optimization for Malaysian vehicle license plate recognition. They extracted image features using Haar-like wavelet functions and using a FSVM for classification and they achieved 98.36% recognition accuracy. A hybrid k-nearest neighbors and support vector machine (KNN-SVM) based vehicle license plate recognition system was proposed by Tabrizi et al. [13]. They used operations such as filling, filtering, dilation, and edge detection (using the Prewitt operator [36]) for license plate localization after color to grayscale conversion. For feature extraction, they used a structural and zoning feature extraction method. Initially, a KNN was trained with all possible classes including similar and dissimilar characters (whereas the SVM was trained only on similar character samples). Once the KNN ascertained which "similar character" class the target character belonged to, the SVM performed the next stage of classification to determine the actual class. They achieved 97.03% recognition accuracy. Thakur et al. 
[37] introduced an approach that used a genetic algorithm (GA) for feature extraction and a neural network (NN) for classification in order to identify characters in vehicle license plates. They achieved 97.00% classification accuracy. Jin et al. [3] introduced a solution for license plate recognition in China. They used hand-crafted features on a fuzzy classifier to obtain 92.00% recognition accuracy. Another group of researchers proposed a radial wavelet neural network for vehicle license plate recognition [38]. They achieved 99.54% recognition accuracy. Brillantes et al. [39] utilized fuzzy logic for Filipino vehicle license plate recognition. Their method was effective in identifying license plates from different issues which contained characters of different fonts and styles. They segmented the characters using CCA along with fuzzy clustering. They then used a template matching algorithm to recognize the segmented characters. The recognition accuracy of their methods was 95.00%. Another fuzzy based license plate region segmentation method was introduced by Mukherjee et al. [40]. They used fuzzy logic to identify edges in the license plate in conjunction with other edge detection algorithms such as Canny and Sobel [41,42]. A template matching algorithm was then used to recognize the license plate text from the segmented region and achieved a recognition accuracy of 79.30%. A hybrid segmentation method combining fuzzy logic and k-means clustering was proposed by Olmí et al. [43] for vehicle license plate region extraction. They developed SVM and ANN models to perform the classification task and achieved an accuracy level of 95.30%. Recent Methods of AVLPR Recently, deep learning based image classification approaches have received more attention from researchers as they can learn image features on their own, in addition to performing classification [44]. Therefore, no feature extraction is required for deep learning approaches. However, despite the advantages of using deep learning in image classification, it requires a large training image database and very high computational power. Li et al. [45] investigated a method of identifying similar characters on license plates based on convolution neural networks (CNN). They used CNNs as feature extractors and also as classifiers. They achieved 97.20% classification accuracy. Another deep learning method based on the AlexNet [46] was introduced by Lee et al. [47] for AVLPR, where they re-trained the AlexNet to perform their task on their database and achieved 95.24% correct recognition. Rizvi et al. [5] also proposed a deep learning based approach for Italian vehicle license plate recognition on a mobile platform. They utilized two deep learning models, one to detect and localize the license plate and the characters present, and another as a character classifier. They achieved 98.00% recognition accuracy with their database. Another deep learning method called "you only look once" (YOLO) was developed for real-time object detection, which is now being used in AVLPR [48]. For example, Kessentini et al. [49] proposed a two-stage deep learning approach that first used YOLO version 2 (YOLO v2) for license plate detection [50]. Then, they used a convolutional recurrent neural network (CRNN) based segmentation-free approach for license plate character recognition. They achieved 95.31% and 99.49% character recognition accuracy in the two stages, respectively. 
Another YOLO based method was developed by Hendry and Chen [51] for vehicle license plate recognition in Taiwan. Here, for each character, detection and recognition was carried out using a YOLO model, totaling 36 YOLO models used for 36 classes. They achieved 98.22% and 78.00% accuracy for vehicle license plate detection and recognition, respectively. Similarly, Yonetsu et al. [52] also introduced a two-stage YOLO v2 model for Japanese license plate detection. To increase accuracy, they initially detected the vehicle, followed by the detection of the license plate. In clear weather conditions, they achieved 99.00% and 87.00% accuracy for vehicle and license plate detection, respectively. A YOLO based three-stage Bangladeshi vehicle license plate detection and recognition method was implemented by Abdullah et al. [53]. Firstly, they used YOLO version 3 (YOLOv3) as their detection model [54]. In the second stage, they segmented the license plate region and character patches. Finally, they used a ResNet-20 deep learning model for the character recognition [55]. They achieved 95.00% and 92.70% accuracy for license plate detection character recognition, respectively. Laroca et al. [56] used YOLO for license plate detection and then another method proposed by Silva and Jung [57] for character segmentation and recognition. They tested their performance on their own database (UFPR-ALPR), which is now publicly available for research purposes. They achieved 98.33% and 93.53% accuracy for vehicle license plate detection and recognition, respectively. Methodology In the proposed system, a digital camera was placed at a fixed distance and height to be able to capture images of vehicle license plates. When a vehicle is at a predefined distance from the camera, it captures an image of the front of the vehicle, including the license plate. This image then went through several pre-processing steps to eliminate the unwanted background and localize the license plate region. Once this region was extracted from the original image, a character segmentation algorithm was used to segment the characters from the background of the license plate. The segmented characters were then identified using an ANN classifier trained on HOG features. Figure 1 illustrates the steps involved for the proposed AVLPR system. Image Acquisition A digital camera was used as an image acquisition device. This camera was placed at a height of 0.5 m from the ground. An ideal distance to capture images of an arriving vehicle was pre-defined. To detect whether a vehicle was within this pre-defined distance threshold, we subtracted the image of the background (with no vehicles) from each frame of the obtained video. If more than 70% of the background was obscured, it was considered that a vehicle was within this threshold. To avoid unnecessary background information, the camera lens was set to 5× zoom. The speed of the vehicles when the images were captured was around 20 km/h. Camera specifications and image acquisition properties are shown in Table 1. Detection of the License Plate Once the image was acquired, it was then processed to detect the license plate. First, the contrast of the RGB (red, green, and blue) images was improved using histogram equalization. As the location of the license plate in the acquired images was relatively consistent, we extracted a pre-defined rectangular region from the image to be used in the next stages of processing. Figure 2 shows the specifications of the region of interest (ROI). 
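Before the ROI is processed further, the vehicle-presence trigger described above can be sketched in a few lines: a frame is compared with a stored empty-scene image, and a capture fires once more than 70% of the background pixels are obscured. The OpenCV calls, the grey-level difference threshold of 30, and the file names are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of the background-subtraction trigger described in the text.
import cv2
import numpy as np

def vehicle_present(frame_bgr, background_bgr, obscured_fraction=0.70, diff_thresh=30):
    """Return True if enough of the empty-scene background is obscured by a vehicle."""
    frame = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    background = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(frame, background)          # per-pixel change vs. empty scene
    changed = diff > diff_thresh                   # pixels considered "obscured"
    return np.count_nonzero(changed) / changed.size >= obscured_fraction

# Example use on a video stream: save the frame once the trigger fires.
cap = cv2.VideoCapture("gate_camera.mp4")          # hypothetical input file
ok, background = cap.read()                        # assume the first frame shows the empty scene
while ok:
    ok, frame = cap.read()
    if ok and vehicle_present(frame, background):
        cv2.imwrite("captured_vehicle.png", frame)
        break
cap.release()
```

In a deployment the stored background would be refreshed periodically to follow lighting changes; the 70% threshold is the value quoted in the text, while the per-pixel threshold is a tunable assumption.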
Therefore, we reduced the size of the image to be processed from 4608 × 3456 pixels in the original image to 2995 × 1891 pixels. The region to be extracted was defined using the x and y coordinates of the upper left corner (XOffset and YOffset) and the width and height of the rectangle. To detect the license plate, the ROI extracted in the previous stage was first resized to 128 × 128 pixels. Then, based on previous studies, a deep learning based approach (YOLO v2, as discussed in [50]) was used to extract the license plate region. The network accepts 128 × 128 pixel RGB images as input and processes them in an end-to-end manner to produce a corresponding bounding box for the license plate region. This network has 25 layers: one Image Input layer, seven Convolution layers, six Batch Normalization layers, six Rectified Linear Unit (ReLU) layers, three Max Pooling layers, one YOLO v2 Transform layer, and one YOLO v2 Output layer [58]. Figure 3 shows the YOLO v2 network architecture. We used this network to detect the license plate region only, not for license plate character recognition. The motivation behind this methodology is to reduce total processing time with low computational power without compromising accuracy. For training, we used the stochastic gradient descent with momentum (SGDM) optimizer with a momentum of 0.9, an Initial Learn Rate of 0.001, and Max Epochs of 30 [59]. We chose these values as they provided the best performance with low computational power in our experiments. To improve the network accuracy, we used training image augmentation by randomly flipping the images during the training phase. By using this image augmentation, we increased the variation in the training images without actually having to increase the number of labeled images. Note that, to ensure unbiased evaluation, we did not apply any augmentation to the test images and preserved them unmodified. An example detection result is shown in Figure 4. Alphanumeric Character Segmentation In the next stage, the alphanumeric characters that made up the license plate were extracted from the region resulting from the previous step. To this end, we first applied a gray level histogram equalization algorithm to increase the contrast level of the license plate region. Next, we converted the resulting image into a binary image using a global threshold of 0.5 (in the range [0, 1]). Then, the image was smoothed using a 3 × 3 median filter and salt-and-pepper noise was removed from it. Any object that remained in the binary image after these operations was considered to represent a character. We segmented these characters using CCA. Each segmented character was resized to 56 × 56 pixels. The character segmentation process is illustrated in Figure 5 and Algorithm 1. Feature Extraction To be able to identify the characters of a license plate, we needed to first extract features from them that define their characteristics. For this purpose, we used HOG features [17], as they have successfully been used in many applications. To calculate the HOG features, the image was subdivided into smaller neighborhood regions (or "cells") [60]. Then, for each cell, at each pixel, the kernels [−1, 0, +1] and [−1, 0, +1]^T were applied to get the horizontal (G_x) and vertical (G_y) edge values, respectively. The magnitude and orientation of the gradient were calculated as M(x, y) = √(G_x² + G_y²) and θ(x, y) = tan⁻¹(G_y/G_x), respectively. Histograms of the unsigned angle (0° to 180°), weighted by the magnitude, were then generated for each cell.
Cells were combined into blocks and block normalization was performed on the concatenated histograms to account for variations in illumination. The length of the resulting feature vector depends on factors such as image size, cell size, and bin width of the histograms. For the proposed method, we used a cell size of 4 × 4 and each histogram was comprised of 9 bins, in order to achieve a balance between accuracy and efficiency. The resulting feature vector was of size 1 × 6084. Figure 6 gives a visualization of HOG features for different cell sizes. Artificial Neural Network (ANN) Architecture Once the features were extracted from the segmented characters, we trained an artificial neural network to identify them. As each character was encoded using a 1 × 6084 feature vector, the ANN had 6084 input neurons. The hidden layer comprised 40 neurons and the output layer had 36, equal to the number of different alphanumeric characters under consideration. Figure 7 shows the proposed recognition process along with the architecture of the ANN. Experimental Results Initially, we created a synthetic character image database to train and test the classification method. In addition, we created a database of real images, using the image acquisition method discussed above, to test the proposed method. We justified the selection of HOG as the feature extraction method and ANN as the classifier by comparing its performance with other feature extraction and classification methods. We also compared the performance of our method to similar existing methods. To conduct these experiments, an Intel ® Core ™ i5 computer with 8 Gigabytes of RAM, running Windows Professional 64-bit operating system was used. MATLAB ® framework (MathWorks, Natick, MA, USA) was used for the implementation of the proposed method as well as the experiments. Generation of a Synthetic Data for Training Using synthetic images is convenient as it enables the creation of a variety of training samples without having to manually collect them. The database was created by generating characters using random fonts and sizes (12-72 in steps of 2). In addition, one of four styles (normal, bold, italic, or bold with italic) was also used in the font generation process. To align with the type of characters available in license plates, we included the 10 numerals (0-9) as well as the characters of the English alphabet (A-Z). We also rotated the characters in the database by a randomly chosen angle between ±0°and ±15°. Each class contained 200 samples consisting of an equal number of rotated and unrotated images. In total, 36 × 200 = 7200 images were used to train the ANN. Some characters from the synthetic training database are shown in Figure 8. Performance on Synthetic Data The ANN, discussed in Section 3, was trained on 70% of the synthetic database (5040 randomly selected samples). The remaining images were split evenly for validation and testing (1080 samples each). To determine the ideal number of hidden neurons, ANNs with different numbers of neurons (10, 20, and 40) were trained on these data five times each (see Table 2). The network with the best performance (with 40 hidden neurons) was selected as our trained network. We further increased the number of neurons to 60 and recorded comparably high processing time without reducing the error (%). The overall accuracy of the classification was 99.90% and the only misclassifications were between the classes 0 and O. Table 2. Training performance with respect to hidden neuron size. 
The best performance is highlighted in bold. Performance on Real Data To test the performance on real-data, we acquired images at several locations in Kuala Lumpur, Malaysia using the process discussed in Section 3.1. In total, 100 vehicle license plates were used in this experiment, where each plate contained 5-8 alphanumeric characters. Then, 671 characters were extracted from the license plate images using the process discussed above. Classes I, O, and Z were not used here as they are not present in the license plates, as discussed above. The number of characters in each class is shown in Table 3. An accuracy level of 99.70% was achieved. The misclassifications in this experiment occurred among the classes S, 5, and 9, likely due to similarities in the characters. Real-time classification results for a sample license plate image are shown in Figure 9. Comparison of Different Feature Extraction and Classification Methods We also compared the proposed method with combinations of feature extraction and classification methods with respect to accuracy and processing time. Bag of words (BoF) [61], scale-invariant feature transform (SIFT) [62], and HOG were the feature extraction methods used. The classifiers used were: stacked auto-encoders (SAE), k-nearest neighbors (KNN), support vector machines (SVM), and ANN [63]. Processing time was calculated as the average time taken for the procedure to complete, as discussed in Section 3 (license plate extraction, characters extraction, feature extraction, and classification). The same 100 images used in the previous experiment were used here. The results are shown in Table 4. Table 5 compares the performance of the proposed algorithm with other similar AVLPR methods in the literature. We reimplemented these methods on our system and trained and tested them on the same datasets to ensure an unbiased comparison. The training and testing was performed on our synthetic and real image databases, respectively. Note that the processing time only reports the time taken for the feature extraction and classification stages of the process. Accuracy denotes the classification accuracy. Since the training dataset was balanced, we did not consider bias performance metrics such as sensitivity and precision. As can be seen in Table 5, the proposed method outperformed the other compared methods. Comparison of Methods with Respect to the Medialab Database We compared the performance of our method to the methods discussed above, on a publicly available database (Medialab LPR (License Plate Recognition) database [64]) to investigate the transferability of results. This database contains still images and video sequences captured at various times of the day, under different weather conditions. We included all still images from this database except for ones that contained more than one vehicle. As our image capture system was specifically designed to capture only one vehicle at a time in order to simplify the subsequent image processing steps, images with multiple vehicles are beyond our scope. The methods were compared with respect to the different stages of a typical AVLPR system (detection, character segmentation, and classification). Table 6 shows the comparison results (detection, character segmentation, and classification accuracy show the percentages of vehicle number plate regions detected, characters accurately extracted, and accurate classifications, respectively). 
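As a concrete summary of the pipeline favoured by the comparisons in Tables 4 and 5, the sketch below strings together HOG features on 56 × 56 character crops (4 × 4-pixel cells, 9 orientation bins) and a single-hidden-layer network with 40 neurons. With 2 × 2-cell blocks and a one-cell stride, this gives 13 × 13 blocks × 4 cells × 9 bins = 6084 features, matching the 1 × 6084 vector quoted earlier. The scikit-image and scikit-learn calls are assumed stand-ins for the authors' MATLAB implementation.

```python
# Minimal sketch of the HOG + ANN character recognition stage.
import numpy as np
from skimage.feature import hog
from sklearn.neural_network import MLPClassifier

def character_features(char_img_56x56):
    """1 x 6084 HOG descriptor for one 56x56 greyscale character crop."""
    return hog(char_img_56x56,
               orientations=9,          # 9 bins over the unsigned 0-180 degree range
               pixels_per_cell=(4, 4),  # 14 x 14 cells on a 56 x 56 image
               cells_per_block=(2, 2),  # 13 x 13 overlapping blocks after striding
               block_norm='L2-Hys',
               feature_vector=True)     # length 13*13*2*2*9 = 6084

def train_character_ann(train_imgs, train_labels):
    """train_imgs: list of 56x56 arrays; train_labels: characters '0'-'9', 'A'-'Z'."""
    X = np.vstack([character_features(img) for img in train_imgs])
    clf = MLPClassifier(hidden_layer_sizes=(40,), max_iter=500, random_state=0)
    clf.fit(X, train_labels)
    return clf

def recognise_plate(clf, segmented_chars):
    """Classify each segmented character crop and join them into a plate string."""
    X = np.vstack([character_features(img) for img in segmented_chars])
    return ''.join(clf.predict(X))
```

In this sketch the classifier is trained once on the synthetic character database and then applied to the crops produced by the segmentation stage, mirroring the train/test split described above.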
As our method of pre-defined rectangular region selection (shown in Figure 2) was specifically designed for our image acquisition system, we did not consider this step in the comparison as different image acquisition systems were used in obtaining images for this database. Instead, the full image was used for detecting the license plate. Table 6. Performance comparison on the Medialab LPR database. The stages that were not addressed in the original papers are denoted by "-". The best performance per metric is highlighted in bold. Detection Segmentation Classification Jin et al. [ Note that some of the methods in Table 6 do not address all three stages of the process. This is due to the fact that some papers only performed the latter parts of the process (for example, using pre-segmented characters for classification) and others did not clearly mention the methods they used. All methods were trained on our synthetic database as discussed above (no retraining was performed on the Medialab LPR database). As can be seen in Table 6, the proposed method outperformed the other methods compared. However, the overall classification performance for all methods was slightly lower than that on our database (Table 5). We hypothesize that it could be due to factors such as differences in resolution and capture conditions (for example, weather, and time of day) of the images in the two databases. Comparison of Methods with Respect to the UFPR-ALPR Database We further compared the performance of our method with other existing methods (discussed above) on another publicly available database (UFPR-ALPR), which includes images that are more challenging [56]. This database contains multiple images of 150 vehicles (including motorcycles and cars) from real-world scenarios. The images were collected from different devices such as mobile cameras (iPhone ® 7 Plus and Huawei ® P9 Lite) and GoPro ® HERO ® 4 Silver. Since we used a digital camera to capture images in our method, we only considered those images in the UFPR-ALPR database that were captured by a similar device (GoPro ® HERO ® 4 Silver). The database is split into three sets: 40%, 40%, and 20% for training, testing, and validation, respectively. We only considered the testing portion of the database to perform this comparison. In addition, we did not consider motorcycle images here, as our method was developed for cars. First, we converted the original image from 1920 × 1080 to 1498 × 946 pixels (to keep it consistent with the images of our database). Then, we performed the detection and recognition processes on the resized images. As can be seen in Table 7, the proposed method outperformed the other methods with respect to accuracy of detection, segmentation, and classification. However, the overall classification performance for all methods were lower than on ours and Medialab LPR databases (Tables 5 and 6). We hypothesize that this could be due to factors such as the uncontrolled capture conditions (i.e, speed of vehicle, weather, and time of day) of the images in the three databases. Table 7. Performance comparison on the UFPR-ALPR database. The stages that were not addressed in the original papers are denoted by "-". The best performance per metric is highlighted in bold. Detection Segmentation Classification Jin et al. [ Conclusions In this paper, we propose a methodology for automatic vehicle license plate detection and recognition. 
This process consists of the following steps: image acquisition, license plate extraction, character extraction, and recognition. We demonstrated through experiments on synthetic and real license plate data that the proposed system is not only highly accurate but is also efficient. We also compared this method to similar existing methods and showed that it achieved a balance between accuracy and efficiency, and as such is suitable for real-time detection of license plates. A limitation of this work is that our method was only tested on a database of Malaysian license plate images captured by the researchers and the publicly available Medialab LPR and UFPR-ALPR databases. In the future, we will explore how it performs on other license plate databases. We will also investigate the use of multi-stage deep learning architectures (detection and recognition) in this domain. Furthermore, since we proposed barrier access control for the controlled environment, the current image acquisition system was set up so that only one vehicle was visible in the field of view. As a result, the image only captured one vehicle per frame, simplifying the process of license plate detection. In future work, we will extend our methodology to identify license plates using more complex images containing multiple vehicles. In addition, we will increase the size of the training database in an effort to minimise misclassification of similar classes.
7,081.2
2020-06-01T00:00:00.000
[ "Computer Science" ]
Melatonin antagonizes interleukin‐18‐mediated inhibition of neural stem cell proliferation and differentiation Abstract Neural stem cells (NSCs) are self‐renewing, pluripotent and undifferentiated cells which have the potential to differentiate into neurons, oligodendrocytes and astrocytes. NSC therapy for tissue regeneration has therefore gained popularity. However, the low survival rate of the transplanted cells impedes its utility. In this study, we tested whether melatonin, a potent antioxidant, could promote NSC proliferation and neuronal differentiation, especially in the presence of the pro‐inflammatory cytokine interleukin‐18 (IL‐18). Our results showed that melatonin per se exhibited beneficial effects on NSCs, whereas IL‐18 inhibited NSC proliferation, neurosphere formation and differentiation into neurons. All inhibitory effects of IL‐18 on NSCs were significantly reduced by melatonin treatment. Moreover, melatonin application increased the production of both brain‐derived and glial cell‐derived neurotrophic factors (BDNF, GDNF) in IL‐18‐stimulated NSCs. It was observed that inhibition of BDNF or GDNF hindered the protective effects of melatonin on NSCs. A potential protective mechanism of melatonin against the IL‐18‐mediated inhibition of NSC differentiation may therefore be the up‐regulation of these two major neurotrophic factors, BDNF and GDNF. The findings indicate that melatonin may play an important role in promoting the survival of NSCs in neuroinflammatory diseases. Introduction Spinal cord injury can result from neural trauma, inflammation or degeneration, and it is a serious clinical disorder. Patients suffer from immobility, and the condition also poses an enormous economic burden on their families and on society [1][2][3][4]. It is estimated that the incidence of spinal cord injury is 40-80 cases per million per year and that there are around 273,000 patients with spinal cord injury in the USA alone [5][6][7]. NSCs are pluripotent, undifferentiated and self-renewing neuronal precursor cells. NSC transplantation has been put forward as a promising strategy for spinal cord injury and neurodegenerative disorders [8][9][10][11]. NSCs can replace lost oligodendrocytes and neurons, and this replacement contributes to the restoration of motor function in animal models [12][13][14][15]. However, the low survival rate of transplanted NSCs is a major obstacle to the success of this therapy [16][17][18]. It was reported that when NSCs were transplanted into the spinal cord, their vitality and proliferation were impaired [2,19,20]. This impairment is at least partially mediated by the release of pro-inflammatory cytokines, which induce cellular apoptosis, axonal destruction and extensive demyelination during secondary spinal cord injury [21]. The cytokine IL-18 is a member of the IL-1 family [22]. It is produced by a variety of cells, including dendritic cells, adipocytes and macrophages [23,24], and it is also detectable in ependymal cells, activated microglia, astrocytes, neurons and the pituitary gland. Increasing evidence has confirmed that IL-18 is a pleiotropic cytokine which regulates both humoral and cellular immunity and plays an important role in the inflammatory cascade [25]. Previous studies have demonstrated that IL-18 mediates astrocyte-microglia interactions in the spinal cord to aggravate neuropathic pain after neural injury [26]. On the other hand, neural injury also induces IL-18 expression in the dorsal horn [27].
These reactions form a vicious cycle and manifest the neural injury furtherly. IL-18 also has the capacity to inhibit neuronal survival and differentiation on cultured NSCs [28]. Melatonin (N-acetyl-5-methoxytryptamine), synthesized from the pineal gland as well as in the peripheral tissues, plays the important roles in various physiological processes including the circadian rhythm, reproduction and the cerebrovascular and neuroimmuno-endocrine functions [29,30]. Melatonin also exerts neuroprotective effects in various pathological conditions of the central nervous system including Alzheimer's disease, Parkinson's disease, ischaemic brain injury and spinal cord injury [31][32][33]. Recently, several studies showed that melatonin enhanced growth of NSCs and their differentiation into neurons [32,34,35]. However, it is unknown whether melatonin could promote NSC's proliferation, survival or differentiation under the inflammatory conditions. Herein, we examined the effect of melatonin on the proliferation and differentiation of NSCs in the presence of IL-18. Materials and methods Cell culture and reagents All experimental protocols had been approved by the Clinical Research Ethics Committee of the Peking Union Medical College Hospital. NSCs were obtained from 13.5-day-old embryos of Wistar rats using an established method [36]. In brief, the telencephalon was separated under a stereotaxic microscope and dissected into small pieces. NSCs were released by incubation of these dissected brain tissues with 0.25% trypsin and kept in neurobasal medium (Hyclone, Logan, UT, USA) with 2% B27 (Invitrogen, USA), basic fibroblast growth factor (bFGF; Invitrogen, Carlsbad, CA, USA), epidermal growth factor (Invitrogen) and their inhibitor (TrkB-Fc and MAB212, respectively) at 37°C under a humidified atmosphere containing 5% CO 2 . Melatonin, IL-18 and luzindole were obtained from Sigma-Aldrich. Cells were treated with or without IL-18 (1-100 ng/ml) in the presence of melatonin (10 ng/ml) and/or luzindole (5 lM). All mouse experiments are carried out in accordance with the relevant institutional and national guidelines and regulations, approved by the Animal Care and Use Committee of Peking Union Medical College Hospital and conform to the relevant regulatory standards. Name Sequence ( Western blots Total protein was extracted from cells and separated using 10% sodium dodecyl sulphate-polyacrylamide gel electrophoresis gel. Proteins were transferred onto a nitrocellulose membrane (Millipore, MA, USA), and the membrane was blocked with non-fat milk. Then, the membrane was probed with primary antibodies (dilutions 1:5000) (Sigma-Aldrich) and bound with horseradish peroxidase-conjugated secondary antibodies. Chemiluminescent signal was developed using the ECL kit (Millipore, MA, USA). Enzyme-linked immunosorbent assay Enzyme-linked immunosorbent assay (ELISA) was performed to measure brain-derived neurotrophic factor (BDNF) and glial cell-derived neurotrophic factor (GDNF) levels in the supernatant of cell culture according to the manufacturer's instructions (R&D Systems, MN, USA). The absorbance was measured at a 450-nm wavelength. Cell proliferation Cell proliferation was measured using a Cell Counting Kit-8 (CCK-8, Dojindo, Kumamoto, Japan) according to the manufacturer's instructions. The cells were cultured for 0, 24, 48 or 72 hrs, and the absorbance was measured at a 450-nm wavelength. 
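The ELISA and CCK-8 readouts described above are both 450-nm absorbance measurements. As an illustration of how such an ELISA readout is typically converted into BDNF/GDNF concentrations, the sketch below fits a four-parameter logistic standard curve and inverts it for unknown samples. This is not code from the study: the library choice (SciPy), the function names, and all concentrations and absorbances are illustrative assumptions.

```python
# Hypothetical sketch of converting 450-nm ELISA absorbances into concentrations
# via a four-parameter logistic (4PL) standard curve. All numbers are
# illustrative, not data from the study.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, d, c, b):
    """4PL curve: a = response at zero concentration, d = response at saturation."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Illustrative standard curve (pg/ml vs. absorbance) and unknown samples.
std_conc = np.array([31.25, 62.5, 125.0, 250.0, 500.0, 1000.0, 2000.0])
std_abs  = np.array([0.10, 0.16, 0.29, 0.54, 0.94, 1.44, 1.89])
params, _ = curve_fit(four_pl, std_conc, std_abs, p0=[0.05, 2.5, 800.0, 1.2],
                      maxfev=10000)

def abs_to_conc(absorbance, a, d, c, b):
    """Invert the fitted 4PL curve to recover concentration from absorbance."""
    return c * ((a - d) / (absorbance - d) - 1.0) ** (1.0 / b)

sample_abs = np.array([0.32, 0.61])
print("estimated concentrations [pg/ml]:", abs_to_conc(sample_abs, *params))
```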
RNA isolation and reverse transcriptionquantitative PCR (qPCR) Total RNA was extracted using Trizol Reagent (Invitrogen) from cells following to the manufacturer's instructions. PCR was performed with specific primers in 20 ll PCR mixtures for 35 cycles. The levels of mRNA were measured by SYBR Green quantitative PCR performed on the iQ5 Real-Time PCR Detection System (Bio-Rad, Hercules, CA, USA). The primers used for PCR amplification were shown in Table 1. Quantification data were normalized to GAPDH. Statistics analysis Data were expressed as means AE standard deviation (SD). Student's ttest was performed for comparison between two groups, and an analysis of variance (ANOVA) was used to compare multiple groups and followed by Student's t-test for comparison between two groups. P < 0.05 were considered statistically significant. Results The cells cultured in neurobasal medium supplemented with bFGF and B27 proliferated and formed neurospheres on the second day after culture ( Fig. 1A and B). These neurospheres also expressed the NSC-specific marker, nestin (Fig. 1C). Three days after bFGF withdraw, these neurospheres differentiated into neurons and astrocytes ( Fig. 1D and E). The results confirmed that the isolated cells were NSCs. The results indicate that melatonin per se had a profound effect to promote NSCs proliferation. When melatonin co-incubated with IL-18, it partially reversed the inhibitory effect of IL-18 on NSCs (Fig. 3A). Moreover, similar protective effects of melatonin on neurosphere formation in the absence or presence of IL-18 were observed (Fig. 3B). Western blot analysis found that melatonin up-regulated basal nestin expression in NSCs and partially reversed the reduced nestin level caused by IL-18 (Fig. 3C). In addition, NSCs treated with melatonin for 3 days significantly restored b-tubulin-III-positivity levels of NSCs which suppressed by IL-18 (Fig. 3D). qRT-PCR and Western blot analyses also showed that melatonin up-regulated the expression of b-tubulin-III mRNA and its protein while it abrogated the inhibitive effects of IL-18 on them in cultured NSCs (Fig. 3E and F). RT-PCR and Western blot analyses confirmed that mRNA and protein of melatonin receptors (MT1 and MT2) were expressed in NSCs (Fig. 4A and B). However, the protective effects of melatonin on NSCs mentioned above were blocked by co-incubated with luzindole ( Fig. 4C and D). This suggests that the melatonin receptors may be involved in the protective effects of melatonin on the NSC suppression mediated by IL-18 ( Fig. 4C and D). Likewise, luzindole abolished the promoting effect of melatonin on b-tubulin-III positivity (Fig. 4E) as well as b-tubulin-III mRNA (Fig. 4F) and protein (Fig. 4G) expression in IL-18-challenged NSCs. BDNF and GDNF are two important growth factors involved in NSC survival and proliferation. In this regard, ELISA results showed that melatonin drastically increased contents of both BDNF and GDNF and these were also blocked by luzindole ( Fig. 5A and B). Similarly, qRT-PCR and Western blot analyses found that melatonin increased mRNA and protein expressions of both BDNF and GDNF while luzindole co-incubation abrogated these up-regulations (Fig. 5C-F). 
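The quantification and statistics just described (SYBR Green Ct values normalized to GAPDH, ANOVA followed by pairwise t-tests) can be sketched as follows. The group names, replicate numbers, and Ct values are invented for illustration and are not data from the study; the 2^-ΔΔCt calculation is the standard reading of "normalized to GAPDH", which the paper does not spell out explicitly.

```python
# Hypothetical sketch of the quantification and statistics described above:
# qPCR Ct values normalized to GAPDH by the delta-delta-Ct method, then
# one-way ANOVA across groups followed by a pairwise t-test. All numbers are
# illustrative, not data from the study.
import numpy as np
from scipy import stats

def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """2^-ddCt fold change relative to the control group."""
    d_ct = np.asarray(ct_target) - np.asarray(ct_gapdh)
    d_ct_ctrl = np.mean(ct_target_ctrl) - np.mean(ct_gapdh_ctrl)
    return 2.0 ** -(d_ct - d_ct_ctrl)

# Illustrative replicate fold changes of a target mRNA (e.g. BDNF).
control  = relative_expression([24.1, 24.3, 24.0], [17.9, 18.0, 17.8],
                               [24.1, 24.3, 24.0], [17.9, 18.0, 17.8])
il18     = np.array([0.55, 0.60, 0.52])
il18_mel = np.array([0.95, 1.05, 0.90])

f, p_anova = stats.f_oneway(control, il18, il18_mel)   # multi-group comparison
t, p_pair  = stats.ttest_ind(il18, il18_mel)           # pairwise follow-up
print(f"ANOVA p = {p_anova:.3g}; IL-18 vs IL-18+melatonin p = {p_pair:.3g}")
```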
It was observed that both the BDNF inhibitor (TrkB-Fc) and the GDNF inhibitor (MAB212) also partially reduced the beneficial effects of melatonin on NSC proliferation and neurosphere formation under IL-18 challenge (Figs 6A, B and 7A). Discussion NSCs are immature, self-renewing and undifferentiated precursor cells with the potential to differentiate into astrocytes, neurons or oligodendrocytes [37,38]. NSC transplantation could have therapeutic applications in various neuropathological conditions, such as Alzheimer's disease, Parkinson's disease and spinal cord injury [39]. However, several shortcomings, including the retarded proliferation of the transplanted NSCs as well as the unwanted differentiation of the majority of NSCs into astrocytes, have hindered their clinical utilization [40]. These problems may be in part due to the biological actions of several pro-inflammatory mediators, such as TNF-α, IL-1β, IL-6, IL-18 and nitric oxide (NO). These pro-inflammatory mediators induce apoptosis, axonal destruction and extensive demyelination [41]. Among these cytokines, IL-18 plays a pivotal role in the pathophysiological responses of NSCs [28]. In the current study, we found that IL-18 suppressed the proliferation and differentiation of rat NSCs in culture. To overcome the negative impact of IL-18 on NSCs, melatonin was selected. Melatonin is a naturally occurring molecule with potent antioxidant and anti-inflammatory activities [42][43][44][45]. Many studies have shown that melatonin reduces oxidative stress and inflammation in a variety of biological systems [18,44,[46][47][48]. These activities of melatonin are believed to be mediated by its direct free radical scavenging action and also via its induction of antioxidant and anti-inflammatory enzymes [49][50][51]. As a result, melatonin exerts significant neuroprotective effects in Alzheimer's disease, Parkinson's disease, ischaemic brain injury and spinal cord injury [52,53]. Kong et al. [35] reported that melatonin promoted the viability of cultured ventral midbrain-derived NSCs, facilitated tyrosine hydroxylase-positive neuronal differentiation and inhibited glial differentiation. Fu et al. [54] also found that melatonin produced beneficial effects as a supplement for treating neonatal hypoxic-ischaemic brain injury by promoting the proliferation and differentiation of NSCs. In addition, melatonin has been shown to act as a protective mediator in lipopolysaccharide-challenged NSCs [55]. In another stem cell study, Liu et al. [56] showed that melatonin maintained mesenchymal stem cell survival and promoted their osteogenic differentiation in an inflammatory environment induced by IL-1β. Consistent with these observations, we found that IL-18 inhibited NSC proliferation, neurosphere formation and differentiation into neurons in cultured rat NSCs, and all these negative effects mediated by IL-18 were significantly reduced by melatonin supplementation. The mechanistic studies indicated that melatonin up-regulated the gene expression of both BDNF and GDNF. BDNF and GDNF are two major neurotrophic factors and are essential for NSC proliferation and differentiation under normal conditions [57][58][59]. Both BDNF and GDNF can be considered downstream elements of melatonin's protective pathway for NSCs regarding their survival, proliferation and differentiation.
It appears that this pathway is initiated by melatonin membrane receptors (MT1 and/or MT2) as the luzindole, a MT1/MT2 antagonist, significantly reduces the protective effects of melatonin on NSCs. Currently, few molecules have been identified to promote the proliferation and differentiation of NSCs. Our results strongly suggest that melatonin administration would increase the survival chance in NSC transplant therapy by promoting the proliferation of these cells. Most importantly, melatonin has the capacity to direct the NSCs differentiation into the neural cells but not the astrocytes and thus can generate the functional neurons after the NSC transplant. The results identified the molecular mechanisms as to how melatonin would increase the survival rate and differentiation of NSCs and provided novel evidence for clinical application of melatonin as a neuroprotective agent in neuroinflammatory diseases, especially, considering the low or non-toxicity of this molecule. Author contributions Z. L, X.Y L, X. Y, J.X. S set up the idea for writing the manuscript; Z. L, X. Y, J.X. S. collected the data regarding the manuscript; Z. L, X. Y, J.X. S analysed the data; Z. L, X. Y, J.X. S. wrote the original manuscript in English; and M.T.V. C, W. K. K. W., D. X. T revised the manuscript, worked on the English and made the final version of the manuscript. All authors reviewed the final version of manuscript.
2,994.6
2017-04-21T00:00:00.000
[ "Biology", "Medicine" ]
Modelling of Solar Radiation Pressure Effects: Parameter Analysis for the MICROSCOPE Mission Modern scientific space missions pose high requirements on the accuracy of the prediction and the analysis of satellite motion. On the one hand, accurate orbit propagation models are needed for the design and the preparation of a mission. On the other hand, these models are needed for the mission data analysis itself, thus allowing for the identification of unexpected disturbances, couplings, and noises which may affect the scientific signals. We present a numerical approach for Solar Radiation Pressure modelling, which is one of the main contributors to nongravitational disturbances for Earth-orbiting satellites. The modelling approach introduced here allows for the inclusion of detailed spacecraft geometries, optical surface properties, and the variation of these optical surface properties (material degradation) during the mission lifetime. By using the geometry definition, surface property definitions, and mission definition of the French MICROSCOPE mission, we highlight the benefit of accurate Solar Radiation Pressure modelling versus conventional methods such as the Cannonball model or a Wing-Box approach. Our analysis shows that the implementation of a detailed satellite geometry and the consideration of changing surface properties allow for the detection of systematics which are not detectable by conventional models. Introduction The modelling and propagation of satellite motion is one of the central tasks in mission analysis. The main driver for the evolution of a satellite orbit is the gravitational field of the central attracting mass. While a spherically symmetric approach for the gravitational field delivers undisturbed Kepler orbits, more realistic approaches employ spherical harmonics to model the gravitational potential. Among others, these models implement the effect of Earth oblateness and of zonal and tesseral variations of the mass distribution. Consequently, the introduced corrections of the gravitational field can be interpreted as a gravitational disturbance of an ideal Kepler orbit. However, besides these perturbations, nongravitational disturbance (NGD) effects have a large influence on satellite motion. The largest of these NGDs at low orbit altitudes is the atmospheric drag resulting from the resistance of the residual atmosphere against the satellite body moving at high relative speed. For higher altitudes, where the influence of the residual atmosphere can be neglected, the dominant NGD results from the interaction of the satellite surface with solar photons, causing a perturbing force known as the Solar Radiation Pressure (SRP). The magnitude of the SRP acting on the satellite depends on a wide range of parameters. The distance to the Sun and the position of the satellite with respect to Earth and Sun (regarding possible eclipses) define the intensity of the incoming radiation. The geometry of the satellite, the optical properties of the external surfaces, and the actual orientation with respect to the Sun largely influence the orientation and magnitude of the evolving SRP. Accordingly, any SRP model depends on an accurate implementation of the satellite orbit, the attitude, and the geometric and physical properties of the satellite structure. As a consequence, a high modelling effort has to be made in order to obtain precise results. However, if mission planning and analysis for the satellite mission at hand pose high requirements on orbit modelling precision, a sophisticated SRP model is needed.
It has been argued for quite some time that commonly used SRP models like the Cannonball and the Wing-Box model are not sufficient enough for an accurate SRP analysis [1,2].This is particularly true if the involved geometries differ considerably from a spherical shape or a standard bus and 2 International Journal of Aerospace Engineering solar panel assembly.The high gain in modelling accuracy by means of a realistic implementation of the satellite geometry has also been demonstrated with an analysis of NGDs acting during the cruise phases of the ESA Rosetta spacecraft [3].Here a nonphysical solar constant was measured resulting from a parametric fit of the measured contribution of SRP on the total acceleration.By means of a sophisticated SRP and thermal radiation pressure (TRP) (TRP results from photons emitted by the spacecraft itself) model this offset was explained as a nonmodeled TRP correlated with the acting SRP.Further examples for a successful implementation of enhanced SRP models are GNSS satellites, where navigation accuracy directly benefits from an improved SRP modelling approach [4][5][6]. The modelling effort for an accurate analysis of the SRP effect on a given satellite is considerably high.Consequently, a trade-off has to be made between the precision requirements for the specific mission, the effort that one is willing to take, and the possible gain with respect to an improvement of a precise implementation of NGDs.This paper intends to give an overview on the implications of accurate SRP modelling and the expected improvement of NGD implementation.The parameters for the subsequent SRP analysis are derived from the French space mission MICROSCOPE [7] which delivers a suitable test case with respect to the specified mission profile. The MICROSCOPE mission requires a very high accuracy of the spacecraft attitude and attitude stability due to the specific mission specification.In order to realize the high performance of the differential acceleration measurement of the two test masses to test the Weak Equivalence Principle (EP) it is essential to ensure a very low disturbance level (forces and torques acting on the satellite).For this purpose MICROSCOPE will be operated in drag-free mode.Any disturbance will be compensated by forces and torques generated by a cold gas propulsion system in closed loop control.The input to the corresponding controller is given by the common mode acceleration signal of the differential accelerometer while the science signal is extracted from the differential acceleration signal.However, in spite of the drag-free control, the exact modelling of NGDs is still necessary due to couplings between the accelerometers and the satellite structure.As a consequence, external disturbance effects influence the scientific signals since the drag-free control forces and torques needed to compensate the NGDs introduce a disturbance translated by the coupling. The actual requirements of MICROSCOPE are quite demanding.For the EP measurement sessions the residual acceleration of the spacecraft shall be less than 10 −12 ms −2 .At the EP test frequency, the angular pointing stability should be better than 7 rad, and the angular velocity stability is required to not exceed 10 −9 rads −1 , respectively [8]. 
The sun-synchronous polar MICROSCOPE orbit leads to a force vector due to SRP that is always directed to one side of the orbital plane. This produces a linear acceleration normal to the orbital plane, superimposed by an angular variation due to the seasonal change of the angle between the orbital plane normal and the direction to the Sun. When simulating a drag-free mission, a detailed modelling of the corresponding SRP forces and torques is important to estimate the actual control forces of the Attitude and Orbit Control System (AOCS) which keep the spacecraft in the favoured state. For MICROSCOPE, NGD effects due to SRP can easily reach several µN. Divided by the satellite's mass (330 kg), this force induces disturbing accelerations of some 10^-8 m s^-2, which is not negligible in a pre-mission end-to-end simulation and for developing and implementing data analysis and data processing strategies. Due to the high demands of the mission, the sun-synchronous orbital plane, and the LEO character of the orbit, MICROSCOPE is an ideal test case for the analysis of the general benefit of accurate SRP modelling for space missions. After a general introduction of the SRP modelling method and the derivation of SRP characteristics for chosen orbit and mission examples, we use MICROSCOPE as a test case scenario for a detailed SRP analysis. By looking at different approaches for the implementation of the satellite geometry and applying surface degradation models, we highlight the possible benefits and the involved costs of high-accuracy SRP modelling. Orbit and Attitude Propagation Since the evolving SRP magnitude and orientation depend on the position and the attitude of the spacecraft, a dynamic orbit simulation including the gravitational acceleration caused by the Earth's gravitational field is necessary. The calculation of the gravitational influence and the integration of the equation of motion are realized within the framework of the generic simulation tool High Performance Satellite Dynamics Simulator (HPS) [9]. The HPS is a MATLAB/Simulink library developed at ZARM in cooperation with the DLR Institute of Space Systems, Bremen. The main focus of HPS is the propagation of satellite orbits and the computation of the satellite's orientation, depending on specific initial conditions and the space environment. Furthermore, the coupled motion of up to eight on-board test masses (arranged pairwise in up to four accelerometers) can be computed in six degrees of freedom. Coupling effects between the satellite and the test masses as well as among the test masses themselves are included in the implemented differential equation systems of each considered body. In the following, the satellite's equations of motion are shown exemplarily. The satellite motion is given by (1):

m_sat · a⃗_sat^E = m_sat · a⃗_grav^E(r⃗, t) + F⃗_control^E + F⃗_dist^E + F⃗_coupl,sat^E. (1)

Here (i) m_sat is the mass of the satellite, (ii) a⃗_sat^E is the acceleration of the satellite (the second time derivative of its position vector r⃗_sat) relative to the ECI frame, (iii) a⃗_grav(r⃗, t) is the gravitational acceleration, (iv) F⃗_control is the control force, (v) F⃗_dist is the sum of all disturbance forces acting on the satellite, and (vi) F⃗_coupl,sat is the force due to the coupling between the satellite and all considered test masses. The superscript E indicates that all components of (1) are given in ECI coordinates.
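To make the structure of (1) concrete, the sketch below integrates a satellite state under a point-mass gravity term plus a constant disturbance force. It is a deliberately reduced stand-in for HPS, which is a MATLAB/Simulink library with a spherical-harmonic gravity model, control forces, and test-mass coupling: the point-mass field, the omission of the control and coupling terms, and all numerical values are assumptions for illustration only.

```python
# Minimal sketch of propagating an equation of motion of the form (1) with
# point-mass gravity plus a constant disturbance force (control and test-mass
# coupling terms omitted). A point-mass field is a simplification of the
# spherical-harmonic model used by HPS; the numbers are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

MU_EARTH = 3.986004418e14      # m^3/s^2
M_SAT = 330.0                  # kg, MICROSCOPE-like mass

def rhs(t, y, f_dist):
    r, v = y[:3], y[3:]
    a_grav = -MU_EARTH * r / np.linalg.norm(r) ** 3
    return np.concatenate([v, a_grav + f_dist / M_SAT])

# Circular 700 km orbit, constant disturbance of a few micronewtons.
r0 = np.array([6.378e6 + 7.0e5, 0.0, 0.0])
v0 = np.array([0.0, np.sqrt(MU_EARTH / np.linalg.norm(r0)), 0.0])
f_dist = np.array([0.0, 0.0, 3.0e-6])   # N, roughly the SRP level quoted above

sol = solve_ivp(rhs, (0.0, 6000.0), np.concatenate([r0, v0]),
                args=(f_dist,), rtol=1e-9, atol=1e-6)
print("position after ~one orbit [km]:", sol.y[:3, -1] / 1e3)
```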
The satellite's rotation and attitude motion are computed analogously from the corresponding rotational equations of motion, (2) and (3), in which ω⃗_sat denotes the angular velocity of the satellite relative to the ECI frame. It is obvious that the satellite's motion is affected by the acceleration due to the Earth's gravitational field, which cannot be considered spherically symmetric. This is due to the nonuniform mass distribution of the Earth, and it results in (i) perturbations of the pure Kepler orbit and (ii) perturbations of the satellite's attitude. Apart from this, for a complete orbit and attitude propagation simulation, one has to take into account nongravitational effects acting on the satellite, too. They force it to go astray from its purely gravitational orbit and induce undesired rotations. For many missions, one of the most prominent of these NGDs is the SRP, which will be discussed in detail in the next sections. SRP Model The disturbance forces and torques due to SRP originate from the interaction of the satellite's surface with the photons emitted by the Sun. It is assumed that each photon that hits the satellite is either absorbed or reflected in a specular or diffuse way, thus effectively changing the momentum of the satellite. As a consequence, the resulting force acting on an elemental area dA can be expressed as the sum of three individual contributions, (4), one each for absorption, specular reflection, and diffuse reflection [10]. Here (i) P_SRP is the SRP, (ii) dA is the elemental area, (iii) e⃗_Sun and n⃗ are the unit vector in Sun direction and the unit vector normal to the elemental area dA, respectively, and (iv) θ is the angle between e⃗_Sun and n⃗. Finally, (v) C_a, C_s, and C_d are the coefficients of absorption, of specular reflection, and of diffuse reflection. With C_a + C_s + C_d = 1 (assuming a nontransparent material), the force due to SRP follows as the combination of these three contributions, (5). Hence, the computation of F⃗_total requires the modelling of (i) the satellite orbit, because the magnitude of SRP depends on the distance to the Sun, (ii) the satellite attitude, in order to derive the correct incident angle between n⃗ and e⃗_Sun, and (iii) the satellite geometry, for defining appropriate values of C_a, C_s, C_d, n⃗, and dA.
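A minimal sketch of how the elemental contributions and the total force F⃗_total can be evaluated is given below. The decomposition into absorbed, specularly reflected, and diffusely reflected photons follows a widely used convention (including the 2/3 factor on the diffuse term); since equations (4) and (5) are not reproduced verbatim here, that exact coefficient convention, the simple back-face test used instead of full self-shadowing, and all sample values are assumptions of this sketch rather than the paper's implementation.

```python
# Sketch of the per-element SRP force and of the sum over a meshed surface.
# The absorption/specular/diffuse split below is a widely used convention; the
# exact coefficient form of the paper's (4)-(5) may differ, and the back-face
# test stands in for the full self-shadowing treatment of HPS.
import numpy as np

def element_srp_force(p_srp, area, n_hat, e_sun, c_a, c_s, c_d):
    """Force [N] on one flat element; n_hat, e_sun are unit vectors, p_srp in N/m^2."""
    cos_th = float(np.dot(e_sun, n_hat))
    if cos_th <= 0.0:                       # element faces away from the Sun
        return np.zeros(3)
    along_sun = -(c_a + c_d) * e_sun        # absorbed + diffusely scattered photons
    along_normal = -2.0 * (c_s * cos_th + c_d / 3.0) * n_hat   # reflection recoil
    return p_srp * area * cos_th * (along_sun + along_normal)

def total_srp_force(p_srp, e_sun, areas, normals, c_a, c_s, c_d):
    """Sum of the elemental forces over all Sun-facing elements of a mesh."""
    f = np.zeros(3)
    for A, n, ca, cs, cd in zip(areas, normals, c_a, c_s, c_d):
        f += element_srp_force(p_srp, A, n, e_sun, ca, cs, cd)
    return f

# Example: two 1 m^2 elements, Sun along +z, one element tilted by 30 degrees.
e_sun = np.array([0.0, 0.0, 1.0])
normals = np.array([[0.0, 0.0, 1.0],
                    [np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))]])
areas = np.ones(2)
coeffs = dict(c_a=[0.3, 0.3], c_s=[0.4, 0.4], c_d=[0.3, 0.3])
print(total_srp_force(4.56e-6, e_sun, areas, normals, **coeffs))
```

Summing this function over a meshed surface only requires the per-element areas, normals, and optical coefficients, which is exactly the input that the FE preprocessing step described below prepares.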
The propagation of the satellite orbit and its orientation is one of the basic tasks within the simulation software HPS and can be applied to any Earth-orbiting satellite mission. Since most NGDs such as the SRP are surface-based effects, the propagation needs an input model for the satellite geometry specific to the actual mission. Due to the variations in satellite components, general dimensions, and external materials, it is not possible to find a suitable standard model that can be used in a flexible way with respect to the variety of spacecraft geometries. This is one of the main drawbacks of standard approaches such as the Cannonball or the Wing-Box model. Instead of using a simplistic approach, where F⃗_total is calculated with respect to (i) an effective projected satellite surface area and (ii) averaged optical surface properties, the focus of the HPS SRP interface lies on capturing the influence of the details of the satellite geometry and the material parameters of each component of the satellite. In order to realize this, the HPS SRP approach is divided into two main steps. Before the actual SRP is calculated, the satellite's surface is discretized into small elements and the different optical properties are assigned to the corresponding elements. For a complex geometry this is realized by means of a finite element (FE) preprocessor, where the meshes of the external surfaces are exported together with their respective optical property definitions. Subsequently, the HPS algorithm for SRP computation evaluates (6), the discrete form of (5), for each element i that is illuminated by the Sun for the chosen vector e⃗_Sun. The overall force is then derived by computing the sum over all elements,

F⃗_total = Σ_i ΔF⃗_i. (7)

By means of geometric criteria (see [11] for details), the algorithm determines automatically whether an element is lit by the Sun and considers shadowing by other parts of the satellite as well. In combination with an eclipse model, the global SRP acting on the satellite is calculated with respect to a realistic illumination scenario. In order to speed up the simulation process, the resulting SRP magnitudes and directions can be derived in a normalized form. A lookup table is computed in which the stored SRP values are parameterized by the solar elevation and azimuth, which together define the current sun angle. When the lookup table is computed in preprocessing, the results can be used to determine the dynamical evolution of the SRP during flight within an HPS simulation. For this, the normalized SRP values are converted to the actual SRP with respect to the current solar distance and orientation of the satellite as well as the eclipse condition. Parameter Analysis In order to review the systematics of the SRP force model discussed here, a parameter analysis is performed. The implications of changes of the relevant input parameters, such as orbital elements and geometrical and technical features, with respect to the overall magnitude of the resulting disturbance force due to SRP are discussed in the following. Solar Radiation Pressure.
Since the magnitude of the incident solar radiation does depend not only on the orbit of the satellite around the Earth but rather on the Earth's orbit around the Sun, too, it is sensible to analyse the influence of the implications of the central body orbit during the year.The annual variation of the strength of SRP is depicted in Figure 1.Exemplarily, the resulting SRP in Nm −2 for the CHAMP mission orbit (see Table 1 for details) is presented. Furthermore, the resulting SRP values for the Low Earth Orbit (LEO) missions CHAMP and MICROSCOPE (sunsynchronous orbit (SSO)), the SRP detected for the orbit of the geostationary (GEO) mission Meteosat, and the disturbing radiation pressure for the GALILEO satellites are given in Figure 2 for the timeframe of a single day. The detected strength of the SRP varies on large time scales due to the change of the distance between the Earth and the Sun over the time period of one year (see Figure 1).The variations of the SRP on smaller time scales (see Figure 2) are a consequence of the satellites' motion around the Earth, resulting in additional distance variations with respect to the Sun. Figure 1 shows that in case of CHAMP the SRP changes about 6% during a half-year period (from winter to summer).The magnitude of the small scale variations depends considerably on the type of the satellite's orbit, that is, LEO, GEO, and so forth.This is demonstrated in Figure 2. Another effect is the variability of the SRP due to changing solar activity.The main variation of the intensity of solar radiation shows a period of eleven years.The corresponding variation of the amplitude of the solar constant, which is the total solar irradiance (TSI) at a fixed distance of one AU, is only 0.1-0.2percent [12].Due to the relative small variation compared to the variation induced by the elliptic Earth orbit this effect is negligible and will not be taken into account in the following.The choice of example missions is based on Table 1 which provides an overview about the distribution of operating satellites on the different mission classes.As each category is linked with a typical altitude, the above-named conclusions can be interpreted as a general survey of the evolution of SRP for a broad range of satellite missions. Geometry Models. In contrast to the general analysis of SRP , an investigation of the influence of satellite attitude and design, that is, its geometry and the surface materials, requires higher effort and will be carried out exemplarily for the MICROSCOPE mission.MICROSCOPE will be operated on a sun-synchronous orbit at an altitude of 700 km and an inclination of 98.248 ∘ .In order to provide a stable thermal environment for the payload and to minimize eclipse phases MICROSCOPE will be injected in an orbit with 6:00 hrs or 18:00 hrs local solar time at ascending node.Figure 3 illustrates the attitude of MICROSCOPE with respect to its orbital plane. As stated above, a usual simplification of the satellite's geometry involves the definition of a reference area with mean values for the optical properties.In contrast the HPS concept utilises FE models which demand a certain effort during construction.Between these approaches a range of other geometry models is of common usage.For example, due to its symmetry a sphere may be used as very simple model for the geometry of a satellite.This so-called Cannonball model [13] results in SRP forces completely independent of the attitude if all surfaces share the same optical properties. 
In reality, satellites possess a more or less complex geometry.The total value of the force can strongly depend on the incident angle even in the case of a homogeneous distribution of optical parameters on the external surfaces.In particular, flat components like solar panels contribute to this dependency.For this reason so-called Wing-Box models are used.They offer the possibility to introduce different optical properties, generally for the satellite body and the solar panels [14]. In addition to the complex FE model we generated different geometry models to demonstrate the impact of geometric complexity on the resulting SRP effects including the most simple approach (disk), a simple box, and a Wing-Box model.They are depicted in the upper row of Figure 4.In order to get the best comparability we set the same projected surface area for each model (with = 90 ∘ and = 0 ∘ , corresponding to the MICROSCOPE solar panel side).In addition, a spherical geometry model was chosen to provide a global comparative value for this analysis.The model is not shown in Figure 4 for reasons of brevity.In the lower row of Figure 4, the values of the projected surface areas for the different geometry models are depicted as a function of the incident sunray described in polar coordinates (small picture in Figure 3).In each case the comparative value of the spherical geometry model appears as constant surface area independent of and .Usually, the disk model is only applicable for vertical incident sunlight.Therefore, it is not expected to be a good choice for MICROSCOPE, as its solar panels will not be exposed to perpendicular solar irradiation most of the time.This results from the fact that the satellite's -axis will be aligned with the orbit normal and not with the vector to the Sun.During the year the incident angle varies in the range of about 30 ∘ which results from the combination of inclination and obliquity of the ecliptic.The simple box, the Wing-Box, and the FE models show characteristic results that represent the symmetries of each of the models.Obviously the simple box model results in large deviations from the FE model, especially for angles far from 90 ∘ because it does not take into account the geometry of the solar panels and the corresponding correct contribution to the total area for these angles.In general, the Wing-Box model gives a good representation of the projected area, but the distribution for the FE model is much smoother.Furthermore, the definition of the reference projected area yields an overestimation of the projected area for ̸ = 90 ∘ for both the simple box and the Wing-Box models.In summary the FE model produces the most accurate results for the projected area. Another reason for using at least simple box models is the fact that different optical properties can be assigned to the single satellite surface cells.Figure 5 shows the FE model of MICROSCOPE in which the different materials are represented.Each color corresponds to specific values of S and . 
The influence of the optical properties is demonstrated in Figure 6. Here the absolute value of F⃗_SRP is depicted as a function of the solar elevation and azimuth angles for a constant value of P_SRP. In contrast to Figure 4(d), there is a significant difference between the peak at elevation 90° and azimuth 0°, corresponding to the solar panel side, and the opposite side (elevation 90°, azimuth 180°), although the projected area is nearly the same. Looking at the material distribution, the result is not surprising. The back side of the solar panels is covered with White Paint. Consequently, the corresponding surface cells have higher reflection coefficients and contribute more strongly to the absolute value of F⃗_SRP compared to those on the front side. Overall, the difference between both sides for perpendicular solar irradiation amounts to approximately 13%. When detailed surface models are used, the quality of the obtained force depends considerably on the chosen mesh. On the one hand, geometrical features such as spherical bodies can only be represented realistically with a sufficiently fine surface mesh. The same effect appears in the illumination condition calculations, where the shape of the shadow improves with a higher number of elements. On the other hand, the computation time increases considerably with a finer mesh. Here the computation of shadowing is the dominant effect. Every surface element has to be checked for shadowing considering its orientation and position with respect to each other surface element included in the model. Besides the obvious quadratic increase in the number of individual computation steps, the size of the data matrices needed to store the shadowing information increases at the same rate. As a consequence, a trade-off between the available computational resources and the accuracy demands has to be made. Keeping in mind that the actual illumination condition has to be recalculated for different orientations of the satellite to the Sun, the surface mesh has to be chosen such that acceptable computation times can be realized while the quality of the illumination implementation is not compromised. A suitable method to obtain a good trade-off is to calculate the projected illuminated surface area at a steep illumination angle (consequently causing long shadows) for different mesh qualities. Figure 7 shows the resulting calculated illuminated surface area for different numbers of surface elements. The mean element edge length is used as the mesh criterion and ranges from 50 cm to 2 cm, corresponding to the displayed model number (with i = 1, ..., 25), where the mean element edge length is 1/(2i) m. Keeping in mind the bus size of MICROSCOPE (e.g., a bus side of about 1.1 m × 0.8 m), the first model (i = 1) translates to four elements on that bus face, while the last model (i = 25) translates to more than two thousand elements on the same surface. A mean value of the obtained area is displayed as a grey dashed line. As can be seen, the solution converges close to the mean value for a fine mesh. However, reducing the element edge size does not directly lead to a better surface area result. The actual element size depends (i) on the region boundary lines, (ii) on the meshing sequences, and (iii) on the free parameters specified by the meshing tool (ANSYS classic preprocessor). The exact values for these parameters may vary for different element edge lengths. When processed for different illumination angles, a mean element edge size of 6.25 cm shows projected areas close to the arithmetic mean. Additionally, for this chosen value, the processing time for a complete assessment of all illumination conditions with a 5° resolution in elevation and azimuth angle is in the range of 30 min on a conventional desktop PC, which is still acceptable. Consequently, the mesh resulting from a 6.25 cm edge length is the baseline for all further calculations in this work. However, since the optimal configuration highly depends on the actual satellite shape and the positions of its components, an optimal mesh has to be assessed for each new satellite that has to be processed. Combined Effect of SRP and Geometry Models. As seen above, both the geometrical dependency of the SRP force and the dynamical behaviour of P_SRP determine the resulting total SRP force acting on the satellite. Consequently, we investigate the behaviour of the SRP force acting on MICROSCOPE with both effects included in the modelling approach. In Figure 8 each line in both pictures represents the absolute value of F⃗_SRP for a specific e⃗_Sun; that is, it is assumed that the satellite's orientation is fixed with respect to the Sun over one year. In the top picture the outcome for normally incident sunlight on each satellite side is depicted. In order to compare more realistic illumination conditions, we considered deviations from the normal vector of the solar panel side of 15° and 30°, respectively, which is depicted at the bottom of Figure 8. This resembles the range that is expected for MICROSCOPE. Naturally, all lines show the same characteristics due to the variation of P_SRP over the year, which yields a maximum difference in magnitude of about 7%. However, the influence of the satellite's attitude can result in larger differences, for example, the 13% difference between the solar panel side and its rear seen above. For the MICROSCOPE case, differences of roughly 1% are obtained for deviations of 15° from the normal axis of the solar panel side, and 9% for deviations of 30°. Finally, a simulation of the MICROSCOPE orbit for a simulation time of one year was carried out with all five geometry models. Figure 9 shows the resulting evolution of |F⃗_SRP|. Only for the sphere model does the effect of the changing distance between Sun and Earth become visible. For all other models, the variation of the incident sunlight is the dominating effect. For the FE model, the force differs by about 16% from the maximum in winter to the minimum in summer. In addition, Figure 9 shows an unexpected result: the disk model performs better for the MICROSCOPE scenario, which is in contrast to the assumption that a Wing-Box model resembles the results of an FE model best (according to the projected area in Figure 4).
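The projected-area behaviour that drives these differences can be reproduced with a few lines: for a meshed convex body, the projected area seen from the Sun is the sum of A_i · max(0, n⃗_i · e⃗_Sun) over all elements, which ignores self-shadowing. The box dimensions and sun directions below are illustrative values, not the MICROSCOPE geometry.

```python
# Sketch of the projected-area comparison underlying the geometry-model study:
# for a meshed convex body the projected area is the sum of A_i * max(0, n_i.e_sun)
# over all elements (self-shadowing ignored), while a sphere gives a constant value.
# The box dimensions below are illustrative, not the MICROSCOPE values.
import numpy as np

def projected_area(areas, normals, e_sun):
    cos_th = normals @ e_sun
    return float(np.sum(areas * np.clip(cos_th, 0.0, None)))

def box_mesh(lx, ly, lz):
    """Six faces of an axis-aligned box as (areas, outward unit normals)."""
    areas = np.array([ly * lz, ly * lz, lx * lz, lx * lz, lx * ly, lx * ly])
    normals = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                        [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
    return areas, normals

areas, normals = box_mesh(1.4, 1.0, 1.5)
for elev_deg in (90.0, 75.0, 60.0):
    el = np.radians(elev_deg)
    e_sun = np.array([np.sin(el), 0.0, np.cos(el)])
    print(f"elevation {elev_deg:5.1f} deg: A_proj = "
          f"{projected_area(areas, normals, e_sun):.2f} m^2")
```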
Furthermore, there are steep changes in the evolution of the resulting SRP force that only appear for the Wing-Box model. Figure 10 reveals the problem that occurs for this modelling approach. Here the calculated illumination conditions for both the Wing-Box and the FE model are depicted for two different dates. The first one is chosen at the end of April 2016, right before the steep decrease (upper row), and the second one only a few days later, directly after this strong decrease (lower row). The chosen scenarios are marked with black asterisks in Figure 9. For the FE model the shaded area (red elements) changes little due to the modified incoming sunlight. But for the Wing-Box model, the side panel changes from fully sunlit to completely shaded and therefore does not contribute to the force anymore. Such effects cannot appear for the disk model, which yields a smoother evolution of the force. This outcome emphasizes that each scenario has to be investigated individually in order to obtain the best result. Material Degradation. Surfaces exposed to the space environment change their optical behaviour with respect to atomic oxygen, space debris, radiation, and thermal cycles [15]. However, for most materials used in space, the mean coefficient of absorptivity (with respect to the solar spectrum) will increase over time, while the mean coefficient of emissivity will not show a drastic change. In order to test the influence of a degradation of the optical properties of external surfaces on the resulting SRP, a variation of the solar absorptivity over the mission lifetime is considered. Again MICROSCOPE is used as the test case. In order to define a model for the degradation rate, a logarithmic evolution of the absorptivity is considered. Assuming that surface degradation leads to microscopic cratering, the increase of absorptivity will effectively depend on the increase of surface area resulting from the roughened surface. As a consequence, the rate of change in C_a will be high during the first months of the mission and decrease over mission time. A suitable model for this behaviour, (8), is a reciprocal dependency of the time derivative of the mean coefficient of absorptivity on the elapsed time t: the absorptivity then grows logarithmically, with the total mission lifetime T and a degradation rate scaling factor fixing how quickly this growth levels off. The begin-of-life (BOL) and end-of-life (EOL) properties as given by the MICROSCOPE mission definition [16][17][18] are listed in Table 2. Note that the specified BOL/EOL values of specular and diffuse reflectivity are modeled values, since no actual data on these properties is available.
Since for a nontransparent surface the total coefficient of reflectivity is given by C_s + C_d = 1 - C_a, (8) can also be applied for an assessment of the evolution of the coefficient of reflectivity. However, not only the total magnitude of reflection but also the ratio between specular and diffuse reflection may change. The individual evolution of C_s and C_d depends on the BOL properties of the respective surface material and the actual conditions experienced in space. Due to the lack of actual data we use a qualitative model. Since a roughening of a smooth surface causes a drop in specular reflectivity, the ratio between specular and diffuse reflectivity, C_s/C_d, is scaled with an exponential law in the time spent in orbit, with a scaling factor that sets the rate of change from specular to diffuse reflectivity. Due to the lack of actual data, all polished and metal surfaces are assumed to be nearly perfect specular reflectors at BOL (ratio C_s/C_d = 10), while MLI is considered to start at a ratio of 1, motivated by the typical crinkled surface structure of MLI. Painted surfaces (such as the rear of the solar panel) also start at a ratio of 1, considering a tarnished coating. The BOL and EOL values for specular and diffuse reflectivity listed in Table 2 follow from this ratio together with the total reflectivity 1 - C_a; the coefficients of reflectivity thus evolve as the total reflectivity decreases while its split shifts from specular towards diffuse. The scaling factor is used to model a faster or slower changing ratio between specular and diffuse reflection. As an example, the evolution of diffuse and specular reflectivity for the MLI values specified in Table 2 is displayed in Figure 11. For the MICROSCOPE case a moderate change towards diffuse reflection is considered. Therefore, a scaling factor of 0.1 has been chosen to obtain the EOL values of diffuse and specular reflectivity as listed in Table 2. Figure 12 shows the evolution of the coefficient of absorptivity for external components subjected to degradation following the model described in this section. The assignment of material models to individual components is depicted in Figure 5. Figure 13 shows the resulting evolution of diffuse and specular reflectivity for the C_s and C_d values discussed above. As a consequence of this reflection model, a decrease of the SRP force over time can be expected following (5). However, one has to keep in mind that the actual illumination condition also affects the resulting force magnitude. Thus, a change of satellite attitude and position over time may lead to a different trend. The resulting SRP force is calculated with a time resolution of 1 month. Here the position of the MICROSCOPE spacecraft with respect to Sun and Earth as well as the spacecraft attitude is fixed; that is, P_SRP and the sun direction e⃗_Sun are held constant for each investigated case.
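The sketch below gives one plausible realization of the degradation model described above: a logarithmic growth of C_a from its BOL to its EOL value over the mission lifetime T, and an exponential decay of the specular-to-diffuse ratio governed by the scaling factor (0.1 in the MICROSCOPE case). The specific functional forms, the rate constant kappa, and the BOL/EOL numbers used here are assumptions for illustration, not the paper's equations or the Table 2 values.

```python
# Hypothetical realization of the degradation model described above. The
# functional forms are assumptions: absorptivity grows logarithmically from its
# BOL to its EOL value over the mission lifetime T, and the specular-to-diffuse
# ratio decays exponentially with a scaling factor zeta (0.1 in the text).
import numpy as np

def absorptivity(t, T, c_a_bol, c_a_eol, kappa=12.0):
    """Logarithmic growth (d(C_a)/dt ~ 1/t) scaled to hit the EOL value at t = T."""
    return c_a_bol + (c_a_eol - c_a_bol) * np.log1p(kappa * t / T) / np.log1p(kappa)

def reflectivities(t, T, c_a_bol, c_a_eol, ratio_bol, zeta=0.1):
    """Split the remaining reflectivity 1 - C_a into specular and diffuse parts
    using a ratio C_s/C_d that decays exponentially with time in orbit (months)."""
    c_a = absorptivity(t, T, c_a_bol, c_a_eol)
    ratio = ratio_bol * np.exp(-zeta * t)       # C_s / C_d
    c_r = 1.0 - c_a
    c_d = c_r / (1.0 + ratio)
    return c_r - c_d, c_d                       # (C_s, C_d)

# Example: MLI-like surface over an 18-month mission, BOL ratio C_s/C_d = 1.
months = np.arange(0, 19)
c_s, c_d = reflectivities(months, 18.0, c_a_bol=0.35, c_a_eol=0.50, ratio_bol=1.0)
print("C_s:", np.round(c_s, 3))
print("C_d:", np.round(c_d, 3))
```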
Figure 14 shows the obtained results.Again, perpendicular solar irradiation for each satellite side and illumination conditions estimated for MICROSCOPE were chosen.The picture in the center of Figure 14 shows that the force decreases and deviates about 5% compared to the BOL value for the solar panel side +.For all other sides the effect is even stronger.This is due to the fact that the degradation of the solar panels which are the main contributors of side + is small compared to all other materials.At the bottom of Figure 14 one can see that the degradation effect is less strong for combinations of and that form a deviation of 15 ∘ from the normal axis of + compared to the result for normal incident sunlight.For deviations about 30 ∘ , the force even increases during lifetime.In all cases different sides of the satellite contribute to the value of the SRP force whereas the dominating effect of the solar panels decreases with increasing deviation from the + normal axis. Finally, in Figure 15 the degradation effect is applied to the MICROSCOPE scenario for the FE model.For comparison the evolution of the force without degradation is also depicted (cf. Figure 9).The value of the force including material degradation was only evaluated at one point per month because an integrated degradation algorithm in the simulation process has not been implemented so far.Nevertheless, the figure shows that omitting the degradation effect will lead to an over-or underestimation of the actual SRP force. Benefits for MICROSCOPE. The analysis presented here shows that a thorough assessment of the influence of SRP is highly relevant with respect to the main scientific goal of MICROSCOPE.The goal of the mission is to detect a differential acceleration signal at the orbit frequency orbit in the inertial pointing mode which would imply a violation of the EP.At the targeted accuracy of the evaluation of the possible EP violation (10 −15 ), the science data has to be cleaned from residual accelerations larger then 10 −12 m/s 2 , especially at the frequency orbit .In order to realize this, MICROSCOPE's AOCS is based on a drag-free concept which keeps the spacecraft in the favored state.However, one has to consider several effects that may lead to external disturbances influencing the internal inertial sensors, regardless of the drag-free control.On the one hand, time delays in controller and actuator responses may cause an influence of external disturbances on the science signal when the satellite state changes with a high rate (as in tumbling or when the satellite enters/leaves eclipse).On the other hand, the inertial sensors are not completely decoupled from external accelerations since misalignments and different response times of the sensor components cause residual internal accelerations affecting the measurement in the range of the orbit frequency.Since the magnitude of the SRP force is in the range of 10 −5 N which results in accelerations of about 10 −8 m/s 2 , residual effects on the science signal cannot be neglected completely.Any SRP residual effect will show up at a frequency of ( orbit + Δ), where Δ is a phase difference caused by the Earth's orbit around the Sun.As a consequence, it might be possible to mitigate this influence by analysing a long time span of science data.However, this is subject to further investigation. 
Apart from these considerations regarding the later scientific data analysis, a detailed modelling of the corresponding SRP forces and torques is also important to estimate the actual needed control forces of the AOCS for an evaluation of its performance.The analysis procedure described in this paper reveals explicitly that an incomplete information on the SRP disturbance effect only allows for the identification of its frequency.But an additional determination of its magnitude fails if no effort is put into detailed surface modelling.Both pieces of information are needed to obtain a good knowledge of the actual satellite state and thus provide the possibility of, for example, taking into account cross-coupling effects between the sensors due to residual accelerations.As a consequence, the resulting disturbances due to the here studied effects cannot be neglected at the desired level of MICROSCOPE measurement accuracy. Conclusion The modelling of a realistic disturbance force due to SRP is a complex task which involves a multitude of modelling and simulation steps.In our study we used an algorithm for computing the SRP force which utilises a FE model for estimating the satellite's dimensions and surface properties instead of commonly used Cannonball or Wing-Box models.This algorithm is embedded in the simulation software HPS that amongst others propagates the satellite orbit and attitude.Motivated by the high requirements for attitude precision, the MICROSCOPE mission served as example for the parameter analysis of the different contributors to the SRP force.This study reveals that the analysed NGD cannot be neglected at the desired level of MICROSCOPE measurement accuracy.For this mission case example, the magnitude of SRP varies throughout the year about 7% which is a typical value for many satellite missions.For comparison different simple geometry models of MICROSCOPE were used in addition to the FE model.It was shown that the resulting SRP force varies due to the yearly changes of the magnitude of the SRP pressure SRP which were mentioned in this paragraph before.Furthermore, it was shown that a second effect appears which depends strongly (i) on the geometry model of choice and International Journal of Aerospace Engineering (ii) on the satellite's orientation.This second effect is much stronger than the impact of the yearly variation of SRP .In case of MICROSCOPE, the difference between Solar Panel Front and rear, for example, amounts to approximately 13% for the FE model.Although the satellite's solar panel will always point to Sun direction, the incident angle will change during the year which yields to variations of the SRP force of at least 9%.Combining the results for SRP and the FE model a difference of 16% in the magnitude of the SRP force can be expected for MICROSCOPE over one year.The comparison with other geometry models revealed that from the range of "simple" approaches a disk approach resembles the results of the FE model the most.Another point of the study was the influence of the material degradation.It was shown that for the solar panel side the force decreases and deviates about 5% from the BOL value at the end of the mission.Depending on the incident angle the degradation effect can even result in an increasing force over time.In summary the study reveals that a simple answer to the question of the main contributor of SRP force cannot be given easily but depends on the actual mission scenario.However, the introduced SRP modelling approach based on FE models enables 
the highest modelling precision among the compared approaches, but it also implies a considerably higher modelling effort. Therefore, the best approach should be chosen by means of a trade-off between the required SRP force accuracy and the resources at hand.

Figure 2: Resulting SRP for selected satellite missions. The variation over one day is shown.

Figure 3: Illustration of the MICROSCOPE orbit with respect to Earth-centred inertial coordinates (ECI). Inset: definition of the vector to the Sun in polar coordinates.

Figure 4: Upper part (a, b, c, and d): different geometry models for the SRP computation with the same projected area in the considered direction. Lower part (a, b, c, and d): projected area (A·cos of the incidence angle) as a function of the incident sunray direction in polar coordinates for the corresponding models and for the sphere model (constant level).

Figure 8: Absolute value of the disturbance force due to SRP for different illumination conditions, depicted over one year. (a) Perpendicular solar irradiation of each satellite side. (b) Illumination conditions estimated for MICROSCOPE.

Figure 9: Resulting disturbance force due to SRP for different geometry models over one year. The yellow bar marks the time of eclipse.

Figure 10: Illumination conditions of the Wing-Box model (a) and the FE model (b) for different dates of the simulated MICROSCOPE scenario. Upper row: end of April 2016; lower row: a few days later in May 2016. Yellow elements are in full sunlight, blue elements are not exposed to the Sun, and red elements are shadowed by other parts of the satellite.

Figure 11: Time evolution for different scaling factors. MLI values are used as the starting reflectivities with a scaling factor of 1.

Figure 12: Considered variation of the solar absorptivity over the mission lifetime of 18 months for chosen external components of the MICROSCOPE test case. MLI: Multilayer Insulation, SPF: Solar Panel Front, WP: White Paint, KV: Kevlar, PA: Polished Aluminum, RAD: Radiator surface, and BP: Black Paint.

Figure 13: Considered variation of the specular (a) and diffuse (b) reflectivity over the mission lifetime of 18 months for chosen external components of the MICROSCOPE test case. MLI: Multilayer Insulation, SPF: Solar Panel Front, WP: White Paint, KV: Kevlar, PA: Polished Aluminum, RAD: Radiator surface, and BP: Black Paint.

Figure 14: Influence of material degradation on the disturbance force due to SRP. (a) Absolute value for perpendicular solar irradiation of each satellite side. (b) Corresponding percentage deviation from the BOL value. (c) Percentage deviation from the BOL value for the illumination conditions estimated for MICROSCOPE.

Figure 15: Comparison of |F_SRP| for the FE model with and without degradation for the MICROSCOPE scenario.

Table 1: Overview of orbit classes including typical orbit parameters and mission examples.

Investigation of convergence for the projected area of the FE model. The arithmetic mean of the projected area is represented by the dashed gray line. Additionally, the corresponding numbers of computational steps are shown. The black asterisk (together with the light gray dashed line) marks the model of choice.
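To illustrate the kind of facet-wise evaluation that an FE-based SRP computation performs, the sketch below sums a standard flat-plate SRP force model over a small set of facets. It is a minimal illustration only: the facet data, the optical coefficients and the neglect of self-shadowing between facets are simplifying assumptions, and the sketch does not reproduce the HPS algorithm used in the paper.

```python
import numpy as np

SOLAR_CONSTANT = 1361.0          # W/m^2 at 1 AU (approximate)
SPEED_OF_LIGHT = 299_792_458.0   # m/s

def srp_force(facets, sun_dir, pressure=SOLAR_CONSTANT / SPEED_OF_LIGHT):
    """Sum the SRP force over the illuminated facets of a faceted (FE-like) model.

    facets   : list of dicts with 'area' [m^2], 'normal' (outward vector),
               'spec' (specular reflectivity) and 'diff' (diffuse reflectivity)
    sun_dir  : vector pointing from the satellite towards the Sun
    pressure : solar radiation pressure [N/m^2]

    Self-shadowing between facets is ignored here; a full FE treatment
    has to account for it.
    """
    s = np.asarray(sun_dir, dtype=float)
    s /= np.linalg.norm(s)
    force = np.zeros(3)
    for f in facets:
        n = np.asarray(f["normal"], dtype=float)
        n /= np.linalg.norm(n)
        cos_theta = float(np.dot(n, s))
        if cos_theta <= 0.0:              # facet not illuminated
            continue
        a_proj = f["area"] * cos_theta    # projected area A * cos(theta)
        # Standard flat-plate model: absorbed + specular + diffuse contributions.
        force += -pressure * a_proj * (
            (1.0 - f["spec"]) * s
            + 2.0 * (f["spec"] * cos_theta + f["diff"] / 3.0) * n
        )
    return force

# Hypothetical two-facet example: a solar-panel front and one side wall.
facets = [
    {"area": 2.0, "normal": [1, 0, 0], "spec": 0.1, "diff": 0.1},
    {"area": 1.0, "normal": [0, 1, 0], "spec": 0.4, "diff": 0.2},
]
print(srp_force(facets, sun_dir=[1.0, 0.2, 0.0]))
```

Running the same loop for different geometry models (sphere, disk, Wing-Box, FE mesh) and Sun directions reproduces, in principle, the kind of comparison discussed above, although realistic results additionally require shadowing tests and time-dependent surface properties.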
9,650.2
2015-11-30T00:00:00.000
[ "Physics" ]
SUMO-2 Promotes mRNA Translation by Enhancing Interaction between eIF4E and eIF4G Small ubiquitin-like modifier (SUMO) proteins regulate many important eukaryotic cellular processes through reversible covalent conjugation to target proteins. In addition to its many well-known biological consequences, like subcellular translocation of protein, subnuclear structure formation, and modulation of transcriptional activity, we show here that SUMO-2 also plays a role in mRNA translation. SUMO-2 promoted formation of the active eukaryotic initiation factor 4F (eIF4F) complex by enhancing interaction between Eukaryotic Initiation Factor 4E (eIF4E) and Eukaryotic Initiation Factor 4G (eIF4G), and induced translation of a subset of proteins, such as cyclinD1 and c-myc, which essential for cell proliferation and apoptosis. As expected, overexpression of SUMO-2 can partially cancel out the disrupting effect of 4EGI-1, a small molecule inhibitor of eIF4E/eIF4G interaction, on formation of the eIF4F complex, translation of the cap-dependent protein, cell proliferation and apoptosis. On the other hand, SUMO-2 knockdown via shRNA partially impaired cap-dependent translation and cell proliferation and promoted apoptosis. These results collectively suggest that SUMO-2 conjugation plays a crucial regulatory role in protein synthesis. Thus, this report might contribute to the basic understanding of mammalian protein translation and sheds some new light on the role of SUMO in this process. Introduction Small ubiquitin-like modifiers (SUMO) are ubiquitin-related proteins that can be covalently conjugated to target proteins in cells to modify their function. To date, four SUMO isoforms encoded by separate genes, designated SUMO-1 to SUMO-4, have been identified in humans [1,2]. The sequence identity and expression of these four SUMO molecules is highly variable. SUMO-2 and SUMO-3 are nearly identical, but share only 50% identity with SUMO-1 [3][4][5]. While SUMO-1, -2 and -3 are expressed ubiquitously, SUMO-4 seems to be expressed mainly in the kidney, lymph nodes and spleen. Protein sumoylation is mediated by activating (E1), conjugating (E2) and ligating (E3) enzymes [6]. Ubc9 is the only identified SUMO E2 conjugating enzyme, which is sufficient for sumoylation. The E3 ligase promotes the efficiency of sumoylation and in some cases has been shown to direct SUMO conjugation onto non-consensus motifs [7]. Furthermore, sumoylation is reversible and is removed from targets by several specific SUMO proteases in an ATP-dependent manner [8]. SUMO modification has emerged as an important regulatory mechanism for protein activity, stability and localization. Most of the SUMO targets identified thus far are involved in various cellular processes, such as nucleocytoplasmic transport, transcriptional regulation, apoptosis, response to stress, and cell cycle progression [9]. Sumoylation regulates several aspects of gene expression, including DNA transcription, mRNA splicing and mRNA polyadenylation [7,9,10]. Furthermore, our recent study demonstrated that SUMO modification also regulates protein translation [11]. In eukaryotes, most proteins are synthesized through capdependent mRNA translation. A rate-limiting stepof this process is formation of the eIF4F complex containing eIF4E (cap-binding protein), eIF4A (ATP-dependent mRNA helicase) and eIF4G (scaffold protein) [12]. Binding of eIF4G to the cap structure of mRNA is competed by a small family of eIF4E-binding proteins (4E-BPs). 
4E-BP1 is the most abundant member of the 4E-BP family. Its phosphorylation sites Ser65 and Thr70 have been shown to participate in formation of the eIF4F complex. In particular, eIF4E phosphorylation at Ser209 and eIF4E SUMO conjugation by SUMO-1 seem to be important for initiation of cap-dependent translation [11,13]. Furthermore, we found that overexpression of UBC9, the only identified SUMO E2 conjugating enzyme, dramatically increased expression of a cap-dependent luciferase reporter, whereas overexpression of SUMO-1 only slightly increased expression of the reporter. Thus, we speculated that SUMO-2/3 isoform conjugation is involved in the regulation of cap-dependent mRNA translation. However, whether SUMO-2/3 conjugation plays a role in the regulation of cap-dependent mRNA translation, and through which mechanisms, remained unclear. In this study, we characterized the role of SUMO-2 conjugation in mRNA translation initiation through a conjugation-deficient SUMO-2 mutant, overexpression and shRNA interference experiments, a translation reporter assay, and an inhibitor treatment. Furthermore, we studied the effect of the regulation of mRNA translation by SUMO-2 on cell proliferation and apoptosis.

Plasmids, Mutagenesis and Transfection

The PCR-amplified cDNAs encoding the processed forms of SUMO-1, SUMO-2, and SUMO-3 containing Gly-Gly at their C-termini were inserted into the pcDNA3-HA3 vector, which was described previously [11]. EcoR I (EcoR V for SUMO-1)/Apa I fragments were used to generate the pcDNA3-HA3-SUMO-1/2/3 plasmids. The pcDNA3-HA3-SUMO-Δ1/2/3 plasmids lacking the C-terminal Gly-Gly were constructed using 3′-primers specifically designed with the corresponding mutation. The shRNA oligos were designed and subcloned into pLKO.1 (Puro) according to the Addgene Plasmid 10878 protocol. A negative control vector containing scrambled shRNA was constructed using the same method. Transient transfections were performed using PolyJet (SignaGen Laboratories, Ijamsville, MD) according to the attached protocol. The cells were harvested 48 h after transfection. For the shRNA knockdown of SUMO-2, the cells were selected with puromycin for approximately 7 days. The gene-silencing effect was evaluated by Western blot analysis.

Cap-dependent reporter gene assay

As previously described [14], transcription from the reporter gene pYIC DNA produces a bicistronic mRNA encoding epitope-tagged yellow fluorescent protein (EYFP) and cyan fluorescent protein. EYFP translation depends on the 5′ cap sequence, so if a protein interferes with the cap-dependent pathway, EYFP translation is reduced. Because EYFP is tagged with three c-myc epitope tags, the corresponding antibody against c-myc, or the fluorescence intensity, can be used to read out EYFP expression, and the results indicate whether the stimulus interferes with protein translation through the cap-dependent pathway.

m7GTP Pull-down Assay

The method has been described previously [15]. Briefly, 10 µl of m7GTP Sepharose beads (Amersham) was washed three times with 500 µl of PBS, added to 300 µg of total protein from cell lysates that had been pre-cleared for 1 hour with Sepharose resin lacking m7GTP, and rotated overnight at 4 °C. The beads were washed four to five times with PBS. The samples were then denatured, and the supernatants were loaded onto SDS-PAGE for Western blot analysis.

Reverse Transcription and PCR (RT-PCR)

Total RNA was isolated using the TRIzol reagent (Invitrogen, Cincinnati, OH, USA) according to the manufacturer's protocol.
RNA (1 µg) was reverse transcribed using Superscript II reverse transcriptase with an oligo(dT) primer (Invitrogen, Carlsbad, CA). The primers used to amplify the 281 bases of the human LUC gene were 5′-CCGGGAAAACGCTGGGCGTTA-3′ and 5′-ACCTGCGACACCTGCGTCGA-3′. The 202 bases of the housekeeping gene β-actin were amplified using the following primers: 5′-CACCCGCCGCCAGCTCAC-3′ and 5′-CTTGCTCTGGGCCTCGTCGC-3′.

Luciferase Activity Assay

The cells were seeded in 24-well plates and co-transfected with the pcDNA3-HA3 reporter plasmid and the target plasmid using the transfection reagent (5:1 ratio; Roche Applied Science) following the manufacturer's protocol. Twenty-four hours later, the cells were lysed and subjected to a luciferase activity assay using a luciferase assay system (Promega) in a luminometer. The relative luciferase activity was normalized to β-galactosidase activity, which was measured as described previously [16].

Cell growth rate analysis

In these experiments, 2×10⁴ cells were plated in each well of 48-well plates and cultured for a 48 h period. Cell viability was determined using the MTT assay [17]. Briefly, 50 µl of 5 mg/ml MTT in PBS was added to each well and the plates were incubated at 37 °C for a further 4 h. The medium was then removed and the purple formazan crystals were dissolved in 150 µl of dimethyl sulfoxide. After the plate was agitated on a plate mixer, the optical densities were read at 490 nm with a Bio-Rad microplate reader.

Statistical Analysis

The band intensities for the RT-PCR and Western blot assays were quantified using the UN-SCAN-IT gel-graph digitizing software. Results are expressed as means ± SE. Statistical analyses were performed by one-way analysis of variance followed by Dunnett's post hoc or Tukey's multiple comparison test, using the data obtained from three or five independent experiments.

Figure 1: The effects of the overexpressed SUMO isoforms on cap-dependent LUC activities. HCT116 cells were co-transfected with individual SUMO isoforms along with a luciferase reporter gene. Twenty-four hours later, the cells were lysed, and the luciferase activity was measured using a luciferase assay kit (Promega). The absolute relative light unit values were in the 10⁷–10⁸ range. The expression levels of SUMO proteins in the cell lysates are also shown. (B) SUMO-2 activates cap-dependent EYFP expression. HCT116 cells were co-transfected with pYIC and SUMO-2 or SUMO-3. EYFP and ECFP were detected using c-myc and HA-tag antibodies, respectively. (C) The effect of the overexpression of the SUMO isoforms on LUC mRNA transcription in HCT116 cells. Human HCT116 cells were transfected with each of the SUMO isoforms. Twenty-four hours after transfection, the cells were harvested, and the whole cell lysates were used for the RT-PCR assays. The reporter gene LUC was detected. β-actin was used as the loading control. (D) The shRNA knockdown of SUMO-2 reduces LUC expression. HCT116 stable cell lines with an empty vector or SUMO-2 shRNA were selected on the basis of puromycin resistance, then transiently transfected with the luciferase reporter gene. Twenty-four hours after transfection, the cells were harvested for the luciferase assay. The data are an average of 5 measurements of a single clone. The knockdown effects are also shown. The error bars represent the S.E. LUC, luciferase; shRNA, short hairpin RNA; SUMO, small ubiquitin-related modifier; RT-PCR, reverse transcription and polymerase chain reaction; Vc, vector control. doi:10.1371/journal.pone.0100457.g001
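To make the reporter quantification concrete, the sketch below normalizes luciferase readings to β-galactosidase activity, expresses them as fold changes over the vector control, and runs a one-way analysis of variance, in the spirit of the assays and statistics described above. All numbers and group names are invented, and the post-hoc comparisons (Dunnett's or Tukey's test) used in the paper are only indicated in a comment.

```python
import numpy as np
from scipy import stats

# Hypothetical raw readings: firefly luciferase (RLU) and beta-galactosidase
# activity for each transfection group, five independent experiments each.
raw = {
    "vector":  {"luc": [2.1e7, 1.8e7, 2.4e7, 2.0e7, 2.2e7],
                "bgal": [1.00, 0.95, 1.10, 1.02, 0.98]},
    "SUMO-1":  {"luc": [2.5e7, 2.2e7, 2.6e7, 2.3e7, 2.4e7],
                "bgal": [1.05, 0.99, 1.08, 1.00, 1.03]},
    "SUMO-2":  {"luc": [9.8e7, 1.1e8, 9.2e7, 1.0e8, 1.05e8],
                "bgal": [1.02, 1.04, 0.97, 1.01, 1.00]},
}

# Normalise each luciferase reading to the matched beta-gal activity, then
# express it relative to the mean of the vector control.
norm = {g: np.array(d["luc"]) / np.array(d["bgal"]) for g, d in raw.items()}
control_mean = norm["vector"].mean()
fold = {g: v / control_mean for g, v in norm.items()}

for g, v in fold.items():
    se = v.std(ddof=1) / np.sqrt(len(v))
    print(f"{g:8s} fold change = {v.mean():.2f} +/- {se:.2f} (SE)")

# One-way ANOVA across groups; pairwise post-hoc tests (Dunnett's or Tukey's,
# as in the paper) would follow on the same grouping.
f_stat, p_value = stats.f_oneway(*fold.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3g}")
```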
SUMO-2 promotes cap-dependent protein translation

To date, little attention has been given to the impact of the individual SUMO paralogues, SUMO-1, SUMO-2 and SUMO-3, on protein translation. To examine whether SUMOs affect protein translation, we surveyed cap-dependent luciferase activity, cap-dependent EYFP expression and IRES-dependent ECFP expression under normal and SUMO over-expression conditions. As shown in Fig. 1A, overexpression of SUMO-2 and SUMO-3 by transient transfection induced a striking increase in cap-dependent luciferase activity (almost 5-fold for SUMO-2 and 2.5-fold for SUMO-3), while SUMO-1 had little effect on cap-dependent translation (Fig. 1A). Furthermore, overexpression of SUMO-2 also significantly promoted expression of cap-dependent EYFP but did not affect expression of IRES-dependent ECFP. On the other hand, overexpression of SUMO-3 markedly increased expression of IRES-dependent ECFP but did not affect cap-dependent EYFP (Fig. 1B). To evaluate whether SUMO-2 promotes protein translation by increasing the total mRNA level, we ran RT-PCR for the LUC gene; however, there was no significant change between the control and experimental groups (Fig. 1C). SUMO-2 also did not change the mRNA levels of ECFP and EYFP (data not shown). In contrast with the effect of overexpression, SUMO-2 knockdown via SUMO-2 shRNA reduced cap-dependent luciferase activity (Fig. 1D). Collectively, these data indicate that sumoylation by SUMO-2 plays a positive role in cap-dependent protein translation.

Figure 2: The non-conjugatable form of SUMO-2 had no effect on cap-dependent LUC translation. HCT116 cells were co-transfected with a luciferase reporter gene and each of the SUMO isoforms or the corresponding mutant. Twenty-four hours later, the cells were harvested for a luciferase assay. The absolute luciferase relative light values were in the 10⁷–10⁸ range. Statistics were performed using a one-way analysis of variance followed by Tukey's multiple comparison test using the data obtained from five independent experiments. (C) The effects of the non-conjugatable SUMO-2 mutant on cap-dependent EYFP protein expression. The empty vector, SUMO-2, and SUMO-2ΔGG were co-transfected with pYIC in HCT116 cells. EYFP was detected using an anti-c-myc antibody. The expression levels of SUMO-2 and its mutant protein are also shown. SUMO-2ΔGG, the non-conjugatable form of SUMO-2 that lacks the diglycine. doi:10.1371/journal.pone.0100457.g002

SUMO-2 conjugation is a prerequisite for activating cap-dependent protein translation

To exclude the possibility that SUMO-2 exerts its effect through non-covalent interactions, we made non-conjugatable SUMO mutants (mutants lacking the C-terminal Gly-Gly through which SUMOs conjugate to the target protein) and used these mutants to differentiate whether SUMO promoted mRNA translation through covalent or non-covalent conjugation. In HCT116 cells, RanGAP1 is a major substrate of SUMO-1, and the SUMO-1 antibody recognizes covalently conjugated RanGAP1–SUMO-1. Since SUMO-2 and SUMO-3 can themselves be sumoylated because they contain an internal SUMO consensus modification site, poly-SUMO-2 or SUMO-3 chains on SUMO-modified substrates can be generated.
Fig. 2A confirmed that the SUMO mutants are unable to be covalently conjugated. The cap-dependent reporter assay using the luciferase reporter (Fig. 2B) and the EYFP reporter (Fig. 2C) both showed that the non-conjugatable SUMO-2/3 mutants lost the ability to induce cap-dependent mRNA translation. These results suggest that cap-dependent mRNA translation is induced mainly by covalent SUMO conjugation to certain target proteins.

Figure 3: Twenty-four hours after transfection, the cells were harvested and lysed for the m7GTP pull-down assays. The cap-bound eIF4E, eIF4G, and 4EBP1 were eluted from the 7-methyl-GTP (m7GTP)-Sepharose resin using 7-methyl-GDP (m7GDP) and detected by immunoblotting with the corresponding antibodies. (C) SUMO-2 promotes the interaction between eIF4E and eIF4G. HCT116 cells were transfected with the empty vector, wild-type SUMO-2 or SUMO-2ΔGG. Twenty-four hours after transfection, the cells were harvested and lysed for the co-immunoprecipitation assays. The immunoprecipitated eIF4E or eIF4A1/2 was blotted with eIF4G, and the immunoprecipitated eIF4G was blotted with eIF4E. (D) SUMO-2 is directly conjugated to the eIF4F complex; processed the same way as (B), but immunoblotted with HA antibody. eIF, eukaryotic translation initiation factor. *, p < 0.05 vs control; **, p < 0.01 vs control. doi:10.1371/journal.pone.0100457.g003

SUMO-2 promotes formation of the eIF4F complex by enhancing interaction between eIF4E and eIF4G

Next, we investigated the mechanisms underlying the SUMO-2-induced cap-dependent mRNA translation. First, we analyzed whether SUMO-2 promotes protein translation through mTORC1 signaling by over-expressing HA-SUMO-2 and non-conjugatable HA-SUMO-2ΔGG in HCT116 cells. We found that the mTOR pathway as a whole, including the total expression and phosphorylation of key molecules of the mTORC1 pathway (such as mTOR, p70S6K, S6, 4EBP1 and eIF4E), showed no obvious changes after overexpression of SUMO-2 or its non-conjugatable form (Fig. 3A). We then examined the effect of SUMO-2 on the interactions between eIF4E, eIF4A and eIF4G. As expected, the amounts of eIF4E and eIF4G in the eIF4F complex increased markedly following SUMO-2 overexpression, while 4EBP-1 decreased significantly (Fig. 3B). Furthermore, the interaction between eIF4E and eIF4G was also enhanced, while the interaction between eIF4G and eIF4A was not affected (Fig. 3C). To further study whether SUMO-2 promotes formation of the eIF4F complex directly, we performed an m7GTP pull-down assay and immunoblotted with HA antibody. Intriguingly, when we overexpressed SUMO-2, bands from about 36 kDa to the top of the gel were detected, whereas the other two groups did not show the same pattern (Fig. 3D). We first considered whether the three major components of the eIF4F complex can be sumoylated by SUMO-2. Bands below 130 kDa cannot correspond to eIF4G–SUMO-2, because the molecular weight of eIF4G is 220 kDa; however, we cannot exclude that eIF4G may be sumoylated. We then tested whether eIF4E and eIF4A can be modified by SUMO-2 by immunoprecipitation, but the results were negative (data not shown). Because the eIF4F complex contains dozens of other components, SUMO-2 substrates may exist within eIF4F that were not examined in the above experiments; further experiments are needed to identify the substrate protein(s) of SUMO-2 in the eIF4F complex.
SUMO-2 rescues the inhibiting effects of 4EGI-1 on the interaction between eIF4E and eIF4G To further confirm the idea that SUMO-2 promotes mRNA translation by enhancing the interaction between eIF4E and eIF4G, we used 4EGI-1, an eIF4E/eIF4G interaction inhibitor, to interfere with the interaction between eIF4G and eIF4E. Using m7GTP pull-down and an immunoprecipitation assay, we found that 4EGI-1 indeed disrupted the formation of the eIF4F complex by inhibiting the interaction between eIF4G and eIF4E, as previously reported [18] (Fig. 4A and B). Intriguingly, overexpression of SUMO-2 partially inhibited the effect of 4EGI-1 on interaction between eIF4E and eIF4G and increased formation of eIF4F complex, while SUMO-2DGG had no such effect (Fig. 4A and B). Furthermore, 4EGI-1 also inhibited cap-dependent LUC and EYFP expression, and SUMO-2 overexpression partially reduced the inhibitory effect of 4EGI-1 on LUC and EYFP expression ( Fig. 4C and D). Taken together, our data indicate that overexpression of SUMO-2 partially induces cap-dependent mRNA translation by promoting the interaction between eIF4E and eIF4G and enhancing formation of active eIF4F complexes. SUMO-2 upregulates expression of two cap-dependent target proteins cyclin D1 and c-myc Given that SUMO-2 substantially increases interaction between eIF4E and eIF4G, we hypothesized that expression of capdependent mRNA will also be promoted. To confirm this hypothesis, we chose two typical cap-dependent proteins, cyclin D1 and c-myc. We first determined the effect of 4EGI-1 on expression of c-myc and cyclin D1. Consistent with our hypothesis, 4EGI-1 at a concentration of 50 mM inhibits the expression of capdependent proteins, and overexpression of SUMO-2 can partially reverse this inhibitory effect (Fig. 5A). Furthermore, neither SUMO-2 nor 4EGI-1 affected the cytoplasmic mRNA levels of the target genes (Fig. 5B), thus excluding the possibility that SUMO-2 or 4EGI-1 alters gene transcription or mRNA transport of the genes. On the contrary, expression of c-myc and cyclin D1 were significantly inhibited when SUMO-2 was reduced by SUMO-2 shRNA (Fig. 5C), while their transcriptional levels were also not affected (Fig. 5D). These observations suggest that SUMO-2 upregulates two cap-dependent proteins c-myc and cyclin D1 translation. SUMO-2 promotes cell proliferation and inhibits apoptosis During culture of SUMO-2 silenced cells, we noticed that cells in the silenced group have much lower rates of cell growth, which was consistent witha previous study [19]. Thus, we assessed cell proliferation and apoptosis. As expected, overexpression of SUMO-2, but not SUMO-2DGG, significantly promotes cell proliferation and partially cancels out the inhibitory effect of 4EGI-1 on HCT116 cell proliferation (Fig. 6A). On the other hand, growth of SUMO-2-silenced HCT116 cells was markedly inhibited (Fig. 6B). Furthermore, transient overexpression of SUMO-2 markedly reduced HCT116 cell apoptosis induced by 4EGI-1 or starvation (Fig. 6, C and D). However, SUMO-2silencing HCT116 cells showed higher rates of apoptosis (Fig. 6E). Taken together, these data indicate that SUMO-2 promotes cell proliferation and inhibits apoptosis. Discussion Considering the remarkably extensive influence of sumoylation in cells, it is important to identify whether the components in sumoylation circulation impact protein translation. 
As previously mentioned, both SUMO-1 and Ubc9 can promote protein translation, and we suspected that SUMO-2 or SUMO-3 may also play a part in mRNA translation. In this study, we provided evidences to show that SUMO-2 can promote cap-dependent protein translation, while SUMO-3 seems to be involved in IRES- So what is the molecular basis of the SUMO-2 induced activation of mRNA translation? We have four speculations. Firstly, proteomic analyses have revealed that many of the SUMO conjugation target proteins are transcription factors or other nuclear proteins that modulate gene expression [20]. We also know that mRNA levels of many known genes were significantly up-or down-regulated in SUMO-2/-3 miRNA overexpressing cells [19]. Thus, we examined whether SUMOs have effects on total mRNA. However, total mRNA proved to be unchanged. Secondly, the mTORC1 pathway is important for regulation of mRNA translation and integrates various signals, such as nutrients, growth factors, energy, and stress, to regulate cell growth and proliferation. However, overexpression of SUMO-2 has little effect on the mTORC1 pathway. Thirdly, cap-dependent translation is a fundamental operation in almost all aspects of cell functions, and one of the rate-limiting step in cap-dependent translation is the formation of the eIF4F complex containing eIF4E (cap-binding protein), eIF4A (ATP-dependent mRNA helicase) and eIF4G (scaffold protein). We found that the underlying mechanism for this result is the significantly enhanced interaction between eIF4E and eIF4G. The eIF4E/eIF4G interaction inhibitor 4EGI-1 can effectively inhibit the formation of active eIF4F complexes. After we treated HCT116 cells with 4EGI-1, the interaction between eIF4E and eIF4G was reduced and this effect was partly impaired when SUMO-2 was overexpressed. However, the effect was not reduced to the same level as the control groups. This may be because 4EGI-1 [21] and SUMO-2 [22] both have complicated networks within cells. Fourthly, we found that SUMO-2 promotes mRNA translation through covalent conjugation. Although the exact component of eIF4F complex modified by SUMO-2 is yet to be identified despite exclusion of three major components, our results did indicate that SUMO-2 modification is involved in the formation of eIF4F shRNA was also used in this study to elucidate the effects of silencing of SUMO expression on global protein expression. As expected, after shRNA knockdown of SUMO-2, cap-dependent LUC and EYFP protein translation decreased, which again confirmed a role of SUMO-2 in protein translation. The shRNA knockdown of SUMO-2 partially inhibited protein translation and showed that SUMO-2 promotes mRNA translation. The observed upregulation of mRNA translation appears to be a transient process associated with formation of active eIF4F complexes because the overall transcript levels for the relevant components vary slightly between cells cultured normally versus cells cultured under SUMO-2 overexpression conditions. During mRNA translation, the rate-limiting step of translation initiation is mediated by the cap-binding protein eIF4E [23]. Posttranslational modification and phosphorylation events are essential for the translation promoting function of eIF4E. These include phosphorylation of eIF4E [24][25][26][27] and its inhibitory binding protein 4E-BP1 (PHAS-I) [28][29][30][31][32]. When phosphorylated, 4E-BP1 releases eIF4E to form the eIF4F complex and promote translation [30,32]. 
In addition to phosphorylation, sumoylaiton also plays an important role in this process. Xu et al. demonstrated that eIF4E modified by SUMO-1 can promote mRNA translation [11]. Moreover, the E3 ligase HDAC2 was shown to promote eIF4E sumoylation [33]. Here, our study showed that overexpression of SUMO-2 resulted in a marked increase in translation and a selective enhancement of key regulators of cell cycle progression, including c-myc and cyclin D1. As a reversible post-translational modification, the SUMO family has gained prominence as a major regulatory component that impacts numerous aspects of cell growth and differentiation. Many experiments indicate that SUMO modification is important for passage through the cell cycle [34][35][36][37]. Sumoylation is also implicated in other forms of cellular growth control, such as senescence and apoptosis [38][39][40]. Moreover, the results of the studies described earlier indicate that sumoylation is not only an important regulator of the normal function of many vital cellular proteins but also a contributor in the pathogenesis of some human diseases. For instance, several lines of evidence point to a role of the SUMO modification pathway in tumorigenesis [41,42]. Ubc9 overexpression can increase cancer cell growth [43,44]. The SUMO E3 protein PIAS3 is upregulated in several different cancer types [45], and elevated levels of the SUMO E1 enzyme are associated with more severe hepatocellular carcinomas [46]. Additionally, deletion of the protease genes, like Ulp1in yeast, stops cell cycle progression and further highlights the essential and critical role of sumolation in the cell's life cycle [47]. Overexpression of SUMO-2 and the Uba2 E1 subunit has been correlated with poor survival of hepatocellular carcinoma patients [46]. In our study, the cell cycle and apoptosis assays indicate that overexpression of SUMO-2 promotes cell cycling and inhibits apoptosis. On the other hand, SUMO-2 depletion inhibits the cell cycle and promotes apoptosis, which is consistent with previous studies. We suspect that the observations from our experiments may be derived from the following two aspects. On one hand, many cell cycle-and apoptosis-related proteins and kinases are thought to be the targets of SUMO-2 [48], so it is clear that SUMO-2 modification plays an important regulatory role in these cell processes. On the other hand, many cell cycle-and apoptosisrelated proteins are also cap-dependent proteins, which suggests that faster or slower cell cycle and apoptosis may result from higher or lower expression levels of these proteins. We hypothesize that both of them contribute to this phenomena. Thus, our results further explain the relationship between sumoylation and human diseases. Over the past ten years, SUMOs have been established as essential regulators of many cellular functions. Aberrant SUMO regulation is a likely cause of a variety of human diseases. While new SUMO targets are rapidly identified, many fundamental questions remain unanswered. Our studies show that, like other components in sumoylation circulation, SUMO-2 can also promote mRNA translation by regulating the formation of the eIF4F complex. We believe that our findings could be relevant to how SUMO-2 participates in physiological and pathological processes in cells.
5,572
2014-06-27T00:00:00.000
[ "Biology" ]
Detection and Monitoring Intra/Inter Crosstalk in Optical Network on Chip

ABSTRACT

INTRODUCTION

The high integration rate of transistors has pushed the semiconductor industry to shift from single-core to multi-core chips and, beyond that, to a new paradigm: the multiprocessor system-on-chip (MPSoC). However, MPSoCs face serious problems in terms of energy consumption, execution time, heat dissipation and data flow. Data transfer between cores in the MPSoC has become the first challenge to address in order to improve these parameters. Hence, the Network on Chip (NoC) emerged as a promising solution, providing low energy consumption and high bandwidth to improve MPSoC performance. However, as the performance demands of computer applications grow, the NoC becomes a bottleneck for scalability and power dissipation in the MPSoC. Indeed, electrical interconnects cannot keep increasing transmission rates while containing power dissipation, which makes it highly desirable to replace them [1].

Optical communication offers high bandwidth with lower power consumption; in fact, it can achieve bandwidths on the order of terabits per second by exploiting wavelength division multiplexing. Because of this potential capacity, photonic systems have become attractive for improving on-chip communication [2]. As a result, the optical network on chip (ONoC) has been proposed to replace the traditional NoC. Microresonators (MRs) and nanophotonic waveguides are the key devices in an ONoC. However, these optical devices and waveguides are transparent (a component is called X-transparent if it forwards incoming signals from input to output without examining the X aspect of the signal), which introduces an optical vulnerability, crosstalk, into the ONoC that does not exist in a conventional NoC. Furthermore, one of the serious problems with transparency is that optical crosstalk is additive, so the aggregate effect of crosstalk over a whole ONoC may be more harmful than a single point of crosstalk [3]. Consequently, optical crosstalk degrades signal quality and worsens the bit error rate (BER) [4]. In fact, both forms of optical crosstalk can arise in an ONoC system: inter-crosstalk and intra-crosstalk. Inter-crosstalk arises when the crosstalk signal is at a wavelength sufficiently different from the affected signal's wavelength that the difference is larger than the receiver's electrical bandwidth. Inter-crosstalk can also occur through more indirect interactions, for example when one channel affects the gain seen by another channel, as with nonlinearities. Intra-crosstalk, in contrast, arises when the crosstalk signal is at the same wavelength as the affected signal, or sufficiently close to it that the difference in wavelengths is within the receiver's electrical bandwidth; intra-crosstalk arises in transmission links due to reflections [5]-[6]. Crosstalk is one of the major problems arising in ONoCs and is a barrier to the scalability and performance needed for the evolution of the MPSoC. As a result, a method to detect, localize and monitor crosstalk is essential. In this paper, we propose a new system to detect and monitor intra/inter crosstalk in an ONoC. We present an analytic model of both forms of crosstalk induced by the ONoC components and, based on this study, we evaluate and point out the importance of detecting and monitoring crosstalk noise in the ONoC.
Furthermore, we design and implement a performed hardware system to detect and monitor inter/intra crosstalk in the whole network. The rest of the paper is organized as follows: The second section will present an extensive overview of the related work in the literature. The third section will describe the inner architecture of the basic devices in ONoC and the network model used in this work. The fourth section will depicit the analytic model of the crosstalk noise induced in optical components. Mainly, we will describe the crosstalk progress in the different devices and in the whole network. The fifth section will describe the hardware design and implementation of the proposed system of detection and monitor the crosstalk in ONoC. The sixth section will discuss and analyse the fusibility and the different results of our system. Finally, we will conclude and expose our future works. RELATED WORK Optical network on chip provide a promising solution to increase the requirements in ultra-high bandwidth and lower power consumption in the MPSoC. According the progress of technology, nanophotonic waveguide and optical switch are presented in the chip, which they are the key components for ONoC. In the last decade, many researchers focus their works to propose a performed architecture of ONoC in objective to satisfy the different requirements of MPSoC performance. However, the crosstalk noise is one of the most important issues that researchers face in developing optical network on-chip. In the literature, most of the researchers focus their work on analysis and modeling of crosstalk noises induce by the optical devices in ONoC. Furthermore, many studies isolate their work for modeling and analyzing the crosstalk noise in optical devices separately as router or waveguide. Yiyuan Xie and al. analyse and optimize cornstalk in 5x5 Hitless Silicon-Based Optical Router [7]. Hence, they analysed crosstalk noise at device level and router level. Based on the detailed analysis, they proposed a general analytical model to study the transmission loss, crosstalk noise, optical signal-to noise ratio (OSNR), and bit error ratio (BER). Indeed, they used the crossing angles of 60⁰ or 120⁰ instead of the conventional 90⁰ crossing angle to design the optical router. Using this method OSNR is improved by about 10 dB [7]. Fabrizio Gambini and al. proposed a photonic multi-microring NoC, which the theoretical model based on the transfer matrix method, has been validated through experimental results in terms of transmission spectra [8]. Transmissions at 10 GB/s have been assessed in terms of BER for both single-wavelength and multi-wavelength configurations. The integrated NoC consists of 8 thermally tuned microrings coupled to a central ring, where the BER measurements show performance up to 10 −9 at 10 Gb/s with limited crosstalk and penalty (below 0.5 dB) induced by an interfering transmission [8]. Besides, many works deal with the crosstalk in the whole ONoC. Mainly, it is how the crosstalk noises effect the performance of the network. Particularly, these studies analyse and model the crosstalk progress in the different parts of the network, and present their effect to BER and SNR of the network. Mahdi Niksdat and al. systematically study and compare the worst case as well as the average crosstalk noise and SNR in three well known optical interconnect architectures, mesh-based, folded-torusbased, and fat-tree-based ONoCs using WDM [9]. 
The analytical models for the worst case and the average crosstalk noise and SNR in the different architectures are presented. Furthermore, the proposed analytical models are integrated into a newly developed crosstalk noise and loss analysis platform (CLAP) to analyze the crosstalk noise and SNR in WDM-based ONoCs of any network size using an arbitrary optical router. The analyses' results demonstrate that the crosstalk noise is of critical concern to WDM-based ONoCs: in the worst case, the crosstalk noise power exceeds the signal power in all three WDM-based ONoC architectures, even when the number of processor cores is small, e.g., 64 [10]. Yiyuan Xie and al. analysed and modelled the crosstalk noise, signal-to-noise ratio (SNR), and bit error rate (BER) of optical routers and ONoCs [11]- [12]. The analytical models for crosstalk noise, minimum SNR, and maximum BER in mesh based ONoCs are developed which an automated crosstalk analyser for optical routers is developed. They, find that crosstalk noise significantly limits the scalability of ONoCs. For example, due to crosstalk noise, the maximum BER is 10 -3 on the 8×8 mesh based ONoC using an optimized crossbar-based optical router. To achieve the BER of 10 -9 for reliable transmissions, the maximum ONoC size is 6×6. As we presented above the research works in optical network on chip has limited to analyse and to model crosstalk noise in both optical components and the whole network. However, Edoardo Fusella, Alessandro Cilardo present different study in ONoC which they proposed a new mapping system for ONoC where it consider the direct effect of the crosstalk noise to the architecture of NoC [13]. They propose a class of algorithms that automatically map the application tasks onto a generic mesh-based photonic NoC architecture such that the worst-case crosstalk is minimized. Furthermore, the results show that the crosstalk noise can be significantly reduced by adopting the aware-crosstalk mapping system, thereby allowing higher network scalability, and can exhibit encouraging improvements over application-oblivious architectures [13]- [14]. BASIC OPTICAL DEVICES AND NETWORK MODEL Optical communication system offers a high bandwidth and high Quality of Service (QoS). However, optical components in ONoC introduce crosstalk noise. In this section, we describe the progress of crosstalk in the optical device moreover in the whole networks [15]. Optical devices model Waveguides and micro-resonator (MRs) are the two basic optical elements and extensively used to construct basic optical switching elements and optical routers. In particular, we have two types of optical switches: the parallel switching element (PSE) and the crossing switching element (CSE). These elements are 1 x 2 optical switching based to MRs and waveguide crossing [16]- [17]. Figure 1 (a) and (b) present the structure of PSE and CSE respectively in the two states OFF and ON. When the state of MRn is ON, we select to switch the wavelength to form the first waveguide to the second this called node in state ON. However, when the state MRn is OFF we have not any switching operation and in this case, the node is in state OFF [17]. Figure 2 presents an overview of an optical NoC communication between two processors C0 and C1. When, the processor C0 decides to establish connection with the core C1 an optical signal will be generated with a specific wavelength λ. Electrical-Optical (E-O) interface on the part of the core C0 starts to convert the electrical signal in optical signal. 
After, the optical signals passes through the optical on-chip network flowing specific nodes. Finally, the optical signal detected with photodectors in Optical-alectrical (O-E) interface. Thus, optical nodes are the key element in optical NoC and many of them are proposed in different types of network [18]- [19]. To evaluate our system we consider the network model as shown in Figure 3. We assume that the design of the network composed by stages, which the number of stages is direct related with the number of processor cores N. Moreover, the number of stages P given by the equation 1, where N is the total number of processor cores. Network model According the number of stages P in ONoC the total number of nodes R is: The inner architecture of node router composed by particular number β of MR according the number of wavelengths ʎ used in ONoC. Figure 3. Structure of Optical Network on Chip CROSSTALK NOISE IN ONOC Optical crosstalk is presented in ONoC components and degrades the quality of signals, increasing their BER (Bit Error Rate) performance as they travel through the network [20]. In addition, crosstalk noise increases signal-to-noise ratio (SNR) and affects the quality of service (QoS) of ONoC. In fact, both forms of optical crosstalk can arise in ONOC routers: inter-crosstalk and intra-crosstalk [5]. We consider both intrachannel and inter-channel in one crosstalk form and we call it crosstalk noise. The intrinsic characteristic of photonic devices allows the crosstalk inducing to the optical signal according the wavelengths crossing in the optical devices. We assume, that the rate of the crosstalk noise ∆ λ is induced by the node i with wavelength λj and the rate of the crosstalk noise ∆ λ is induced by the waveguide l with wavelength λj [21]- [22]. We consider the worst case, when each nodes and waveguides of the network immediately induce crosstalk noise (inter/intra crosstalk) corresponding with a specific wavelength λi. The process of communication between cores begin by the initiation from the processor core M (CM) to connect with the processor core N (CN). At this time, ONoC defines the route between CM and CN and selects a specific wavelength for this request. This route is composed by a specific number of nodes and waveguides. Therefore, the longest route is composed by P nodes and P+1 waveguides. However, the shortest route is constituted by 1 node and two waveguides. As mentioned before, the optical components induce directly the crosstalk noise to the optical signal passed through them. Furthermore, we define the total crosstalk noise added to the optical signal ʎ induced by the different waveguides is: where k is the number of waveguide which constitute the route between CM and CN and ʎj is the wavelength used in ONoC deprive of ʎi. Additionally, we present the total crosstalk noise appended to the optical signal ʎ induced by the different router nodes is: where L is the number of router nodes which they are a part of the route between CM and CN. Then, the total crosstalk noise power in the CN for the optical signal ʎ is presented by: Finally, the SNR of the optical carried on the wavelength ʎ is: where, λ is the optical power signal and λ is the crosstalk noise power. To point out the dangerous aspect of the crosstalk on the network, we evaluate the SNR and the crosstalk power in ONoC. Besides, we use CLAP tools for the simulation which developed by [9]. 
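The bodies of the equations referenced above did not survive extraction, so the sketch below only schematically reproduces the accumulation the text describes: every router node and every waveguide crossing on the route is assumed to leak a fixed fraction of each interfering channel's power onto the observed wavelength, and the contributions are summed before forming the SNR. The leakage values, powers and route lengths are hypothetical and are not taken from the paper.

```python
import numpy as np

def dbm_to_mw(p_dbm):
    """Convert a power value (or array of values) from dBm to linear mW."""
    return 10.0 ** (np.asarray(p_dbm, dtype=float) / 10.0)

def route_crosstalk(signal_dbm, interferer_dbm, n_nodes, n_waveguides,
                    node_xt_db=-30.0, wg_xt_db=-40.0):
    """Accumulate crosstalk noise along a route and return (noise_mw, snr_db).

    Illustrative assumptions: each router node leaks `node_xt_db` and each
    waveguide crossing leaks `wg_xt_db` of every interfering channel's power
    onto the observed wavelength, and the contributions add incoherently.
    """
    signal_mw = float(dbm_to_mw(signal_dbm))
    interferers_mw = dbm_to_mw(interferer_dbm)
    node_frac = 10.0 ** (node_xt_db / 10.0)
    wg_frac = 10.0 ** (wg_xt_db / 10.0)
    noise_mw = (n_nodes * node_frac + n_waveguides * wg_frac) * float(interferers_mw.sum())
    snr_db = 10.0 * np.log10(signal_mw / noise_mw)
    return noise_mw, snr_db

# Longest route in a P-stage network (P nodes, P + 1 waveguides) versus the
# shortest route (1 node, 2 waveguides), with 7 interferers at 0 dBm each.
interferers = [0.0] * 7
for nodes, waveguides, label in [(6, 7, "longest"), (1, 2, "shortest")]:
    noise, snr = route_crosstalk(0.0, interferers, nodes, waveguides)
    print(f"{label:8s} route: noise = {noise:.3e} mW, SNR = {snr:.1f} dB")
```

Consistent with the trends reported below, longer routes and larger wavelength counts both increase the accumulated noise and reduce the SNR in this toy model.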
We expose the SNR and the power of the crosstalk noise according the size of the network and the number of wavelength used in ONoC. Precisely, we compare between the longest route and the shortest one. The Figure 4 (a) and (b) present the SNR and Crosstalk power noise according the size of the network for the number of ʎ equal 8 and 16 respectively. Notice that the crosstalk power noise slightly increase according the size of network, but the SNR decreases. Furthermore, it is clear that the crosstalk power noise is greater for the longest route than the shortest one, is due to the additive nature of crosstalk noise which it induces each time when they has cross in the route. Consequently, the crosstalk power noise accumulates through the optical route between source core and destination one. We notice that the SNR become so high when the number of wavelength employed in the network are increased. We evaluate the crosstalk power noise and the SNR according the wavelength number employed in the ONoC as depict in the Figure 5. Particularly, we fixed the network size by 16x16 cores and we sweep wavelength number from 2 to 32. On account of the increase of the wavelength number, the crosstalk power noise takes an exponential progress. However, the SNR considerably decreases when the number of wavelength increase. Definitely, the presence of high number of optical signals in the same waveguide or optical component causes a direct effect to crosstalk induced in the corresponding wavelength. Finally, we conclude that the crosstalk noise became critical when we increase the wavelength number or the network size. Accordingly, detect and monitor crosstalk in ONoC became indispensable. CROSSTALK NOISE DETECTION AND MONITOR SYSTEM As mentioned before, crosstalk noise is serious obstacles to develop optical network on chip which the reliability and performance of the MPSoC will be curb. As a Result, find a system that has a capability to detect and monitor the crosstalk noise in ONoC is essential and vital. Mainly, this system must respect the following requirements:  Efficiency  Scalability  Facility to implement To detect and monitor crosstalk in ONoC, we propose the Crosstalk Detection and Monitor System (CDMS). The main idea of CDMS is to monitor continuously the different impairments (intra/inter crosstalk) in the whole network. To reach these objectives CDMS is composed by several Crosstalk Detection Bloc (CDB) distributed in the network according a special localization system as shown in the Figure 6. Furthermore, CDB is placed between two optical nodes in the same level which CDB splits the various optical signals passed through these optical routers. The process of function of CDB is given by the algorithm 1. First, we split the optical signals from the input/output of the appropriate optical nodes. To realize this operation we use, a photodetector module to convert the optical signals in electrical signals. Second, the different input/output signals passed to CDB which the process of detection and localization of the crosstalk noise launched. Third, a system of crosstalk classification begin to monitor and to classify the different detected impairments according the values of the crosstalk noises. Moreover, the crosstalk noises are classified on 3 types as shown in the Table 1. The CDMS monitor the crosstalk noise by evaluate the status and features of the signal, which the classification of the crosstalk noise is essential. 
Indeed, when the crosstalk noise power is greater than 3.2 dB, the crosstalk noise is classified as dangerous and an alarm must be generated regardless of the location of the optical component responsible for it; this situation is named the dangerous crosstalk level. When the crosstalk noise power is less than 3.2 dB, there are two situations:

1. If the optical component responsible for the impairment is located in the upper half of the route, the crosstalk noise is classified as acceptable. The acceptable crosstalk level allows the optical signal to reach its destination without an alarm, but the event must still be examined.
2. If the crosstalk noise is detected in the lower half of the route, an alarm is generated and the crosstalk is classified as dangerous, because the probability of inducing further crosstalk is high and would strongly affect the optical signal.

To realize these operations in real time, we design, simulate and implement the CDMS at RTL. The inner architecture of the CDB is shown in Figure 7. The CDB is composed of several MRs to split the different optical signals λi, the number of MRs being equal to the number of wavelengths λ. Next, a photodetector converts each optical signal to an electrical one, so the number of photodetectors also equals the number of wavelengths. The input signals pass through a delay stage to synchronize them with the output signals. Finally, the crosstalk detection logic is implemented, with a complexity and cost directly related to the number of wavelengths used. The CDMS is a distributed system that uses the information collected from the CDB devices. This information is centralized in the CDMS, which monitors the crosstalk noise in the whole network. Moreover, the CDMS localizes and classifies the different crosstalk noises in order to generate the appropriate alarms, as depicted in Algorithm 1.

RESULTS AND ANALYSIS

To assess the feasibility, reliability, scalability and cost of our system, we simulated, synthesized and implemented the CDMS on an FPGA. We used the Xilinx Starter Kit with the corresponding simulation and synthesis tools (Xilinx Project Navigator and ModelSim); in particular, we selected the SPARTAN-3E for this work [23]. We study the cost, complexity and scalability of the CDMS as a function of the network size and the number of wavelengths used in the ONoC. These parameters are evaluated through the area occupied by the CDMS on the chip, as shown in Figure 8; the area occupation is measured as the number of LUTs used on the chip. The total number of LUTs increases exponentially as the number of processor cores scales, which is explained by the increase in the number of CDBs with the network size. In addition, the execution time increases only gradually with the number of cores, which reflects the high scalability of the CDMS; the average execution time is around 23 microseconds. As a result, the CDMS achieves real-time operation with high scalability. Indeed, for a network with 2048 processor cores the CDMS needs fewer than 4000 LUTs, which is 0.01% of the size of the chip, and the execution time is less than 25 microseconds. The penalty from the photodetector operation only slightly affects the real-time operation of the CDMS, because all devices are implemented on the same chip and the design of the CDMS supports real-time execution through its RTL implementation.

Figure 8: Complexity and scalability of the CDMS on chip.
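Returning to the classification rule stated at the beginning of this section, the sketch below encodes the 3.2 dB threshold and the route-position test in a compact form. It is illustrative only: mapping the "lower half" of the route to the components closer to the source is an interpretation, the event fields and names are hypothetical, and the full three-level scheme of Table 1 is not reproduced.

```python
from dataclasses import dataclass
from enum import Enum

class CrosstalkLevel(Enum):
    ACCEPTABLE = "acceptable"   # forwarded without alarm, flagged for inspection
    DANGEROUS = "dangerous"     # alarm raised by the CDMS

@dataclass
class CrosstalkEvent:
    power_db: float        # crosstalk noise power measured by a CDB
    component_index: int    # position of the responsible component on the route
    route_length: int       # total number of components on the route

# Threshold quoted in the text above.
POWER_THRESHOLD_DB = 3.2

def classify(event: CrosstalkEvent) -> tuple[CrosstalkLevel, bool]:
    """Return (level, alarm) for one detected crosstalk event."""
    if event.power_db > POWER_THRESHOLD_DB:
        # Strong crosstalk is always dangerous, wherever it originates.
        return CrosstalkLevel.DANGEROUS, True
    in_lower_half = event.component_index < event.route_length / 2
    if in_lower_half:
        # Early on the route, more crosstalk can still accumulate downstream.
        return CrosstalkLevel.DANGEROUS, True
    return CrosstalkLevel.ACCEPTABLE, False

# Hypothetical events reported by CDBs on an 8-component route.
for ev in [CrosstalkEvent(4.1, 6, 8), CrosstalkEvent(2.0, 6, 8), CrosstalkEvent(2.0, 1, 8)]:
    level, alarm = classify(ev)
    print(ev, "->", level.value, "alarm" if alarm else "no alarm")
```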
To better understand the cost and complexity of the CDMS, we explored the cost and scalability of the CDB as a function of the number of wavelengths λi used in the ONoC. Figure 9 presents the complexity of the CDB, defined as the ratio of the number of LUTs to the number of wavelengths λ used in the ONoC. Similarly, we present the data-processing throughput as a measure of the performance and scalability of the CDB. The complexity of the CDB increases significantly while the number of wavelengths grows from 2 to 16, and then largely stabilizes. This behaviour is due to the reuse of hardware blocks: the complexity rate is 64% for λ equal to 16 versus 62% for λ equal to 32. Otherwise, the performance and scalability of the CDB are mostly constant as a function of the number of wavelengths. As a result, the CDB offers high scalability and performance with an appropriate complexity once the number of wavelengths exceeds 16.

CONCLUSION

The growth of transistor integration has now reached its limits. Moreover, the high transistor integration rate has pushed the semiconductor industry to shift from single-core to multi-core chips (MPSoC). One of the serious problems of the MPSoC is the communication between the different processor cores. In this context, the Network on Chip is a promising solution to this problem, but it is limited by the increasing number of cores implemented on chip. The Optical Network on Chip is a promising solution that addresses the need for high-rate data exchange between cores with lower energy consumption. However, the ONoC is affected by crosstalk noise, a major problem that hinders the achievement and maintenance of high performance. In fact, crosstalk noise deteriorates signal quality and hence degrades the system's performance. In this paper we proposed a new system to detect and monitor crosstalk noise in an ONoC; the contribution of this work is to offer the first complete system for detecting and monitoring alarms induced by crosstalk in an ONoC. In particular, we described the distributed architecture and operation of the CDMS and focused on the hardware design of the CDB. Finally, we implemented and simulated our system to evaluate its performance. The results demonstrate that our system offers high scalability with a low rate of chip-area occupation, as well as real-time operation with an execution time of 23 microseconds.
5,300.4
2018-12-01T00:00:00.000
[ "Engineering", "Physics", "Computer Science" ]
Predicting Success of a Digital Self-Help Intervention for Alcohol and Substance Use With Machine Learning Background Digital self-help interventions for reducing the use of alcohol tobacco and other drugs (ATOD) have generally shown positive but small effects in controlling substance use and improving the quality of life of participants. Nonetheless, low adherence rates remain a major drawback of these digital interventions, with mixed results in (prolonged) participation and outcome. To prevent non-adherence, we developed models to predict success in the early stages of an ATOD digital self-help intervention and explore the predictors associated with participant’s goal achievement. Methods We included previous and current participants from a widely used, evidence-based ATOD intervention from the Netherlands (Jellinek Digital Self-help). Participants were considered successful if they completed all intervention modules and reached their substance use goals (i.e., stop/reduce). Early dropout was defined as finishing only the first module. During model development, participants were split per substance (alcohol, tobacco, cannabis) and features were computed based on the log data of the first 3 days of intervention participation. Machine learning models were trained, validated and tested using a nested k-fold cross-validation strategy. Results From the 32,398 participants enrolled in the study, 80% of participants did not complete the first module of the intervention and were excluded from further analysis. From the remaining participants, the percentage of success for each substance was 30% for alcohol, 22% for cannabis and 24% for tobacco. The area under the Receiver Operating Characteristic curve was the highest for the Random Forest model trained on data from the alcohol and tobacco programs (0.71 95%CI 0.69–0.73) and (0.71 95%CI 0.67–0.76), respectively, followed by cannabis (0.67 95%CI 0.59–0.75). Quitting substance use instead of moderation as an intervention goal, initial daily consumption, no substance use on the weekends as a target goal and intervention engagement were strong predictors of success. Discussion Using log data from the first 3 days of intervention use, machine learning models showed positive results in identifying successful participants. Our results suggest the models were especially able to identify participants at risk of early dropout. Multiple variables were found to have high predictive value, which can be used to further improve the intervention. Background: Digital self-help interventions for reducing the use of alcohol tobacco and other drugs (ATOD) have generally shown positive but small effects in controlling substance use and improving the quality of life of participants. Nonetheless, low adherence rates remain a major drawback of these digital interventions, with mixed results in (prolonged) participation and outcome. To prevent non-adherence, we developed models to predict success in the early stages of an ATOD digital self-help intervention and explore the predictors associated with participant's goal achievement. Methods: We included previous and current participants from a widely used, evidencebased ATOD intervention from the Netherlands (Jellinek Digital Self-help). Participants were considered successful if they completed all intervention modules and reached their substance use goals (i.e., stop/reduce). Early dropout was defined as finishing only the first module. 
During model development, participants were split per substance (alcohol, tobacco, cannabis) and features were computed based on the log data of the first 3 days of intervention participation. Machine learning models were trained, validated and tested using a nested k-fold cross-validation strategy. Results: From the 32,398 participants enrolled in the study, 80% of participants did not complete the first module of the intervention and were excluded from further analysis. From the remaining participants, the percentage of success for each substance was 30% for alcohol, 22% for cannabis and 24% for tobacco. The area under the Receiver Operating Characteristic curve was the highest for the Random Forest model trained on data from the alcohol and tobacco programs (0.71 95%CI 0.69-0.73) and (0.71 95%CI 0.67-0.76), respectively, followed by cannabis (0.67 95%CI 0.59-0.75). Quitting substance use instead of moderation as an intervention goal, initial daily consumption, no substance use on the weekends as a target goal and intervention engagement were strong predictors of success. INTRODUCTION Alcohol, tobacco, and other drugs (ATOD) use are among the leading risk factors for morbidity and mortality worldwide (Degenhardt et al., 2013;Shield et al., 2016;Volkow and Boyle, 2018) and can be a major cause of negative social, economic, and medical effects (Degenhardt and Hall, 2012). Digital selfhelp interventions for ATOD use have been broadly explored as a tool to help mitigate substance use and related harm, often with positive results (Riper et al., 2008;Tait et al., 2014;Mujcic et al., 2018;Berman et al., 2019;Olthof et al., 2021). Nonetheless, participant adherence remains a major issue in digital interventions for mental disorders, either due to not using the intervention (non-adherence) or due to not completing follow-up measures (study dropout) (Khadjesari et al., 2014). Outside randomized controlled trials, dropout rates reported in the literature for digital health interventions vary from 44% in a digital intervention for amphetamine-type stimulant abuse, up to 83% for an internet-based intervention for psychological disorders (Melville et al., 2010;Tait et al., 2014). Different types of data can be used to predict adherence or dropout in eHealth interventions. A study by Symons et al. (2019) focused on the prediction of outcome in cognitive behavioral therapy, and used variables related to demographics, medical history, psychiatric history, and symptoms of alcohol dependence. Using patient demographics and log data variables, the study of Pedersen et al. (2019) focused on the prediction of dropouts from an intervention for chronic lifestyle diseases (such as diabetes, heart disease, chronic obstructive pulmonary disease, and cancer), and reported an AUC of 0.92 with log data variables being more predictive than demographics. Log data often shows high predictive value. It consists of records of actions performed by the user when using the intervention and can provide new insights into the actual usage of each individual module (Sieverink et al., 2017a). Log data was used for the prediction of outcome (dietary changes) of an online intervention for eating disorders (Manwaring et al., 2008), the results suggested certain variables (e.g., the number of weeks the intervention was used, accessing content pages, and posting in the journals) were significantly associated with dietary changes. 
Cognitive Behavioral Therapy (CBT) and motivational interviewing (MI) based digital self-help interventions have been developed to help treat people suffering from diverse conditions, including problem drinking (Riper et al., 2008; Mujcic et al., 2020), tobacco smoking (Mujcic et al., 2018), and cannabis use (Olthof et al., 2021). Some central elements of CBT self-help interventions are: exploring ambivalence regarding behavior change, stimulus control, stress management, social support, and goal setting and pursuit through monitoring and exercises (Foreyt and Poston, 1998). Retaining users and increasing engagement have always been a priority of self-help health interventions (Freyne et al., 2012), since participant adherence plays a major role in the success of an intervention (Sieverink et al., 2017b). The large number of variables available in the log data collected during a digital self-help intervention makes machine learning especially suitable for the prediction task, since such models are capable of handling high-dimensional data and of discovering previously unknown relationships between variables by adding non-linearity to the learning process. In this study, we aim to use machine learning models to predict participant success using data from the early stages of a digital self-help intervention for problematic substance use (alcohol, tobacco, or cannabis) and to explore the components that are associated with the success of participants in reaching the goals that they set at the start of the intervention. We also explored model interpretability, since understanding the factors associated with participant adherence can subsequently lead to implementation research, investigating changes to improve relevant adherence patterns and thus improve the success rates of the intervention (Acion et al., 2017). Jellinek Intervention We included all participants enrolled between January 2016 and October 2020 in a widely used, evidence-based unguided digital self-help intervention for alcohol, cannabis, and cocaine use, tobacco smoking, and gambling (Jellinek Digital Self-help), which is based on CBT and MI techniques and is composed of 6 modules. The intervention covers at least 30 days (5 days per module). Each module consists of an animation video, a reading assignment and at least one writing assignment. Automated feedback is given by the program based on the results and progress of the participant. A forum and a personal diary are also available in the intervention. Based on the principles of CBT, the user is also encouraged to register their substance use or craving on a daily basis. More details about the intervention can be found in the diagram of Figure 1 and in the online document (Jellinek, 2019). Participants provided informed consent for the use of their data for research purposes when signing up for the intervention. All data was pseudonymized before the analysis by removing all directly identifiable information, such as e-mail addresses and names. Definition of Success At the start of the online intervention, the participants can choose their goal for the intervention. They can choose whether they want to (gradually) stop or reduce their substance use. They can also set the target maximum daily consumption (in units) they want to reach by the end of the intervention. We use this target number of units defined by the participants, combined with the prescribed use of the intervention, to define success. We excluded a large number of participants who did not complete module 1 of the intervention.
Based on inspection of the data, we defined intervention success as completing all 6 modules of the intervention and reaching the daily substance use goal for the last 7 days before discontinuing the intervention. We defined early dropout as completing module 1 but not going further than module 2. Feature Engineering We selected log data from the first 3 days (72 h) of intervention use of all participants. We selected this number of days after discussion with the program designers, to keep a balance between collecting as much relevant information about the usage of the intervention as possible while keeping the window for action as early as possible. Nevertheless, we explored other time windows (48 and 96 h) in a sensitivity analysis. In the first 72 h, the participants had tasks from the first module available to them, namely: watching the start video (introduction), writing pros of stopping and cons of continuing substance use (e.g., "I will save more money if I spend less on alcohol"), and writing agreements for themselves (e.g., "If I reach my goal I will get myself a reward"). Besides these tasks, other modules were available at all times, such as the Forum (where the participant can interact with other participants, e.g., create, like, and reply to posts) and the diary. Daily consumption registration, logins, and forum access were also computed for the same time frame. A complete table with all the features computed and their explanation is available in Supplementary Table 1. Given the sensitive nature of the data, it is not publicly available. Sensitivity Analysis We designed experiments with a less strict definition of success, where participants that reached their target goal for at least 6 consecutive days (instead of the standard 7 days) and finished at least 4 out of the 6 modules (instead of finishing all 6 modules) of the intervention were also considered successful in reaching their goals. We also assessed the influence of shorter and longer time periods for data extraction (48 and 96 h). Finally, our definitions of success (finishing all modules and reaching the target consumption goal) and early dropout (reaching no further than module 2) do not include a group of participants who finish the intervention but do not achieve their target consumption goal. A total of 41% of the participants who reached module 6 of the alcohol intervention did not reach their target consumption goal and are therefore not included in the successful group. The percentages were 33 and 39% for the cannabis and tobacco interventions, respectively. Since we defined success as a combined measure of prolonged participation and meeting pre-set goals, and given the low percentages for the cannabis and tobacco interventions (and therefore an even smaller sample size) available in this subgroup, we did not include it in the analysis. Machine Learning Models Given the large number of machine learning models available in the literature, we selected a subset that has shown state-of-the-art results in recent applications. Moreover, since the learning process can be very different for each model, we aimed at including models with different learning processes to increase generalizability and the chances of new findings (Fernández-Delgado et al., 2014).
Therefore, we included two models: Logistic Regression, which is often used in clinical prediction tasks, offering a linear approach and being less robust when dealing with high-dimensional data (datasets with a large number of variables), and Random Forest (Breiman, 2001), which is robust to high-dimensional datasets and can identify non-linear relationships in the data (Couronné et al., 2018). Moreover, both models offer interpretable variable importance after training. Modeling Pipeline We used a nested k-fold cross-validation strategy to train, validate and test the models. In the outer cross-validation loop, the data was split into 10 stratified folds (to account for class imbalance). In each cross-validation iteration, one fold was used as the test set while the remaining nine folds (the training set) were used in the inner cross-validation loop. The inner cross-validation loop was used for hyper-parameter optimization, where the training data was split again into three folds, of which two were used to train the model with a given set of hyper-parameters, while the remaining fold was used to validate them. The best set of hyper-parameters was the one with the highest average Area Under the Receiver Operating Characteristic Curve (AUROC). The model trained with the best set of hyper-parameters was applied to the test set, and the evaluation measures were computed (a schematic code sketch of this nested procedure is given below). We present a list of hyper-parameters used for model optimization in Supplementary Table 2. The participants of each substance available in the intervention were assessed separately. All code was implemented in Python 3 using the Scikit-learn library for modeling (Pedregosa et al., 2012). The code is available in the following GitHub repository: https://github.com/L-Ramos/ML_ehealth. Class imbalance was present in all experiments and can lead to biased results. Therefore, we applied balanced class weights during model training to address this issue (Pedregosa et al., 2012). Class weights work by multiplying the error of each sample during training. This way, classification mistakes in the minority class lead to higher loss values and a larger impact on model training. This approach has been shown to be effective even in cases of severe class imbalance (King and Zeng, 2003; Zhu et al., 2018). Statistical Analysis For each machine learning method, 10 models were optimized and tested using our nested k-fold cross-validation strategy. We report the average across all cross-validation iterations and the 95% Confidence Intervals (CI) for the following evaluation measures: AUROC, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). We also present a confusion matrix with the results from all folds. The threshold of 0.5 was used for converting the probabilities to class predictions. Feature Importance For feature importance visualization we used SHAP (SHapley Additive exPlanations), which is a unified approach for explaining the predictions of any machine learning model (Lundberg et al., 2019). SHAP values are used to describe the importance a model assigns to the features for a given data point and how they influence the prediction of a certain class. SHAP allows the visualization of how high and low values of a given feature affect the prediction, offering insightful information about the models' decision process. We opted to use SHAP instead of the odds ratios from LR and the Gini feature importance from RF to allow the comparison of feature importance between both models using a single visualization tool.
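As a rough illustration only, the nested cross-validation procedure described above can be sketched with scikit-learn as follows; the feature matrix X, the labels y and the hyper-parameter grid are placeholders rather than the study's actual data or settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, GridSearchCV
from sklearn.metrics import roc_auc_score

def nested_cv_auroc(X, y, n_outer=10, n_inner=3, seed=0):
    """Nested cross-validation: the outer loop estimates performance,
    the inner loop (GridSearchCV) tunes hyper-parameters on AUROC."""
    outer = StratifiedKFold(n_splits=n_outer, shuffle=True, random_state=seed)
    param_grid = {"n_estimators": [200, 500], "max_depth": [None, 5, 10]}  # placeholder grid
    scores = []
    for train_idx, test_idx in outer.split(X, y):
        # Balanced class weights compensate for the success/early-dropout imbalance.
        model = RandomForestClassifier(class_weight="balanced", random_state=seed)
        search = GridSearchCV(
            model, param_grid, scoring="roc_auc",
            cv=StratifiedKFold(n_splits=n_inner, shuffle=True, random_state=seed),
        )
        search.fit(X[train_idx], y[train_idx])
        # Evaluate the refitted best model on the held-out outer fold.
        proba = search.predict_proba(X[test_idx])[:, 1]
        scores.append(roc_auc_score(y[test_idx], proba))
    return np.mean(scores), np.std(scores)
```

In the study's setup this procedure would be run separately for each substance program, with the mean and 95% confidence intervals computed over the ten outer folds.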
A further reason to prefer SHAP is that Gini importance is severely biased for high-cardinality features (Strobl et al., 2007), which can lead to misleading conclusions. Study Population Log data from 32,398 participants enrolled from January 2016 to December 2020 in the Jellinek self-help intervention was available. Around 80% of the participants did not reach further than the first module of the intervention and were excluded from further analysis. This group of participants possibly included many individuals who only wanted to take a look at the intervention, rather than having the intention to actually follow it. The remaining 20% of the participants were divided based on the five different addiction programs available in the online intervention, namely: alcohol, cannabis, tobacco, gambling, and cocaine. Since the number of participants in the gambling and cocaine interventions was relatively low for developing machine learning models, we did not include these programs in the analysis. In Table 1 we show the total number of participants included per intervention, with a total of 2,126 participants for the alcohol intervention, 466 for the cannabis intervention, and 499 for the tobacco intervention. For the alcohol intervention, of the 1,085 participants that completed module 6 of the intervention, 449 did not reach their target consumption goal and were therefore excluded, leaving 636 participants in the successful group and 1,490 in the early dropout group. For the cannabis intervention, from a total of 156 participants that finished the intervention, 52 did not reach their target consumption goal and were excluded, leaving a total of 104 participants in the successful group and 362 in the early dropout group. From the 203 participants that finished the tobacco intervention, 81 were excluded for not reaching their target goal, leaving a total of 122 participants in the successful group and 377 participants in the early dropout group. We present in Table 2 the distribution of participants per substance used and the cumulative total of participants that reached each module. (Table notes: early dropout is defined as finishing module 1 and reaching no further than module 2; success as reaching module 6 and achieving the target consumption goal over the 7 days of consumption before discontinuation. Participants eligible for inclusion are defined as anyone who reached further than module 1. The value per module is the cumulative total, i.e., participants that reached module 6 are also counted in the previous modules. Percentages refer to the total number of participants after exclusion criteria for each substance program individually.) Overall, around 20% of all participants reached the second module of the intervention. This percentage differed between the substances: for alcohol it was 26.36%, while for cannabis it was 18.50% and for tobacco it was 9.40%. Supplementary Table 3 shows the distribution of all features available per substance type, with the participants split into successful and early dropout groups. Prediction Accuracy We present in Table 3 the evaluation measures (mean and 95% confidence intervals) for predicting the success of participants using the online intervention for alcohol, cannabis, or tobacco. The AUROC was the highest for the alcohol and tobacco substances using the RF model (0.71, 95% CI 0.69-0.73, and 0.71, 95% CI 0.67-0.76, respectively). Specificity and NPV were higher than sensitivity and PPV, respectively, for all substances.
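For reference, these evaluation measures can be derived per fold from the predicted probabilities and the resulting confusion matrix; the sketch below is a generic helper with illustrative names (not the study's code), in which label 1 denotes a successful participant.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def fold_metrics(y_true, y_proba, threshold=0.5):
    """AUROC plus sensitivity, specificity, PPV and NPV at a fixed threshold."""
    y_pred = (np.asarray(y_proba) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "auroc": roc_auc_score(y_true, y_proba),
        "sensitivity": tp / (tp + fn),  # share of truly successful participants identified
        "specificity": tn / (tn + fp),  # share of true early dropouts identified
        "ppv": tp / (tp + fp),          # how often 'success' predictions are correct
        "npv": tn / (tn + fn),          # how often 'early dropout' predictions are correct
    }
```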
The highest specificity value was for cannabis (0.78, 95% CI 0.74-0.83) and the highest sensitivity was for alcohol using LR (0.61, 95% CI 0.57-0.65). The higher values for both specificity and NPV show that the models were better at identifying the early dropout participants. The prevalence of successful participation was 29, 28, and 29% in the alcohol, cannabis and tobacco programs, respectively. In Figure 2 we present the confusion matrix for each substance for the model with the highest AUROC (RF), using the results from the test sets once all cross-validation iterations were complete. For the alcohol intervention, a total of 1,141 (77%) participants with an early dropout outcome were correctly identified, while 323 (50%) participants with a successful outcome were correctly identified. The same can be observed in the cannabis intervention, where 283 (78%) participants with an early dropout outcome and 49 (47%) participants with a successful outcome were correctly identified. Finally, in the tobacco intervention, 287 (73%) participants with an early dropout outcome and 66 (47%) participants with a successful outcome were correctly classified by the RF model. We present the results for the sensitivity analysis using other time windows for feature engineering besides 72 h (48 and 96 h) in Supplementary Table 4, and for relaxing the definition of success (achieving the target goal for 6 or 7 days before discontinuing the intervention and finishing at least 4 out of the 6 modules) in Supplementary Tables 5, 6, respectively. No differences were found when comparing the experiments with other time windows and the standard 72 h (change of around 0.01 in the average AUROC). When relaxing the definition of success (considering participants that reached 6 out of 7 days of their target goal as successful) there was a slight increase in the AUROC (around 0.01) for some interventions. Relaxing the number of target goal days to be achieved reduced the overall performance of the models. In most cases, relaxing the number of modules to be finished also led to a worsening of the results (reduced prediction accuracy). Feature Importance In Figure 3 we present the feature importance (top 20 for visualization purposes) using SHAP for the RF model trained on the alcohol data. On the y-axis we have the features based on the first 72 h of participation, in order of importance from top (most important) to bottom (least important), and on the x-axis their respective SHAP values, which indicate their association with the success outcome (SHAP values above zero) or the early dropout outcome (SHAP values below zero). The color legend on the right shows how high and low values of a given feature relate to the SHAP values. For example, Number of Logins is at the top as the most predictive feature. High values for Number of Logins have positive SHAP values, which indicates an association with the success outcome in the alcohol intervention. Most engagement-related features (Number of Logins, Forum Visits, and Participation Badges) appear at the top, with high values being associated with success. Moreover, not drinking on the weekends (Saturday Target) was also associated with success in the alcohol intervention. The Total Units Consumed in the first 72 h was also considered an important predictor, with lower values associated with higher success rates. Another relevant finding is that having the target goal set to reduce (Program Goal-Reduce) was associated with the early dropout outcome for the alcohol and cannabis interventions. A plot of this kind can be reproduced with the shap library, as sketched below.
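The following sketch is only illustrative, with stand-in feature names and synthetic data rather than the study's 72-hour log features; depending on the shap version, the explainer may return one array per class or a single array.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

# Illustrative stand-in data: in the study these would be the 72-hour log features.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 5)),
                 columns=["logins", "forum_visits", "diary_entries",
                          "units_consumed", "badges"])
y = (X["logins"] + rng.normal(size=200) > 0).astype(int)

rf = RandomForestClassifier(class_weight="balanced", random_state=0).fit(X, y)

explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X)
# Binary classifiers may yield a per-class list (older shap) or a 3-D array (newer shap);
# we keep the attributions for the 'success' class.
values = shap_values[1] if isinstance(shap_values, list) else shap_values
if getattr(values, "ndim", 2) == 3:
    values = values[:, :, 1]
shap.summary_plot(values, X, max_display=20)  # beeswarm plot, as in Figure 3
```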
Low initial daily consumption values (Monday Initial, Tuesday Initial, etc.) were also associated with a successful outcome. In Figure 4 we present the SHAP feature importance plot for the cannabis intervention. The main findings for the cannabis intervention were similar to those for the alcohol intervention, with the addition that high values of the Number of Diary Entries were associated with participant success. Figure 5 shows the SHAP feature importance plot for the tobacco intervention. Since many features were highly correlated with each other, they were removed from the analysis. Moreover, consumption target variables were not included, since for the tobacco intervention only (slowly) quitting is an option (instead of the other options of reducing or slowly reducing available in the other interventions), and therefore all target goals are set to zero. A total of 15 features were included in the models and in Figure 5. The Total Units Consumed was the most important variable for the tobacco intervention, with low values being associated with success. Having the target goal set to stop instead of slowly stop was also an important predictor. DISCUSSION We have shown that machine learning models can identify participants that will be successful in reaching their goals in the online self-help intervention for alcohol, cannabis, and tobacco. The best AUROC values were 0.71, 0.67, and 0.71 for the alcohol, cannabis, and tobacco interventions, respectively, which shows moderate predictive value. Despite all models having similar performance, the AUC was the lowest for the cannabis intervention. This is likely due to the small number of samples available for training, since this was also the intervention with the fewest participants. Moreover, the high specificity values reported suggest that our models were better at identifying participants at risk of early dropout, while the high negative predictive values show that the early dropout predictions were often correct. Such findings could be used in practice, to offer extra support for this risk group. We have identified that engagement with the intervention, alongside having the target goal to quit immediately instead of gradually quitting or moderating substance use, and not drinking on the weekends were all important predictors of participant success. Thus, our findings have important implications for implementation trials geared at increasing adherence and success in ATOD self-help programs. The prediction of participant adherence and success in addiction treatments (including CBT) has been previously explored in the literature (Acion et al., 2017; Symons et al., 2019; van Emmerik-van Oortmerssen et al., 2020), since it can lead to new insights and subsequently to improvements to the service. The prediction of dropout and outcome of a CBT treatment for Attention Deficit Hyperactivity Disorder and Substance Use Disorders (SUDs) was investigated by van Emmerik-van Oortmerssen et al. (2020). They found a significant association between participant demographic variables and dropout from CBT. Despite their positive findings, the number of participants included was relatively small (119) and only linear models were explored. Acion et al. (2017) investigated the use of machine learning for predicting SUD treatment success. They included a large population (99,013 participants) and reported AUROCs up to 0.82.
Nevertheless, their work was limited to in-hospital treatment (no CBT) and defined success only as reaching the end of treatment (i.e., an adherence goal). In the study by Symons et al. (2019), machine learning models were used to predict treatment outcomes of a CBT treatment for alcohol dependence. Demographic and psychometric data from 780 participants were included in the models, and they reported very low accuracy results, with AUROC values around 0.50 (close to random). The prediction of the outcome of a CBT for tobacco smoking was explored in a study by Coughlin et al. (2020), where demographics and impulsivity measures were used to train a decision tree model and an accuracy of 74% was reported on the validation set. Our study builds upon and extends previous studies by including a large population of 32,398 participants from a self-help digital intervention for SUD, by using a nested cross-validation strategy to reduce the risk of biased results, by including multiple variables available in the log data from the intervention (instead of the commonly used demographics), by using non-linear models in the analysis pipeline, and by reporting the importance of feature values in the models' decision process. Future experimental work should clarify whether the identified predictors are true risk factors, in that they independently contribute to success, or only risk markers. Clinical Interpretation Our results suggest that users of the Jellinek online intervention who are more engaged (and probably more motivated) with the intervention in the first 3 days of participation tend to reach their goals more often. Having the goal of immediately quitting rather than gradually quitting or moderating use, and the goal to quit using substances on the weekends, also correlate with success. These findings correspond with previous studies that investigated differences between gradually vs. abruptly quitting (Cheong et al., 2007; Hughes et al., 2010; Lindson-Hawley et al., 2016), and they tap into an old and wider discussion on whether moderation or cessation are valid and feasible substance use treatment goals (Owen and Marlatt, 2001; Cheong et al., 2007; Hughes et al., 2010; Luquiens et al., 2011; Lindson-Hawley et al., 2016; Haug et al., 2017). Moreover, in the tobacco intervention, the Total Units Consumed was the most important predictor, while in the alcohol and cannabis interventions it was the Number of Logins. The reasoning behind this difference is not entirely clear, but a possibility is that it is related to the possible goals of each intervention. For tobacco, only completely quitting can lead to a successful outcome. Therefore, the association between tobacco use quantities and success is much stronger, as success can only be achieved when tobacco use goes to 0 at some point. For alcohol/cannabis, this association is slightly less strong, as moderate use of these substances (target goal set to reduce) can still lead to a successful outcome and goal achievement. Finally, quitting is more difficult for heavy smokers than for people who smoke less, making initial consumption a strong predictor of success (Vangeli et al., 2011). Therefore, surveying patients on their motivation before the start of the intervention, extending self-therapy by incentivizing daily use of the program (for instance, through interactive gaming mechanisms ("gamification") and/or the availability of novel assignments each day), and encouraging therapy continuation through positive feedback on their progress may have a positive effect for participants.
Explaining how choices between quitting and controlled use as a treatment goal, and, in the case of controlled use, how substance use during weekends may affect goal attainment, might assist the participant in making better-informed decisions, ultimately leading to improved outcomes. Such amendments to the intervention, and how these might affect participant success, will be the topic of future studies. Finally, regarding the methods, one could consider different optimization strategies or probability cut-offs to prioritize the identification of either early dropouts or successful participants. In our case, we aimed at prioritizing the correct identification of participants at risk of early dropout (NPV), which was higher for the standard 0.5 cut-off, while other approaches such as the Youden Index (Youden, 1950) led to more balanced sensitivity and specificity values. Strengths and Limitations A strong point of this study is the large sample size, which includes all participants who used the intervention since 2016 and indicated that their data may be used for research purposes, making the data to a large extent representative of the full population that participates in the Jellinek ATOD online intervention. Our approach included multiple validation steps to reduce the risk of overfitting and biased results. Furthermore, we increased model transparency by using SHAP for model interpretation, which makes our results clearer and more actionable. A limitation of this study is the lack of demographic variables, since these have previously been shown to be strong predictors of participant success (van Emmerik-van Oortmerssen et al., 2020). However, given that demographics are not modifiable, they cannot inform research aimed at improving treatment success, which limits their practical value. Due to privacy concerns, variables such as age, sex, and highest degree achieved are not mandatory for the participants to fill in during registration and are, therefore, largely missing in our dataset (more than 80%). Another limitation is the large number of participants who did not even finish the first module of the intervention and were excluded from this study. A large number of early dropouts is quite common in unguided self-help interventions, and finding predictors for it is often difficult (Eysenbach, 2005; Beatty and Binnion, 2016). Since our main goal was to predict success based on log data, a minimum number of days of use was necessary to make such data available. Nevertheless, the excluded participants represent a significant part of the population, which can impact the generalizability of our results. Future studies are necessary to explore the reasons behind the large number of participants that seem to barely use the intervention before dropping out. The group of participants that finished the intervention but did not reach their target goal was excluded from our analysis, since they did not match our definition of success. This is a limitation of our study, since, despite being small, this is an important group that seems to be motivated enough to go all the way through the intervention while not fully benefiting from it. Moreover, provided more data for such a group becomes available in the future, a model capable of differentiating between participants who, despite finishing the intervention, will or will not reach their target consumption goals could be of great assistance. Finally, in all experiments, accuracy was relatively limited, with AUROCs around 0.70.
The evaluation measures were higher for the substances with more participants, which suggests that our results could improve if more data were available. CONCLUSION Log data analysis with machine learning yielded positive results for the prediction of participant success in the digital self-help Jellinek intervention, with the models being especially accurate in predicting participants at risk of early discontinuation. We also identified multiple relevant predictors of outcomes and how participants' choices regarding goals may affect goal achievement. Whether this information can lead to improvements to the intervention will be the subject of future studies. DATA AVAILABILITY STATEMENT The data analyzed in this study is subject to the following licenses/restrictions: given the sensitive nature of the data, and to preserve the identity of the participants, the data cannot be made publicly available. Requests to access these datasets should be directed to MB. ETHICS STATEMENT Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS LR: lead author, study design, analysis and interpretation of results, and critical revision of the manuscript. MB: data extraction, study design, analysis and interpretation of results, and decision making and critical revision of the manuscript. GW and SP: study design, analysis and interpretation of results, and decision making and critical revision of the manuscript. TB: result interpretation, analysis and interpretation of results, and decision making and critical revision of the manuscript. AG: study design, analysis and interpretation of results, and decision making and critical revision of the manuscript. All authors contributed to the article and approved the submitted version. FUNDING This project, under the name 'Digging for goals using data science: Improving E-health alcohol and substance use interventions using machine learning and longitudinal clustering', was funded by ZonMw (Grant No. 555003024).
7,851.2
2021-09-03T00:00:00.000
[ "Computer Science", "Medicine" ]
Machine-learning Assisted Insights into Cytotoxicity of Zinc Oxide Nanoparticles Zinc oxide nanoparticles (ZnO NPs) are commercially used as an active ingredient or a color additive in foods, pharmaceuticals, sun protection lotions, and cosmetic products. While the use of ZnO NPs in everyday products has not been linked to any serious health issues so far, the scientific evidence generated for their safety is not conclusive and, in most cases, could not be validated further in in vivo settings. To settle controversies arising from inconsistent in vitro findings in previous research focusing on the toxicity of ZnO NPs, we combined the results of 25+ independent studies. One-way analysis of variance (ANOVA) and the classification and regression tree (CART) algorithm were used to pinpoint intrinsic and extrinsic factors influencing the cytotoxic potential of ZnO at the nanoscale. Particle size was found to have the most significant impact on the cytotoxic potential of ZnO NPs, with 10 nm identified as a critical diameter below which cytotoxic effects were elevated. As expected, strong cell type-, exposure duration- and dose-dependency were observed in the cytotoxic response to ZnO NPs, highlighting the importance of assay optimization for each cytotoxicity screening. Our findings also suggested that exposures of ≥12 hours to NPs resulted in cytotoxic responses irrespective of the concentration. Considering the cumulative nature of research processes, where advances are made through subsequent investigations over time, such meta-analytical approaches are critical to maximizing the use of accumulated data in nano-safety research. Introduction Nanoscience deals with the phenomena that occur in the nanometer range, one billionth of a meter. While the conceptual roots of nanoscience were planted in the late 1950s, it was not until the early 1990s that nanotechnology advanced enough to design structures, devices and systems at atomic and molecular scales (1). Nanoscale science and engineering is interdisciplinary in nature, requiring teams of researchers with different scientific backgrounds (e.g., physicists, chemists, biologists, materials scientists and engineers) working together to come up with new innovations and solutions to today's complex issues. The application of nanotechnology can span different disciplines and research areas. Today, nanotechnology is explored in almost all existing domains, ranging from high-strength materials and nanoscale sensors to electronic and opto-electronic devices (2). In parallel, novel properties of nanoscale materials are enabling new commercial markets, such as next generation batteries and intelligent drug delivery systems (3,4). Nanoparticles are commonly classified according to their origin (engineered or natural), dimensionality (0D, 1D, 2D or 3D), morphology (low or high aspect ratio), state (well-dispersed, aggregated etc.)
or chemical composition (ceramic, polymeric, carbon-based or metallic) (5). Among the different metal-based nanoparticles (NPs), zinc oxide (ZnO) stands out for its high UV-absorption capacity and solubility. ZnO NPs are commercially used as a bulking agent, filler or pigment in glass and ceramic products, foods, pharmaceuticals, sun protection lotions, and cosmetics (6). One of the early uses of ZnO NPs was in sunscreens, due to their intrinsic UV-absorbing properties and transparent nature (7). The use of nanosized ZnO (and also titanium dioxide) as an effective ingredient in modern sunscreens has created a long-lasting debate over their safety (8,9). In the early 2010s (and onwards), both the regulatory bodies and the public became increasingly aware of the potential threat posed by sunscreens formulated with nano-ingredients. The early findings related to the potential hazards of ZnO NPs were mostly inconsistent, making it impossible to conclude with high certainty that nano-sized ZnO is ultimately safe to use in skin-contacting products (9). In the following years, it became clear that not all ZnO NPs should be treated the same from a safety perspective, because physicochemical characteristics greatly affect the cellular interactions and safety profiles of NPs (10). Determining the potential harmful effects of NPs is critical to ensure that they are safe for human use. One effect of NPs that must primarily be assessed is their cytotoxic potential, together with the factors contributing to their cytotoxicity (11). After two decades of research and detailed investigations, there is still no consensus on the main physicochemical properties driving the cytotoxicity of NPs (12)(13)(14). In addition to the intrinsic as-received properties of NPs and media-dependent surface characteristics, test conditions such as cell type, exposure concentration and duration have a direct influence on the results of cytotoxicity assays. Figure 1 shows material- and assay-related parameters influencing different dimensions of NP-protein and NP-cell interactions (Figure 1: Key parameters affecting the toxicity of NPs). ZnO NPs differ from their bulk counterparts in that their inherent complexity and medium-dependent characteristics make it very difficult to study their cellular interactions and effects. Moreover, the experimental differences in nano-hazard screening are directly reflected in test results, potentially leading to inter-experimental inconsistencies. The aim of this study is to integrate published evidence on the cytotoxicity of ZnO NPs and to critically appraise bodies of evidence in their entirety. Literature Search and Data Extraction A systematic literature search was undertaken using the PubMed scientific search engine for the period between 2010 and 2022. The following three terms were used for the initial article search: "zinc oxide", "nanoparticle*" and "cytotoxic*". The search returned 594 peer-reviewed research papers, which were manually filtered according to the following inclusion criteria: (i) the core of the studied NPs must be zinc (and not a composite material); (ii) in vitro cytotoxicity data must be available and accessible; (iii) particle size data must be available; (iv) the unit of exposure concentration must be convertible to µg/mL; and (v) an untreated cell control must be available. A total of 543 data points for 40 different ZnO NPs from the remaining 26 independent studies were included in the analysis.
Data Cleaning and Pre-processing Data normalization (i.e., changing the values to a standard scale) is often used prior to statistical analysis when comparing features with different units or ranges. First, the units of measure were unified to minimize variability between different studies. The numeric data records describing the concentration were divided into ten subgroups. The cleaned data were randomly divided into training (75%) and test (25%) sets, each involving a similar fraction of toxic and nontoxic groups. Descriptive Statistics One-way analysis of variance (ANOVA) was used to determine how strongly each of the categorical parameters describing NP, cell line, or assay characteristics was related to cytotoxicity. The strength and direction of the relationship between pairs of continuous variables were measured by Pearson's correlation coefficients. A box plot was used to display the distribution and skewness of the cell viability data among different subcategories. Significance was reported at the p < 0.05 and p < 0.001 levels. Machine Learning Classification and Regression Tree (CART) analysis was applied to partition the pre-processed data using a series of binary decisions. The method was set to regression, as the endpoint was a numerical value (% cell viability). The rpart package in R version 4.2.0 was used to implement all CART analyses. Regression trees were pruned through a 10-fold cross-validation process to remove the branches providing the least error reduction (an illustrative sketch of such a pruned regression tree is given after the data description below). Refer to more specialized publications for details on the CART algorithm (15)(16)(17). Results and Discussion Description of data included in the analysis. After a systematic data search and the exclusion of data points that did not meet the data inclusion criteria, a total of 543 data records from 26 independent studies remained for evaluation. Each data record corresponds to a cytotoxic evaluation of an individual nanoparticle. Figure 2 summarizes the main characteristics of the collected data. Figure 2. Dataset description.
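The CART analysis itself was implemented with R's rpart; purely as an illustration of the same idea (a pruned regression tree on NP descriptors predicting % cell viability), an analogous model can be sketched in Python with scikit-learn, where the column names, value ranges and pruning grid below are hypothetical rather than the study's actual data.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import GridSearchCV

# Hypothetical feature table: one row per cytotoxicity record (543 in the study).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "size_nm":       rng.uniform(5, 100, 543),
    "conc_ug_ml":    rng.uniform(1, 100, 543),
    "duration_h":    rng.choice([3, 6, 12, 24, 48], 543),
    "viability_pct": rng.uniform(10, 110, 543),
})
X, y = df[["size_nm", "conc_ug_ml", "duration_h"]], df["viability_pct"]

# 10-fold cross-validated choice of the pruning strength; ccp_alpha plays a role
# comparable to rpart's complexity parameter (cp) used for pruning.
search = GridSearchCV(DecisionTreeRegressor(random_state=0),
                      {"ccp_alpha": np.linspace(0.0, 5.0, 11)},
                      cv=10, scoring="neg_mean_squared_error")
search.fit(X, y)
print(search.best_params_, search.best_estimator_.get_depth())
```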
Effect Sizes and Heterogeneity. A series of one-way ANOVAs was conducted to assess the influence of NP and assay parameters on cell viability (Table 1). As expected, a strong negative association was observed between exposure dose and cell viability (p<0.001), with concentrations ≥20 µg/mL killing at least half of the cells. Similarly, cytotoxic profiles were detected after >12 h exposure to ZnO NPs, with shorter exposure durations not causing significant toxicity. ANOVA also revealed that coating the surfaces of NPs with amphiphilic polymers or thiol-containing acids could elevate their cytotoxicity, while green synthesis could help reduce the cytotoxic potential of ZnO NPs. These results highlight the importance of intrinsic material characteristics and extrinsic experimental conditions for NP-induced cytotoxicity. A Pearson's correlation was run to assess the relationship between the numeric parameters (particle size, hydrodynamic size, zeta potential, and concentration) and cell viability (%). The Pearson correlation coefficient of -0.22 suggested that cell viability and exposure concentration were moderately correlated in the opposite direction (Table 2). There was a positive correlation between particle size measured by TEM/SEM or DLS and cell viability, with NPs of larger diameter inducing less potent cell death. Interestingly, no direct correlation between zeta potential values and cell viability was observed. Next, a box plot was constructed to show the distribution of cell viability among different exposure durations and doses (Figure 3). As expected, higher concentrations of NPs and longer exposure durations led to higher levels of cytotoxicity relative to the untreated cell control. The effect of extended exposure on cell viability was more pronounced at higher exposure doses. Machine learning. To identify the influence of material characteristics and experimental factors on the cytotoxic potential of ZnO NPs, CART recursive partitioning analysis was employed. The best-performing regression tree (Figure 4) was selected based on both cross-validation results and simplicity. The zeta potential measurements were not included in the decision tree analysis due to a high number of missing values (57%). The cross-validation error was minimized at a tree size of 5 branches. The best-performing regression tree given in Figure 4 included concentration, exposure duration, cell morphology, and particle size. The variable importance order was as follows: concentration > particle size > cell type > exposure duration > assay type > coating. In line with previous studies (18,19), our analysis showed that the potency of ZnO NPs to induce a cytotoxic response is particle size-dependent. In particular, a primary particle size of 10 nm was found to be the critical threshold below which elevated cytotoxicity was seen. As expected, a strong positive linear relationship was observed between exposure duration and cytotoxic response. The longer the duration that cells are exposed to ZnO NPs, the greater the cytotoxicity. Most ZnO NPs were cytotoxic after 12 hours' exposure, especially at relatively higher doses (>20 µg/mL). As previously reported by Cierech et al., significant changes in cell viability were observed with increasing concentrations of ZnO NPs (20). The identified relationships between exposure conditions and cell viability results are also very much in line with earlier investigations in the field (21)(22)(23)(24). For example, Khan and co-workers evaluated the toxic effects of ZnO NPs at different concentrations and
demonstrated the role of reactive oxygen species generation in NP-induced cytotoxicity and genotoxicity (25). In another study, NP-induced DNA damage and cytotoxicity were evident after 6 h exposure to 20 µg/mL of ZnO NPs (26). Taken together, the accumulated evidence on the cytotoxic and genotoxic potential of ZnO NPs suggests that the safety of ZnO NPs should remain a critical concern for all parties involved, including regulators, academics, and industry. Conclusion Having a complete understanding of nanotechnology-related environmental, health, and safety (nano-EHS) issues is critical for bringing nano-enabled materials, products, and technologies to the mass market and ensuring their sustainable commercial use. Combining the results of multiple studies is increasingly applied in nano-EHS to pre-screen different NPs based on their toxicity potential and to look for early signs of harmful effects (10,27,28). By statistically integrating the findings of independent nanotoxicity-screening studies, it is possible to obtain a more accurate representation of the complex behavior of nano-systems in cellular systems. In this study, we examined the factors contributing to the cytotoxic potential of ZnO NPs using meta-analytic data covering 543 data points from 26 independent studies. The main aim here was to underpin the parameters that potentially control the biological effects of ZnO NPs. While the data used in this study were for ZnO NPs, similar models can be developed for metallic NPs with different cores with small changes in the framework. Such data-driven models and the insights they provide are key for the simultaneous realization of safe-by-design and quality-by-design concepts, which aim to ensure the safety and quality of NPs at an early stage of the innovation process (Figure 5). Figure 3. Box plot of changes in cell viability (%) as a function of exposure concentration (dose) category, grouped by exposure duration. Circles outside the plot represent outliers beyond the 10th and 90th percentiles. Figure 4. The best-performing regression tree predicting cell viability of ZnO NPs. Figure 5. Integration of safety and quality by design concepts through machine learning. Table 1. One-way ANOVA results. Table 2. Pearson correlation results.
2,808.4
2024-01-01T00:00:00.000
[ "Medicine", "Materials Science", "Environmental Science" ]
USING R PACKAGES ‘TMAP’, ‘RASTER’ AND ‘GGMAP’ FOR CARTOGRAPHIC VISUALIZATION: AN EXAMPLE OF DEM-BASED TERRAIN MODELLING OF ITALY, APENNINE PENINSULA The main purpose of this article is to present the use of the R programming language in cartographic visualization, demonstrating the use of machine learning methods in geographic education. Current trends in education technologies are largely influenced by the possibilities of distance-learning, e-learning and self-learning. In view of this, the main tendencies in modern geographic education include the active use of open source GIS and publicly available free geospatial datasets that can be used by students for cartographic exercises, data visualization and mapping, both at intermediate and advanced levels. This paper contributes to the development of these methods and is fully based on datasets and tools available to every student: the R programming language and free open source datasets. The case study demonstrated in this paper shows examples of both physical geographic mapping (geomorphology) and socio-economic geography (regional mapping) which can be used in classes and in self-learning. The objectives of this research include geomorphological modelling of the terrain relief in Italy and regional mapping. The data include the DEM SRTM90 and datasets on the regional borders of Italy embedded in the R packages ‘maps’ and ‘mapdata’. The modelling addresses the characteristics of slope, aspect, hillshade and elevation and their visualization using the R packages ‘raster’ and ‘tmap’. Regional mapping of Italy was made using the ‘ggmap’ package, with ‘ggplot2’ as a wrapper. Introduction The main tendencies in geographic education have been influenced by open source GIS and publicly available free geospatial datasets (both raster and vector formats) that can be used by students for cartographic exercises. The increased demand for distance-based self-learning creates a need for open source programs in educational technologies. The development of open source geoinformation technologies in cartographic methods has recently made significant progress due to such excellent open source GIS and cartographic toolsets as QGIS, Generic Mapping Tools (Lemenkova, 2020a, 2020b), ILWIS GIS (Lemenkova, 2013), IDRISI GIS (Da Serra Costa et al. 1996; Lemenkova, 2014), GRASS GIS (Alvioli et al. 2020; Lemenkova, 2020c) and SAGA GIS (Vacca et al. 2014; Lemenkova, 2020d), largely used in geoscience research. Besides the software, a variety of open source data is available from various sources: Landsat TM and Sentinel-2A satellite images by the USGS, widely used in geoscience (Allevato et al. 2019; Lemenkova, 2015), Google Earth aerial imagery, the Digital Chart of the World (DCW), and raster topographic datasets such as the elevation DEMs SRTM15, GEBCO, ETOPO1 and GLOBE, to mention a few. Therefore, using available sources of such geographic information, students can obtain, free of charge, datasets, vector layers, raster grids and satellite images that are constantly updated. Once the data are captured, another question arises: which software to use? Leaving aside the above-mentioned GIS applications, this paper focuses on presenting a non-trivial method of geographic data visualization in geographic education: the use of the free and open source R programming language (R Core Team, 2020) for cartographic visualization by students.
In particular, its specific libraries (also known as packages) used in this paper are the following: ggmap (Kahle & Wickham, 2013), maps, mapdata, sp, raster and tmap. These libraries are specifically tailored for cartographic visualization. In turn, they require dependent packages used for auxiliary data manipulation: ncdf4, RColorBrewer (Brewer et al. 2003; Neuwirth, 2014), sp (a package for processing spatial data) and sf (Pebesma, 2017). For a long time, GIS was the only possibility for creating maps, while the R language and its libraries were used to obtain statistical information for modelling, assessment and graphical visualization of tabular datasets (Lemenkova, 2019a). However, the variety of R packages nowadays presents new opportunities for using machine learning approaches in geographic education. Using R, quantitative and qualitative types of data stored in tables as 'data.frame' objects are processed using a variety of statistical methods and visualized as plots. However, it is also possible to use R not only for statistical analysis but also for geographic mapping, which is much less known and popular compared to GIS spatial applications in geographic and environmental data analysis (e.g. Allen et al. 1999; Suetova et al. 2005; Klaučo et al. 2013a, 2013b; Lemenkova, 2011). Specifically, R can be used for plotting traditional maps and for obtaining information from more complex spatial geomorphological analysis (slope, aspect, elevation, hillshade). Geomorphometry is widely used and applied in landform mapping (Evans, 2012), which makes R especially useful in landscape studies. R has a relatively straightforward syntax, with a variety of packages that can be additionally installed and loaded via RStudio (RStudio Team, 2017) and the R package installer. For instance, the 'tm_graticules' function can be used for adding a grid graticule and its additional elements (ticks, lines, labels) to maps and plotting them in a specific way (e.g. rotated, colored, using enlarged fonts, etc.). The R package 'tmap' can also be used for creating complex cartographic layouts with legends. Various color palettes can be applied via the 'RColorBrewer' package, and cartographic elements (legends, multi-level annotations, scale bars, a directional compass) can be added. The 'ncdf4' package is additionally required for manipulating various raster formats. The 'tmap' package gives an opportunity to create classic maps in R, while the 'raster' package can be used for geomorphological analysis (calculation of slope, aspect, hillshade and elevation) from the SRTM 90 m datasets available via the getData() function, which gives access to the elevation data. The research objectives of this work concern the following issues: 1) introduce the application of the cartographic functionality of R available in the packages 'tmap' (Tennekes, 2018), 'raster' (Hijmans, 2017), 'maps' (Becker et al. 2013) and 'mapdata'; 2) perform a geomorphometric analysis of the Apennine Peninsula with the 'raster' package, with a visualized screenshot of the R script and the final layout output; 3) present the technical details of the packages as visualized code; 4) apply the 'tmap' and 'ggmap' packages for cartographic visualization. The presented topics should cover a large need for cartographic material in geographic education, since both R and the data used in this study are open source and can be used in distance-based learning and geographic education.
Study Area This paper focuses on mapping Italy with two specific examples of geographic mapping, physical geographic and socio-economic (regional): 1) topographic mapping based on a DEM, using the 'tmap' and 'raster' packages as the main tools; this includes the calculation and visualization of slope, aspect, hillshade and elevation using the SRTM90 dataset; 2) regional mapping of country borders using the 'ggmap' package as the main tool, the 'maps' package (Becker et al. 2013) and 'mapdata' as its supplement, providing access to larger and higher-resolution databases. Italy has been selected as one of the most representative countries in terms of geomorphology and hilly relief, suitable for terrain analysis. The landscapes of the Apennine Peninsula evolve under the control of a variety of factors, which should be briefly mentioned. Among the most important ones are the complex geological setting, the fault system and tectonic deformations creating well-exposed faults in the region (Roberts & Michetti). For instance, the uplift occurring during the Middle and Upper Pleistocene caused the formation of the low relief. The uplift of the Northern Apennine mountain chain shaped the fluvio-lacustrine landforms in Italy (Bartolini, 2003). Landscape formation is also affected by vegetation, climate dynamics and sedimentation (Bertotti et al. 1997; Guido et al. 2020). Surface topography reflects landscape evolution and geomorphic processes, which in turn are influenced by tectonic events (Bull, 2007), seismicity and earthquakes in faulted mountain fronts affecting the river network of the basins and highs. Other factors sculpting landforms include the formation of the orogenic belts, the morphotectonics of the Pliocene-Pleistocene regional uplifts affecting the topography of the Central-Northern Apennines and Po Basin (Centamore, Nisio, 2003; Zuffetti & Bersezio, 2020), and the tectonically active L'Aquila Basin (Cosentino et al. 2017). As a result, the modern topography of the Apennines reflects the interactions between crustal-mantle and surface processes since the Late Miocene, which affect the geomorphology of the peninsula in the course of the geologic and tectonic-morphologic evolution of the orogenic belt in the Apennines. The formed landscapes record the sequence of geomorphic evolutionary steps, as well as the spatial-temporal relations between the active geological processes. As a result, the relief of Italy is highly uneven and varies regionally within the country, which makes it suitable for topographic visualization, as already presented in existing papers on terrain DEM analysis of selected landscapes in Italy (Tarquini et al. 2007; Ascione et al. 2008; Geurts et al. 2020; Gemelli et al. 2011). Materials and methods There is a wide variety of packages in the R language used for graphical plotting. This research uses the packages 'ggmap', 'maps', 'mapdata', 'sp', 'raster' and 'tmap' for data capture, processing and cartographic visualization (Figure 1). Using these packages, R demonstrates the functionality of traditional GIS software, as briefly described below. The collected data were pre-processed before they were used as input data in R. Thus, when the data were loaded, the extent (dimensions) of the tabular data was checked, as well as the column content: dim(italy). Since the table is rather large, the initial and end parts of the table were inspected using the following functions: head(italy) and tail(italy), respectively. Then the data were inspected for their coordinate projection: crs(alt).
Afterwards, the common geographical projection (lon/lat) was defined using the following code: crs(alt) <- "+proj=longlat +datum=WGS84 +no_defs". If students wish to experiment with projections, a variety of projections can be set with this function. For instance, converting geodetic coordinates to UTM Zone 33 on the northern hemisphere (EPSG:32633, WGS 84 / UTM zone 33N) can be done with the following code: crs(alt) <- "+proj=utm +zone=33". Afterwards, the spatial extent of the data was defined using the following code: e <- as(extent(6, 19, 36, 48), 'SpatialPolygons'). Here the West-East-South-North extent defines the borders of the study area (Figure 2). The digital elevation models (DEM) obtained from the Shuttle Radar Topography Mission (SRTM) provide an excellent dataset for geomorphological studies owing to their open availability and the high resolution of the grid. There are various examples of the use of SRTM data in geospatial research (Drăguţ & Eisank, 2012; Hirt, 2018), and its reliability has also been discussed (Frey & Paul, 2012). The SRTM data present a reliable and accurate topographic dataset for gathering information about the terrain based on the elevation values. The SRTM DEM used in this research has a 90 m × 90 m resolution. Calculate terrain characteristics: slope and aspect A complex geomorphic analysis may include the computation of indices such as mountain front or water divide sinuosity, asymmetry, drainage basin elongation, relief ratio, hypsometry, normalized steepness and concavity, integrated with geomorphological analysis (Giano et al., 2018). This study presents the most important geomorphic features that are often included in geographic education curricula and are also implemented in R: computation and mapping of slope steepness and aspect orientation, and visualization of hillshade and DEM elevation. Slope stability analysis is based on calculating slope steepness and is often applied in studies of geologic hazard risks, such as the assessment of landslides and rock avalanches in mountains (Antonielli et al. 2020; Lemenkova et al. 2012). Calculating slope, aspect and hillshade relies on existing slope and aspect generation algorithms (Ritter, 1987) of geomorphological models. Other computational examples for terrain include, for instance, landscape metrics, used in environmental and sustainability studies assessing the vulnerability and ecological significance of landscapes (Klaučo et al. 2014, 2017). In R, the calculation of the slope and aspect maps was done using the following code: slope = terrain(alt, opt = "slope") and aspect = terrain(alt, opt = "aspect"), respectively (Figure 2). After the slope and aspect were computed using the scripts presented in Figure 2, a preliminary visualization without cartographic elements was done using the plot(slope) and plot(aspect) functions, respectively. Afterwards, the computed slope and aspect rasters were passed to the 'tmap' package as spatial input data, where additional cartographic elements were added to the maps (graticule, grids, annotations, title, subtitle, legend, etc.) using the dedicated functions tm_scale_bar, tm_compass, tm_raster, tm_layout and tm_graticules. The full script used for plotting the maps of slope and aspect is shown in Figure 2; a simplified sketch is given below.
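Below is a minimal R sketch (an illustration rather than the authors' exact Figure 2 script) of how the slope and aspect could be derived from the elevation raster and passed to 'tmap'; the palette and layout options are assumptions.

# Slope and aspect from the elevation raster (the default unit is radians,
# which is also what hillShade() expects later on).
slope  <- terrain(alt, opt = "slope")
aspect <- terrain(alt, opt = "aspect")

# Quick previews without cartographic elements
plot(slope)
plot(aspect)

# A simple 'tmap' layout for the slope raster with graticules, compass and scale bar
tm_shape(slope) +
  tm_raster(style = "cont", palette = "YlOrRd", title = "Slope (radians)") +
  tm_graticules(lines = TRUE, labels.size = 0.7) +
  tm_compass(position = c("left", "top")) +
  tm_scale_bar(position = c("left", "bottom")) +
  tm_layout(title = "Slope of the Apennine Peninsula", legend.outside = TRUE)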
The overlay of these cartographic elements, the slope and aspect maps, their legends and the histograms of data distribution was performed after the slope and aspect maps had been prepared from the SRTM DEM data of Italy. The final maps are presented in Figure 3 (left: slope, right: aspect) and Figure 4 (left: hillshade, right: elevation). Visualizing topographic maps: elevation and hillshade The SRTM elevation DEM data were used to create a hillshade map. The raster package provides easy access to the SRTM 90 m resolution elevation data via the getData() function; for example, the elevation data for the whole country can be downloaded with this function. The hillshade map shows the topographic shape of highs, hills and mountains using levels of gray. The role of such maps is to display relative slopes rather than absolute heights. Despite this relative estimation of the topography, the value of shaded relief maps lies in their approach: they give the student an immediate appreciation of the surface topography, because a hillshade map helps to visually estimate relief (Horn, 1981). The hillshade map was created from the terrain characteristics using the 'raster' package and then visualized as a classic map with cartographic elements using the 'tmap' package. Figure 3. Map of slope (left) and aspect (right) plotted in R using the scripts presented in Figure 2. Thus, the hillshade was initially calculated from the input data of the previous research steps, slope and aspect, using hillshade computation algorithms (Jones, 1998). Both were calculated with the 'raster' package using the terrain function. The hillshade itself was computed with the hillShade() function using the following code: hill = hillShade(slope, aspect, angle = 40, direction = 270). The hillshade was thus computed from the slope and aspect of the terrain, which had been obtained with the opt argument of terrain() set to "slope" and "aspect", respectively. The previously created slope and aspect objects were used as input data, and two new arguments were set: angle = 40° (the elevation angle of the light source) and direction = 270° (the direction (azimuth) angle of the light source, i.e. the sun). The output hillshade map was plotted using the plot(hill) function. Once the hillshade was computed, it was transferred to the 'tmap' package for further cartographic plotting, as sketched below. Here the visualization was done by stacking data layers, similar to traditional overlay in GIS, with the hillshade object (hill) colored using grey hues. Figure 4. Map of hillshade in monochrome colors (left) and elevation map in terrain colors (right), plotted in R. Mapping regional division The advantage of the 'ggmap' package consists in convenient mapping and plotting of the regional map of Italy (Figure 5 and Figure 6). The 'maps' package contains the outlines of Italy as a country, shipped with R as embedded data. The 'mapdata' package was used because it provides higher-resolution vector outlines. The 'maps' package was used for its plotting functionality in addition to the 'ggplot2' package, which operates on data frames (Wickham, 2009). Therefore, the data from the 'maps' package were converted into the data.frame format used by 'ggplot2' (Figure 5, left). The country data (Italy) and the SRTM data (DEM) were also collected and loaded into RStudio using common R syntax (Bivand et al. 2008).
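A minimal sketch (an illustration rather than the authors' exact script) of the hillshade computation and its grey-scale 'tmap' overlay described above; the 40°/270° illumination values follow the text, while the layout options are assumptions.

# Hillshade from slope and aspect (both in radians, as returned by terrain() by default)
hill <- hillShade(slope, aspect, angle = 40, direction = 270)
plot(hill)  # quick preview

# Grey-scale hillshade with the elevation raster overlaid semi-transparently in 'tmap'
tm_shape(hill) +
  tm_raster(palette = gray(0:100 / 100), style = "cont", legend.show = FALSE) +
  tm_shape(alt) +
  tm_raster(palette = terrain.colors(10), style = "cont", alpha = 0.5, title = "Elevation (m)") +
  tm_scale_bar(position = c("left", "bottom")) +
  tm_layout(title = "Hillshade and elevation", legend.outside = TRUE)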
The country data on Italy originate from the NUTS III (Tertiary Administrative Units of the European Community) database of the United Nations Environment Programme (UNEP) GRID-Geneva datasets made in 1989 (Becker & Wilks, 1995). Figure 5. Script used for visualization (left) and resulting output map (right), plotted using 'ggmap', 'maps' and 'mapdata'. The function 'tm_shape' of the 'tmap' library was applied for the creation of the cartographic layout, which was then saved in jpg format using R. The 'tmap' library defines the shape objects plotted on a map (Tennekes, 2018), as visualized in Figure 6. The base graphics data of R were handed over directly to the 'ggplot2' package. The 'ggplot2' package was used as a wrapper for the raster map visualization, enabling finer operations with spatial objects. Using some functions of 'ggplot2' enables easier interaction with the data in the R 'maps' package. Here the main map was plotted using the geom_polygon function (geom_polygon(data = italy, aes(x = long, y = lat, group = group))), and aesthetics were added using additional elements of ggplot2 syntax: fill = "pink", color = "blue", linetype = 1, size = 0.2. The annotations of the XY axes were made using the options xlab and ylab: xlab("Longitude") + ylab("Latitude"). The title and subtitle were added to the map using the following arguments of the 'ggplot2' package: labs(title = "Italy", subtitle = "Mapping: R", caption = "Packages: ggmap, ggplot2, mapdata, maps"). The legend was added to the map using the code: guides(fill = guide_legend(reverse = TRUE)). A disadvantage of 'ggplot2', however, is that it does not support different cartographic projections. Once the map layout was prepared, the map was converted to jpg format and visualized in Figure 6 (a minimal sketch of this workflow is given below). Results and discussion The visualized DEM derivatives included geomorphometric maps of slope, aspect and hillshade, which show the trends in orientation (North-South-West-East) and slope steepness of the raster grids in different sub-regions of the Apennines. The slope data for the rural areas of Italy revealed variations showing 'gentle', 'moderate', 'strong', 'very strong', 'extreme' and 'steep' slopes in the mountainous regions of Italy. The number of pixels (over 75,000 on the raster grid) was greatest in the 'moderate' slope class compared to the others: 35,000 pixels for 'gentle', 22,000 for 'strong', 18,000 for 'very strong', 11,000 for 'extreme' and 9,000 for 'steep' slopes. As the influence of the geomorphological patterns was clear on the slope and aspect maps, correlating with the general elevation map layer, the presented R packages proved effective for detecting geomorphometric trends. The trends in the aspect data distribution in the Apennines, visualized using the R package 'tmap', apparently reflect the structure of the geomorphological patterns of the study area. When comparing the four major classes of slope orientation (West-North-East-South) with the subdivided classes (North-West, North-East, South-West, South-East) in Figure 3, the integrated dataset shows four classes: the 1st class (East, colored red), the 2nd class (West, colored orange), the 3rd class (South, colored yellow) and the 4th class (North, colored blue). The data assessment demonstrated the subdivision of the slopes by the DEM. The visualized elevation map of the Apennines identified heights in meters grouped into five major classes (Figure 4), colored by the terrain.colors palette of R. The classes include heights from 0 to 4,000 meters.
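To make the regional-mapping steps described above concrete, here is a small, self-contained R sketch (not the authors' exact script) of the 'maps'/'ggplot2' workflow; the styling values mirror those quoted in the text, and the output file name is illustrative.

# Regional map of Italy with 'maps'/'mapdata' outlines and 'ggplot2'
library(ggplot2)
library(maps)
library(mapdata)

italy <- map_data("italy")  # province outlines as a data.frame (long, lat, group, ...)

p <- ggplot() +
  geom_polygon(data = italy,
               aes(x = long, y = lat, group = group),
               fill = "pink", color = "blue", linetype = 1, size = 0.2) +
  coord_quickmap() +                    # keep an approximately correct aspect ratio
  xlab("Longitude") + ylab("Latitude") +
  labs(title = "Italy", subtitle = "Mapping: R",
       caption = "Packages: ggmap, ggplot2, mapdata, maps")
# guides(fill = guide_legend(reverse = TRUE))  # only meaningful once a fill aesthetic is mapped

ggsave("italy_regional_map.jpg", plot = p)     # save the layout as a jpg file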
Additionally, a hillshade map was visualized as a raster calculation based on the slope and aspect layers (Figure 4). The applied parameters of the R libraries were sufficient for the terrain mapping and geomorphological analysis presented in this work, and R can therefore be strongly recommended for similar research studies. All maps presented in this article were made and plotted in R. Although special cartographic toolsets and software offer functionality for complex cartographic workflows using scripting approaches, e.g. GRASS GIS or GMT (Lemenkova, 2020e, 2020f), the direct use of the R language represents a logical next step in the development of cartographic methods, since a programming language allows algorithm-driven data evaluation, including machine learning approaches, which can be used effectively in geoinformatics and mapping. Conclusions Automation in topographic modelling is largely achieved through the use of scripting languages (Lemenkova, 2019b, 2019c, 2019d) or specially designed software that uses machine learning approaches in geoinformatics (Gauger et al. 2007; Iwahashi & Pike, 2007; Schenke & Lemenkova, 2008; Alvioli et al. 2016; Kuhn et al. 2007). Both practical and theoretical conceptual approaches are core issues of GIS-based geomorphic mapping. Therefore, the advances in R programming and the development of its new libraries open new avenues in geographic education through the rapidly evolving scripting technologies used in cartographic visualization. The extended possibilities of R can successfully supplement the existing GIS methods. Besides, the free and open-source R language is available to every student, which makes it especially useful in geographic education owing to its availability and functionality. However, the applications of R are nowadays mostly presented in cases of statistical analysis (Lemenkova, 2018), while the use of its cartographic packages for geographic visualization has not yet been adequately addressed. This paper aims to fill this lacuna and to illustrate how R libraries designed for cartographic needs can be used in geography classes by students for geomorphological analysis, with a case study of Italy, known for its diverse relief patterns (mountains, slopes, hills, plains, highs). The mapping approaches in R presented in this research produced maps of the terrain analysis based on the DEM of the Apennines. The value of using R in the geosciences lies in the practical reproducibility of scripts in geography and geosciences. Therefore, the use of R in geographic education is a highly promising approach. However, there are some possible drawbacks and disadvantages compared with traditional GIS, which should be mentioned as well. First, the syntax of R may not always be easily accessible to beginner students and requires preliminary coding skills. Second, the presented packages require installation. Third, some concepts of R may not always be straightforward for students and require additional classes and dedicated teaching of scripting approaches. Fourth, the plotting of maps always requires some familiarity with general GIS concepts, such as projections, plotting and operating with spatial data.
5,017.6
2020-01-01T00:00:00.000
[ "Geography", "Computer Science", "Education" ]
Benchmarking of cell type deconvolution pipelines for transcriptomics data Many computational methods have been developed to infer cell type proportions from bulk transcriptomics data. However, an evaluation of the impact of data transformation, pre-processing, marker selection, cell type composition and choice of methodology on the deconvolution results is still lacking. Using five single-cell RNA-sequencing (scRNA-seq) datasets, we generate pseudo-bulk mixtures to evaluate the combined impact of these factors. Both bulk deconvolution methodologies and those that use scRNA-seq data as reference perform best when applied to data in linear scale and the choice of normalization has a dramatic impact on some, but not all methods. Overall, methods that use scRNA-seq data have comparable performance to the best performing bulk methods whereas semi-supervised approaches show higher error values. Moreover, failure to include cell types in the reference that are present in a mixture leads to substantially worse results, regardless of the previous choices. Altogether, we evaluate the combined impact of factors affecting the deconvolution task across different datasets and propose general guidelines to maximize its performance. Since bulk samples of heterogeneous mixtures only represent averaged expression levels (rather than individual measures for each gene across different cell types present in such mixture), many relevant analyses such as differential gene expression are typically confounded by differences in cell type proportions. Moreover, understanding differences in cell type composition in diseases such as cancer will enable researchers to identify discrete cell populations, such as specific cell types that could be targeted therapeutically. For instance, active research on the role of infiltrating lymphocytes and other immune cells in the tumor microenvironment is currently ongoing 1-3 (e.g., in the context of immunotherapy) and it has already been shown that accounting for the tumor heterogeneity resulted in more sensitive survival analyses and more accurate tumor subtype predictions 4 . For these reasons, many methodologies to infer proportions of individual cell types from bulk transcriptomics data have been developed during the last two decades 5 , along with new methods that use single-cell RNA-sequencing (scRNA-seq) data to infer cell proportions in bulk RNA-sequenced samples. Collectively we term these approaches cell deconvolution methods. Several studies have addressed different factors affecting the deconvolution results but only focused on one or two individual aspects at a time. For instance, Zhong and Liu 6 showed that applying the logarithmic transformation to microarray data led to a consistent under-estimation of cell-type specific expression profiles. Hoffmann et al. 7 showed that four different normalization strategies had an impact on the estimation of cell type proportions from microarray data and Newman et al. 8 highlighted the importance of accounting for differences in normalization procedures when comparing the results from CIBERSORT 9 and TIMER 10 . Furthermore, Vallania et al. 11 observed highly concordant results across different deconvolution methods in both blood and tissue samples, suggesting that the reference matrix was more important than the methodology being used. Sturm et al.
12 already investigated scenarios where reported cell type proportions were higher than expected (spillover effect) or different from zero when a cell type was not present in a mixture (background prediction), possibly caused by related cell types sharing similar signatures or marker genes not being sufficiently cell-type specific. Moreover, they provided a guideline for method selection depending on which cell type of interest needs to be deconvolved. However, each method evaluated in Sturm et al. was accompanied by its own reference signature for the different immune cell types, implying that differences may be marker-dependent and not method-dependent. Moreover, they did not evaluate the effect of data transformation and normalization in these analyses and only focused on immune cell types. Here we provide a comprehensive and quantitative evaluation of the combined impact of data transformation, scaling/normalization, marker selection, cell type composition and choice of methodology on the deconvolution results. We evaluate the performance of 20 deconvolution methods aimed at computing cell type proportions, including five recently developed methods that use scRNA-seq data as reference. The performance is assessed by means of Pearson correlation and root-mean-square error (RMSE) values between the cell type proportions computed by the different deconvolution methods (P_C; computed proportions; Fig. 1) and the known compositions (P_E; expected proportions) of a thousand pseudo-bulk mixtures from each of five different scRNA-seq datasets (three from human pancreas; one from human kidney and one from human peripheral blood mononuclear cells (PBMCs)). Furthermore, to evaluate the robustness of our conclusions, different numbers of cells (cell pool sizes) are used to build the pseudo-bulk mixtures. We observe that the most relevant factors affecting the deconvolution results are: (i) the data transformation, with linear transformation outperforming the others; (ii) the reference matrix, which should include all cell types present in the mixtures; (iii) a sensible marker selection strategy for bulk deconvolution methods. Results Memory and time requirements. While simple logarithmic (log) and square-root (sqrt) data transformations were performed almost instantaneously in R (between 1 and 5 s; see Table 1 for information about the number of cells subject to transformation in each scRNA-seq dataset), the variance stabilization transformation (VST) performed using DESeq2 13 applied to the scRNA-sequencing datasets had high memory requirements and took several minutes to complete (time increasing linearly with respect to the number of cells) (Supplementary Fig. 3). Importantly, DESeq2 v1.26.0 (or above) reduced the running time from quadratic (Supplementary Fig. 27 from Soneson et al. 14) to linear with respect to the number of cells. We further evaluated the impact of different scaling and normalization strategies as well as the choice of the deconvolution method. Although the different scaling/normalization strategies consistently have similar memory requirements, SCTransform 15 and scran 16 (two scRNA-seq specific normalization methods; the former uses regularized negative binomial regression for normalization (RNBR)) required up to seven minutes to complete, a 14-fold difference with the other methods, which finished under 30 s (Supplementary Fig. 4).
The bulk deconvolution methods DSA 17 , ssFrobenius and ssKL 18 (all implemented as part of the CellMix 19 R package) had the highest RAM memory requirements, followed by DeconRNASeq 20 . Not surprisingly, the ordinary least squares (OLS 21 ) and non-negative least squares (nnls 22 ) were the fastest, as they have the simplest optimization problem to solve. Regarding the methods that use scRNA-seq data as reference, Dampened Weighted Least Squares (DWLS 23 ), which includes an internal marker selection step, resulted in the longest time consumption (6-12 h to complete) whereas MuSiC 24 and SCDC 25 finished in 5-10 min. Running time and memory usage for the different deconvolution methods are summarized in Supplementary Fig. 5. Impact of data transformation on deconvolution results. We investigated the overall performance of each individual deconvolution method across four different data transformations and all normalization strategies (Fig. 2; Supplementary Figs. 6-7). Maintaining the data in linear scale (linear transformation, in gray) consistently showed the best results (lowest RMSE values) whereas the logarithmic (in orange) and VST (in green; which also performs an internal complex logarithmic transformation) scales led to a poorer performance, with two- to four-fold higher median RMSE values. For a detailed explanation concerning several bulk deconvolution methods and those using scRNA-seq data as reference that could only be applied with a specific data transformation or dataset, please see Supplementary Methods. With the exception of EPIC 26 , DeconRNASeq 20 , and DSA 17 , the choice of normalization strategy does not have a substantial impact on the deconvolution results (evidenced by narrow boxplots). These conclusions also hold when repeating the analysis with different pseudo-bulk pool sizes in all datasets tested (collapsing all scaling/normalization strategies and all bulk deconvolution methods (Supplementary Fig. 8) or those using scRNA-seq data as reference (Supplementary Fig. 9)). For these reasons, all downstream analyses were performed on data in linear scale. In terms of performance, the five best bulk deconvolution methods (OLS, nnls, RLR, FARDEEP, and CIBERSORT) and the three best methods that use scRNA-seq data as reference (DWLS, MuSiC, SCDC) achieved median RMSE values lower than 0.05. Penalized regression approaches, including lasso, ridge, elastic net regression, and DCQ, performed slightly worse than the ones described above (median RMSE ~0.1). Different combinations of normalization and deconvolution methods. It is clear that different combinations of normalizations and methodologies lead to substantial differences in performance (Fig. 2 and Supplementary Fig. 6). Focusing on the data in linear scale, we delved into the specific method and normalization combinations evaluated. Among the bulk deconvolution methods, least-squares (OLS, nnls), support-vector (CIBERSORT) and robust regression approaches (RLR/FARDEEP) gave the best results across different datasets and pseudo-bulk cell pool sizes (median RMSE values < 0.05; Fig. 3a and Supplementary Figs. 10, 12). Regarding the choice of normalization/scaling strategy, column min-max and column z-score consistently led to the worst performance. In all other situations, the choice of normalization/scaling strategy had a minor impact on the deconvolution results for these methods.
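As an illustration of the least-squares approaches highlighted above, the following is a minimal R sketch (with toy data, not the benchmark datasets) of non-negative least squares deconvolution of a single pseudo-bulk mixture; the matrix sizes and proportions are arbitrary.

# Toy NNLS deconvolution: recover cell type proportions P from T = C %*% P.
library(nnls)

set.seed(1)
n_genes <- 500
n_types <- 5
C <- matrix(rexp(n_genes * n_types, rate = 0.1), nrow = n_genes)  # reference (genes x cell types), linear scale
P_true <- c(0.40, 0.25, 0.20, 0.10, 0.05)                          # expected proportions (sum to one)
T_bulk <- as.vector(C %*% P_true)                                  # pseudo-bulk mixture

fit   <- nnls(C, T_bulk)        # non-negative least squares
P_hat <- fit$x / sum(fit$x)     # rescale estimates to sum to one
round(P_hat, 3)                 # close to P_true in this noise-free toy case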
When considering the estimation error relative to the magnitude of the expected cell type proportions, smaller proportions consistently showed higher relative errors. Of note, quantile normalization always resulted in sub-optimal results in any of the tested bulk deconvolution methods (Fig. 3a, b). As stated in its original publication, EPIC assumes transcripts per million (TPM) normalized expression values as input. We indeed observed that the choice of scaling/normalization has a big impact on the performance of EPIC, with TPM giving the best results. The semi-supervised approaches ssKL and ssFrobenius (using only sets of marker genes, in contrast to the supervised counterparts, which use a reference matrix with expression values for the markers) showed the poorest performance, with the highest root-mean-square errors and lower Pearson correlation values (Fig. 3a and Supplementary Fig. 10). For deconvolution methods using scRNA-seq data as reference (Fig. 3c and Supplementary Fig. 11), we evaluated each combination of normalization strategies for both the pseudo-bulk mixtures (scalingT, y-axis) and the single-cell expression matrices (scalingC, x-axis). DWLS, MuSiC and SCDC consistently showed the highest performance (comparable to the top-performers from the bulk methods, see also Fig. 2) across the different choices of normalization strategy (with the exception of row-normalization, column min-max, and TPM). While these results are consistent for deconvSeq, MuSiC, DWLS, and SCDC regardless of the dataset and pseudo-bulk cell pool size, we observed a substantial performance improvement in BisqueRNA when the pool size increased or when the dataset contained scRNA-seq from more individuals (E-MTAB-5061 and GSE81547, with n = 6 and 8, respectively) (Supplementary Figs. 7, 11). Note that it was not feasible to evaluate all combinations (empty locations in the grid); see "Incompatible data transformations or normalizations with several deconvolution methods" (Supplementary Notes) for a detailed explanation. Impact of the markers used in bulk deconvolution methods. Based on the previous results, we wanted to evaluate whether different marker selection strategies had an impact on the deconvolution results starting from bulk expression data in linear scale. To that end, we assessed the impact of eight different marker selection strategies (see "Methods") on the deconvolution results using bulk deconvolution methods (Fig. 4 and Supplementary Fig. 13). This analysis was not done for the methods that use scRNA-seq data as reference because they do not require marker genes to be known prior to performing the deconvolution. The use of all possible markers (all strategy) showed the best performance overall, followed by positive fold-change markers (pos_fc; negative fold-change markers are those with small expression values in the cell type of interest and high values in all the others) or those in the top 50% of average expression values (top_50p_AveExpr) or log fold-changes (top_50p_logFC). As expected, the use of random sets of 5 markers per cell type (random5; the negative control in our setting) was consistently the worst choice across all datasets regardless of the deconvolution method. Using the bottom 50% of the markers per cell type based on average expression levels (bottom_50p_AveExpr) or log fold changes (bottom_50p_logFC) also led to sub-optimal results.
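To make the marker-ranking idea concrete, here is a hedged R sketch of differential-expression-based marker selection with edgeR/limma-voom, in the spirit of the strategy described in the Methods; the object names (counts, cell_type), the pairwise contrast and the fold-change threshold of 2 are illustrative, not the exact benchmark code.

# Sketch: rank candidate markers for one cell type against another using edgeR + limma-voom
# ('counts' is a genes x cells matrix and 'cell_type' a factor per cell; both are assumed inputs).
library(edgeR)
library(limma)

dge <- DGEList(counts = counts, group = cell_type)
dge <- calcNormFactors(dge, method = "TMM")     # TMM normalization, as in the Methods

design <- model.matrix(~ 0 + cell_type)
colnames(design) <- make.names(levels(cell_type))

v   <- voom(dge, design)                        # limma-voom on the linear-scale counts
fit <- eBayes(lmFit(v, design))

# Illustrative contrast: first cell type versus second cell type;
# keep genes with BH-adjusted p < 0.05 and at least a 2-fold change (lfc = 1 on the log2 scale).
cont <- makeContrasts(contrasts = paste(colnames(design)[1], "-", colnames(design)[2]),
                      levels = design)
fit2    <- eBayes(contrasts.fit(fit, cont))
markers <- topTable(fit2, number = Inf, p.value = 0.05, lfc = 1)
head(markers)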
Specifically in the Baron and PBMC datasets, the use of the top 2 markers per cell type (top_n2) led to: a) optimal results when used with DSA; b) similar results to using the bottom_50p_AveExpr or bottom_50p_logFC with ordinary linear regression strategies; c) worse results than random when used with penalized regression strategies (lasso, ridge, elastic net, DCQ) and CIBERSORT. For all markers across each dataset, we took a closer look at the fold-change distribution for both the cell type where they were initially found as a marker (highest fold change) and the fold-change differences among all other cell types. Using the threshold values used to select a gene as a marker, we computed the percentage of those that could also be considered markers for a secondary cell type (values between parentheses in the boxplots below). For the five datasets included in the benchmark, 7-38% of the markers were not specific (exclusive) for only one cell type (see Supplementary Fig. 2). Effect of removing cell types from the reference matrix. Based on the results from all the analyses thus far, we decided to evaluate the impact of removing cell types with the data in linear scale and using all available markers (all marker selection strategy). Furthermore, we selected nnls and CIBERSORT as representative top-performing bulk deconvolution methods and DWLS and MuSiC as top-performing deconvolution methods that use scRNA-seq data as reference. To also be able to evaluate the impact of the normalization strategy, we included a representative sample of normalization strategies that result in small RMSE and high Pearson correlation values (see Fig. 3 and Supplementary Figs. 10-12): column, median ratios, none, TMM and TPM for nnls and CIBERSORT; column, scater, scran, none, TMM and TPM for DWLS and MuSiC. We assessed the impact of removing a specific cell type by comparing the absolute RMSE values between the ideal scenario where the reference matrix contains all the cell types present in the pseudo-bulk mixtures (leftmost column in Figs. 5a, b and 6a, b (with gray label: none); Supplementary Figs. 16, 17) and the RMSE values obtained after removing one cell type at a time from the reference (all other gray labels). We then focused on those cases where the median absolute RMSE values between the results using the complete reference matrix (depicted as none in Figs. 5a, b and 6a, b) and all other scenarios where a cell type was removed increased at least 2-fold. In the PBMC dataset (Fig. 5a, b), removing CD19+, CD34+, CD14+ or NK cells had an impact on the computed T-cell proportions (between a three- and six-fold increase in the median absolute RMSE values, both in bulk deconvolution methods and those using scRNA-seq data as reference). The GSE81547 dataset (Fig. 6a, b) shows that removing acinar cells has a dramatic impact on all other cell type proportions. Supplementary Figs. 14 and 15 show the results for the Baron and E-MTAB-5061 datasets, respectively. None of the method and normalization combinations was able to provide accurate cell type proportion estimates when the reference was missing a cell type. To investigate whether the proportion of the omitted cell type was redistributed equally among all remaining cell types or only among those that are transcriptionally most similar, we computed pairwise Pearson correlation values between the expression profiles of the different cell types (Figs. 5c, d and 6c, d).
Figure 5c, d shows that CD14+ monocytes were mostly correlated with dendritic cells (Pearson = 0.85 when computing pairwise correlations on the reference matrix containing only marker genes and 0.94 when using the complete expression profiles from all cell types, respectively) and Fig. 5a, b shows that, when removing CD14+ monocytes, the highest RMSE value was found in dendritic cells. Figure 6c, d shows that acinar cells are not correlated with any other cell type (Pearson values close to zero with all other cell types), and Fig. 6a, b shows that removing acinar cells affects the proportions of all remaining cell types. Deconvolution of real bulk heterogeneous samples. In contrast to the thousands of artificial pseudo-bulk mixtures across five datasets used in the previous sections, we used nine human bulk PBMC samples from Finotello et al. 27 for which cell type proportions were measured by flow cytometry. We considered these proportions as the gold standard against which both bulk deconvolution methods and those able to use scRNA-seq data as reference could be evaluated. Of note, it was not possible to evaluate MuSiC and SCDC because the 10x scRNA-seq data used as reference came from only one individual. Hence, only DWLS, deconvSeq, and BisqueRNA were tested. See "Computational framework for the evaluation of deconvolution pipelines with real RNA-seq data" ("Methods") and Table 1 for more details. Regarding bulk deconvolution methods: robust regression methods (RLR, FARDEEP) and support vector regression (CIBERSORT) consistently showed the smallest RMSE and highest Pearson correlation values (Fig. 7a). Similarly, DWLS performed best among the deconvolution methods that use scRNA-seq data as input (Fig. 7b). Discussion Using both Pearson correlation and RMSE values as measures of the deconvolution performance, we comprehensively evaluated the combined impact of four data transformations, sixteen scaling/normalization strategies, eight marker selection approaches and twenty different deconvolution methodologies on five different scRNA-seq datasets. These datasets encompass three different biological sample types (human pancreas, kidney, and peripheral blood mononuclear cells) and four different sequencing protocols (CEL-Seq, Smart-Seq2, Microwell-Seq, and GemCode Single-Cell 3′). Additionally, we assessed the impact of using different numbers of cells when making the pseudo-bulk mixtures and the impact of removing cell types from the reference matrix that were actually present in the mixtures. Even though the five scRNA-seq datasets used throughout this manuscript encompass different sequencing protocols that led to hundred-fold differences in the number of reads sequenced per cell (Table 1), our findings were consistent regardless of the dataset being evaluated or the number of cells used to make the pseudo-bulk mixtures (Supplementary Figs. 6-12). Given the limited number of cells available per dataset and the scarcity of publicly available datasets with similar health status, sequencing platform, and library preparation protocol to validate our results, some cells were used in more than one mixture and each dataset was split into training and testing (50%:50%), meaning that cells from one individual were present both in training and test sets but a given cell was only present in one split. Nevertheless, while the different datasets (except PBMCs) contain cells from more than one individual (= inherent inter-sample variability), we observed meaningful differences between cell types rather than by individual (Supplementary Fig. 24).
Additionally, we generated scenarios where cells from a given individual were used only in one split (training or test) by assigning half of the samples to each split prior to selecting the cells based on the cell type. These led to slightly higher RMSE and lower Pearson correlation values compared to those where cells from one individual were present in both splits, but the same conclusions hold true in both analyses. Both cell type proportions on their own (e.g., at baseline level, before any treatment has started) and changes in cell type composition upon drug treatment or a viral infection are relevant and can be assessed through computational deconvolution. For instance, patients with high levels of tumor-infiltrating lymphocytes have been found to respond better to immune checkpoint inhibitors (immunotherapy) 28 and changes in diverse immune cell types were found in mouse lungs during the course of influenza infection 29 . In principle, the performance of a computational deconvolution framework should be independent of the experimental set-up in which it is applied. However, we acknowledge that the data included in our benchmark did not directly evaluate the latter scenario. The logarithmic transformation is routinely included as a part of the pre-processing of omics data in the context of differential gene expression analysis 30,31 , but Zhong and Liu 6 showed that it led to worse results than performing computational deconvolution in the linear (un-transformed) scale. The use of the expression data in its linear form is an important difference with respect to classical differential gene expression analyses, where statistical tests assume underlying normal distributions, typically achieved by the logarithmic transformation 32 . Silverman et al. 33 showed that using log counts per million with sparse data strongly distorts the difference between zero and non-zero values and Townes et al. 34 showed the same when log-normalizing UMIs. Tsoucas et al. 23 showed that when the data were kept in the linear scale, all combinations of three deconvolution methods (DWLS, QP, or SVR) and three normalization approaches (LogNormalize from Seurat, Scran or SCnorm) led to a good performance, which was not the case when the data were log-transformed. Here, we assessed the impact of the log transformation on both full-length and tag-based scRNA-seq quantification methods and confirmed that the computational deconvolution should be performed in linear scale to achieve the best performance. Data scaling or normalization is a key pre-processing step when analysing gene expression data. Data scaling approaches transform the data into bounded intervals such as [0, 1] or [−1, +1]. While being relatively easy and fast to compute, scaling is sensitive to extreme values. Therefore, other strategies that aim to change the observations so that they follow a normal distribution (= normalization) may be preferred. Importantly, these normalizations typically do not result in bounded intervals. In the context of transcriptomics, normalization is needed to keep only true differences in expression. Normalizations such as TPM aim at removing differences in sequencing depth among the samples. An in-depth overview of normalization methods and their underlying assumptions is presented in Evans et al. 35 . Vallania et al. 11 assessed the impact of standardizing both the bulk and reference expression profiles into z-scores prior to deconvolution, which is performed by CIBERSORT but not in other methods.
They observed high pairwise correlations between the estimated cell type proportions with and without standardizing the data, suggesting a negligible effect. However, a high Pearson correlation value is not always synonymous with good performance. As already pointed out by Hao et al. 36 , high Pearson correlation values can arise when the proportion estimations are accurate (low RMSE values) but also when the proportions differ substantially (high RMSE values), making the correlation metric alone not sufficient to assess the deconvolution performance. Both for bulk deconvolution methods and those that use scRNA-seq data as reference, our analyses show that the normalization strategy had little impact (except for the EPIC, DeconRNASeq, and DSA bulk methods). Of note, quantile normalization (QN), an approach used by default in several deconvolution methods (e.g., FARDEEP, CIBERSORT), consistently showed sub-optimal performance regardless of the chosen method. In general, the use of all data at hand (i.e., in supervised strategies) leads to better results than unsupervised or semi-supervised approaches. However, in other contexts different from computational deconvolution (e.g., automatic cell identification 37 ), it has been shown that incorporating prior knowledge into the models does not improve the performance. Furthermore, there are situations where cell-type specific expression profiles are not readily available and supervised methodologies cannot be used. For these reasons, we included ssFrobenius and ssKL in our benchmarking, two semi-supervised non-negative matrix factorization methods to perform bulk gene expression deconvolution. They led to higher RMSE and lower Pearson correlation values than most supervised methodologies (except DCQ and dtangle; Fig. 2 and Supplementary Fig. 6), highlighting the positive impact of incorporating prior knowledge (in the form of cell-type specific expression profiles) in the field of computational deconvolution. In any case, results from supervised and semi-supervised methodologies should be interpreted separately. Schelker et al. 38 and Racle et al. 26 showed that the origin of the expression profiles also had a dramatic impact on the results, revealing the need for using appropriate cell types coming from niches similar to the bulk being investigated. Hunt et al. 39 showed that a good deconvolution performance was achieved if the markers being used were predominantly expressed in only one cell type, with their expression in other cell types being in the bottom 25%. Monaco et al. 40 reached similar conclusions when the reference matrix was pre-filtered by removing markers with small log fold change between the first and second cell types with highest expression. In our analyses, markers were selected based on the fold change with respect to the cell type with the second-highest expression; therefore, the pre-filtering proposed by Hunt was implicitly taken into account. When markers were ranked by average expression or log fold change, those in the top fifty percent led to smaller RMSEs compared to those in the bottom fifty percent (Fig. 4). Wang et al. 24 explored the effect of removing one cell type at a time from the reference matrix on the estimation accuracy using artificial bulk expression of six pancreatic cell types (alpha, beta, delta, gamma, acinar, and ductal) and removing one cell type from the single-cell expression dataset. They observed that, when a cell type was missing in the reference matrix, MuSiC, NNLS, and CIBERSORT did not produce accurate proportions for the remaining cell types.
Gong and Szustakowski 20 also investigated this issue by performing a first deconvolution using DeconRNASeq, then removing the least abundant cell population from the reference/basis matrix, and finally repeating the deconvolution with the new matrix. They observed an uneven redistribution of the signal and found that some initial proportions became smaller. Moreover, Schelker et al. 38 investigated this phenomenon by looking at the correlation coefficient between the results obtained with the complete reference matrix and the results obtained when removing one cell type at a time. We performed similar analyses for four deconvolution methods (two bulk and two using scRNA-seq data as reference) and eleven normalization strategies (five for bulk, six for single-cell) on three single-cell human pancreas datasets and one PBMC dataset, keeping the data in linear scale. We observed both cases where the choice of normalization strategy had no impact and other cases where it did. Interestingly, the removal of specific cell types did not affect all other cell types equally. Both bulk deconvolution methods and those using scRNA-seq data as reference showed similar trends when removing specific cell types. However, there were some discrepancies in the RMSE values (e.g., removal of beta cells had a substantial impact on the proportions of delta cells, but CIBERSORT showed three times higher RMSE values compared to either nnls, MuSiC or DWLS (Fig. 6a, b and Supplementary Fig. 17)). This may be explained by the fact that for bulk deconvolution methods, we removed both the cell type expression profile and its marker genes from the reference matrix, whereas for those where scRNA-seq data was used as reference, only the cells from the specific cell type were excluded, without applying extra filtering on the genes (MuSiC, SCDC) or because a different signature was internally built (DWLS). Furthermore, we found a direct association between the correlation values among the cell types present in the mixtures and the effect of removing a cell type from the reference matrices. Specifically, we hypothesize that: (a) removing a cell type that is barely or completely uncorrelated (Pearson < 0.2) with all other cell types remaining in the reference matrix has a dramatic impact on the cell type proportions of all other cell types; (b) removing a cell type that was strongly positively correlated (Pearson > 0.6) with one or more cell types still present in the reference matrix leads to distorted estimates for the most correlated cell type(s). The correlation between different cell types is a direct manifestation of their relatedness in a cell-type ontology/hierarchy: the closer the cell types in the hierarchy, the higher the correlation between their expression profiles. The cell-type relationship based on the hierarchy is a good qualitative predictor of the population that will be most affected when removing a cell type from the reference matrix. EPIC 26 represents a first attempt at alleviating this problem by considering an unknown cell type present in the mixture. Nevertheless, this is currently restricted to cancer, using markers of non-malignant cells that are not expressed in cancer cells.
In conclusion, when performing a deconvolution task, we advise users to: (a) keep their input data in linear scale; (b) select any of the scaling/normalization approaches described here, with the exception of row scaling, column min-max, column z-score or quantile normalization; (c) choose a regression-based bulk deconvolution method (e.g., RLR, CIBERSORT or FARDEEP) and also perform the same task in parallel with DWLS, MuSiC or SCDC if scRNA-seq data are available; (d) use a stringent marker selection strategy that focuses on differences between the first and second cell types with highest expression values; (e) use a comprehensive reference matrix that includes all relevant cell types present in the mixtures. Finally, as more scRNA-seq datasets become available in the near future, their aggregation (while carefully removing batch effects) will increase the robustness of the reference matrices being used in the deconvolution and will fuel the development of methodologies similar to SCDC, which allows direct usage of more than one scRNA-seq dataset at a time. Methods Dataset selection and quality control. Five different datasets coming from different single-cell isolation techniques (FACS and droplet-based microfluidics) and encompassing both full-length (Smart-Seq2) and tag-based library preparation protocols (3′-end with UMIs) were used throughout this article (see Table 1). After removing all genes (rows) full of zeroes or with zero variance, those cells (columns) with library size, mitochondrial content or ribosomal content further than three median absolute deviations (MADs) away were discarded. Next, only genes with a UMI or read count greater than 1 in at least 5% of all cells (regardless of the cell type) were kept. Finally, we retained cell types with at least 50 cells passing the quality control step and, by setting a fixed seed and taking into account the number of cells across the different cell types (pooling different individuals when possible; thereby including inherent inter-sample variability), each dataset was further split into balanced training and testing datasets (50%:50% split) with a similar distribution of cells per cell type. Regarding E-MTAB-5061: cells with not_applicable, unclassified and co-expression_cell labels were excluded and only cells coming from six healthy patients (non-diabetic) were kept. After quality control, we made two-dimensional t-SNE plots for each dataset. When adding colored labels both by cell type and donor (Supplementary Fig. 24), the plots showed consistent clustering by cell type rather than by donor, indicating an absence of batch effects. Generation of reference matrices for the deconvolution. Using the training splits from the previous section, the mean count across all individual cells from each cell type was computed for each gene, constituting the original (un-transformed and un-normalized) reference matrix (C in equation (1) from section "Computational deconvolution: formulation and methodologies"), which was used as input for the bulk deconvolution methods described in that section. For the deconvolution methods that use scRNA-seq data as reference and for the marker selection step, the training subsets were used in their original single-cell format, whereas a mean gene expression collapsing step (= mean expression value across all cells of the same cell type) was required to generate the reference matrices used in the bulk deconvolution methods. Cell-type specific marker selection.
TMM normalization (edgeR package 41 ) was applied to the original (linear) scRNA-seq expression datasets and limma-voom 42 was used to identify marker genes. Only genes with positive count values in at least 30% of the cells of at least one group were retained. Among the retained ones, those with absolute fold changes greater than or equal to 2 with respect to the second cell type with highest expression and a BH-adjusted p-value < 0.05 were kept as markers in all three pancreatic datasets. Since the kidney and PBMC datasets contained more closely related cell types, the fold-change threshold was lowered to 1.8 and 1.5, respectively. Once the set of markers was retrieved, the following approaches were evaluated: (i) all: use of all markers found following the procedure described in the previous paragraph; (ii) pos_fc: using only markers with positive fold-change (= over-expressed in the cell type of interest; negative fold-change markers are those with small expression values in the cell type of interest and high values in all the others); (iii) top_n2: using the top 2 genes per cell type with the highest log fold-change; (iv) top_50p_logFC: top 50% of markers (per cell type) based on log fold-change; (v) bottom_50p_logFC: bottom 50% of markers based on log fold-change; (vi) top_50p_AveExpr: top 50% of markers based on average gene expression (baseline expression); (vii) bottom_50p_AveExpr: bottom 50% of markers based on average gene expression; (viii) random5: for each cell type present in the reference, five genes that passed quality control and filtering were randomly selected as markers. Generation of thousands of artificial pseudo-bulk mixtures. Using the testing datasets from the quality control step, we generated matrices containing 1000 pseudo-bulk mixtures (matrix T in equation (1) from "Computational deconvolution: formulation and methodologies") by adding up count values from the randomly selected individual cells. The minimum number of cells used to create the pseudo-bulk mixtures (pool size) for each of the five datasets was 100 and the maximum possible number was determined by the second most abundant cell type (rounded down to the closest hundred, to avoid non-integer numbers of cells), resulting in n = 100, 700, and 1200 for Baron; n = 100, 300, and 400 for PBMCs; n = 100 and 200 for GSE81547; n = 100 and 200 for the kidney dataset and n = 100 for E-MTAB-5061. For the human pancreas and PBMC datasets, each (feasible) pseudo-bulk mixture was created by randomly (uniformly) selecting the number of cell types to be present (between 2 and 5) and their identities, followed by choosing the cell type proportion assigned to each cell type (enforcing a sum-to-one constraint) among all possible proportions between 0.05 and 1, in increasing intervals of 0.05. For the kidney data, the number of cell types present was randomly (uniformly) selected between 2 and 8 (eight being the maximum number of cell types possible), followed by selecting the cell type proportion assigned to each cell type (enforcing a sum-to-one constraint) among all possible proportions between 0.01 and 0.99. Finally, once the number of cells to be picked from specific cell types was determined, the cells were randomly selected without replacement (= a given cell can only be present once in a mixture). Evaluation of deconvolution pipelines with real RNA-seq data. We downloaded and processed raw poly(A) RNA-seq (single-end) data of nine human (bulk) PBMC samples from Finotello et al.
27 for which cell type proportions were measured by flow cytometry (assumed gold standard; see Supplementary Table 2 and "Processing poly(A) RNA-seq of nine human bulk PBMCs samples" in Supplementary Notes). Furthermore, we used scRNA-seq data from PBMCs (10x Genomics; see Table 1) and bulk RNA-seq for B cells, monocytes, myeloid dendritic cells, natural killer and T cells (see Supplementary Table 1). Importantly, we acknowledge two limitations in this set-up: (i) the nine PBMC samples include a measurement for neutrophils (2.45%-5.05%) while such a cell type was not present in the 10x scRNA-seq data; (ii) the 10x scRNA-seq data contained CD34+ cells whereas the nine PBMC samples did not include information for such a cell type. Therefore, to establish an unbiased assessment for both bulk deconvolution methods and those that use scRNA-seq data as reference, we excluded CD34+ cells from the 10x scRNA-seq data and did not use the bulk RNA-seq data for neutrophils in the reference matrix (see Table 2). Of note, flow cytometry proportions for T cells were computed as the sum of proportions of three different sub-populations (Tregs, CD8+ and CD4+). Data transformation and normalization. The next step is applying four different data transformations to: (i) the un-transformed and un-normalized reference matrix C; (ii) the un-transformed and un-normalized single-cell training splits and (iii) the un-transformed and un-normalized matrix T containing the 1000 pseudo-bulk mixtures. Since count data from both bulk and scRNA-seq show the phenomenon of overdispersion 41,43 , the following data transformations were chosen: (a) leave the data in the original (linear) scale; (b) use the natural logarithmic transformation (with the log1p function in R 44 ); (c) use the square-root transformation; (d) variance-stabilizing transformation (VST). The second and third are simple and commonly used transformations aiming at reducing the skewness in the data due to the presence of extreme values 31 and stabilizing the variance of Poisson-distributed counts 45 , respectively. VST (using the varianceStabilizingTransformation function from DESeq2) removes the dependence of the variance on the mean, especially important for low count values, while simultaneously normalizing with respect to library size 13 . Each transformed output file was further scaled/normalized with the approaches listed in Table 3. The mathematical implementation can be found in the original publications (Ref column) and in our GitHub repository. Due to the sparsity of the scRNA-seq matrices (most genes with zero counts), the UQ Reference normalization failed (all normalization factors were infinite or NA values) and thus was eventually not included in downstream analyses. TMM includes an additional step that uses the normalization factors to obtain normalized counts per million. LogNormalize and Linnorm include an additional exponentiation step after normalization in order to transform the output data back into linear scale. Median of ratios can only be applied to integer counts in linear scale. Computational deconvolution: formulation and methodologies. The deconvolution problem can be formulated as the linear mixture T = C × P (equation (1); see Avila Cobos et al. 5 together with "Approximation of bulk transcriptomes as linear mixtures" and "Small impact of cell cycle in the deconvolution results" in Supplementary Notes), where T = measured expression values from bulk heterogeneous samples; C = cell type-specific expression values and P = cell-type proportions.
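For readers who prefer a symbolic statement, the linear-mixture formulation described above can be written as follows; this is a standard rendering consistent with the definitions in the text, with the dimensions and constraints spelled out for clarity rather than quoted verbatim from the paper.

% Linear-mixture model underlying the deconvolution task
% T: bulk (or pseudo-bulk) expression, genes x samples
% C: cell-type-specific reference expression, genes x cell types
% P: cell-type proportions, cell types x samples
T_{g \times n} \;=\; C_{g \times k} \, P_{k \times n},
\qquad P_{ij} \geq 0, \qquad \sum_{i=1}^{k} P_{ij} = 1 \;\; \text{for each sample } j .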
Specifically, T represents the 1000 pseudo-bulk mixtures from "Generation of thousands of artificial pseudo-bulk mixtures" and C is the reference matrix from "Cell-type specific marker selection and generation of reference matrices for the deconvolution". In the context of this article, the goal is to obtain P using T and C as input. Measures of deconvolution performance. Changes in memory were assessed with the mem_change function from the pryr package 51 and the elapsed time was measured with the proc.time function (both functions executed in R v.3.6.0). We computed both the Pearson correlation values and the root-mean-square error (RMSE) between cell type proportions from thousands of pseudo-bulk mixtures with known composition and the output from different deconvolution methods for each combination of data transformation, scaling/normalization choice, and deconvolution method. Higher Pearson correlation and lower RMSE values correspond to a better deconvolution performance. Evaluation of missing cell types in the reference matrix C. For every cell type removed, the deconvolution was applied only to mixtures where the missing cell type was originally present. For bulk deconvolution methods, the marker genes of the cell type that was removed from the reference were also excluded (methods using scRNA-seq data as reference did not require a priori marker information). Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
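As a small, self-contained illustration of the two performance measures defined in the "Measures of deconvolution performance" paragraph above (not the benchmark code itself), the following R sketch computes RMSE and Pearson correlation between expected and computed proportions for one toy mixture; the numbers are made up.

# RMSE and Pearson correlation between expected (P_E) and computed (P_C) proportions
rmse <- function(computed, expected) sqrt(mean((computed - expected)^2))

P_E <- c(0.40, 0.30, 0.20, 0.10)   # expected (known) proportions of a toy mixture
P_C <- c(0.37, 0.33, 0.21, 0.09)   # proportions returned by some deconvolution method

rmse(P_C, P_E)                     # lower is better
cor(P_C, P_E, method = "pearson")  # higher is better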
9,024.4
2020-11-06T00:00:00.000
[ "Biology", "Computer Science" ]
Wave Motion Analysis in Plane via Hermitian Cubic Spline Wavelet Finite Element Method A plane Hermitian wavelet finite element method is presented in this paper. Wave motion can be used to analyze plane structures with small defects, such as cracks, and to obtain identification results. By using the tensor product of modified Hermitian wavelet shape functions, the plane Hermitian wavelet shape functions are constructed. The scale functions of the Hermitian wavelet shape functions can replace polynomial shape functions to construct new wavelet plane elements. As the scale of the shape functions increases, the precision of the new wavelet plane element improves. The new Hermitian wavelet finite element method, which can be used to simulate wave motion, can reveal the laws of wave motion in a plane. By using the results of transmitted and reflected wave motion, cracks in a plane can be easily identified. The results show that the new Hermitian plane wavelet finite element method can use fewer elements to simulate the plane structure effectively and accurately and to detect cracks in a plane. Introduction The application of plate structures in industry is very important [1]. Plane wave motion analysis has been researched in mechanical engineering in recent years [2]. Wave motion analysis remains an open research field in practical engineering, even though its mathematical principles are well developed [3]. Many numerical methods are used to analyze the motion of elastic waves [4]. Manktelow et al. [5] proposed a perturbation analysis using a discretized finite element method to study wave motion in continuous and periodic structures. Wang and Sett [6] used the stochastic Galerkin method to solve solid-mechanics partial differential equations in which material parameters and mechanical functions are uncertain. Gravenkamp et al. [7] resolved high-frequency wave motion by using the scaled boundary finite element method and nonuniform rational B-splines. Komijani and Gracie [8] proposed a global enrichment method which adopts harmonic functions. The merit of global enrichment and the generalized finite element method is that they can analyze wave motion in cracked plane structures by using the phantom node method. Pamel et al. [9] derived the finite element equations that can solve the problem of three-dimensional elastodynamic wave scattering, and studied the fundamental characteristics of wave scattering, attenuation and dispersion. For the problem of plane vibration in engineering, most methods have difficulty identifying small defects such as cracks, because small defects are associated with high-frequency effects. The latest techniques for identifying small defects rely on the results of high-frequency elastic wave propagation and reflection. Stawiarski et al. [10] adopted elastic waves to detect the initiation of fatigue damage in an isotropic plate. The propagation and reflection of waves offer a promising path for nondestructive testing of structures. Dubuc et al. [11] used a three-dimensional numerical model to analyze guided wave motion with tensile and shear cracks in an isotropic plane. Komijani et al. [12] adopted enhanced finite element models to solve dynamic crack propagation and wave propagation problems. Numerical simulation has become a key factor in engineering product design in recent decades.
To reduce costs and product development time in engineering, many researchers adopt numerical simulation techniques to guide their design process, which is called conceptual design [13]. With the development of commercial software and computer resources, many engineers can use numerical models to analyze more complex structures for fault diagnosis and for identification at higher frequencies [14]. At present, the most popular numerical modeling technique for high frequency wave motion is the finite element method. Manktelow et al. [5] used integrated commercial software, which can explore and optimize complex structures, to analyze nonlinear wave dispersion. The finite element method can accurately simulate high frequency wave motion, but it needs at least 20 nodes per wavelength, generated by a large number of elements, which leads to a huge computational cost. Spatial discretization is thus the main problem that high frequency wave motion analysis faces in engineering. With fine and accurate processing, the spatial structure, the wave motion dispersion, and the response of the structure can be obtained. Nanda et al. [15] presented a spectral finite element method that adopts an efficient and accurate layerwise theory to analyze nonuniform composite and sandwich beams. However, spectral finite element methods have difficulty handling two-dimensional or three-dimensional problems, geometrical complexities, nonperiodic boundary conditions, and so on. Guo et al. [16] solved linear wave motion equations by adopting a fully discrete element method. Due to the uncertainty of modeling, even a small error can cause a large error in the high frequency response. Therefore, the accuracy of structural models is very important. Park et al. [17] proposed a generalized multiscale finite element method to simulate fluid flows; that paper considered the coupling of two equations and refined the grid in order to improve accuracy. Wavelets have been widely used in many physics and engineering problems in recent years [18]. Because wavelets have the characteristic of multiresolution analysis, they provide a new mechanism that can decompose the solution into a series of coefficients. In such numerical methods, wavelet functions can be regarded as finite element shape functions, analogous to their use in signal or image processing. Basu et al. [19] pointed out that the finite element, boundary element, and meshless methods have already replaced the finite difference and Ritz methods; those methods may in turn be replaced by wavelet numerical methods in the near future. Chen and Ma [20,21] constructed beam and plane Daubechies wavelet finite element methods, which can solve the Euler beam and thin plate bending problems. Because B-spline wavelet functions have explicit expressions and high precision and efficiency, many scholars have researched finite element methods that adopt B-spline wavelet functions as shape functions [22,23]. Further, Xiang et al. [24] proposed Hermite cubic spline wavelets to solve for intensity factors, and indicated that the Hermitian scale and wavelet functions should be truncated. Xue et al. [25] presented a modified Hermitian interpolation wavelet basis, adding appropriate wavelet functions to solve wave motion and load identification problems. However, these interpolation functions only satisfy C0 continuity, so only the displacement can be interpolated.
The method of interpolating the rotation by differentiating the displacement limits the accuracy of Hermitian spline wavelets. Moreover, these interpolation functions are very complex and involve many nodes in a wavelet element; although the accuracy of the wavelet is very high, the amount of calculation is very large. New, effective wavelet finite element methods that adopt Hermitian wavelet functions as shape functions are presented in this paper. The new Hermitian wavelet plane element is called the Hermitian spline wavelet on the interval (HSWI) element. The new Hermitian wavelet functions satisfy C1 continuity, so the displacement and the rotation can be interpolated at the same time. Moreover, this new plane element has a very small number of nodes. Its accuracy is higher than that of the Hermitian element constructed by the authors in [26]. The wavelet functions are orthogonal under the given inner product. These new shape functions can decouple, totally or partially, the Hermitian plane wavelet element. The precision of the element can be raised by increasing the scale and nesting the approximation spaces.

Hermitian Plane Wavelet Shape Functions. The scale functions ϕ1,k and wavelet functions ψ1,k of the Hermitian wavelet are shown in Figure 1; they are defined piecewise, with different expressions for even and odd k. The shape functions of the finite element method should satisfy the necessary condition that the sum of the shape functions at any node is 1; the original scale functions of the Hermitian wavelet cannot satisfy this condition at the boundary nodes, which is called the boundary problem. Stretching and translating can make some of the Hermitian functions meet the requirements of Lagrange interpolation and the others meet the requirements of Hermite interpolation. The new Hermitian interpolation functions are constructed from the Hermitian scale and wavelet functions. These functions interpolate the displacement when k = 1, 3, 5, ..., 2^(j+1) + 1, as well as the rotation when k = 2, 4, 6, ..., 2^(j+1) + 2. When k is odd, the interpolation functions satisfy the C0 condition and the displacement can be interpolated; when k is even, they satisfy the C1 condition and can interpolate the rotation. Figure 2 shows the graph of the modified Hermitian wavelet shape functions. The wavelet space H10(0, 1) can be generated by the scale and wavelet functions through the decomposition H10(0, 1) = V1 ⊕ W1 ⊕ ⋯ ⊕ Wj, where ⊕ denotes the direct sum, V1 represents the initial scale space, Wj is the wavelet space, and j is the wavelet level. The Hermitian wavelet shape functions follow from these scale and wavelet functions. The Hermitian wavelet functions have an excellent characteristic: the first derivative has good continuity, which makes the modified Hermitian wavelet functions meet the C1 interpolation condition. These features are well suited to the study of beams, planes, and so on. Used as finite element shape functions, HSWI can solve engineering problems with high precision. Figure 2 shows the new scale functions ϕj,k; the corresponding approximation space is H10(0, 1), and the interval of the HSWI shape functions is [0, 1]. Using the tensor product, the Hermitian plane wavelet approximation space is constructed from H10(0, 1). The initial scale space is V1, the wavelet space is Wj, and j is the wavelet level.
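Before the tensor-product construction that follows, the C1 interpolation property described above can be illustrated with the standard cubic Hermite basis on a unit interval. This is only a sketch of Hermite-type interpolation of displacement and rotation; it is not the HSWI scale and wavelet functions themselves.

```python
import numpy as np

def hermite_cubic_basis(xi):
    """Standard cubic Hermite basis on [0, 1].

    N1, N3 interpolate the nodal displacements; N2, N4 interpolate the
    nodal rotations (first derivatives), which is what gives C1 continuity.
    """
    N1 = 1 - 3 * xi**2 + 2 * xi**3
    N2 = xi - 2 * xi**2 + xi**3
    N3 = 3 * xi**2 - 2 * xi**3
    N4 = -xi**2 + xi**3
    return np.array([N1, N2, N3, N4])

# Interpolate a field with prescribed end displacements and end slopes (illustrative values).
xi = np.linspace(0.0, 1.0, 5)
u0, du0, u1, du1 = 0.0, 1.0, 0.5, -0.2
N = hermite_cubic_basis(xi)
u = u0 * N[0] + du0 * N[1] + u1 * N[2] + du1 * N[3]
print(u)  # both value and slope are matched at xi = 0 and xi = 1
```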
So the tensor-product subspace is φj = Φj ⊗ Φj: the scale functions are φ1 = Φ1 ⊗ Φ1 at j = 1, φ2 = Φ2 ⊗ Φ2 at j = 2, and so on. Figure 3 shows the tensor product of the HSWI plane elements for the scales j = 1, 2. The plane Hermitian wavelet shape functions can replace the traditional finite element shape functions. Using the new shape functions, the stiffness matrix and mass matrix can be computed, and the Newmark time integration can then calculate the high frequency wave motion from the stiffness and mass matrices.

Hermitian Wavelet Finite Element Formula. The elements of a plane structure are divided into two types: plane stress elements and plane strain elements. The plane stress element is established based on the Hermitian wavelet shape functions. The plane strain element can be obtained by replacing E and μ with E/(1 − μ2) and μ/(1 − μ), where E is Young's modulus and μ is Poisson's ratio. For the plane structure, the potential energy functional is expressed in terms of the element thickness h, the body force vector f = {fx, fy}, and the displacement vector. Assuming the material is linearly elastic and isotropic, the stress follows from the constitutive relation. Adopting the scale j = 2 as an example, the arrangement of plane nodes is shown in Figure 4. The displacement functions in the x and y directions are written in terms of Φ, the plane Hermitian wavelet shape functions, and T, the transformation matrix from wavelet space to physical space; the displacement in the x direction is u and in the y direction is v. The standard element domain is obtained by mapping the original element domain. Substituting equations (8) and (9) into equation (7), the Galerkin variational principle yields the finite element equations, including the consistent mass matrix, where ρ is the density and A is the plane area; the remaining terms are obtained in the same way, with l_ey and dη replacing l_ex and dξ. The plane Hermitian wavelet shape functions are constructed by using the tensor product of the modified Hermitian wavelet element in this paper. Substituting the plane Hermitian wavelet shape functions into the finite element formulation, the stiffness matrix and mass matrix can be obtained. The stiffness matrix can be converted from the wavelet domain to the physical domain, and similarly for the mass matrix. The equation of motion is M d²u/dt² + C du/dt + K u = F(t), where K is the stiffness matrix, M is the mass matrix, C is the damping matrix, F(t) is the excitation force, and u, du/dt, and d²u/dt² are the displacement, velocity, and acceleration, respectively. The Rayleigh damping formula is used to calculate the damping matrix in this paper. The Newmark time integration assumes update formulas for the velocity and displacement, where p is the time step index and Δt is the time interval from step p − 1 to step p. Substituting equations (17) and (18) into equation (16) gives the wave motion response equation (19), from which the wave motion in the plane can be analyzed. It is assumed that the material is homogeneous and isotropic and that the initial displacements and velocities are zero.

Numerical Examples. Structural vibration, especially high frequency vibration, plays an important role in engineering; high frequency vibration is also called wave motion. The effect of wave motion is becoming more and more important, and it has a significant advantage in detecting small defects, especially cracks. Four working conditions are used to describe plane wave motion analysis in this paper.
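The time stepping outlined above can be sketched with the standard Newmark scheme. The average-acceleration parameters (beta = 1/4, gamma = 1/2), the single-degree-of-freedom test problem and all names below are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

def newmark(M, C, K, F, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Implicit Newmark-beta time integration for M*a + C*v + K*u = F(t)."""
    u, v = u0.copy(), v0.copy()
    a = np.linalg.solve(M, F(0.0) - C @ v - K @ u)          # initial acceleration
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)  # constant effective stiffness
    history = [u.copy()]
    for p in range(1, n_steps + 1):
        rhs = (F(p * dt)
               + M @ (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
               + C @ (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                      + dt * (gamma / (2 * beta) - 1.0) * a))
        u_new = np.linalg.solve(K_eff, rhs)
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        history.append(u.copy())
    return np.array(history)

# Single-DOF check: undamped oscillator with a 1 Hz natural frequency.
M = np.array([[1.0]]); C = np.zeros((1, 1)); K = np.array([[(2 * np.pi) ** 2]])
resp = newmark(M, C, K, lambda t: np.zeros(1), np.array([1.0]), np.zeros(1), 1e-3, 1000)
print(resp[-1])  # close to the initial displacement after one full period
```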
A sinusoidal signal with a frequency of 100 kHz, modulated by a Hanning window, is used in this section as the excitation. Figure 5 shows the excitation location at point A; the time-domain and frequency-domain representations of the signal are shown in Figure 6. The HSWI finite element method is used to analyze the wave motion of the plane structure in this paper, and the wave motion is assumed to take place under undamped conditions. The research object is a thin plate whose length and width are 1 m and whose thickness is 0.001 m. Aluminum is used in this paper; the material parameters are as follows: Young's modulus is 70 GPa, Poisson's ratio is 0.3, and the density is 2730 kg/m3. The computations were carried out on a computer with an Intel CPU at 1.7 GHz and 35 GB of memory, using Matlab under the Windows 10 operating system. Various small defects can be accurately detected when a suitable high frequency excitation acts on the mechanical structure. The wave is divided into the longitudinal wave (or major wave, P wave) and the transverse wave (or minor wave, S wave). The influence of the wave for different crack lengths is researched in this section. According to equations (20) and (21), the speeds of the P wave and the S wave can be calculated, and the duration of the wave motion was estimated as 0.25 ms. The number of time steps is set to 2500 in this paper, and the Newmark time integration method is chosen to analyze the wave motion. The plane is divided into 20 × 20 regular quadrilateral HSWI elements. An HSWI element has 18 degrees of freedom (9 nodes), so the plane model has 3,362 degrees of freedom. The P wave and S wave are numerically simulated by the HSWI elements with 3,362 degrees of freedom in the plane. The calculation results are shown in Figures 7-10, where the displacement v refers to the P wave and the displacement u refers to the S wave. The graphs of the wave motion are displayed at different times. The difference in wave patterns with and without a crack is obvious, and the crack is easily identified in the graphs of the P wave and S wave. The effect on the S wave is greater than that on the P wave in Figures 7-10. The difference in the P wave with and without a crack is also obvious at 0.12 ms and 0.25 ms. As the crack length increases, the crack waveform becomes more and more obvious. For a fixed crack length, the crack waveform becomes larger and larger as time develops. The displacement response diagrams at points A, B, C, and D are shown in Figures 11-14; the diagrams also show the P wave and S wave at each point, with the excitation signal applied at point A. The displacement response diagrams clearly show whether there is a crack or not, as well as the length of the crack. As the length of the crack increases, the interval between the reflected waves and the transmitted waves gets longer and longer.

Analysis of Different Crack Locations. The cracks are distributed at three locations in the plane: the first crack is 0.25 m from the top and right end, the second crack is in the middle of the plane, and the third crack is 0.25 m from the bottom and left end. The length of the three cracks is assumed to be 200 mm and their depth 0.5 mm. The boundary condition is free on all four edges of the plane. Figure 15 shows the distribution of the cracks in the plane. HSWI elements are used to analyze the high frequency wave motion in the plane, and the material is aluminum. An excitation signal in the form of a force pulse with an amplitude of 100 N is applied at point A.
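For orientation, the P and S wave speeds implied by the aluminum properties quoted above can be checked with the standard isotropic formulas. Whether equations (20) and (21) of the paper use the bulk or the plane-stress (thin plate) form is not stated, so both are computed here and should be read as assumptions.

```python
import numpy as np

E, nu, rho = 70e9, 0.3, 2730.0   # aluminum properties given in the paper

# Bulk (3-D) longitudinal and shear wave speeds.
c_p_bulk = np.sqrt(E * (1 - nu) / (rho * (1 + nu) * (1 - 2 * nu)))   # ~5.9 km/s
c_s = np.sqrt(E / (2 * rho * (1 + nu)))                              # ~3.1 km/s

# Plane-stress (thin plate) longitudinal wave speed, often used for 2-D plate models.
c_p_plate = np.sqrt(E / (rho * (1 - nu**2)))                          # ~5.3 km/s

f = 100e3                      # excitation frequency, 100 kHz
wavelength_s = c_s / f         # the shortest wavelength governs the mesh density
print(c_p_bulk, c_s, c_p_plate, wavelength_s)
# Over the 0.25 ms simulated time the S wave travels about c_s * 0.25e-3 ~= 0.8 m,
# enough for reflections from a mid-plane crack to reach the observation points.
```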
The response results are observed at points A, B, C, and D in this section, and the wave patterns of the P wave and S wave are investigated for the different crack positions. The numerical model is established as in Figure 15 using HSWI elements. Figures 16-19 display the calculation results for cracks at different positions. It is worth mentioning that the waveform and displacement response graphs for the second crack are Figures 9 and 13. The cracks at the different positions are analyzed at different time points. The propagation and reflection of the waves change greatly when the crack location changes. Crack location 2 is in the middle (Figure 9), and its influence on the P waves is greater than on the S waves compared with the waveform without a crack (Figure 7). Crack locations 1 and 3 have a great influence on both the P and S waves compared with the waveform without a crack (Figures 16 and 17). Compared with the transmitted waves, the reflected crack waves are dominant in the P waves, so the v displacement response is worth studying. The wave motion graphs of the P wave and S wave, with the excitation signal applied at point A, are displayed in Figures 13, 18, and 19, while the response signals are observed at points A, B, C, and D. The wave pattern varies greatly for cracks at different positions. When the crack is not in the middle, it can be seen from Figures 18 and 19 that there are additional S waves at points A and D, whereas there was no S wave at points A and D in the absence of a crack (Figure 11). When the crack is in the middle, there is no additional waveform at points A and D. The wave propagation and wave reflection are separated by a certain time interval. Due to the influence of the crack position, the S wave pattern and amplitude change obviously at points B and C. This phenomenon can be used to identify the location of different cracks. The cracks also reduce the amplitude of the waves at point D in the v direction displacement. The geometry is similar to that in the literature [3], where 40 × 40 spectral elements and a total of 80,802 degrees of freedom are used. In this paper, 20 × 20 of the new Hermitian wavelet elements are used, with 3,362 degrees of freedom. When the number of Hermitian wavelet elements is further increased, the numerical results converge. The numerical results show that the plane structure with a crack can be accurately analyzed in this paper.

Conclusions. The new plane Hermitian wavelet shape functions are constructed and substituted into the finite element equations to build the new elements in this paper. The new elements have the characteristics of high precision and reduced computation, which saves calculation time. The new elements are used to analyze the wave motion and to study cracks of different lengths and locations. As the crack length increases, the crack waveform becomes more and more obvious; for a fixed crack length, the crack waveform amplitude becomes larger and larger as time develops. There are significant effects on the wave motion for cracks of different lengths. As the length of the crack increases, the interval between the reflected waves and the transmitted waves gets longer and longer. The propagation and reflection of the waves change greatly when the crack location changes. When the crack is not in the middle, additional displacement response signals can be measured in the u displacement.
Due to the influence of the cracks, the waveform and amplitude of the waves change significantly. This phenomenon can be used to identify the location of different cracks. It is shown that the new elements are feasible and effective for researching wave motion.

Data Availability. The data used to support the findings of this research are available from [3,25].

Conflicts of Interest. The authors declare that there are no conflicts of interest regarding the publication of this paper.
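As a quick consistency check on the model sizes compared above (3,362 degrees of freedom for the 20 × 20 HSWI mesh versus 80,802 for the 40 × 40 spectral element mesh of reference [3]), the counts follow from simple mesh arithmetic if one assumes 9-node elements here and fifth-order spectral elements in [3]; those element orders are inferred, not stated.

```python
# Nodes along one edge of an n x n mesh of elements with p + 1 nodes per edge: p * n + 1.
def dofs(n_elems, p, dof_per_node=2):
    nodes_per_side = p * n_elems + 1
    return nodes_per_side ** 2 * dof_per_node

print(dofs(20, 2))   # 20 x 20 nine-node elements  -> 3362 DOF (matches this paper)
print(dofs(40, 5))   # 40 x 40 fifth-order elements -> 80802 DOF (matches ref. [3])
```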
5,091.8
2020-08-26T00:00:00.000
[ "Engineering" ]
Database for High Throughput Screening Hits (dHITS): a simple tool to retrieve gene specific phenotypes from systematic screens done in yeast

Abstract. In the last decade several collections of Saccharomyces cerevisiae yeast strains have been created. In these collections every gene is modified in a similar manner, such as by a deletion or the addition of a protein tag. Such libraries have enabled a diversity of systematic screens, giving rise to large amounts of information regarding gene functions. However, papers describing such screens often focus on a single gene or a small set of genes, and all other loci affecting the phenotype of choice ('hits') are only mentioned in tables that are provided as supplementary material and are often hard to retrieve or search. To help unify and make such data accessible, we have created a Database of High Throughput Screening Hits (dHITS). The dHITS database enables information to be obtained about screens in which genes of interest were found, as well as the other genes that came up in that screen – all in a readily accessible and downloadable format. The ability to query large lists of genes at the same time provides a platform to easily analyse hits obtained from transcriptional analyses or other screens. We hope that this platform will serve as a tool to facilitate investigation of protein functions for the yeast community.

INTRODUCTION The yeast Saccharomyces cerevisiae (from here on termed yeast) was the first eukaryote to have its full genome sequenced 20 years ago (Barrell et al., 1996). The availability of the full gene tally, with a finite number of only 6000 genes, drove a community effort to create arrayed, genome-wide collections of genetically modified strains, colloquially termed libraries. In general, two types of libraries have since been created. The first are those intended to enable characterization of gene functions by altering gene sequence or levels and measuring/studying the effects of these manipulations. These include the whole-genome deletion library (Giaever et al., 2002; Winzeler et al., 1999), various mutant libraries for essential genes [temperature sensitive alleles (Ben-Aroya et al., 2008; Li et al., 2011); TET-off promoters for repression of transcription (Mnaimneh et al., 2004); destabilization of mRNA for reduced expression (Breslow et al., 2008)] and an overexpression library (Sopko et al., 2006). The second type comprises tagged libraries for following proteins, such as the GFP library (Huh et al., 2003) and the newly made N′ GFP and N′ Cherry libraries (Yofe et al., 2016). The utilization of yeast for screening of gene functions has been practised since the 1980s (Bankaitis, Johnson, & Emr, 1986; Erdmann, Veenhuis, Mertens, & Kunau, 1989; Novick, Field, & Schekman, 1980) and was one of the drivers for making yeast a widely utilized model organism. However, until the creation of systematic, arrayed libraries, screens were performed largely by random mutagenesis or pooled plasmid libraries, and hence were often not comprehensive or exhaustive and were nearly never quantitative. The creation of arrayed yeast libraries opened up a new approach to screening, providing a more systematic and quantitative capacity. For example, the first screens of the whole-genome deletion libraries measured colony sizes in various media or stresses, giving a quantitative, statistically significant value for the ability of each strain to grow in a specific environment (Giaever et al., 2002).
With the advent of more sophisticated robotic setups that enabled integration of additional genetic traits into libraries (Cohen & Schuldiner, 2011; Tong et al., 2001) and measurement of more complex phenotypes using microscopy as readouts, the quantity of screens and their information content grew dramatically. Over the years it has become apparent that most yeast screens fall into one of two categories (Figure 1): 1. An altered-expression library (e.g. the deletion library, the hypomorphic allele library and the overexpression library) is used to search for phenotypes for a given gene. Such phenotypes can be diverse (a few examples include growth rate, drug resistance, secretion and cellular localization/abundance of a query protein). Such screens can often be done with manual approaches and hence are more readily performed. 2. A fluorophore-tagged library (e.g. the GFP library) is used to identify changes in localization or abundance of all proteins under a specific genetic background (e.g. deletion of a gene) or growth condition (e.g. media). Screens such as this require a high content screening setup and are therefore less prevalent. To date, tens of such screens have already been performed and the wealth of information that they provide has been extremely helpful in characterizing protein functions. However, finding information about such screens is often not a trivial task. One reason for this is that 'hit' lists from screens often only show up in supplementary materials of publications and have a variety of different layouts and terminologies. Since such lists do not appear in the abstract or main text, they often do not come up during literature searches and can be missed by someone interested in the function of a specific gene or group of genes. By annotating all the information from such screens, the Saccharomyces genome database (SGD) (SGD, 2018) has created an invaluable resource for the yeast community. While the SGD continues to be the most comprehensive and accurately curated yeast screen database, it is not simple to use it to compare hit lists from several different screens or to search for genes with a similar hit pattern. To make querying this information more accessible and direct, we have created a new platform that concentrates screens of the two above types in a single, easy to use database: dHITS (Database for High Throughput Screen Hits; https://www.dhitsmayalab.tk/firstPage.php (direct entry); http://mayaschuldiner.wixsite.com/schuldinerlab/dhits (alternative address)). [Figure 1: Schematic representation of the two types of screens that are represented in the dHITS database.] The dHITS database has several unique characteristics to optimize its utilization by the yeast community: 1. Querying lists of genes – the dHITS database is built to enable querying large groups of genes for their appearance in screens. This is especially helpful when researchers have lists of genes from deep-sequencing, micro-array or screening efforts and are looking for possible connections to their process of interest. In essence, dHITS enables easy discovery of additional phenotypes for a list of genes that will help give functional predictions to a gene of choice. 2. Curation – the dHITS database is unique in that each high-throughput screen that is represented has been curated to enable easy understanding of both the screen itself and the phenotypes observed. In addition, hit lists are given in an organized, consistent and easy to download format. 3.
Uniscore – one of the distinctive features of the dHITS database is an internal calculation of uniqueness that we term Uniscore. Uniscore gives a numerical value to how many times a gene has appeared in screens. This parameter can be used to differentiate non-specific, pleiotropic effects of a given deletion (low Uniscore) vs. highly specific effects of a given gene on a process of choice (high Uniscore); a minimal illustration is sketched below. For example, we find that 50% of fluorophore tagged proteins have never been altered in expression or localization in any of the curated screens (Figure 2a). Similarly, over 60% of mutant strains have never displayed a phenotype in our curated screens (Figure 2a). 4. Accessibility – since dHITS was built to give all yeast researchers easy access to screening data, one of its main features is the ability to easily download all primary hit lists for each screen, or all the screens for a given gene. We hope that the unique capabilities of dHITS, and the concentration of systematic screens into one searchable database that will continue to grow and evolve as new screens are published, will enable in silico exploration of gene functions. 2. Once a choice is made, users are automatically transferred to the next stage, where they must enter a list of gene names. To maximize ease of use, such a list can be given as systematic gene names (such as YGL020C), standard names (GET1) or mixed, all in a manner that is insensitive to case. Lists can be copied and pasted directly from Excel files, and manual entries must be separated by a paragraph mark. The requirements for correct entry, as well as an example of a potential query, are given. Once the list has been created, pressing the 'Submit' button will retrieve the results. 3. The results page is headed by the number of screens currently available for this type of data and which were mined for this analysis. We hope that with time this number will increase as more such screens become available and as we annotate more of them into dHITS. This is followed by a table that includes several columns: 1. List of genes with their systematic names.

'dHITS' DATABASE CONSTRUCTION In order to collect as many high-throughput screens as possible into the dHITS database, we first uploaded all of the published screens from our laboratory to date. This includes screens of either the deletion/DAmP library or of the GFP library under a variety of genetic and environmental conditions (for a breakdown of the various phenotypes of the GFP tagged strains see Figure 2b). As a next step we mined the literature for similar high-throughput screens, downloaded the tables describing the hits of these screens and unified the data presentation and terminology with our data style. All of the papers that we have currently curated and the numbers of hits that came up in their respective screens are provided in Table 1.

Table 1. Curated screens and numbers of hits:
Role of essential genes in mitochondrial morphogenesis in Saccharomyces cerevisiae (Altmann and Westermann, 2005) – 119 hits
A proteomic screen reveals SCFGrr1 targets that regulate the glycolytic-gluconeogenic switch (Benanti, Cheung, Brady, and Toczyski, 2007) – 163 hits
The lipodystrophy protein seipin is found at endoplasmic reticulum lipid droplet junctions and is important for droplet morphology (Szymanski et al., 2007) – 59 hits
Global screening of genes essential for growth in high-pressure and cold environments: searching for basic adaptive strategies using a yeast deletion library (Abe and Minegishi, 2008) – 80 hits
Comprehensive phenotypic analysis for identification of genes affecting growth under ethanol stress in Saccharomyces cerevisiae (Yoshikawa et al., 2009) – 446 hits
Genome wide analysis reveals novel pathways affecting endoplasmic reticulum homeostasis, protein modification and quality control (Copic et al., 2009) – 72 hits
Imaging-based live cell yeast screen identifies novel factors involved in peroxisome assembly (Wolinski et al., 2009) – 31 hits
The Rpd3L HDAC complex is essential for the heat stress response in yeast (Ruiz-Roig, Vieitez, Posas, and de Nadal, 2010) – 276 hits
Ergosterol content specifies targeting of tail-anchored proteins to mitochondrial outer membranes
The yeast ER-intramembrane protease Ypf1 refines nutrient sensing by regulating transporter abundance (Avci et al., 2014) – 50 hits
Yeast phospholipid biosynthesis is linked to mRNA localization (Hermesh et al., 2014) – 14 hits
Genome-wide screen uncovers novel pathways for tRNA processing and nuclear-cytoplasmic dynamics (Wu, Bao, Chatterjee, Wan, and Hopper, 2015) – 172 hits
Genome-wide screens in Saccharomyces cerevisiae highlight a role for cardiolipin in biogenesis of mitochondrial outer membrane multispan proteins
The Protease Ste24 clears clogged translocons (Ast, Michaelis, and Schuldiner, 2016) – 106 hits
The SND proteins constitute an alternative targeting route to the endoplasmic reticulum (Aviram et al., 2016) – 91 hits
Combining deep sequencing, proteomics, phosphoproteomics, and functional screens to discover novel regulators of sphingolipid homeostasis

Importantly, there were several types of data that we did not integrate into the dHITS database. First, we did not integrate genetic and physical interaction scores, as there are numerous, well-annotated websites that enable mining such data. We also did not include complex phenotypes such as lipidomic, ionomic or metabolomic datasets. Finally, we did not include huge datasets from chemogenomic profiling of the deletion library (Dudley, Janse, Tanay, Shamir, & Church, 2005; Hillenmeyer et al., 2008), as these have their own, easily mineable interface. This is because we wanted to focus the dHITS database on individual screens that had specific, single phenotypic outcomes. Our literature analysis and curation may not have been comprehensive or exhaustive, and new screens are continuously being published, and hence we encourage any laboratory interested in uploading data from their screen to contact us.

SUMMARY We here describe a new database that we have created to organize and categorize two types of whole-genome screens in yeast: those querying the phenotypic consequence of altering a single gene, and those measuring changes in protein abundance or localization of a protein of interest. We hope that this database will become a new platform for integrating hits from future screens as they become available. By pooling information from a multitude of laboratories and approaches into a single, unified, searchable database, we hope to provide a new, powerful tool for investigation of protein functions in the yeast community.
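The paper does not spell out the formula behind Uniscore, so the sketch below is only schematic: it normalizes mixed standard/systematic gene names case-insensitively, as the query page does, and derives a simple uniqueness value that is high for genes rarely appearing as hits and low for frequently recurring, pleiotropic hits. The name map, screen contents and scoring rule are illustrative assumptions.

```python
# Hypothetical name map and screen hit lists; the real database holds the curated screens of Table 1.
STANDARD_TO_SYSTEMATIC = {"GET1": "YGL020C", "SEC61": "YLR378C"}

def normalize(name: str) -> str:
    """Map standard or systematic names (any case) to a systematic name."""
    name = name.strip().upper()
    return STANDARD_TO_SYSTEMATIC.get(name, name)

screens = {
    "screen_A": {"YGL020C", "YLR378C"},
    "screen_B": {"YGL020C"},
    "screen_C": {"YDL126C"},
}

def query(gene_list, screens):
    genes = [normalize(g) for g in gene_list]
    hits = {g: [s for s, members in screens.items() if g in members] for g in genes}
    # Schematic uniqueness value: 1 means never a hit, 0 means a hit in every curated screen.
    uniscore = {g: 1.0 - len(s) / len(screens) for g, s in hits.items()}
    return hits, uniscore

print(query(["get1", "YDL126C"], screens))
```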
2,909.2
2018-05-03T00:00:00.000
[ "Biology", "Computer Science" ]
Achieving 90% In Data-Centric Industry Deep Learning Task

In industry deep learning applications, our manually labeled data contain a certain amount of noisy data. To solve this problem and achieve a score of more than 90 on the dev dataset, we present a simple method to find the noisy data and have humans re-label it, given the model predictions as references during labeling. In this paper, we illustrate our idea for a broad set of deep learning tasks, including classification, sequence tagging, object detection, sequence generation, and click-through rate prediction. The experimental results and human evaluation results verify our idea.

Introduction. In recent years, deep learning [1] models have shown significant improvements in natural language processing (NLP), computer vision and speech processing technologies. However, model performance is limited by the quality of the human labeled data. The main reason is that the human labeled data contain a certain amount of noisy data. Previous work [2] has proposed the simple idea of finding the noisy data and correcting it. In this paper, we first review the way we achieve a score of more than 90 on a classification task, then we further illustrate our idea for sequence tagging, object detection, sequence generation, and click-through rate (CTR) prediction.

Background. In previous work [2], we illustrated our idea with these steps (a minimal code sketch of this loop is given at the end of this paper): 1. It is a text classification task. We have a human labeled dataset-v1. 2. We train a BERT-based [3] deep model upon dataset-v1 and get model-v1. 3. We use model-v1 to predict the classification labels for dataset-v1. 4. If the predicted labels of dataset-v1 do not equal the human labels of dataset-v1, we consider those examples to be noisy data. 5. We have humans label the noisy data again, given the labels of the model and of the original annotators as references. Then we get dataset-v2. 6. We loop these re-labeling steps and get the final dataset. Then we get model-v2. We can further loop these steps to get model-v3.

Sequence tagging. We take named entity recognition (NER) as an example of a sequence-tagging-like task. In the NER task, we extract several classes of key phrases from a sentence. Following our idea, we view each class of the NER task as a classification task. Then our steps are: 1. It is a NER task. We have a human labeled dataset-v1. 2. We train a BERT-based [3] deep model upon dataset-v1 and get model-v1. 3. We use model-v1 to predict the sequence labels of one class for dataset-v1. 4. If the predicted labels of dataset-v1 do not equal the human labels of dataset-v1, we consider those examples to be noisy data. 5. We have humans label the noisy data again, given the labels of the model and of the original annotators as references. Then we get dataset-v2. 6. We loop these re-labeling steps for all the classes of NER and get the final dataset. Then we get model-v2.

Object detection. Object detection is a computer vision technique that allows us to identify and locate objects in an image or video. Following our idea, we view each kind of bounding box as a classification task. Then our steps are: 1. It is an object detection task. We have a human labeled dataset-v1. 2. We train a Swin Transformer [4] upon dataset-v1 and get model-v1. 3. We use model-v1 to predict the bounding boxes of one class for dataset-v1. 4. If the predicted bounding box of dataset-v1 is far from the human labeled bounding box of dataset-v1, we consider those examples to be noisy data. 5.
We have humans label the noisy data again, given the bounding boxes of the model and of the original annotators as references. Then we get dataset-v2. 6. We loop these re-labeling steps for all the classes of object detection and get the final dataset. Then we get model-v2.

Sequence generation. The key step of our idea is how to judge the noisy data. For sequence generation, we can use the BLEU score or another sequence similarity measure. Then our steps are: 1. We take a text generation task as an example. We have a human labeled dataset-v1. 2. We train a deep model upon dataset-v1 and get model-v1. 3. We use model-v1 to predict the generated sentences for dataset-v1. 4. If the generated sentences of dataset-v1 are far (by BLEU score) from the human labeled sentences of dataset-v1, we consider those examples to be noisy data. 5. We have humans label the noisy data again, given the generated sentences of the model and the original human sentences as references. Then we get dataset-v2. 6. We loop these re-labeling steps and get the final dataset. Then we get model-v2.

Click-through rate prediction. For the CTR task, we use the method of [6], which automatically sets the label again for the noisy data. The CTR task is a click-or-not prediction task; we choose a threshold on the gap between the predicted score and the 0/1 online label to judge whether a data point is noisy. In this way, we can improve the AUC on the dev dataset, but the online performance has to be tested online.

Experimental Results. We run experiments on text classification and NER to verify our idea. The results are shown in Table 1 and Table 2. We also ran many other classification and NER tasks on other datasets; the improvement is also significant, and we do not list the detailed results.

Conclusion. We argue that the key point in improving industry deep learning application performance is to correct the noisy data. We propose a simple method to realize this idea and show experimental results that verify it.
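A minimal sketch of the classification version of the loop is given below, assuming a generic TF-IDF plus logistic-regression classifier in place of the BERT-based model used in the paper; find_noisy and request_human_relabel are hypothetical helpers standing in for the model training and the human re-labeling step.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def find_noisy(texts, labels):
    """Train on the current labels, then flag examples where the model disagrees."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(texts)
    model = LogisticRegression(max_iter=1000).fit(X, labels)
    preds = model.predict(X)
    return np.where(preds != np.asarray(labels))[0], preds

def request_human_relabel(idx, texts, human_labels, model_preds):
    """Placeholder: in practice a person reviews each flagged example,
    seeing both the existing human label and the model prediction."""
    return {i: human_labels[i] for i in idx}   # no-op stand-in

# One round of the loop: dataset-v1 -> model-v1 -> flagged examples -> dataset-v2.
texts = ["good product", "terrible service", "great value", "awful quality"]
labels = [1, 0, 1, 1]          # toy dataset-v1; on real data, disagreements flag label noise
noisy_idx, preds = find_noisy(texts, labels)
corrections = request_human_relabel(noisy_idx, texts, labels, preds)
labels_v2 = [corrections.get(i, lab) for i, lab in enumerate(labels)]
# Retrain on (texts, labels_v2) to get model-v2, and repeat until few disagreements remain.
```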
1,290.8
2021-12-17T00:00:00.000
[ "Computer Science" ]
Prediction of unsaturated shear strength from microstructurally based effective stress

This work aims at investigating the adequacy of microstructurally based effective stress to predict the shear strength of unsaturated soils over a wide range of suction. For that purpose, shear strength data are acquired on a silty clay soil through two types of unsaturated triaxial tests: suction controlled triaxial tests and unconsolidated triaxial tests at constant water content. The microstructure of the soil is determined with Mercury Intrusion Porosimetry and is directly used in different expressions of microstructurally based effective stresses available in the literature. The large range of suction tested allows to determine the most consistent expression of the effective stress to reproduce the experimental observations.

Introduction. Shear strength of unsaturated soils can be generally predicted through the definition of an appropriate generalised effective stress [1,2]. It is now well admitted that the generalised effective stress depends on the suction within the sample weighted by an effective stress parameter. This effective stress parameter represents somehow the proportion of the pores where the capillary effects act on the solid grains, and can thus be considered equal to the degree of saturation [3-6] or a function of the degree of saturation [7,8]. However, in remoulded fine-grained soils the compaction process leads to the formation of macropores, where capillary effects predominate, and micropores, where water is attached to the solid through physicochemical bonds [9,10]. Consequently, only the free water partially filling the macroporosity probably plays a role on the generalised effective stress, and in turn on the mechanical behaviour of the materials [8]. Those observations have led to the definition of microstructurally based effective stresses where the suction is weighted by an effective degree of saturation characterizing the prevailing suction that contributes to the effective stress [11-13]. On the other hand, in an initially dried clay sample, the micropores saturate first and then the macropores. That leads to defining an effective degree of saturation that evolves significantly at high degrees of saturation, corresponding to the saturation of the macropores once the micropores are fully saturated. A literature review of the strategies used to define microstructurally based effective stress is presented in section 2. The ability of such microstructurally based effective stress definitions to predict the mechanical behaviour of different soils has already been investigated in [13]. For that purpose, Mercury Intrusion Porosimetry (MIP) was carried out to characterize the ratio between the volume of micropores and the total volume of pores, and in turn the degree of saturation of the macropores. They concluded that the stiffness of unsaturated soils can be efficiently predicted from a generalised effective stress expressed in terms of this degree of saturation of the macropores. Some efforts have also been made to predict the unsaturated shear strength from microstructurally based effective stress. Alonso et al. [8] proposed a first analysis that illustrates the possible capacity of the methodology. However, some elements are not fully considered in this study to provide an integrated analysis of the ability of microstructurally based effective stress: (i) no MIP is performed in this study, and the microstructural state variable is thus only a parameter used to optimize the fitting of the shear strength data, without any experimental validation; (ii) the range of suctions investigated is limited (i.e. only shear strength data for suctions lower than 600 kPa are used to assess the methodology); (iii) shear strength data come from direct shear tests and not from triaxial tests. Consequently, this work aims at investigating the adequacy of microstructurally based effective stress to predict the shear strength of unsaturated soils over a wide range of suctions. To this end, shear strength data are acquired on a silty clay soil through two types of unsaturated triaxial tests: suction controlled triaxial tests (using the axis translation technique) and unconsolidated triaxial tests at constant water content for the highest suction levels. The microstructure of the soil is determined by means of MIP and is directly used in different expressions of the effective stress available in the literature. The large range of suctions tested allows to determine the most consistent expression of the effective stress to reproduce the experimental observations.
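The relations reviewed in the next section can be summarised compactly as follows. These are assumed forms based on the cited works [8, 11-13] and on the quantities named in the text, not the paper's own equations; the smoothed bilinear variant with parameter τ is omitted because its exact expression is not given here.

```latex
% Assumed forms (not verbatim from the paper), following Bishop-type stress and refs [8, 11-13]:
\sigma' = \sigma + \chi \, s \, \mathbf{I},
\qquad
\sigma' = \sigma + S_{r,\mathrm{eff}} \, s \, \mathbf{I}
\qquad \text{(eqs. 1--2)}
\\[6pt]
S_{r,\mathrm{eff}} =
\begin{cases}
0 & \text{if } S_r \le \zeta_m \\[4pt]
\dfrac{S_r - \zeta_m}{1 - \zeta_m} & \text{if } S_r > \zeta_m
\end{cases}
\qquad \text{(bilinear form, eq. 3)}
\\[6pt]
\zeta_m = \dfrac{n_m}{n} \quad \text{(eq. 4)},
\qquad
S_{r,\mathrm{eff}} = S_r^{\alpha} \quad \text{(power law, eq. 6, with } \alpha \approx 4.13\text{)}
```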
Expressions of microstructurally based effective stress Different expressions of microstructurally based effective stress have been already proposed in the literature. They generally consist in an extension of the Bishop generalised effective stress σ': where σ is the total stress tensor, s is the suction, χ is the effective stress parameter and I is the unity tensor. The effective stress parameter should represent somehow the proportion of the pores where the capillary effects act on the solid grains. It was often proposed that the degree of saturation Sr can be used as a possible candidate for χ [1-5;14]. However, this definition of the effective stress parameter disregards the microstructure of clayey soils, for which two groups of pore sizes are identified: the macropores with predominant capillary effects and micropores with predominant physic-chemical effects. The different mechanisms of interactions between water and the solid phase in the two groups of pores lead to different contributing effect on the constitutive stress. As a result, some authors proposed to assume that the effective stress parameter χ is equal to an effective degree of saturation Sr,eff representing the degree of saturation that contributes to the stress measure controlling the mechanical behaviour of clayey soils. The Bishop generalised effective stress becomes thus: The determination of Sr,eff requires thus the knowledge of information on the microstructure of the soils, and the progressive saturation of micropores and macropores respectively. Water entering in an initially dried compacted clayey soil will first saturate the micropores because of the strong affinity of the clay platelets with water. However, the saturation of the micropores controlled by physic-chemical effects does not influence the capillary pressure, and so the constitutive stress. Once the micropores are fully saturated, water will saturate progressively the macropores, with effect on the capillary pressure and so on the constitutive stress. Based on that, some authors [8;11-13] proposed to define the effective degree of saturation by a bilinear expression: where ζm, called the microstructural state variable, represents the ratio between the porosity of the micropores nm and the total porosity n: = A second microstructurally based effective stress proposed in the literature [13] consists in the smoothing of the previous relationship at the transition between the micropores and macropores, which can improve the efficiency of some numerical simulations. It leads to: where τ is a parameter representing the degree of smoothing. This definition assumes that some changes in the effective stress are possible for degree of saturation Sr lower than the state variable ζm. That means that either some macropores start to saturate before the full saturation of the micropores, or the saturation of the micropores leads to some limited changes in the constitutive stress and in turn mechanical behaviour of the soils. Finally, a third relationship is a power law of the degree of saturation proposed in [8]: Here also the smoothed behaviour is beneficial for numerical calculations, but the loss of direct reference to measured microstructural data of the soils can disregard the principle of microstructurally based effective stress. Figure 1 presents a comparison between the different relationships. Material The soil used in this study is a silty clay composed by 35% of clay, 27% of silt and 36% of sand. 
The optimum water content and dry density provided by the standard Proctor compaction test are respectively 15.6% and 1.8 g.cm-3. The density of the solid grains is equal to 2.70 g/cm³.

Preparation of the sample. All the samples used in this study were dynamically compacted at a dry density of 1.8 g.cm-3 and an initial water content of 12%. Compaction at low initial water content generally produces structured samples with dual porosity [10]. The initial total porosity n is therefore equal to 0.31.

Mercury Intrusion Porosimetry (MIP). An initially saturated silty clay soil sample is first dried by freeze-drying so as not to alter the microstructure [15]. A MIP test, in which mercury was intruded up to a maximum pressure of 227 MPa, was conducted on the sample. The pore size distribution of the material can be deduced from the relation between the injection pressure and the volume of mercury injected into the pores.

Soil water retention curve. Two techniques are used to determine the soil water retention curve of the silty clay: the osmotic technique [16] and the chilled-mirror dew-point psychrometer [17]. The osmotic method is a technique for imposing the suction. Samples 35 mm in diameter and 8 mm high are protected by a semi-permeable membrane and then soaked in a solution of polyethylene glycol 20000 (PEG20000) at a given concentration. Only the movement of water molecules is possible through the membrane, and the samples progressively reach the imposed suction. The samples are weighed regularly and equalization is assumed when the sample mass stabilizes. The soil volume is then determined by means of a 3D scanning tool. This technique is used for suctions ranging from 0 to 2 MPa. The chilled-mirror dew-point psychrometer is a technique for measuring suction. Samples 35 mm in diameter and 8 mm in height are air dried to different water contents. The suction is then measured using a WP4C water potential meter. The chilled mirror method determines the relative humidity of the air around the sample installed in a sealed chamber once the sample has come into equilibrium with the vapour in the surrounding air. The dew-point psychrometer is used for the determination of suctions above 2 MPa.

Triaxial test. Samples, 36 mm in diameter and 70 mm high, were dynamically compacted in a mould in 3 layers of the same thickness for the different triaxial tests presented hereafter.

Consolidated and drained (CD) saturated triaxial test. Conventional CD triaxial tests are performed under saturated conditions. The samples are first saturated using a confining pressure and a water pressure at the top of the sample of respectively 60 kPa and 55 kPa. Saturation of the samples is assumed when the Skempton coefficient B reaches a value of 0.95. After consolidation at different normal pressures, the samples are sheared with the same confinement as during the consolidation and at a constant strain rate of 0.007 mm/min (NF P94-074). The cross-sectional area under loading was calculated according to Head [18].

Suction-controlled triaxial test. The axis translation technique is used to impose a constant suction within the samples during shearing [19]. To this end the conventional triaxial cell has been modified. A high-air-entry ceramic disk (HAED) was sealed onto the base pedestal of the cell. It allows separate control of the air pressure and the water pressure when the HAED is fully saturated (i.e. only water can pass through it).
In our study HAED with air entry pressure of 500 kPa has been used. However, dissolved air can pass through HAED and air can form below HAED and progressively desaturate the porous disk. A flushing system is added to remove the air bubbles beneath the porous stone. A spiral drainage groove of 2 mm of diameter and 0.5mm of height is included on the bottom face of the porous disk to ensure an efficient trapping of the air. The samples are first saturated using the same procedure as for the CD triaxial tests. After consolidation, different suctions are imposed through the axis translation technique. Homogeneous suction within the sample is assumed when the volume of water expelled from the sample is lower to 0.1cm 3 per day [20]. Finally shearing of the samples is achieved in drained conditions with a vertical strain rate of 0.025 mm/h [21]. The evolution of the section is determined assuming the change in volume of the confining fluid corresponds to the change in volume of the sample. For each imposed suctions, 3 different confining pressures are imposed and tested. Constant water content triaxial test Constant water content triaxial tests are achieved to analyse the shear strength for suctions higher than 500 kPa. After the same procedure of samples saturation as described above, the samples are dried in a desiccator where evaporation is controlled by means of a saline solution [22]. Such a drying procedure is preferred to the air-drying to limit the gradient of moisture within the specimen and in turn the specimen cracking. The samples are removed from the desiccator when a target mass is reached. It means that the samples are not necessary in equilibrium with the relative humidity imposed when they are removed. The specimen is then covered with a plastic film and an aluminium foil and are wrapped with a paraffin layer for one week to homogenize the water content and prevent evaporation during this step. Finally, the samples are sheared in a triaxial cell under undrained conditions at a constant strain rate of 0.5 mm/min. This high strain rate allows assuming that specimen remain at a constant water content during shearing. Since the impact of confining pressure on shear strength becomes negligible for high suctions, only one confining pressure is tested for each constant water content/suction. Figure 2 presents the pore size distribution of the "ascompacted" silty clay soil. A bimodal distribution can be distinguished. From this figure, the diameter of critical pores marking the transition between micropores and macropores can be determined, and is considered to be equal to 0.15 mm. From the cumulative intrusion curve, it is then possible to calculate the porosity of the macropores nM equal to 0.15. Figure 3 presents the soil water retention curve of the silty clay soil obtained from osmotic technique and chilledmirror dew-point psychrometer. Fig. 3. Soil water retention curve of the silty soil A van Genuchten retention model is fitted on the experimental data. It leads to with λ=1.13 and Pr=100 kPa. Saturated shear strength 3 CD triaxial tests under saturated conditions have been achieved at 3 different confining pressures (60, 100 and 200 kPa). The shear failure is considered occurring at the maximum deviatoric stress q. This value allows to determine the mean effective stress p' (=q/3+σ'3) at failure for each confining pressure (Figure 4). 
It leads to a shear strength criterion of the form q = m·p' + k, with m = 0.80 and k = 19.2 kPa, corresponding to an effective cohesion c' and an effective friction angle ϕ' of 9 kPa and 21°, respectively. Unsaturated shear strength. The unsaturated shear strength has been determined for various suctions from suction-controlled and constant water content triaxial tests. Table 1 summarizes the unsaturated shear strength obtained from the different tests. For the suction-controlled triaxial tests, the degree of saturation Sr is calculated from the van Genuchten retention curve using the imposed suction. For the constant water content triaxial tests, a sample is extracted from the specimen at the end of the test, its water content is determined by weighing, and the suction is then calculated from the van Genuchten model of the soil-water retention curve (eq. 7). Discussion. Using the MIP results and eq. 4, a microstructural state variable ζm equal to 0.52 is obtained. It is then possible to determine the mean effective stress p' at failure for the different unsaturated triaxial tests, using the different definitions of microstructurally based effective stress presented in section 2 (eq. 9), with Sr,eff obtained from equations 3, 5 or 6, respectively. With an appropriate definition of the effective stress for unsaturated conditions, a unique shear failure criterion should be able to predict the effective stress state at failure under both saturated and unsaturated conditions. In this study, the shear failure criterion under saturated conditions has been determined in section 4.3 through saturated CD triaxial tests. It is thus proposed to identify the most suitable expression of microstructurally based effective stress by comparing the effective stress state at failure measured in the different unsaturated triaxial tests (eq. 9) with the prediction obtained from the saturated shear strength criterion. Figures 5 to 7 present the comparison between the experimental effective stress state at failure for the unsaturated triaxial tests and the prediction of the stress at failure using the microstructurally based effective stress definitions and the saturated shear strength criterion. The bilinear expression (eq. 3) is used in Figure 5. In Figure 6, a smoothed expression of the bilinear equation (eq. 5) is considered, with the smoothing coefficient τ fitted on the experimental data and equal to 8. Finally, a power law expression (eq. 6) is considered, with a coefficient α equal to 4.13. The results highlight, first, that the three models seem suitable to predict the unsaturated shear strength for low values of suction (< 200 kPa). They then demonstrate the inability of the bilinear equation to predict the unsaturated shear strength for large suctions (and thus large p'). The smoothed bilinear relation and the power law expression both seem to be good candidates for a suitable expression of a microstructurally based effective stress over the full range of suctions tested in this study. Finally, for each unsaturated triaxial test, the "back-calculated effective stress parameter" required to fulfil the uniqueness of the effective-stress-dependent shear failure criterion under both saturated and unsaturated conditions is calculated from qexp,failure and pfailure, the experimental deviatoric stress and the total mean stress at failure determined in the unsaturated triaxial tests. Conclusions. This study aims at investigating the adequacy of microstructurally based effective stress to predict the shear strength of unsaturated soils.
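As an illustration of how the saturated failure criterion relates to the effective cohesion and friction angle, the sketch below fits q = m·p′ + k to failure points and converts the fitted slope and intercept to ϕ′ and c′ using the standard triaxial-compression relations. The q values are illustrative (back-computed from the reported m and k), since the measured values are not listed in the text.

```python
import numpy as np

# Illustrative failure states consistent with the reported criterion q = m*p' + k
# (m = 0.80, k = 19.2 kPa); only the confining pressures are taken from the paper.
sigma3_eff = np.array([60.0, 100.0, 200.0])          # effective confining stress [kPa]
q_fail = np.array([91.6, 135.3, 244.4])              # deviatoric stress at failure [kPa]

p_eff = q_fail / 3.0 + sigma3_eff                    # p' = q/3 + sigma'_3
m, k = np.polyfit(p_eff, q_fail, 1)                  # linear criterion q = m*p' + k

sin_phi = 3.0 * m / (6.0 + m)                        # triaxial-compression conversion
phi = np.degrees(np.arcsin(sin_phi))
c = k * (3.0 - sin_phi) / (6.0 * np.cos(np.radians(phi)))
print(f"m = {m:.2f}, k = {k:.1f} kPa  ->  phi' = {phi:.0f} deg, c' = {c:.0f} kPa")
```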
The main contribution of this work is to assess the suitability of the microstructurally based expressions over a wide range of suctions. To this end, shear strength data are obtained on a silty clay soil through suction-controlled triaxial tests and unconsolidated triaxial tests at constant water content. The microstructure of the soil is determined by means of mercury intrusion porosimetry and is used directly in different expressions of the effective stress available in the literature. The results illustrate the adequacy of a power law expression of the effective stress for predicting the unsaturated shear strength. However, even though such an expression rests on a physical explanation of the role of the microstructure in the effective stress, and in turn in the shear strength, there is no direct reference to measured microstructural data in this model. Its capacity to predict the unsaturated shear strength solely from microstructural data is therefore limited. On the other hand, the bilinear equation uses the microstructural data of the soil directly and is more physically based, but fails to reproduce the experimental unsaturated shear strength. This can be explained by saturation processes of the microstructure that are more complex than the one described in the introduction. The saturation of the micropores is probably not associated with physico-chemical processes alone; it can slightly modify the capillary pressures and thus affect the effective stress, even at small degrees of saturation. In addition, the bilinear formulation (like the two other microstructurally based effective stresses) neglects the interfacial forces between air and water. Nikooee et al. [23] and Likos [24] showed that interfacial effects are much more pronounced at high suctions and in fine-grained soils. In Fig. 5, the main discrepancies are indeed observed at high suctions. Such an effect should therefore be considered in future work to validate the expression of a microstructurally based effective stress for large suctions.
5,618.2
2019-06-01T00:00:00.000
[ "Geology" ]
A Machine Learning Approach for the Prediction of Traumatic Brain Injury Induced Coagulopathy Background: Traumatic brain injury-induced coagulopathy (TBI-IC), is a disease with poor prognosis and increased mortality rate. Objectives: Our study aimed to identify predictors as well as develop machine learning (ML) models to predict the risk of coagulopathy in this population. Methods: ML models were developed and validated based on two public databases named Medical Information Mart for Intensive Care (MIMIC)-IV and the eICU Collaborative Research Database (eICU-CRD). Candidate predictors, including demographics, family history, comorbidities, vital signs, laboratory findings, injury type, therapy strategy and scoring system were included. Models were compared on area under the curve (AUC), accuracy, sensitivity, specificity, positive and negative predictive values, and decision curve analysis (DCA) curve. Results: Of 999 patients in MIMIC-IV included in the final cohort, a total of 493 (49.35%) patients developed coagulopathy following TBI. Recursive feature elimination (RFE) selected 15 variables, including international normalized ratio (INR), prothrombin time (PT), sepsis related organ failure assessment (SOFA), activated partial thromboplastin time (APTT), platelet (PLT), hematocrit (HCT), red blood cell (RBC), hemoglobin (HGB), blood urea nitrogen (BUN), red blood cell volume distribution width (RDW), creatinine (CRE), congestive heart failure, myocardial infarction, sodium, and blood transfusion. The external validation in eICU-CRD demonstrated that adapting boosting (Ada) model had the highest AUC of 0.924 (95% CI: 0.902–0.943). Furthermore, in the DCA curve, the Ada model and the extreme Gradient Boosting (XGB) model had relatively higher net benefits (ie, the correct classification of coagulopathy considering a trade-off between false- negatives and false-positives)—over other models across a range of threshold probability values. Conclusions: The ML models, as indicated by our study, can be used to predict the incidence of TBI-IC in the intensive care unit (ICU). INTRODUCTION Traumatic brain injury (TBI) is still one of the leading causes of death and disability worldwide with over 10 million people hospitalized every year (1). It is common to witness the alterations of the coagulative system and disturbed coagulation function in TBI patients. Results from previous studies indicated that two in three patients with severe TBI manifested coagulation system abnormalities upon admission to the emergency department, and then continued to worsen (2,3). And the overall mortality of TBI-induced coagulopathy (TBI-IC) attains 17-86% (4)(5)(6). TBI-IC is characterized by both hypocoagulopathy with prolonged bleeding or hyper-coagulopathy with an increased prothrombotic tendency, or both (4,7). Previous study unearthed that coagulopathy following TBI was related to higher mortality and prolonged intensive care unit (ICU) stay (8). In early stage, potential mechanisms include the dysfunction of the coagulation cascade and hyperfibrinolysis, both of which contribute to hemorrhagic progression. Later, a poorly defined prothrombotic stage emerges, partly caused by fibrinolysis shutdown and hyperactive platelets (9)(10)(11). Undoubtedly, it is imperative to promote the early identification of TBI-IC in a timely way. Laboratory assays, including international normalized ratio (INR) and thromboelastogram are widely used to diagnose TBI-IC. 
Nonetheless, these assays have limited value in predicting coagulopathy before it develops. In recent years, as a field of artificial intelligence, machine learning (ML) is able to learn from data based on computational modeling. Likewise, ML can fit high-order relationships between covariates and outcomes in data-rich environments (12)(13)(14). This study aimed to determine whether ML algorithms using demographic, comorbidities, laboratory examinations and other variables could predict TBI-IC with considerable accuracy and identify factors contributing to the prediction power. Data Source We conducted this retrospective study based on two sizeable critical care databases, the Medical Information Mart for Intensive Care (MIMIC)-IV version 1.0 (15) and eICU Collaborative Research Database (eICU-CRD) version 1.2 (16). In brief, the MIMIC-IV database, an updated version of MIMIC-III, incorporated comprehensive, de-identified data of patients admitted to the ICUs at the Beth Israel Deaconess Medical Center in Boston, Massachusetts, between 2008 and 2019, containing data from 383220 distinct admissions (single center). The other database, eICU-CRD, was a multicenter, freely available, sizeable database with de-identified high granularity health data associated for over 200,000 admissions to ICUs across the United States between 2014 and 2015. This study was approved by the Institutional Review Boards of Beth Israel Deaconess Medical Center (Boston, MA) and the Massachusetts Institute of Technology (Cambridge, MA). Requirement for individual patient consent was waived because the study did not impact clinical care and all protected health information was deidentified. One author (CP) has obtained access to both databases and was responsible for data extraction (Certification number: 41657645). The study was reported in accordance to the REporting of studies Conducted using Observational Routinely collected health Data (RECORD) statement (17). Participant Selection Inclusion criteria were patients with moderate and severe TBI [msTBI: defined as Glasgow Coma Score (GCS) =< 12]. People with an age of less than 16 years old, ICU stays less than 48 h, and no coagulation index within 24 h of ICU admission were excluded from the study. Moreover, for patients with ICU admissions more than once, only data of the first ICU admission of the first hospitalization were included in the analysis. Predictors of Coagulopathy A total of 53 predictor variables for the ML models were initially included. Specifically, in this study, the data were extracted from MIMIC-IV and eICU-CRD including age, gender, race, family history of stroke. Coexisting disorders were also collected based on the recorded International Classification of Diseases (ICD)-9 and ICD-10 codes. Then, the Charlson comorbidity index (CCI) was calculated from its component variables [myocardial infarction, congestive heart failure, peripheral vascular disease, cerebrovascular disease, dementia, chronic pulmonary disease, rheumatic disease, peptic ulcer disease, diabetes, paraplegia, renal disease, malignant cancer, severe liver disease, metastatic solid tumor and acquired immunodeficiency syndrome (AIDS)]. Lastly, we extracted data containing vital signs, laboratory findings, injury type, different therapy strategies and scoring system on the first day of ICU admission. Details of missing data can be seen in Supplementary Table 1. 
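A minimal sketch of the cohort filters described above, written in Python/pandas; the actual extraction was performed against the MIMIC-IV and eICU-CRD databases, and the file and column names below (icu_stays_extract.csv, gcs_min, icu_los_hours, has_coag_labs_24h, icu_stay_rank) are hypothetical stand-ins rather than the real schema.

```python
import pandas as pd

# Hypothetical flat extract of ICU stays with one row per stay.
stays = pd.read_csv("icu_stays_extract.csv")

cohort = stays[
    (stays["gcs_min"] <= 12)              # moderate/severe TBI (GCS <= 12)
    & (stays["age"] >= 16)                # exclude patients younger than 16
    & (stays["icu_los_hours"] >= 48)      # exclude ICU stays shorter than 48 h
    & (stays["has_coag_labs_24h"] == 1)   # coagulation index within 24 h of admission
    & (stays["icu_stay_rank"] == 1)       # first ICU stay of the first hospitalization
]
print(f"{len(cohort)} stays retained in the final cohort")
```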
Statistical Analysis Values were presented as the means with standard deviations (if normal) or medians with interquartile ranges (IQR) (if non-normal) for continuous variables, and total numbers with percentages for categorical variables. Proportions were compared using χ² test or Fisher exact tests while continuous variables were compared using the t test or Wilcoxon rank sum test, as appropriate. In this study, recursive feature elimination (RFE) as a feature selection method was used to select the most relevant features. In short, RFE recursively fits a model based on smaller feature sets until a specified termination criterion is reached. In each loop, in the trained model, features were ranked based on their importance. Finally, dependency and collinearity were eliminated. Features were then considered in groups of 15/25/35/45/ALL (ALL = 53 variables, as represented in Figure 1) organized by the ranks obtained after the feature selection method. To find the optimal hyperparameters, 10fold cross-validation was used as a resampling method. In each iteration, every nine folds were used as training subset, and the remaining one fold was processed to tune the hyperparameters. This training-testing process was repeated thirty times. And in this way, each sample would be involved in the training model, and also participated in the testing model, so that all data were used to the greatest extent. In this study, we employed seven diverse ML algorithms to develop models, containing artificial neural network (NNET), naïve bayes (NB), gradient boosting machine (GBM), adapting boosting (Ada), random forest (RF), bagged trees (BT), and eXtreme Gradient Boosting (XGB). Initially, we conducted internal validation on the development sets to quantify optimism in the predictive performance and evaluate stability of the prediction model. Bootstrap Resampling technique with 100 iterations was used to evaluate the internal validity of each model. External validation of the models was performed in eICU-CRD. All the models were assessed in multiple dimensions regarding their model performance. The median and 95% confidence intervals of area under the curve (AUC) were calculated, where an AUC value of 1.0 means perfect discrimination and 0.5 represents no discrimination. And the accuracy, sensitivity, specificity, negative predictive value, and positive predictive value were also calculated. Additionally, to determine the clinical usefulness of the included variables by quantifying the net benefit at different threshold probabilities, we conducted the decision curve analysis (DCA) (19). Finally, the "Shiny" package in the R was used to construct a visual data analysis platform. All analyses were performed by the statistical software packages R version 4.0.2 (http://www.R-project.org, The R Foundation). In our study, we used the "Caret" R packages to achieve the process. P values less than 0.05 (two-sided test) were considered as statistically significant. Baseline Characteristics Variable values on the first day of the TBI patients in MIMIC-IV were analyzed. As shown in Figure 1 and Table 1 of 5717 TBI patients in MIMIC-IV, 999 were included in the final cohort. A total of 493 patients developed coagulopathy, whereas 506 patients did not. A cohort of 285 patients with coagulopathy following TBI in eICU-CRD was included as external dataset (Supplementary Table 2). The process of data extraction, training preparation, data testing via different ML algorithms is depicted in Figure 1. 
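The study's pipeline was implemented with the caret package in R; the sketch below reproduces the same idea (recursive feature elimination down to 15 predictors, then cross-validated evaluation of a boosting learner) in Python with scikit-learn on synthetic data, purely to illustrate the workflow rather than the published models.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 53 candidate predictors and the coagulopathy label.
X, y = make_classification(n_samples=1000, n_features=53, n_informative=15, random_state=0)

# Recursive feature elimination down to 15 predictors, as in the paper.
selector = RFE(GradientBoostingClassifier(random_state=0), n_features_to_select=15).fit(X, y)
X_sel = X[:, selector.support_]

# 10-fold cross-validated AUC for one of the candidate learners (AdaBoost).
auc = cross_val_score(AdaBoostClassifier(random_state=0), X_sel, y, cv=10, scoring="roc_auc")
print(f"AdaBoost 10-fold AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```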
Patients who developed coagulopathy were more likely to be female; to have a family history of stroke, myocardial infarction, congestive heart failure, peripheral vascular disease, cerebrovascular disease, renal disease, malignant cancer, severe liver disease or metastatic solid tumor; to have a higher CCI, heart rate, respiratory rate, red blood cell volume distribution width (RDW), INR, lactate, base excess (BE), FiO2, chloride, sodium, glucose, creatinine (CRE) and blood urea nitrogen (BUN), a higher rate of blood transfusion, and higher sepsis-related organ failure assessment (SOFA) and acute physiology score III (APSIII) scores; and to have a longer APTT, prothrombin time (PT) and duration of mechanical ventilation (MV). Furthermore, they were less likely to have dementia or cerebral contusion, and had lower temperature, mean arterial pressure (MAP), red blood cell (RBC) count, white blood cell (WBC) count, hemoglobin (HGB), PLT, hematocrit (HCT), pH, bicarbonate, PaO2/FiO2, calcium, urine output, and GCS. Variable Importance A total of 15 important predictors (Figure 2) were selected by the RFE algorithm, including INR, PT, SOFA, APTT, PLT, HCT, RBC, HGB, BUN, RDW, CRE, congestive heart failure, myocardial infarction, sodium, and blood transfusion. These 15 variables were then used in all subsequent analyses for all models in both the training and testing sets. Prediction Performance in eICU-CRD The discriminatory abilities of all models for the prediction of coagulopathy are shown in Figure 3 and Table 2. Within the training set, the NNET, NB, GBM, Ada, RF, BT and XGB models were all trained and evaluated. As shown in Figure 4, the net benefits of the Ada and XGB models surpassed those of the other ML models, including NB, for all threshold values, showing that these two models were superior in predicting TBI-IC in this cohort. Figure 5 shows the importance of the predictor variables in the ML models. Each variable included in the study had varying importance for TBI-IC depending on the ML approach. Overall, the coagulation profile (PLT, INR, PT) had relatively higher importance across all ML algorithms, followed by APTT, SOFA, and so forth. DISCUSSION Altered hemostasis and hemorrhagic progression are substantial challenges in the clinical management of TBI. Patients with TBI-IC are at a higher risk of death than those with normal coagulation. Studies enabling the rapid prediction of TBI-IC are therefore warranted. In this sense, our study developed and validated ML models, providing an accurate predictive tool for coagulopathy in TBI patients. Specifically, seven ML models (NNET, NB, GBM, Ada, RF, BT and XGB) were used to predict TBI-IC using variables frequently available in clinical practice. Concerning predictive performance, the Ada model outperformed the remaining models. Moreover, results from the DCA indicated that the Ada and XGB models had higher net benefits over a range of threshold probability values than the other models. Notably, this study combined preoperative characteristics, comorbidities, and laboratory findings beyond the coagulation profile to establish a prediction model. To help clinicians use the model, a calculator with a user-friendly interface was developed; after the variables are entered, the predicted risk of TBI-IC is shown. The explanation of the ML model at the individual level was consistent with the aforementioned explanations at the feature level, and the black-box concern was thereby mitigated to a certain extent.
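The decision-curve analysis referred to above compares net benefit across threshold probabilities. A minimal sketch of the standard net-benefit calculation used in DCA is given below on toy predictions; it is not the authors' code and the numbers are illustrative only.

```python
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    """Decision-curve net benefit at one threshold probability pt:
    TP/n - FP/n * pt/(1 - pt)."""
    pred = y_prob >= threshold
    n = len(y_true)
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    return tp / n - fp / n * threshold / (1.0 - threshold)

# Toy labels and predicted probabilities; compare against the "treat-all" strategy.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 500)
prob = np.clip(y * 0.3 + rng.normal(0.4, 0.2, 500), 0.01, 0.99)

for pt in (0.2, 0.4, 0.6):
    nb_model = net_benefit(y, prob, pt)
    nb_all = y.mean() - (1 - y.mean()) * pt / (1 - pt)   # treat everyone
    print(f"pt={pt:.1f}: model {nb_model:+.3f}, treat-all {nb_all:+.3f}")
```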
Notably, these results facilitate correct clinical decisions and, more importantly, a timely treatment strategy. A previous study by Cosgriff et al. (20) developed a simple score to predict trauma-induced coagulopathy (TIC) using four binary predictors [systolic blood pressure < 70 mm Hg, temperature < 34 °C, pH < 7.1, and Injury Severity Score (ISS) > 25]. However, because the ISS cannot be obtained at the time of decision making, the application of such a score is limited. To predict TIC more accurately, two scores have been developed from prehospital information (21, 22). Mitra et al.'s score used 5 predictors (entrapment; systolic blood pressure < 100 mm Hg; temperature < 34 °C; suspected abdominal or pelvic injury; and chest decompression), whereas Peltan et al.'s score employed 6 predictors (age, injury mechanism, prehospital shock index ≥ 1, GCS, and need for prehospital tracheal intubation and/or cardiopulmonary resuscitation (CPR)) (21, 22). Nevertheless, in new patients, both scores achieved only moderate performance, with sensitivity < 30%. Additionally, the Trauma Induced Coagulopathy Clinical Score (TICCS) employed three components, namely general severity, blood pressure, and extent of significant injuries, to predict TIC (23). A major limitation of the above scores is that much of the prognostic potential of the available information is lost by limiting the number of predictors and dichotomizing continuous variables. Consequently, a novel predictive model for the early identification of TIC was established (predictors: heart rate, systolic blood pressure, temperature, hemothorax, Focused Assessment with Sonography for Trauma (FAST) result, unstable pelvic fracture, long bone fracture, GCS, lactate, base deficit, pH, mechanism of injury, energy) (24). However, it is worth noting that this previous study focused on the entire trauma population, not TBI patients in particular, which added some confusion. By interpreting the full model, it was found that many clinical variables can contribute to predicting the risk of TBI-IC. In this study, the coagulation profile (INR, PT, APTT) was found to be the most important variable in predicting TBI-IC, followed by SOFA, the blood routine test (PLT, RBC, HCT, HGB, RDW), renal function (BUN and CRE), comorbidities (congestive heart failure, myocardial infarction) and so forth. Among the fifteen included variables, SOFA was an important predictor. SOFA is an indicator of multiple organ dysfunction, covering the respiratory system, nervous system, cardiovascular system, liver, coagulation and kidney (25). A potential mechanism is that higher SOFA scores are more likely to indicate liver or cardiovascular failure; such organ failures carry a high tendency to bleed, subsequently leading to coagulopathy (26). In this study, PLT, RBC, HCT, HGB and RDW were important predictors of TBI-IC. In a prospective observational study by Davis PK et al. (27), PLT dysfunction was an early marker of TBI-IC. A potential mechanism is blood dilution arising from the use of coagulation factor products (28). Nevertheless, we cannot exclude the possibility that the blood coagulation system was activated by the continuing bleeding itself (29). RDW, a parameter of red blood cell volume, measures the variability in size of circulating erythrocytes.
Although primarily used to diagnose different types of anemias, the RDW was also associated with various thrombotic disease processes including venous thromboembolism (VTE) (30,31). Although the underlying mechanism is unclear, it is speculated that inflammatory factors destroy the vascular endothelial integrity, subsequently changing the glycoprotein and ion channel structure of the erythrocyte membrane (32,33). Consequently, the deformability of the RBC is reduced, in turn, further enables endothelial damage to increase, causing the release of tissue factors that activate the coagulation pathway and triggers disseminated intravascular coagulation (DIC) (34). In this study we found that renal function indicators (BUN and CRE) can help to indicate the risk of TBI-IC. Similarly, a ML model developed by Zhao QY et al. also identified renal function, including urine output and CRE to predict sepsis-induced coagulopathy (SIC) (35). It is worth noting that renal dysfunction has been associated with both thrombotic and hemorrhagic complications (36,37). Potential mechanism included less adenosine diphosphate (ADP) and serotonin storage in PLT of patients with renal dysfunction (38,39). Taken together, the force of impact at the time of TBI can cause shearing of large and small vessels, and result in subdural, subarachnoid, or intracerebral hemorrhages, or a combination of different types. TBI-associated factors might then alter the intricate balance between bleeding and thrombosis formation, leading to coagulopathy (9). Indeed, the complex interactions between the PLT dysfunction, changes in endogenous procoagulant, anticoagulant factors, endothelial cell activation, hypoperfusion, and inflammation related to TBI-IC remain to be elucidated (9,40,41). The strengths of this study lied in the fact that it applied modern ML approaches to predict TBI-IC. It is worth noting that early and accurate prediction of TBI-IC can provide more time for clinicians to adjust corresponding treatment strategies. For example, this model is applicable if detailed medical history is not available for intubated severe head-injured ICU patient. Furthermore, given the heterogeneity of TBI-IC phenotypes (bleeding/thrombotic tendencies), timely treatment strategy would still require investigation and further testing to determine the type and therefore appropriate treatment. Furthermore, it was based on a real-world data with multicenter and external validation, which heighted the reliability of the performance of ML models. Besides, all the information in this dataset was coded independently of the practitioner, making it a reliable source. Our study had limitations, consistent with those inherent to many large administrative database studies. First, only TBI-IC adults in ICUs were included, while TBI-IC children and hospitalized TBI-IC cases were not analyzed. Nevertheless, in light of the immaturity of the coagulation system in children, more research is indeed required. Second, derived from the ICU participants, the results of our study cannot be generalized to other population, and we did not obtain information including laboratory testing and interventions before ICU admission, which may cause confounders to some extent. Although our models can screen out patients who are at a high risk of TBI-IC, it is the surgeons who decide the administration of anticoagulant therapy. Usually, the interventions are time sensitive and need to occur early after admission, starting in the emergency department. 
Third, some new coagulation markers, for example, thrombin-antithrombin-III complex and plasmin-α2-antiplasmin complex, are useful in coagulopathy diagnosis (42,43). Nevertheless, these indicators were not recorded in the MIMIC-IV and eICU database. This was also the case for viscoelastic coagulation testing [Thrombelastograghy (TEG), Rotational thromboelastometry (ROTEM), ClotPro]. Although these testings can provide detailed coagulopathy diagnosis rapidly and have multiple advantages over the traditional plasma-based coagulation tests (PT, APTT, INR), unfortunately, the above indicators were not included in these two databases. Fourth, we did not obtain the results of cranial Computer Tomography (CT) scans in this study, consequently, the original Corticosteroid Randomization After Significant Head Injury (CRASH)-CT score was not available. Moreover, as an administrative database, there was possibility for misclassification of TBI, to reduce bias caused by imprecise coding, we adopted the extensively used ICD-9, 10 codes. Fifth, as with all potential retrospective studies, there was a potential for unmeasured confounders, causing selection bias. Another major limitation worth noting was the changing nature of the variables in a critically ill patient from time of injury and right throughout the continuum of care to ICU discharge. The nature of the retrospective database did not allow for correction for when measurements were taken in relation to the time of injury. Lastly, although our study deeply explored the coagulopathy of TBI in the ICU settings, other outcomes, such as long-term incidence, are also needed further investigation. CONCLUSIONS In general, the present study suggested that some important features were potentially related to the TBI-IC. The ML model processed large number of variables and subsequently discriminated TBI patients who would and would not develop coagulopathy, facilitating the implement of timely yet efficient treatments. In the future, further validation regarding its clinical application value will become a necessity. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: These data can be found here: https://mimic-iv.mit.edu/; https://eicu-crd.mit.edu/. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Institutional Review Boards of Beth Israel Deaconess Medical Center (Boston, MA) and the Massachusetts Institute of Technology (Cambridge, MA). Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. AUTHOR CONTRIBUTIONS FY, CP, and LP: Conception and design. YL: Administrative support. JW: Collection and assembly of data. FY and CP: Data analysis and interpretation. All authors: Manuscript writing and final approval.
4,792.2
2021-12-10T00:00:00.000
[ "Medicine", "Computer Science" ]
Onomatopoeia and Cat Vocalisations Onomatopoeia—the imitation of natural sounds—is a common phenomenon in human language, though imitations of the same sounds might appear different cross-linguistically. It is true that onomatopoeia is not like ordinary language, but how does it differ from natural vocalisation? While the distinction between onomatopoeia and ordinary language has received ample treatment, its difference from natural sounds have so far received less attention from linguistics. This study aims to investigate the phonetic differences between onomatopoeic cat sounds in ten languages and natural cat vocalisations. The findings show some segmental and phonotactical distinctions due to the direct representation of these words regarding their meanings, which clearly indicates that this phenomenon in world languages is not arbitrary and offers strong evidence of iconicity. While arbitrariness is the norm in human language and has an essential impact on language development, there are clearly some nonarbitrary aspects of human language, and onomatopoeia is notable among them. Introduction Onomatopoeia, with its odd spelling, refers to the "process of forming words whose phonetic form is perceived as imitating a sound, or sound associated with something, that they denote" (Matthews, 1997, p. 276). Its Greek root indicates that onomatopoeia is indeed the making (poiein) of a name or word (onoma) from natural sounds (Chalker & Weiner, 1998). Consider how the sound made by a bee resembles the word buzz, and how this close resemblance between onomatopoeia and a natural sound poses a significant challenge to de Saussure's notion of the arbitrariness of language (Saussure, 1959). These are imitative words, or at least part of natural sounds (Saussure, 1959). Cross-linguistically, onomatopoeia exists in all the languages of the world, and some scholars, actually, consider onomatopoeic words to have been the earliest words spoken by humans during the development of human language, due to the fact that direct imitation allows the listener to straightforwardly recognise the meaning, hence it very clearly to describe animals such as a rooster with cock-a-doodle-do, which formed the bulk of the conversations between primordial humans (Yaqubi, Tahir, & Amini, 2018). Therefore, this specific linguistic phenomenon is the focus of this study. The aim is to investigate cross-linguistic onomatopoeia of cat sounds in relation to cats' natural vocalisations. To achieve this aim, three research questions have been taken into consideration: • How do the onomatopoeic words for cat sounds and cat vocalisations differ phonetically? • How are onomatopoeia and cat vocalisation related? • What is the impact of this relationship on the notion of arbitrariness in language? The next section provides a basic theoretical background with regard to onomatopoeia and animal vocalisation, with a specific focus on cat vocalisation, after which there is a section demonstrating in detail the cross-linguistic onomatopoeic word and cat vocalisation dataset used in this research and highlighting the phonetic differences with a thorough analysis. The fourth section presents the study's discussion, while the conclusion, presented in the final section, provides a summary of the study and its findings. Theoretical Background Theory on the arbitrariness of language begins in the 1900s, with the claim made by Ferdinand de Saussure that all human languages are in fact arbitrary (Saussure, 1959). 
World languages are based on a process of naming. More precisely, things are linked to a particular name or word (Dofs, 2008). Two elements are involved in this process, the signifier (a sound image) and the signified (a concept), resulting in a linguistic sign that is normally accepted by the speakers of the same language as part of ordinary language (Dofs, 2008). Indeed, the diversity of words representing the same things in different languages signifies that the association between the signifier and the signified is indeed arbitrary (Dofs, 2008). Variety of meaning is also possible, as one signified can have several signifiers, and vice versa, leading to the notion of ambiguity in language (Driscoll et al., 2009). Arbitrariness made it possible for humans to develop more complex languages than animals, as it is easier to learn the sound-meaning pairs of a given language that can be combined in several ways than to develop an inherent system for communication in numerous diverse contexts (Klages, 2001). Arbitrariness is therefore the norm in human language, as far as the basic connection between form and meaning is concerned (Valli & Lucas, 2000). However, there are many non-arbitrary aspects to language. Non-arbitrariness can be found in other domains of language, the most notable and obvious being onomatopoeic words, which are words imitative of "natural sounds or have meanings that are associated with such sounds of nature" (Valli & Lucas, 2000, p. 276). Yet, in the case of such onomatopoeic words, a debate regarding arbitrariness is to be found: while "the form is largely determined by the meaning, the form is not an exact copy of the natural sound" (Valli & Lucas, 2000, p. 277). For instance, English speakers have simply conventionalised cock-a-doodle-doo as the sound of roosters, even though roosters do not actually produce exactly that sound. Furthermore, when different languages imitate the same sound, they have to make use of their own linguistic resources (Dofs, 2008). Different languages admit different sound combinations, so even the same natural sound may end up with a different form cross-linguistically, even though each form is somewhat imitative (Valli & Lucas, 2000). For instance, a rooster sound in Mandarin Chinese is kukuku, even though rooster vocalisation is apparently the same in China and America. As it is widely recognised that human language begins through human mimicry of naturally occurring sounds or movements, Peirce indicates that human language is iconic from the beginning (Peirce, 1867). Miller indicates that humans tend to connect certain sounds to certain actions and objects, which is somewhat related to modern theories of sound symbolism, but claims that the instinct to connect sound to semantic fields disappeared as a result of language development (Abelin, 1999, p. 18). Some languages might be considered more iconic than others, because the use of onomatopoeia is constrained by the phonemes of the language (Dofs, 2008). As a result of such linguistic restrictions, a word such as 'crash' would not be possible in Japanese, because Japanese syllables cannot begin with a sequence like kr or end with a consonant like sh, and so on (Gasser, 2006). Iconic words, such as 'bow-wow', which is an imitation of a dog's bark, changed form and broadened in meaning, and finally 'bow-wow' became the representation for a dog (Corballis, 1999).
The natural resemblance between a sign and the concept or object in the real world to which it refers can be classified as iconicity (Dofs, 2008). Iconicity is likeness to a concept (which includes our own impressions and ideas about something) corresponding to an object in our own perception of the world, and we all perceive the world in different ways (Dofs, 2008). The cat, which is the focus of this study, is a very common pet around the world, with millions of individual cats having lived with humans over more than 10,000 years (Turner & Bateson, 2000; Moelk, 1944). Cat vocalisations were first classified by Moelk (1944), revealing the phonetic characteristics of domestic cats, followed by several pieces of research focusing on the acoustic characteristics of cat vocalisations (McKinley, 1982; Nicastro, 2004; Schötz & Eklund, 2011; Schötz, 2012, 2013, 2015). Building on this, a phonetic typology of cat vocalisations, along with their phonetic descriptions, was established (Schötz, Weijer, & Eklund, 2017). Table 1 shows a classification of the most common cat vocalisation types and subtypes. Among the various cat vocalisations, this study is largely based on the meow types, as will be discussed further, in detail, in the Discussion section. Onomatopoeia and Cat Vocalisations Most onomatopoeic words are neglected in dictionaries, although some languages have specific ones, like the English and Spanish Dictionary of onomatopoeic sounds, tones and noises in English and Spanish (Kloe, 1977). The data compilation is taken from various studies and general dictionaries. The languages chosen for the current research were not randomly selected: each language comes from a different family, in order to ensure the multiplicity of onomatopoeic words and yield adequate findings. Table 2 shows the onomatopoeic word for cat sounds in the various languages used in this study. Concerning the cat vocalisations, as mentioned in the background section, previous studies and analyses have identified the most common cat vocalisations based on recordings and developed a comprehensive phonetic typology, along with the phonetic descriptions shown in Table 1. It should be noted that only the meow types, along with the phonetic transcriptions shown in Table 3, are the focus of this study, since most of the onomatopoeic words are indeed an imitation of this type. The two sets of data can therefore be used to analyse phonetically the cross-linguistic onomatopoeic cat sounds in contrast to the phonetic structuring of natural cat vocalisations (Schötz, Weijer, & Eklund, 2017). It can be clearly seen that most of the onomatopoeic words are indeed imitations of the cat meow types presented in Table 3, which is the sound produced with an opening-closing mouth. The meow types include several subtypes: mew, squeak, moan and meow. Discussion When different languages attempt to imitate the same sound, they are restricted to using their own linguistic resources, so that even the same natural sound ends up with a variety of forms in different languages. Generally speaking, the onomatopoeic words for cat sounds constitute the strongest evidence for non-arbitrariness, in which the meaning is clearly predictable from the form, as seen in the study findings.
Indeed, onomatopoeic cat sounds are extremely non-arbitrary, since the connections between the form and the meaning show a perfect example of iconicity as they are direct representations of their meanings due to natural derivation, cross-linguistically. In this research, regarding our first question, it can be concluded that the cross-linguistic onomatopoeic cat sounds were to some extent identical to cat vocalisations, showing some segmental and phonotactic differences in sounds and syllable structure. Nevertheless, the different phonological or morphological systems of every language considered in the study lead us to the fact that onomatopoeic cat sounds can surely be considered a perfect example of iconicity. Conclusion Cats communicate by making the same sounds, no matter in what country, but the way of representing their sounds might differ cross-linguistically. In this study, 10 onomatopoeic words for cat sounds in 10 languages, in contrast to cats' natural vocalisations, have been phonetically analysed. The findings revealed some segmental and phonotactical distinctions between onomatopoeia and natural cat vocalisations, due to the different phonological systems cross-linguistically. They provide strong evidence of the non-arbitrary aspect of onomatopoeic words for cat sounds cross-linguistically and show clearly their iconicity. To investigate other onomatopoeic words phonetically, more in-depth research is required, perhaps following the approach adopted in this study. In conclusion, it might be suggested to extend the research study on onomatopoeia and other animal vocalisations, to explore more widely the notion of iconicity and the arbitrariness of language cross-linguistically.
2,557.8
2021-07-19T00:00:00.000
[ "Linguistics" ]
Factors affecting bubble size in ionic liquids This study reports on understanding the formation of bubbles in ionic liquids (ILs); with a view to utilising ILs more efficiently in gas capture processes. In particular, the impact of the IL structure on the bubble sizes obtained has been determined in order to obtain design principles for the ionic liquids utilised. 11 ILs were used in this study with a range of physico-chemical properties in order to determine parametrically the impact on bubble size due to the liquid properties and chemical moieties present. The results suggest the bubble size observed is dictated by the strength of interaction between the cation and anion of the IL and therefore the mass transport within the system. This bubble size - ILs structure - physical property relationship has been illustrated using a series of QSPR correlations. A predictive model based only on the sigma profiles of the anions and cations has been developed which shows the best correlation without the need to incorporate the physico-chemical properties of the liquids. Depending on the IL selected mean bubble sizes observed were between 56.1 and 766.9 μm demonstrating that microbubbles can be produced in the IL allowing the potential for enhanced mass transport and absorption kinetics in these systems. Bubble behaviour of 11 ionic liquids was studied and the relationship of bubble size, physical properties and structure was examined. Introduction The development of materials for gas capture and separation is important for many industrial applications. Recently, ionic liquids (ILs) have been widely investigated as gas sorbents. 1 Results have shown that certain ILs can exhibit high gas solubility and more importantly the ability to selectively dissolve particular gases from mixed gas streams. Therefore, ILs have the potential to be drop in replacements for volatile molecular solvents in many industrial processes. ILs possess benefits over organic solvents such as chemically tunability, stability and low vapour pressure. 2 However, in general, ILs have much higher viscosities than molecular solvents and, therefore, the reduced mass transfer associated with this property has hindered their employability on an industrial level. 3 Bubbling arrangements are commonplace in industrial capture and release applications to either achieve heat and/or mass transfer. Microbubbles, i.e. bubbles with diameter in the range 1 µm to 999 µm, have advantageous mass transfer properties over larger size bubbles. The rate at which mass transfer can occur is proportional to the interfacial area between which mass transfer is to occur. A reduction in bubble size increases the surface area to volume ratio and, therefore, smaller bubbles are favourable for increased surface area and mass transfer properties. 4,5 Previously reported research documenting the use of microbubbles illustrates how these finer bubbles can improve numerous aqueous systems. Processes which have been shown to increase their efficiency through introduction of microbubbles include algal growth, separation rates and mixing in airlift-loop-bioreactors (ALBS). 4,[6][7][8][9] Therefore, the ability to create small bubbles within the IL media would enhance the mass transport and make ILs more applicable for gas capture systems. In aqueous solutions, it has been shown that the charge density of the ions in solution effects the stabilisation of bubbles, 10 a similar trend has also been seen with IL ions in solution. 
11 However the use of neat IL media will result in a different system with a number of other factors influencing bubble size/stability. The mass transport properties within ILs is not well studied, to date, and bubble size data is only reported for a small number of ILs with the focus mainly on imidazolium-based ILs. [12][13][14][15][16] These reports agree that viscosity and surface tension are the dominating factors in determining the bubble behaviour. In general, bubble size increases as viscosity increases and in cases where IL viscosities are similar, surface tension becomes the governing factor. 16 Other experimental conditions have been investigated such as addition of water 13 , temperature 12,16 , gas flow rate 12 , gas type 15 and reactor geometry. 15 The effect of the addition of water and the increase in temperature both result in decreases in bubble diameter potentially due to the subsequent decrease in viscosity. It has been found that mass transport models used for molecular solvents do not fit the data gained from IL systems as the gas-liquid interface is different; cations and anions are both present at the interface with no segregation and cation rings have been found to sit perpendicular rather than parallel to the surface. 17 Other studies have been performed examining the mass transfer properties of CO2 in ILs using a combined computational fluid dynamic (CFD) and population balance model (PBM). 18 However, the study of bubble size distribution within multi bubble systems in many ILs is still largely unreported and, therefore, a comprehensive study of bubble distributions within a wide range of ILs is required if ILs are to be implemented as reaction/separation media at an industrial scale. This study considers a family of 11 ILs examining the key IL physico-chemical parameters that affect the sizes of bubbles produced within the media. The overall aim of this work is to develop design parameters which allow the IL to facilitate the generation of small bubbles without detrimentally affecting the gas affinity or increasing the energy consumption or process cost. For this reason, several Quantitative Structure-Property-Relationships-based correlations (QSPR) were investigated to illustrate a bubble size -ILs structure -physical properties relationships using key physical properties and/or molecular descriptors. was synthesised from trifluoroacetic acid (Sigma-Aldrich, 99 %, 114.0 g, 1 mol) added dropwise to 1-butyl-3methylimidazioum chloride (174.7 g, 1 mol) dissolved in Milli-Q ultra-pure water (500 cm 3 ) in an ice bath and allowed to stir overnight. The solvent was then removed using a rotary evaporator to obtain the IL. All ILs were dried in vacuo (< 10 -2 mbar @ 40 °C) for a minimum of 48 h and maintained under a flow of dry N2 overnight before microbubble measurements were carried out. After drying, the water content of the ILs was measured using a Metrohm 787 KF Titrino Karl Fischer as < 0.1 wt% for all ILs. The purity of the synthesized ILs was analysed using 1 H NMR using a Bruker 300 MHz Ultra shield Plus NMR spectrometer and the results were consistent with literature reports. 19,20 Methods Nitrogen (BOC, 99.998 %) was delivered to the IL via a Bronkhorst mass flow controller. The gas was flowed for ~ 5 min prior to the measurements being made in order to equilibrate the pressure within the system, including the ceramic porous material. During this time bubbles continuously detached from the surface of the ceramic and into the IL. 
The ceramic porous material used in this study has an average pore size of 2.5 µm, as shown in Figure 1. Scanning electron microscopy (SEM) was carried out on a JEOL JSM 6300 SEM with an Agar MB7240 gold sputter coater. This material has a thickness of 10 mm, and the pressure required to allow bubbling in an aqueous system is 2.4 bar (gauge) at 298 K and 101.325 kPa. Typically, the microbubble rigs reported previously have volumes of > 50 L;8 however, due to the use of ILs and the associated synthesis and procurement costs, the equipment was scaled down to allow the use of smaller volumes. In these measurements, ~120 cm³ of IL was employed. The rig consists of a stainless-steel base with an inlet chamber and a microporous ceramic diffuser. The chamber section is secured to a quartz glass viewing section for bubble visualisation and imaging. Nitrogen is bubbled into the IL at 3 cm³·min⁻¹ for each of the ILs in order to compare the bubble size distributions directly. This particularly low flow rate is used to minimise the number of bubbles created within the IL and, therefore, reduce the risk of overlapping bubbles during imaging. In addition, all materials of construction were tested against the ILs to ensure compatibility with the IL media. Contact angle measurements. Young's equation (1) was used to determine the contact angle (θ) from the surface tensions of the three phases present in the system: γSV = γSL + γLV·cos θ (1), where γ is the surface tension at the interface between two phases and S, V and L represent the solid, vapour and liquid phases, respectively. This relationship suggests that the contact angle is independent of drop size, provided that the drop is small enough for gravitational effects to be ignored.21 The contact angle reported herein is the angle that a droplet of IL makes when it is placed on a ceramic surface. It is measured at 293.15 K and 101.325 kPa using an Attension Theta tensiometer, which measures contact angles using an optimised setup including a monochromatic light source, an adjustable sample holder and dedicated software. This allows a droplet of IL placed on a non-porous ceramic substrate to be analysed and its volume and contact angle easily measured. The ceramic substrate material was chosen to replicate that of the porous ceramic diffuser of the microbubble rig without the complications of a porous structure. The IL was pipetted onto a cleaned substrate stage. The ceramic substrates were cleaned by applying chloroform, then propan-2-ol, then water, followed by propan-2-ol and then chloroform, before drying. Three measurements were taken and an average value reported, with an uncertainty of two standard deviations. Prior to both the bubble imaging and the contact angle experiments, the ILs were dried/equilibrated with the gas stream by bubbling nitrogen through the media overnight at room temperature and were stored in a vacuum desiccator. Bubble size measurements. Bubbles of nitrogen in the ILs were imaged using an optical method utilising a digital camera (Pixelink PL742, 1.3 MP enclosed camera with a 2/3″ On Semi IBIS 5B sensor and 27 fps output; the sensor features a 6.7 µm pixel pitch and is capable of 10-bit output) and associated software. The IL was backlit using a ThorLabs White LED Array light source (LIU004) with an intensity of 1700 µW/cm² emitting at a peak of 450 nm.
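For reference, the contact-angle determination described above follows directly from Young's equation; the small helper below solves it for θ given the three interfacial tensions, with illustrative (not measured) values.

```python
import math

def young_contact_angle(gamma_sv, gamma_sl, gamma_lv):
    """Contact angle from Young's equation: gamma_SV = gamma_SL + gamma_LV*cos(theta)."""
    cos_theta = (gamma_sv - gamma_sl) / gamma_lv
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("surface tensions inconsistent with partial wetting")
    return math.degrees(math.acos(cos_theta))

# Illustrative surface tensions in mN/m (not values measured in this work).
print(f"theta = {young_contact_angle(40.0, 10.0, 45.0):.1f} deg")
```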
The bright LED light source, focused using a lens, is diffused into a more uniform light using a white translucent plastic optical diffuser layer before entering the bubble visualisation rig. The rig was constructed using transparent quartz glass, which had been tested previously and is known to be compatible with the ILs. This setup is shown in Figure 2. Once the images of bubbles in the IL have been taken, the bubble size distribution is obtained using bespoke bubble-sizing software (Figure 3).22 This software automates a number of image processing techniques on a large number of images, each containing multiple bubbles. The program takes a folder of bubble images and outputs both an average bubble size for the collection of images in the folder and the complete set of bubble sizes for further size distribution analysis. The data obtained from the bubble-sizing software are used to produce a bubble size distribution, and this is repeated for each IL. The bin size was chosen as 5 µm within a range from 0 to 2000 µm. The bin sizes and the range are kept constant for all the ILs for ease of comparison. The bubble size distributions for the 11 ILs studied are shown in Figures S1-S11 of the Electronic Supporting Information (ESI). From the bubble size distributions presented in these figures, it is apparent that there are significant differences in the shape of the distribution curves for the different ILs. In order to further analyse the bubble size distributions, each was examined quantitatively using the mean bubble size and the kurtosis. The mean bubble size is obtained using equation (2): D[1,0] = (Σ D)/n (2), where D is the diameter of an individual bubble and n is the total number of bubbles. The pixel size was calibrated by imaging an object of known size, such as a transparent rule with a millimetre scale. The D[1,0] method for calculating the average bubble diameter is chosen for simplicity, and to quickly represent any changes in bubble diameter that may occur as a result of changing the IL parameters. Kurtosis is also used to describe the distribution of a data set. Kurtosis is a measure of how flat (negative kurtosis) or sharp (positive kurtosis) the peak of the data set appears; the kurtosis of a Gaussian distribution is zero. Kurtosis values were obtained using the following equation: kurtosis = (1/n)·Σ[(D − D[1,0])/σ]⁴ − 3 (3), where σ is the standard deviation of the sample, n is the total number of bubbles, D is an individual bubble size and D[1,0] is the mean bubble size. COSMOthermX calculations. The COSMOthermX software is based on the Conductor-like Screening Model for Real Solvents (COSMO-RS) method, which combines statistical thermodynamics with the electrostatic theory of locally interacting molecular surface descriptors.23 Prior to utilisation of this software, the structure of each ion involved was optimised within the Turbomole 7.0 program package,24 with a convergence criterion of 10⁻⁸ Hartree in the gas-phase DFT calculations, combining the Resolution of Identity (RI) approximation25 with the B3LYP functional and the def-TZVP basis set.26 Each resultant optimised structure was then used as an input for the generation of the most stable conformer of each species using the COSMOconfX program (version 4.0). The COSMOthermX software (version C30_1602) was then used to determine the sigma profile and COSMO volume of each ion, as well as the free volume of each selected IL, by following the same methodology as presented previously by our group.27
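The bubble-size statistics described above (number-mean diameter D[1,0], standard deviation, excess kurtosis, and the 5 µm binning) can be computed as in the sketch below; the diameters are synthetic stand-ins for the output of the bespoke bubble-sizing software.

```python
import numpy as np

def bubble_stats(diameters_um):
    """Number-mean diameter D[1,0], sample standard deviation and excess kurtosis
    (zero for a Gaussian), matching equations (2)-(3) as reconstructed above."""
    d = np.asarray(diameters_um, dtype=float)
    d10 = d.mean()
    sigma = d.std(ddof=1)
    kurt = np.mean(((d - d10) / sigma) ** 4) - 3.0
    return d10, sigma, kurt

# Toy set of detected bubble diameters (µm) standing in for the software output.
rng = np.random.default_rng(2)
diams = rng.lognormal(mean=4.5, sigma=0.4, size=2000)   # ~90 µm median, right-skewed
d10, sd, kurt = bubble_stats(diams)
print(f"D[1,0] = {d10:.1f} um, sd = {sd:.1f} um, excess kurtosis = {kurt:.2f}")

# 5 µm bins over 0-2000 µm, as used for the distributions in the ESI figures.
counts, edges = np.histogram(diams, bins=np.arange(0, 2005, 5))
print(f"{(counts > 0).sum()} non-empty 5 µm bins")
```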
Additionally, sigma moments were determined to further analyse the capability of the COSMOthermX software to be used in a QSPR-based approach to correlate average IL bubble sizes as a function of the IL structure, by following the same approach as that reported by Klamt et al.23 Results and discussion. The series of ILs was selected for microbubble testing to cover a wide range of viscosities (16-2900 mPa·s), densities (0.8-1.5 g·cm⁻³), molecular weights (170-760 g·mol⁻¹) and hydrophobicities as measured by contact angle (11.7-56.4°) at 293.15 K and 101.325 kPa. The structures of the cations and anions of the various ILs used are given in Figure 4. In this study, the bubble size data have been acquired after bubbling with nitrogen in order to understand how the various IL properties correlate with the observed bubble size. Nitrogen was used instead of CO2 to limit the effect of dissolved gas in the selected ILs on the observed bubble size distributions, as it is well reported in the literature that N2 has a lower solubility than CO2 in several ILs.28,29 The results from the microbubble experiments are given in Table 1, including the average bubble size and measures of the distributions (standard deviation and kurtosis). Table 1 shows, in general, that the ILs containing a tetraalkylphosphonium cation exhibited the largest bubble sizes, whereas the imidazolium-based ILs gave the smallest bubble sizes. The lowest mean bubble size was observed in [C2mim][DCA] and the largest in [P66614]Cl. To help understand the cause of the differing bubble size distributions and average bubble sizes in the selected ILs, individual properties (viscosity,30-37 density,30-37 contact angle, molecular weight and free volume) were considered and are also listed in Table 1 (Table 1. Summary of the IL properties and bubble size data for the ILs studied at 293.15 K and 101,325 Pa: mean bubble radius, standard deviation and kurtosis values calculated for the bubble size data, molecular weight, literature values of viscosity and density, and the free volume determined using COSMOthermX following the methodology reported previously27). From an initial inspection of the bubble size results and IL parameters (Table 1), coupled with previous work,15,16 it was expected that the viscosity would be a significant factor in determining the bubble size, with a trend of increased bubble size with increased viscosity. The dependence of bubble size on the IL viscosity is given in Figure 5. Figure 5 shows the expected general trend that larger average bubble radii are found with increased viscosity (η),13 and this trend was described with an empirical correlation, equation (4). Wettability (contact angle) has also been reported previously as a contributing factor in determining the bubble size.38 It has been shown that bubble sizes increase with increasing contact angle; however, this was demonstrated using water as the liquid, with the hydrophobicity of the surface modified to produce a range of contact angles and the resulting bubble sizes measured. Figure 6 shows the average bubble radius against contact angle for each IL studied. In the present work, the surface remained constant and the hydrophobicity of the liquid was varied.
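Equation (4) is referred to above but its functional form is not reproduced in the text; as a hedged illustration of how such a viscosity correlation could be fitted, the sketch below assumes a simple power law and uses made-up (viscosity, mean bubble size) pairs roughly spanning the reported ranges.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative placeholder data spanning roughly 16-2900 mPa·s and ~60-750 µm;
# these are not the Table 1 values, and the power-law form is an assumption.
visc = np.array([16.0, 35.0, 90.0, 300.0, 800.0, 2900.0])     # mPa·s
dmean = np.array([60.0, 95.0, 150.0, 260.0, 420.0, 750.0])    # µm

power_law = lambda eta, A, b: A * eta ** b
(A, b), _ = curve_fit(power_law, visc, dmean, p0=[20.0, 0.5])
print(f"mean bubble size ~ {A:.1f} * eta^{b:.2f}  (illustrative fit only)")
```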
In contrast to the previous observations, in general, the bubble size was found to decrease with increasing contact angle, which could be correlated reasonably (R2 = 0.77) using the following equation: Mean bubble size = 1395.6·exp(−0.0522·θ) (5), where the mean bubble size and the contact angle θ are given in μm and °, respectively. However, as observed with the viscosity trends, ILs with similar contact angles resulted in significantly different bubble sizes. Figure 8 shows the relationship between IL density and mean bubble size; in this case, no significant correlation was observed. Figure 9 shows the trend of the bubble size with respect to the free volume of each IL determined using COSMOthermX. 24 To obtain the free volume, the total COSMO volume of each IL was calculated as the sum of the COSMO volumes of the anion and the cation determined directly by COSMOthermX. An estimate of each IL free volume (see Table 1) was then calculated by taking the difference between the calculated molar volume and the COSMO volume of the IL, following the same approach as that reported previously. 24 As shown in Figure 9, a general trend in bubble size as a function of the IL free volume is observed: as the free volume (fv) of the IL increases, a larger average bubble size is observed. This overall trend was then fitted by a straight line (R2 = 0.38) which follows the equation below: Mean bubble size = 3.8125·fv (7), where the mean bubble size and the free volume fv are given in µm and cm3·mol-1, respectively. Whilst there is a general trend, the correlation is poor and only qualitative. For example, [P66614]Cl and [P66614][DCA] have similar molar volumes (583.6 and 612.6 cm3·mol-1, respectively) and free volumes (125.4 and 134.4 cm3·mol-1, respectively) but give very different average bubble sizes (766.9 and 270.9 µm, respectively). Correlation of the bubble size data with the physico-chemical properties of each IL shows that no individual physical property is the determining factor and that a combination of the properties influences the results. Notably, however, contact angle and viscosity show the strongest correlation with the mean bubble size in ILs. To further analyse this behaviour, a simple QSPR was set up to correlate the mean bubble size as a function of these key properties as follows: Mean bubble size = a·η + b·θ + c·MW + d·fv (8), where a, b, c and d are the QSPR-type fitting parameters reported in Table 2, and the mean bubble size, viscosity η, contact angle θ, molecular weight MW and free volume fv are given in µm, Pa·s, °, g·mol-1 and cm3·mol-1, respectively. As reported in Eq. (8), the QSPR constant was set equal to zero, as it was assumed that no bubble could be formed in the absence of the IL. By following this methodology, a reasonably good QSPR correlation (equation y = x with an R2 = 0.85, RAAD = 26 %) was found between these properties and the experimental mean bubble size data, as shown in Figure 10. This further demonstrates the impact of the highlighted properties on the mean bubble size in the selected ILs. Table 2. QSPR-type fitting parameters of Eq. (8) and comparison between experimental and correlated mean bubble sizes. a Relative Absolute Deviation (RAD) calculated as RAD = 100·|Yexp − Ycorr|/Yexp, where Yexp and Ycorr represent the experimental and correlated mean bubble sizes, respectively.
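To make the form of Eq. (8) concrete, the following Python sketch fits a zero-intercept multilinear QSPR of the same shape by ordinary least squares. The property values and bubble sizes below are illustrative placeholders, not the measured values from Table 1, so the fitted coefficients have no physical meaning here.

```python
import numpy as np

# Each row: viscosity (Pa·s), contact angle (°), molecular weight (g·mol-1), free volume (cm3·mol-1).
# Placeholder values only, not the data of Table 1.
X = np.array([
    [0.016, 56.0, 170.0,  40.0],
    [0.300, 35.0, 420.0,  90.0],
    [2.900, 12.0, 760.0, 134.0],
    [0.850, 25.0, 480.0, 125.0],
    [0.060, 45.0, 230.0,  55.0],
])
y = np.array([60.0, 250.0, 770.0, 300.0, 95.0])   # mean bubble sizes in µm (placeholders)

# Eq. (8) with the constant fixed at zero: y ≈ a*eta + b*theta + c*MW + d*fv,
# solved as a least-squares problem with no intercept column.
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
y_corr = X @ coeffs
raad = 100.0 * np.mean(np.abs(y - y_corr) / y)     # relative average absolute deviation (%)
print(np.round(coeffs, 3), round(raad, 1))
```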
As shown in Table 2, and as expected from Figures 5 and 7, the molecular weight and the viscosity contribute positively to the bubble size in the ILs, in contrast to the contact angle (expected from Figure 6) and, more surprisingly, the free volume (unexpected from Figure 9), as both of these properties have a negative QSPR fitting parameter (parameters b and d). A further analysis of the number of properties and fitting parameters was then conducted to verify the impact of each property on the QSPR performance. In this case, each of the parameters a, b, c and d was set to zero in turn and the significance of its effect evaluated. As exemplified in Figure S12, along with the parameters and calculated data reported in Table S1 (ESI), by neglecting the IL free volume (parameter d = 0 in Eq. (8)) a poor QSPR correlation was found (y = 1.767 x with an R2 = 0.53, RAAD = 53 %), demonstrating the importance of this property for the QSPR correlation performance. This could be attributed to two main factors: (i) the free volume is a key property describing the mean bubble size in ILs; and/or (ii) the increase in the number of fitting parameters in Eq. (8) enhances the QSPR correlation. However, the differences observed in the parity plots of experimental vs. calculated mean bubble size when excluding (Figure S12, slope = 1.767) or including (Figure 10, slope = 1) the free volume in the QSPR correlation clearly show its impact on the slope of the straight line, and hence on the quality of the QSPR correlation. Even though a good correlation was observed using all of the highlighted physical properties, analysis of the data reported in Table 2 shows that this QSPR is unable to provide the correct trend of mean bubble size as a function of the IL structure, since the experimentally observed order of increasing mean bubble size is not reproduced, despite a Relative Absolute Average Deviation (RAAD) close to 26 %. Therefore, the ability of the bubbles to form in the IL was evaluated by examining the strength of the cation-anion interaction, as well as by correlating the trends with respect to the shape of the distributions (Figures S1-S11, ESI). In order to provide some quantification of the interactions present, sigma profiles were calculated for each of the IL cations and anions and are summarised in Tables 3 and 4 (see also Figures S1-S3). [DCA]- is the smallest anion of the set (82.5478 Å3); its sigma surface shows that the charge is mainly located on the three nitrogens within the molecule, separated by the two carbons. The sigma profile shows two peaks, one representing the polarization charge on the surface of the carbon between 0 and 0.01 e·Å-2 and a larger second peak at 0.01-0.02 e·Å-2 for the charge on the nitrogens. As expected, the sigma profile shows that all of the polarization charge is positive, which results in a good interaction with the cation. The charge is localised at uniform places within the molecule and, therefore, gives rise to strong Coulombic and hydrogen-bonding interactions, the latter with the ring hydrogens of the cation. A weaker interaction is found for [EtSO4]- owing to its increased size (125.2487 Å3); its sigma surface shows that the charge is mainly located on the -OSO3- group, and its sigma profile shows two peaks, one for the alkyl chain of the anion (-0.005 to 0.01 e·Å-2) and one, at more positive polarization charge, for the -OSO3- section. Furthermore, from the sigma surfaces of the two imidazolium cations it is seen that, in both cases, the positive charge is located on the carbon between the two nitrogens but is also delocalised around the aromatic ring.
Table 4 shows almost identical sigma profiles for the two imidazolium cations; this is expected, as previous reports have shown that non-polar domains are only observed above C4. 39 Therefore, the cation-anion interaction will also be similar, resulting in similar average bubble size values and distributions. The [P66614]+ sigma surface (Table 3) shows localisation of the positive charge around the phosphorus, and the sigma profile (Table 4) shows that a larger proportion of the charge density is shifted to positive charge polarisation, which is representative of the bulky alkyl groups and the resulting van der Waals forces. The weak interaction of the bulky alkyl groups and the large size of the cation result in the larger mean bubble size and wide bubble size distribution. [Dec]- is the anion with the largest volume (247.3144 Å3) within the study; its sigma surface (Table 3) shows localisation of the charge on the acetate group, with a long-chain alkyl group attached. The [Dec]- sigma profile (Table 4) has two maxima: one of the charge density for the -COO- group at 0.02 e·Å-2, reflecting the highly localised negative charge of this anion, and a region around 0 e·Å-2, which is representative of the van der Waals interactions due to the alkyl chain. The larger size of the [Dec]- anion and its reduced interaction with the [P66614]+ cation result in the large mean bubble size observed. In the case of [P66614]Br and [P66614]Cl, the anion will be very strongly bonded to the cation, owing to the spherical charge on the halide anion, and it might therefore have been expected that these ILs would give a narrow distribution because of a very strong interaction between the cation and anion. The higher mean bubble size observed in the case of the Cl--based sample may be attributed to its higher hydrogen-bonding ability compared with the Br- anion. However, the standard deviation values for [P66614]Br and [P66614]Cl are much larger than those of the other three ILs in this data set. These results would suggest that these two anions have little effect on the bubble size distribution because they are so strongly bonded to the cation. Their sigma profiles (Table 4) support this observation, with only one region of charge density around 0.02 e·Å-2 for both halide anions. The large mean bubble sizes and wide distributions observed can be explained by cation-cation interactions that are dominated by van der Waals forces from the alkyl chains on the tetraalkyl phosphonium cation rather than by Coulombic interactions. For the strongly bound halides, the anion does not influence the structure significantly, whereas for the bulkier anions with weaker Coulombic interactions some interaction with, or influence on, the alkyl chain structure is present, which changes the bubble size. It is, however, important to note that these ILs are still dominated by the van der Waals interactions and all, in general, have large bubble sizes. These interactions influence the physico-chemical properties. For example, the movement of the anions and cations relative to each other determines the stress required to translate the liquid, and this determines the viscosity measured. This also affects the bubble size by changing the resistance to bubble growth. In more structured liquids the energetics of increasing the bubble size and lowering the surface potential of the bubble cannot overcome the energy required to disrupt the structure, thereby limiting the size observed. As the structure becomes less ordered and the interactions weaker, the bubbles can grow, leading to the larger bubble sizes.
The lack of a correlation with the density and free volume, as shown in Figures 8 and 9, indicates that the bubbles are not strongly dependent on the void space within the IL, which is at the molecular level, but rather on the structural changes/strain which occur over longer distances. This is as expected, given the relative size of the bubbles formed and of the void space within the IL. To assess the ability of sigma profiles to describe the mean bubble size of the selected ILs, a second QSPR approach was developed by splitting the polarization charge into six regions (region 1: from -0.030 to -0.021 e·Å-2; region 2: from -0.020 to -0.011 e·Å-2; region 3: from -0.010 to 0 e·Å-2; region 4: from 0 to 0.010 e·Å-2; region 5: from 0.011 to 0.020 e·Å-2; and region 6: from 0.021 to 0.030 e·Å-2) and calculating the overall population of charge present on the surface of each ion in each of them. For each region, the overall population of charge of the selected IL was then determined as the sum of the charges found for its anion and cation, as reported in Table S2 (ESI). Then, the following equation was used to set up the QSPR model: Mean bubble size = e·S2 + f·S3 + g·S4 + h·S5 + i·S6 (10), where e, f, g, h and i are the QSPR-type fitting parameters reported in Table 5, Sk is the overall population of charge in region k, and the mean bubble size and the region populations are given in µm and Å2, respectively. From the sigma profiles reported in Table 4, it appears that, for all investigated ions, no population of charge was found in region 1, explaining why this descriptor was neglected in Eq. (10). By regressing the experimental mean bubble size data using Eq. (10), a qualitative correlation (y = x, R2 = 0.58, RAAD = 56 %) was found, as shown in Figure 11 and Table 5, which reflects the importance of hydrogen-bond acceptor (region 2) and donor sites (region 6) in forming smaller bubbles in the ILs. However, even though the number of fitting parameters increases when using this second QSPR method rather than the initial one (Eq. (8)), a higher RAAD, i.e. a poorer correlation, was in fact observed (56 % compared with 26 %). Furthermore, as also observed using Eq. (8), the wrong variation of the mean bubble size with respect to the IL structure was obtained using this second approach. In the light of this analysis, a third approach was then examined by combining Eqs. (8) and (10) and assessing the impact of each descriptor on the QSPR performance. This analysis is shown in Figures 12 and S13-S14, along with the comparison of experimental vs. correlated data in each case, as reported in Tables 6 and S3-S4. Using this approach, a good correlation was achieved by using all of the descriptors reported in Eqs. (8) and (10) (see Figure S13 and Table S3; y = x, R2 = 0.91, RAAD = 21 %), although the mean bubble sizes of [C2mim][DCA] and [P66614]Cl were overestimated, affecting the overall order of mean bubble size with respect to the IL structure. Table 6. QSPR-type fitting parameters combining Eqs. (8) and (10) and comparison between experimental and correlated mean bubble sizes. As shown in Figure S14 and reported in Table S4, a similar result and conclusion were obtained by neglecting the free volume in this QSPR approach (y = x, R2 = 0.91, RAAD = 22 %), showing that this property can be neglected without affecting the correlation performance. Surprisingly, by neglecting both the free volume and the molecular weight descriptors in the combined QSPR using Eqs. (8) and (10), a better correlation was found (y = x, R2 = 0.91, RAAD = 19 %), as shown in Figure 12 and Table 6.
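As a brief illustration of how the region descriptors used in Eq. (10) above can be obtained, the Python sketch below bins a sigma profile into the six polarization-charge regions and sums the surface-area population in each. The profile here is a made-up placeholder; real profiles would be read from COSMOthermX output, and the IL descriptor is then the sum of the anion and cation populations.

```python
import numpy as np

# A sigma profile is a histogram of screening-charge density (e·Å-2) against surface area (Å2).
sigma = np.linspace(-0.03, 0.03, 61)                                   # screening-charge grid
area = np.abs(np.random.default_rng(0).normal(0.0, 2.0, sigma.size))   # placeholder areas (Å2)

# Six polarization-charge regions used as QSPR descriptors in Eq. (10);
# in the paper, region 1 carries no population for the ions studied.
regions = [(-0.030, -0.021), (-0.020, -0.011), (-0.010, 0.0),
           (0.0, 0.010), (0.011, 0.020), (0.021, 0.030)]

populations = [area[(sigma >= lo) & (sigma <= hi)].sum() for lo, hi in regions]
print(np.round(populations, 2))   # S1..S6 for one ion
```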
More interestingly, in contrast to what was observed for the first two QSPR approaches, the variation of the mean bubble size with respect to the IL structure is more correctly evaluated by this third approach, with the correlated values following the experimentally observed order of mean bubble size more closely. Finally, a fourth approach was examined in which the mean bubble size was correlated using only the sigma moments of the ions, following the sigma-moment QSPR method reported by Klamt et al.: 23 Property = C0·A^X + Σi Ci·Mi^X + Σi Cacc,i·MHBacci^X + Σi Cdon,i·MHBdoni^X (11), where A^X and Mi^X are the molecular surface area and the i-th sigma moment of the species X, respectively, MHBacci^X and MHBdoni^X are its i-th hydrogen-bonding acceptor and donor moments, and the coefficients C0 to C15 are the QSPR fitting parameters. The sigma moments of the investigated ILs were computed as the sum of the sigma moments of the cation and anion, as reported in Table S5 (ESI). However, in the present case, in order to avoid over-parameterisation due to the limited number of experimental data points available, the multilinear regression had to be performed with a much lower number of descriptors than reported in Eq. (11). According to Klamt et al., 23 the molecular surface area A^X, the second and third sigma moments M2^X (i.e. the electrostatic interaction energy) and M3^X (i.e. the kind of skewness of the sigma profile), as well as the third hydrogen-bonding acceptor and donor moments MHBacc3^X (i.e. representing the hydrogen-bond acidity) and MHBdon3^X (i.e. representing the hydrogen-bond basicity), are the five most significant descriptors to be used in sigma-moment QSPR applications. These five descriptors were in fact used by default during our parametrisation, keeping the constant equal to zero for the same reason as stated above. Equation (11) was then reduced to the minimum number of descriptors providing the best correlation performance. In the light of this multilinear regression analysis of our experimental data, this COSMO moment approach, using only descriptors representative of the structure of the IL, provides the best correlation, as shown in Figure 13 and Table 7, with y = x, R2 = 0.98 and RAAD = 13 %: Mean bubble size = C0·A^X + C2·M2^X + C3·M3^X + C4·M4^X + C7·MHBacc1^X + C9·MHBacc3^X + C11·MHBdon1^X + C14·MHBdon4^X (12). As reported in Table 7, the variation of the mean bubble size with respect to the IL structure appears to be correctly evaluated by this fourth approach, with the correlated values reproducing the experimentally observed order of mean bubble size. It was, therefore, concluded that no individual physical property was the determining factor. However, it was noted that the strongest correlations were observed with contact angle and viscosity. A QSPR-based model approach was also used to investigate these key properties but was unable to provide a strong correlation or to reproduce the experimental trend observed. QSPR models were therefore used to relate the strength of the anion-cation interaction (as described by the COSMOthermX-generated sigma profiles and sigma surfaces) to the bubble size observed, and this approach showed an increased degree of correlation. However, the strongest relationship was observed (R2 = 0.98 and RAAD = 13 %) when the physicochemical parameters for each IL were neglected and only the sigma moments were used to describe the ILs. This final approach was the most successful at reproducing the experimental trend for all of the ILs and bubble size ranges investigated. The ability of this model to reproduce the experimental results shows the potential for the selection or design of an IL with a specific average bubble size and could be very useful in the implementation of such materials in gas capture applications.
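For readers wishing to reproduce the spirit of the sigma-moment correlation, the sketch below performs a zero-intercept multilinear regression of mean bubble size against the five default Klamt descriptors (A, M2, M3, MHBacc3, MHBdon3). All numerical values are placeholders, not the Table S5 sigma moments, and the exact descriptor subset retained in Eq. (12) differs slightly, so this is only a structural illustration.

```python
import numpy as np

# Columns: A (Å2), M2, M3, MHBacc3, MHBdon3. Placeholder values for seven hypothetical ILs.
descriptors = np.array([
    [310.0, 55.0,  -4.0, 0.8, 2.0],
    [520.0, 71.0,  -9.0, 0.2, 0.5],
    [640.0, 88.0, -12.0, 0.1, 0.2],
    [450.0, 60.0,  -6.0, 0.5, 1.1],
    [700.0, 95.0, -14.0, 0.1, 0.1],
    [380.0, 58.0,  -5.0, 0.6, 1.5],
    [590.0, 80.0, -10.0, 0.3, 0.4],
])
mean_bubble_size = np.array([60.0, 240.0, 700.0, 180.0, 760.0, 90.0, 300.0])  # µm (placeholders)

# Multilinear regression with the constant fixed at zero, as in the sigma-moment QSPR.
c, *_ = np.linalg.lstsq(descriptors, mean_bubble_size, rcond=None)
predicted = descriptors @ c
r2 = 1.0 - np.sum((mean_bubble_size - predicted) ** 2) / np.sum(
    (mean_bubble_size - mean_bubble_size.mean()) ** 2)
print(np.round(c, 3), round(r2, 3))
```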
This study has demonstrated that it is possible to generate microbubbles in ionic liquids, which has the potential to lead to faster kinetics for gas separation processes. Importantly, the predictive model that has been developed provides a path for process design based on bubble size as well as on the thermodynamics of gas absorption in ionic liquids, which has been reported previously. 44
8,162
2017-06-07T00:00:00.000
[ "Chemistry", "Environmental Science" ]
Information Security Risk Assessment Information security risk assessment identifies, analyses, and prioritizes risks against criteria for risk acceptance and objectives relevant to the organization. Risk management refers to a process that consists of the identification, management, and elimination or reduction of the likelihood of events that can negatively affect the resources of the information system. Its aim is to reduce security risks that could affect the information system, subject to an acceptable cost of protection, and it comprises risk analysis, analysis of the "cost-effectiveness" parameter, and the selection, construction, and testing of the security subsystem, as well as the study of all aspects of security. Introduction Over time, the complexity of information systems has increased, and, therefore, the issues of information security are becoming increasingly important for any organization. In this context, particular attention is paid to the analysis and assessment of information security risks as a necessary component of an integrated approach to information security. Typical analysis (and the associated assessment) of information security risks is performed during the information security audit of a system or at the design stage. The main task of an information security audit is to assess the ability and effectiveness of the control mechanisms applied to the information technology components, as well as the architecture of the information system in general. An information security audit includes many tasks, such as assessing the effectiveness of the information processing system and assessing the security of the technologies used, the processing process, and the management of the automated system. The overall purpose of an information security audit is to ensure the confidentiality, integrity, and availability of an organization's assets. Information security risk assessment is also an integral part of an information security audit. Depending on the result of their evaluation, methodologies for assessing information security risks can be either quantitative or qualitative. The output of the algorithm of a quantitative methodology is the numerical value of risk. The input data for evaluation are usually collected from information about adverse or unexpected events in the information security system which may jeopardize the protection of information (information security incidents). However, the frequent lack of sufficient statistics leads to a decrease in the accuracy and relevance of the results. Qualitative techniques are more common, as they use overly simplistic scales, which usually contain three levels of risk assessment (low, medium, high). The assessment is carried out by interviewing experts, and intelligent methods are still insufficiently used. It is apparent that both of the above options have a number of inherent shortcomings. In order to overcome them, recent research has focused on identifying alternative techniques that would be both more accurate and more adaptive, as the constant emergence of new sources of threats often renders existing methodologies inaccurate and ineffective. Among the promising methods are models based on solving uncertainty problems, such as fuzzy logic models and artificial neural networks.
Existing textbooks and studies provide a substantial amount of information, describing either the theoretical concept, a novel approach, or a specific case study implementation.While relevant for specific audiences, such studies are either too extensive or too specific, hence not providing a summary for potential researchers and adopters in the area of information security risk assessment.This entry provides an analysis and comparison of existing methods of information security risk assessment, highlighting their common features, benefits, and shortcomings.The structure of the entry follows closely the concept of information security risk.Section 2 provides a definition, followed in Section 3 by a comparative review of the two main categories of risk analysis (qualitative and quantitative).After the necessary theoretical context is provided, Section 4 provides an extensive analysis of proposed information security risk assessment approaches, including CRAMM, FRAP, OCTAVE, and RiskWatch.Section 5 reviews the limitations shared by the existing techniques and provides possible solutions to overcome them, and then Section 6 concludes the entry. Concept of Information Security Risk Risk, in a wider sense, is the probability of an event that entails certain losses (for example, physical injury, loss of property, damage to the organization, etc.).Information security risk is the potential probability of using vulnerabilities of an asset or group of assets as a specific threat to damage the organization [1]. The main features of risk are inconsistency, alternativeness, and uncertainty [2].Classification of information risks is shown in Figure 1 and classified into five groups [3,4].Three additional terms are necessary to describe the risk assessment spectrum boundaries.An inconsistency in risk emerges when the subjective assessment does not adequately and reliably assess and describe the objectively existing risky actions.An alternativeness is the need to choose from two or more possible solutions or actions.If there is no choice, then there are no risky situations and, consequently, risk.Uncertainty is the incompleteness or inaccuracy of information about the conditions of the decision [5].The existence of risk in itself is possible only when decisions are taken in absence of or with insufficient information about the implications of a decision.These features can lead to serious difficulties in the risk assessment process. Risk analysis includes a process of risk assessment and potential methods to reduce risks or reduce the associated adverse effects [6].The concept of risk analysis did not originate with information-related assets, as it generically focuses on two main characteristics: probability and impact of an event onto an organization.In the context of information security, the impact represents the likely damage caused to an organization as a result of information security breaches, taking into account the possible consequences of loss of confidentiality, integrity, or availability of information or other assets.The probability estimates the likelihood of such a breach, taking into account existing threats and vulnerabilities, as well as implemented information security management measures.The level of damage is a monetary parameter and an equivalent of the cost, and the cost can be calculated according to the methodology proposed in [7]. 
In order to evaluate the level of threat and potential impact of an event, an analysis is carried out, using various tools and methods, on the existing information security processes.Based on the results of this analysis, the highest risks are highlighted, which should be perceived as dangerous threats, requiring immediate additional protective measures. Qualitative and Quantitative Approaches for Risk Analysis Information security risk analysis can be divided into two types: qualitative and quantitative.Qualitative analysis identifies factors, areas, and types of risks and it typically uses human interaction, for instance, through workshops or interviews, to generate its inputs.Following data collection, risk manager analysis is applied in a qualitative rather than quantitative way.While the process may not satisfy a numerical model, it is often employed for its ability to translate the complexity surrounding the risks studied and to draw relationships between apparently inconsequential pieces of information. Different types of qualitative analysis can be conducted, for instance, looking at transcripts of the interviews conducted or of the topics discussed during workshops and using some kind of thematic analysis.This can, for instance, be based on analyzing the discourse used, as language can enlighten details about the environment and context of risks. Different types of qualitative analysis can be conducted.A time-consuming but convenient approach is to apply a thematic analysis to the transcripts of the interviews conducted or the topics discussed during workshops.This can, for instance, be based on analyzing the discourse used, as human language can highlight specific details about the environment and context of risks.Given its input, qualitative risk assessment represents an effective way to consider interrelationships in business areas and hence to be able to assess not only the technical aspects but also the issues arising from people and processes. 
For qualitative risk assessment, the main focus is on the likelihood of an event rather than its statistical probability. These likelihoods are derived by analyzing the threats and vulnerabilities and then generating a qualitative or quantitative value for the asset or assets that may be affected (impact): Risk = Threat × Vulnerability × Impact, (1) where Threat × Vulnerability is the likelihood. One example of a qualitative risk rating methodology is the OWASP (Open Web Application Security Project) Risk Rating Methodology [8]. Following its analysis, OWASP generates a summary similar to the one presented in Figure 2a, where Impact and Likelihood are qualitatively evaluated as Low, Medium, or High. Another methodology is the SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis, also known as the SWOT matrix, a tool that is intended to identify and assess the internal factors (strengths and weaknesses) and the external factors (opportunities and threats) of a product, project, or business. SWOT produces a 2 × 2 matrix similar to the one presented in Figure 2b. As visible in the matrix, the focus of the analysis is the external/internal location of the identified characteristics and their negative/positive impact on the organization. Given the combination of the two variables, the respective characteristic is qualified as a strength, weakness, opportunity, or threat and added to the respective field. Following the analysis, the matrix is fully populated with attributes; typical examples of this methodology can be found in [9,10]. In contrast, quantitative risk analysis should make it possible to quantify the size of losses. However, quantification is challenging, as it requires appropriate input to quantify the risks. There is no single methodology for determining the quantitative value of risk. This is primarily due to the lack of the required amount of statistical information on the possibility of a specific threat. Secondly, determining the cost (and resulting value) of a specific information resource plays an important role and is often a challenging task [11][12][13]. For example, a business can quantify and indicate the cost of the equipment and media storing a specific information resource but is likely to be unable to determine the exact cost of the data on this equipment and media or the financial consequences of losing it. Vulnerability and threat are additional key concepts for assessing information security risks. A vulnerability is a defect or weakness in a protected asset that may compromise its confidentiality, integrity, or availability. A threat is a potential opportunity to disrupt information security. An attacker materializes a threat by exploiting a vulnerability in a resource and mounting an attack [1]. Basically, to compile a risk model, the following components need to be considered: a list of potential threats to the assets; a list of potential vulnerabilities; and a list of countermeasures and recommendations for risk mitigation [14,15]. Vulnerabilities related to information systems range from configuration flaws, which allow third parties with limited access rights to reach assets, to software errors.
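To illustrate the qualitative likelihood-impact combination described above, the following Python sketch encodes an OWASP-style rating lookup. The specific severity labels in the matrix (such as "note" and "critical") are assumptions for the example rather than values prescribed by the methodology.

```python
# Minimal qualitative rating in the spirit of the OWASP approach: likelihood and impact
# are each scored Low/Medium/High and combined into an overall severity.
LEVELS = {"low": 0, "medium": 1, "high": 2}
SEVERITY = [["note", "low", "medium"],       # impact = low
            ["low", "medium", "high"],       # impact = medium
            ["medium", "high", "critical"]]  # impact = high

def overall_risk(likelihood: str, impact: str) -> str:
    """Look up the combined severity for a likelihood/impact pair."""
    return SEVERITY[LEVELS[impact.lower()]][LEVELS[likelihood.lower()]]

print(overall_risk("high", "medium"))   # -> "high"
```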
The period between the moment when the vulnerability is identified and can be exploited and the moment when it is eliminated represents the window of opportunity associated with this vulnerability. The increasing complexity of software, combined with the fast-paced release of new applications and architectures, leads to the continuous emergence of new vulnerabilities and the means to exploit them. The permanent presence of one or more windows of opportunity requires organizations to continuously monitor the spectrum of user interactions and to take countermeasures as quickly as possible. The first approach considers the causality between threat and damage [16,17]. It includes four factors: the probability of an event occurring, the amount of damage, the probability of the threat, and the magnitude of vulnerability. The magnitude of risk, as shown in (2), is the product of the probability of event and the amount of damage. The probability of event, introduced by (3), indicates the likelihood of exploiting the existing vulnerabilities, successfully implementing the threat to the asset, and inflicting damage to the organization; it is the product of the probability of threat, which determines how likely it is for the threat to materialize, and the magnitude of vulnerability, which defines how likely it is for a threat to materialize using a specific vulnerability. The amount of damage aims to quantify the extent of the effect on the infrastructure: Magnitude of risk = Probability of event × Amount of damage, (2) where the value of Probability of event is determined by the formula Probability of event = Probability of threat × Magnitude of vulnerability, (3) where Probability of event is the probability of successful implementation of the threat to the asset using vulnerabilities, with damage to the organization; Probability of threat is the probability that the threat to the asset will be realized (the success or failure of the threat is determined by the magnitude of the vulnerability); and Magnitude of vulnerability is the likelihood that, in the event of a threat to an asset, that threat will be successfully exploited using that vulnerability. The second formula for quantitative risk assessment also includes the Magnitude of vulnerability and the Amount of damage, but it focuses instead on the amount of effort required to mount an attack, expressed through the Number of attempts to implement the threat: Magnitude of risk = Number of attempts to implement the threat × Magnitude of vulnerability × Amount of damage. (4) For each risk, we calculate the Annual Loss Expectancy (ALE), which is a business-friendly measure of a risk in a quantitative risk assessment approach [19,20]. ALE requires defining a number of additional parameters: Annual Rate of Occurrence (ARO), Single Loss Expectancy (SLE), Asset Value (AV), and Exposure Factor (EF), further defined below. The value of ALE characterizes the potential annual losses (risk). It is calculated based on the ARO and the SLE for each risk: ALE = ARO × SLE. (5) The ARO is a business-friendly measure of the probability of occurrence of an event, which helps in terms of the annual budget. ARO shows the likelihood of a specific threat being realized within a specified period (most often, one year) and can also take values in the range from 1 to 3 (low, medium, high).
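A minimal Python sketch of the two event-based formulas above, equations (2)-(4), is given below. The function names and the example numbers are illustrative; inputs would in practice come from expert estimates or incident statistics.

```python
def risk_from_event(p_threat: float, vuln_magnitude: float, damage: float) -> float:
    """Equations (2)-(3): risk = probability of event * amount of damage,
    where probability of event = probability of threat * magnitude of vulnerability."""
    p_event = p_threat * vuln_magnitude
    return p_event * damage

def risk_from_attempts(attempts_per_year: float, vuln_magnitude: float, damage: float) -> float:
    """Equation (4): risk expressed through the expected number of attack attempts."""
    return attempts_per_year * vuln_magnitude * damage

print(risk_from_event(0.3, 0.5, 20_000))      # -> 3000.0 (monetary units)
print(risk_from_attempts(12, 0.05, 20_000))   # -> 12000.0
```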
SLE is the monetary value expected from the occurrence of a risk on an asset. It follows from the asset value and from how much of that asset value will be taken away in the event of the risk being realized. Another way to calculate ALE is ALE = ARO × AV × EF, (6) where AV (Asset Value) is the resource cost, reflecting the value of a particular information resource. In a qualitative assessment of risks, the cost of a resource, as a rule, ranges from 1 to 3, where 1 is the minimum cost of the resource, 2 is the average resource cost, and 3 is the maximum resource cost. For example, in a banking information system, an automated service will have the rank AV = 3, while a separate information terminal will have AV = 1. The EF (Exposure Factor) is the degree of vulnerability of the resource to the threat. This parameter demonstrates how vulnerable a resource is to the threat under consideration. As part of a qualitative risk assessment, this value also ranges between 1 and 3, where 1 is the lowest degree of vulnerability (minor impact), 2 is medium (there is a high probability of resource recovery), and 3 is the highest degree of vulnerability (complete replacement of the resource is required after the threat has been disposed of). For example, considering a banking organization, the same server of an automated banking system is characterized by the greatest availability, carrying, therefore, the maximum associated threat of 3. After making the initial risk assessment, the calculated values need to be ranked according to their importance to determine the low, medium, and high levels of information risks. In practice, risk assessment is always performed at a certain level of detail. All components of the risk can be broken down into smaller components or can be grouped to obtain more general estimates. It all depends on the goals of the organization: to obtain general information about the state of possible threats and vulnerabilities or to build a high-quality, comprehensive information security system. In the grouped case, the following equation can be used to calculate the risk: Magnitude of group of risks = Number of attempts to implement the group of threats × Total magnitude of the vulnerabilities × Amount of total damage/losses, (7) where the Number of attempts to implement the group of threats is the expected number of attempts to implement the threat groups during the year; the Total magnitude of the vulnerabilities is the total probability that, in the event of threats to assets, these threats will be successfully implemented using this group of vulnerabilities; and the Amount of total damage/losses is the amount of damage in the case of loss of all assets to which the threats are realized. In general, the magnitude of the risk depends on the asset values, the threats and the related probabilities of occurrence of a dangerous event for the assets, the ease of implementation of threats exploiting specific vulnerabilities, and the existing or planned remedies that reduce vulnerabilities, threats, and adverse effects. The stages of risk analysis, broadly common to most of the methods used, are shown in Table 1.
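As a small worked example of the ALE calculation, the sketch below combines ARO, AV, and EF on the 1-3 ordinal scales described above. The scenario values are illustrative assumptions, not figures from the entry.

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = AV * EF (values may be monetary/fractional or ordinal 1-3 scores)."""
    return asset_value * exposure_factor

def annual_loss_expectancy(aro: float, asset_value: float, exposure_factor: float) -> float:
    """ALE = ARO * SLE = ARO * AV * EF, as in equations (5) and (6)."""
    return aro * single_loss_expectancy(asset_value, exposure_factor)

# Ordinal example: a critical banking server (AV = 3) with the highest exposure (EF = 3)
# and a medium annual rate of occurrence (ARO = 2).
print(annual_loss_expectancy(aro=2, asset_value=3, exposure_factor=3))   # -> 18
```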
Depending on the needs of the organization and the conclusions of its management regarding the value of the asset, the risk may be eliminated, reduced, transferred, or approved.Eliminating the risk would be achieved when refusing to use the resource.Reducing the risk would require, for example, the introduction of means and mechanisms of protection that reduce the likelihood of a threat or the coefficient of destructiveness.The risk could also be transferred to an insurance company or a third party responsible for the respective element, who, in the event of a security threat, will bear the costs associated with the loss, instead of the owner of the information system.Finally, the risk can be approved by developing an action plan and setting in place appropriate conditions [21].Risk acceptance varies between organizations.Its level depends on a sum of factors, including the specific business goals of the company, the risk security risk profile, number of customers, financial impact, and portion of investment or budget dedicated to risk management [6].The risk appetite or risk tolerance sets the boundaries for prioritizing which risks need to be addressed.A company with a high risk appetite may approve of more risk for a higher reward associated with the risk, while a company with a low risk appetite would seek less uncertainty, for which it would accept a lower return.Setting the appetite is critical to managing the business effectively and efficiently to help an organization know where to invest time and resources. The purpose of any approach to information security risk assessment is to study the risk factors and make the best decision on risk management.Risk factors are the main parameters that are taken into account when assessing risks.There are only seven such parameters: asset, losses, threat, vulnerability, control mechanism, amount of average annual losses, and return on investment. Analysis of Existing Methods of Information Security Risk Assessment In order to solve the problem of information security risk assessment, many software packages have been created according to the developed methods, which are now used by enterprises and auditors [22].There are over 30 methodologies and frameworks that can be used for IT security risk assessment [22][23][24][25][26].A complete analysis of the entire risk analysis spectrum is beyond the scope of this work; in order to provide a substantial coverage based on the current usage trends, we focus on the subset that is most commonly used by enterprises, focusing on methodologies that include a budget decision.As a result, we do not consider frameworks designed for audit, IT governance, and certification, such as ISO/IEC 27001:2005 [1], ISO/IEC 15408:2006 (Common Criteria for Information Technology Security Evaluation), COBIT (ISACA), and NIST SP-800 [16] standard whose main purposes are audit, IT governance, and certification.This section highlights the most significant ones. The Central Computer and Telecommunications Agency (CCTA) Risk Analysis and Management Method (CRAMM) is one of the most common alternatives of risk control [27].Risk analysis using this method involves identifying and calculating risk levels based on estimates assigned to resources, threats, and resource vulnerabilities.Risk control is the identification and selection of countermeasures that can reduce risks to a level that the company can take. The study of the system using CRAMM is carried out in five stages: 1. 
Initiation, which produces a description of the boundaries of the information system, its main functions, categories of users, and personnel involved in the survey; 2. Definition and valuation of assets, to describe and analyze everything related to determining the value of system resources.At the end of this stage, it is determined whether the customer is satisfied with their existing practice or whether they need a full risk analysis.In the latter case, a model of information system will be built from the position of the information security; 3. Threat and vulnerability assessment (optional stage, depending on whether the customer satisfies the basic level of information security), aiming to deliver a full risk analysis.Finally, the customer receives identified and assessed levels of threats and vulnerabilities for its system; 4. Risk analysis, to assess the risks either on the basis of assessments of threats and vulnerabilities in full risk analysis or by using simplified techniques for the basic level of security; 5. Identification of countermeasures. Following the above stages, the process then selects the criteria applicable to this information security and assesses the damage on a scale with values from 1 to 10.In the CRAMM descriptions, as an example, the rating scale is given according to the criterion "Financial losses associated with the restoration of resources" as follows: • 2 points-less than USD 1000; • 6 points-from USD 1000 to USD 10,000; • 8 points-from USD 10,000 to USD 100,000; • 10 points-over USD 100,000. All the necessary input for the CRAMM methodology comes in the form of expert assessments and responses to surveys of employees of the organization on aspects of their use of various resources.These surveys are also formulated based on the data on the information system of the organization entered by experts [27].This process of collecting data can be cumbersome and time-consuming, which represents one of the most significant drawbacks of this approach.From a processing perspective, the CRAMM software then generates a list of unambiguous questions for each resource group and each of the 36 threat types.The level of threats is rated, depending on the responses, as very high, high, medium, low, and very low.The level of vulnerability is assessed, depending on the answers, as high, medium, and low.Thus, CRAMM is an example of a calculation method in which the initial estimates are received at a qualitative level and then translated to a points-based quantitative assessment.One issue of CRAMM is that its implementation cannot be reused, as it cannot remember or reuse previous results as factors that affect risk parameters. Facilitated Risk Analysis Process (FRAP) is a technique where information security provision is considered as part of the risk management process [28].Within FRAP, risk management begins with risk assessment: properly documented findings are the basis for decisions to strengthen the security of the system in the future.Once the assessment is complete, a cost-benefit analysis is performed to identify the protection tools needed to reduce the risk to an acceptable level for the organization.Creating a "threat" list, according to the FRAP methodology, can use several approaches: • Lists of possible threats prepared in advance by experts; • Analysis of adventure statistics in this information system; • "Brainstorming" conducted by employees of the company. 
When the list of threats is complete, each of them is compared with the probability of occurrence by an expert, followed by an assessment of the damage that may be caused by this threat.The level of threat is estimated based on the obtained values, thus assessing the level of risk for the unprotected IP, which further shows the effect of the introduction of information security tools.The last stage of this technique is documentation.When the risk assessment is completed, its results should be documented in detail in a standardized format for further use or more in-depth analysis.The list of basic stages of risk assessment largely resembles the stages and approach employed by other methods but FRAP provides a slightly more in-depth view of a system and its vulnerabilities.However, the FRAP techniques require expert communication and meetings inside companies which makes the collecting data process time-consuming. Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) is a methodology developed by the Software Engineering Institute (SEI) at Carnegie Mellon University for the identification and management of information within an organization.According to this method, the whole process of analysis is carried out by the staff of the organization, without the involvement of external consultants, following a self-directed approach.This involves the creation of a mixed group that includes both technicians and managers at various levels, which provides a comprehensive assessment of possible security incidents, together with the business implications of the development of countermeasures [29]. OCTAVE provides three phases of analysis: • Development of a profile of threats related to the asset; • Identification of vulnerabilities; • Development of security strategies and plans. A profile described by the OCTAVE method follows a tree structure, as shown in Figure 3.Such a threat profile is created for each critical asset, with one tree structure for each category of threat [30].When creating a threat profile, it is recommended to limit the number of technical details, as this is the task associated with the second stage of the methodology.In OCTAVE, risk assessment is provided primarily through the perspective of the expected damage, without an assessment of probability, using a qualitative high/medium/low scale.The expected damage is represented as a combination of financial damage, damage to the company's reputation, life and health of customers and employees, and damage that can cause legal prosecution as a result of an incident. OCTAVE has a catalog of countermeasures to mitigate the threat landscape.Unlike other methods, it does not imply the involvement of third-party experts in the study of information security, and all documentation on OCTAVE is publicly available and free of charge, making the methodology especially attractive for enterprises with a tightly limited budget allocated for information security.The main advantage provided by OCTAVE is its modular implementation.Given its exhaustive analysis, organizations may choose to implement portions of the workflow that they find appropriate.OCTAVE has two variants: OCTAVE-S and OCTAVE Allegro.OCTAVE-S has fewer processes, nevertheless adhering to the overall OCTAVE philosophy [31] and thus simplifying application for SMBs.OCTAVE Allegro is a later variant, which focuses on protecting information-based critical assets. 
The RiskWatch methodology is another alternative risk assessment method that uses the expected annual losses ALE and returns on investment as criteria for assessing and managing risks.RiskWatch is focused on accurately quantifying the ratio of losses from security threats and the cost of creating a protection system.The RiskWatch product is based on a four-stage risk analysis technique [32].The first stage is to determine the subject of research using the following parameters: type of organization, system composition, and basic security requirements.The second stage is the input of data describing the specific characteristics of the system.Data can be imported from reports created by computer network vulnerability research tools, describing resources, losses, and incident classes.Incident classes are obtained by comparing the category of losses and the category of resources.It also sets the frequency of each of the identified threats, the degree of vulnerability, and the value of resources.The third stage is quantitative risk assessment [11].This tool allows the assessment of both the risks that currently exist in the enterprise and the benefits that may bring the introduction of physical, technical, software, and other means and mechanisms of protection.The resulting reports and graphs provide sufficient decision-making input material for managing and changing the security system of the enterprise.In the fourth step, reports are generated. The RiskWatch technique divides the information system into several profiles and uses the specified system for the account of risks of information security of the enterprise.The RiskWatch family includes several software products for various types of security audits: an intelligent physical security risk assessment platform, an information security risk management platform, a compliance assessment and management platform, a supplier security risk assessment platform, and a vendor security risk assessment platform. Thus, RiskWatch makes it possible to assess not only the risks that the enterprise currently has but also the benefits that the introduction of physical, technical, software, and other means and mechanisms of protection can bring.The prepared reports and graphs provide material sufficient for making decisions on changing the enterprise security system. The matrix-based approach of risk analysis links assets, vulnerabilities, threats, and controls and determines the importance of the various controls to the assets of the organization [33], which are perceived to be significant from the point of view of objects that can be both material and intangible.The matrix methodology includes three separate but related matrices: a threat matrix, a vulnerability matrix, and a control matrix to collect the data that are required for risk analysis [34].The vulnerability matrix contains the relationship between assets and vulnerabilities in the organization, the threat matrix contains the relationship between vulnerabilities and threats, and the control matrix contains the relationships between threats and controls.The value in each cell of the matrix shows the value of the relationship between the row and column element, using the standard qualitative low/medium/high rating system. 
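One way to operationalise the matrix-based approach described above, though not a procedure prescribed by the methodology itself, is to encode the qualitative low/medium/high relationships as numbers and chain the three matrices; the resulting scores give a rough ranking of how much each control matters to each asset. The matrix sizes and values below are illustrative assumptions.

```python
import numpy as np

# Qualitative low/medium/high relationships encoded as 1/2/3.
asset_vuln = np.array([[3, 1], [2, 3]])     # 2 assets x 2 vulnerabilities
vuln_threat = np.array([[2, 3], [1, 2]])    # 2 vulnerabilities x 2 threats
threat_ctrl = np.array([[3, 1], [2, 2]])    # 2 threats x 2 controls

# Chaining the matrices scores each (asset, control) pair; larger values indicate controls
# that sit on stronger asset-vulnerability-threat paths and so deserve more attention.
control_importance = asset_vuln @ vuln_threat @ threat_ctrl
print(control_importance)
```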
It is convenient to use the scale for measuring the impact: One of the main advantages of this method is that it can be applied to almost any organization.The methodology contains convenient matrix templates that can be improved with the advent of new information for analysis.This method can be used independently without contacting specialists. MEHARI methodology [35] provides a structured approach to risk assessment that is designed to assist in the implementation of ISO 13335, 27001, and 27005 standards to ensure certification.Risk analysis is scenario-based; for each scenario, a database is compiled and documented either using tables that are integrated into Microsoft Excel or OpenOffice or connecting to specialized software such as Risicare, which has a user-friendly interface, as well as modeling, visualization, and optimization capabilities. MEHARI provides an opportunity to assess the risk factors and threats both qualitatively and quantitatively.For the assessment, a structured risk model is used that takes into account "risk reduction factors".The risk analysis process is implemented in seven stages: 1. Risk identification to identify what is under the risk; 2. Risk analysis to establish seriousness; 3. Risk evaluation to decide whether the risk is acceptable or not; 4. Risk treatment to decide if one needs to accept, redact, transfer, or avoid risk; 5. Developing the action plan; 6. Implementing the action plan; 7. Monitoring and steering direct management of risks. Although this methodology is simple to implement, free, and compatible with the ISO information security standards, it requires a large "knowledge base" of risks at the initial stage to support semi-automatic risk assessment procedures based on a set of input factors.A number of studies have considered alternatives for improving the process automation, such as [36], which aims to combine MEHARI with risk forecasting. CORAS was originally developed as part of a European project but has not been funded since 2003.However, this non-profit methodology is supported by the efforts of volunteers and has its own community [37].It is based on the ISO 27002 standard and is ISO 27005 compliant.CORAS uses a special UML-based diagramming language for visualization and risk assessment.The language offers five types of basic diagrams: asset diagrams, threat diagrams, risk diagrams, treatment diagrams, and treatment overview diagrams. The method describes eight sequential steps: 1. Preparatory: define the purpose of the assessment and the depth of analysis. 2. Requirements analysis: working with the customer to reach a common understanding of the overall goals and planning, as well as the purpose, focus, and scope of the assessment. 3. Critical appraisal: investigate the company infrastructure and its most valuable assets.Several high-level threat scenarios, vulnerabilities, and risks have been agreed upon for further study.Refined targets and detailed target descriptions are documented using the CORAS language.4. Identification of risk assessment criteria that will be used in the future.This step also checks whether the customer approves of the detailed description of the goal and its context, including assumptions and preconditions. 5. Brainstorming: workshop-based activity that aims to identify as many risks as possible.6. Risk level assessment: interdisciplinary brainstorming session that aims to determine the likelihood and consequences of each of the previously identified risks.7. Making decisions on risks.8. 
Evaluate and compare possible treatments and mitigations. CORAS's core strengths, aside from the fact that it is free, are its use of a model-based approach for risk assessment and its methodology, written in a language accessible to managers and IT experts. Its use of diagrams facilitates communication between different stakeholder groups. The most significant drawback of CORAS is its duration, as the process requires a significant number of meetings with personnel from different backgrounds, and the actual risk assessment is performed only in the second half of the analysis, starting from step 5 (risk identification, risk analysis, risk assessment, and risk treatment). Table 2 summarizes the defining characteristics of the risk assessment methods discussed within Section 3 in terms of the approach used and the input/output parameters. For an integrated presentation of the risk parameters [38], we used the parameters from Section 3: V-vulnerability, T-threat, I-impact, M-measure of risk (either qualitative or quantitative, QLT and QTY respectively), F-frequency, L-losses, P-probability, E-events that can lead to a violation of information security, and C-controls that need to be in place to treat the identified risk. CRAMM [27]: input E, M, F, P, T, I, V; risk calculation E, A, M, F, P, L, C; approach QLT. FRAP [28]: input M (QLT), T, C; risk calculation P, M (QTY); approach QLT. MEHARI [35]: input E, C; risk calculation I, M; approach QLT, QTY. OCTAVE [29]: input E, T, V, M; risk calculation I, M, C; approach QLT. Risk Matrix [33]: input D, M, V, T, C; risk calculation P, I; approach QLT, QTY. RiskWatch [32]: input E, I, F, L, V; risk calculation M, P, L, C; approach QTY. * Assets as input are used in every method. Based on the analysis carried out, information security specialists can then choose the appropriate methods and tools they need, according to the data available at the input or the results they want to obtain at the output. For example, if the process requires information about the measure and probability of risk at the output, as well as information about possible losses, then one can use the RiskWatch method. If the input includes information about the events as well as the extent and the likelihood of the risk, then one can use the CRAMM method.
In connection with the above shortcomings, experts are actively looking for a technique that would give a high-quality outcome that can adapt to the constant changes of the threat landscape, exclude the inadequate and irrelevant expert assessments, and allow reuse of previous evaluations.The most promising method in this area is the artificial neural network (ANN) approach, which addresses the challenges of existing methods, particularly with regards to flexibility and adaptability, although it requires a lot of time and intellectual resources [39-42].In addition, the ANN has intelligent features such as self-learn, and thus it is possible to find the best way to solve the problem, accumulating information about external and internal processes. The neural network module acts as the main mechanism for calculating the magnitude of the risk associated with the vulnerability.First, a fuzzy model of information systems needs to be constructed where the input variables of the system are the values of three risk parameters in the range [0, 1] obtained by interviewing experts when following one of the risk assessment methods described earlier.According to Equations ( 3), ( 4) and ( 7), these three parameters correspondingly represent the approximate probability that the attacker will exploit the vulnerability, the level of damage, and the level of vulnerability (also described as the degree of difficulty when aiming to eliminate the vulnerability).The neural network can use fuzzy logic during calculation, while the quality and accuracy can be ensured by the ability of the neural network to learn and by the process of correlation of synaptic weights.The output is a system that receives the input values of threat, damage, and vulnerability and calculates a quantitative indicator of risk.The user is able to obtain risk values for various vulnerabilities of the automated system and draw appropriate conclusions about the overall level of risk.The same methodology can be applied to all the models presented in Table 2, where the parameters from the risk calculation column can be used as input for the fuzzy model. This approach to assessment is new and can solve the existing problems due to the shortcomings of the usual, widely used method of risk assessment based solely on expert assessment of the level of threats, losses, and vulnerabilities. Another approach that can simplify and speed up the process of risk assessment is using ontology-based modeling [43][44][45][46][47].This approach uses semantic elements defined during risk analysis and provides an easy way of duplicating the data once the object or process has been described.The risk factors can be described and structured using different formalization languages (Web Ontology Language, natural language, UML).The advantage of this modeling approach is that the model can be reused by the company during the next risk evaluation exercise or can be adapted to a new application.This can solve the problem of some of the methods regarding the reuse of survey results. 
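A toy numpy sketch of the neural risk module described above is given below: three risk parameters in [0, 1] (threat probability, damage level, vulnerability level) are mapped to a single risk score by a small network with one hidden layer. The training targets here are synthetic stand-ins for expert judgements, and a real system would combine fuzzy membership functions with expert-labelled data rather than this placeholder function.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "expert-scored" examples: [threat probability, damage, vulnerability] -> risk.
X = rng.uniform(0, 1, (200, 3))
y = (X.prod(axis=1) ** 0.5).reshape(-1, 1)   # placeholder for expert risk judgements

# One hidden layer, trained with plain gradient descent on mean-squared error.
W1, b1 = rng.normal(0, 0.5, (3, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(3000):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    grad_out = (out - y) * out * (1 - out) / len(X)   # gradient at the output pre-activation
    grad_h = grad_out @ W2.T * h * (1 - h)            # backpropagated to the hidden layer
    W2 -= 0.5 * h.T @ grad_out;  b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h;    b1 -= 0.5 * grad_h.sum(axis=0)

# Estimated risk for a new vulnerability with high threat, damage, and vulnerability scores.
print(sigmoid(sigmoid(np.array([[0.8, 0.9, 0.7]]) @ W1 + b1) @ W2 + b2))
```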
Conclusions Beyond the concept definition of information risk assessment, the entry provides an insight into the qualitative and quantitative risk assessment methods, highlighting the shortcomings and details of the process. A number of software packages have been created that allow automating individual stages of a specialist's work, but, as highlighted, they mainly provide a series of mathematical calculations or the formation of reports and other documentation; they are not able to automate the technical risk analysis and auditing activities or to train the system when re-performing the operation. In almost all risk analysis methodologies, the processes of qualitative and quantitative assessment are separated. Qualitative information on the levels of threats and vulnerabilities is collected first, followed by a quantitative expert assessment of these parameters. Figure 1. Classification of information risks. [Magnitude of a group of risks] = [Number of attempts to implement the group of threats] × [Total magnitude of the vulnerabilities] × [Amount of total damage/losses]. Figure 3. The variant tree used to describe the profile in the OCTAVE method. Table 1. Stages of risk assessment. Table 2. Risk assessment methods and characteristics.
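Read literally, the magnitude formula listed above is a simple product of three factors. The numbers below are hypothetical and only illustrate how the terms combine.

```python
# Hypothetical numbers, only to illustrate the formula for the magnitude of a group of risks.
attempts = 12           # number of attempts to implement the group of threats
vulnerability = 0.3     # total magnitude of the vulnerabilities (relative units)
damage = 50_000         # amount of total damage/losses (monetary units)

risk_magnitude = attempts * vulnerability * damage
print(risk_magnitude)   # 180000.0
```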
8,829.8
2021-07-24T00:00:00.000
[ "Computer Science" ]
Keratin 19 maintains E-cadherin localization at the cell surface and stabilizes cell-cell adhesion of MCF7 cells ABSTRACT A cytoskeletal protein keratin 19 (K19) is highly expressed in breast cancer but its effects on breast cancer cell mechanics are unclear. In MCF7 cells where K19 expression is ablated,we found that K19 is required to maintain rounded epithelial-like shape and tight cell-cell adhesion. A loss of K19 also lowered cell surface E-cadherin levels. Inhibiting internalization restored cell-cell adhesion of KRT19 knockout cells, suggesting that E-cadherin internalization contributed to defective adhesion. Ultimately, while K19 inhibited cell migration and invasion, it was required for cells to form colonies in suspension. Our results suggest that K19 stabilizes E-cadherin complexes at the cell membrane to maintain cell-cell adhesion which inhibits cell invasiveness but provides growth and survival advantages for circulating tumor cells. Introduction The majority of cancer deaths are due to metastasis [1][2][3]. Tumor metastasis is a multistep process that involves spreading of cancer cells from a primary site to colonization at a distal site [2,4]. During its initiation, metastasis has been linked closely to epithelial-to-mesenchymal transition (EMT) whereby cells lose morphological traits of epithelial cells, including tight cell-cell adhesion and apical-basal polarity, and gain mesenchymal traits such as increased ability to undergo migration and invasion [2,5,6]. At the molecular level, epithelial markers such as E-cadherin and keratins become downregulated while mesenchymal markers such as vimentin become upregulated during EMT. Keratins belong to an intermediate filament family of cytoskeletal proteins, and keratin filaments maintain epithelial cell polarity and mechanical integrity through intercellular and cell-extracellular matrix junctions called desmosomes and hemidesmosomes, respectively [7,8]. Decreased expression of keratins during EMT is considered to contribute to an initiation of metastasis by loosening cell-cell attachment through disassembly of desmosomes [9][10][11]. Therefore, it has been suggested that maintenance of intercellular adhesion by keratins serves as a barrier against EMT and cell migration [11], a concept supported by several in vitro studies involving keratins expressed in simple epithelium and keratinocytes [5,[12][13][14]. However, upregulation of select keratins has been shown to enhance cell migration and invasion in certain cancer settings [12,15,16], likely due to the fact that some cancer cells invade extracellular matrix collectively [17,18]. Following initiation, metastatic cells intravasate into the bloodstream and must survive in suspension as circulating tumor cells (CTCs) en route to a distal site [2,4,19]. High metastatic potential of CTCs has been associated with stem-like properties [20] and also with clusters of cells with higher levels of cell-cell adhesion molecules plakoglobin or E-cadherin [21,22]. However, the role of keratins on stem-like traits and clustering of cancer cells upon detachment from the extracellular matrix remains unclear. In the context of breast cancer, previous studies have shown that depletion of K19 increases cell migration and invasion in vitro, potentially through upregulation of Akt and Notch signaling pathways [23][24][25]. However, mammary stem/progenitor cells transformed with sets of oncogenes including mutant Ras and p53 were more metastatic when K19 was present [26]. 
In addition, the role of K19 in cell-cell mechanics contributing to metastasis-related cancer cell behaviors has not been defined. To study the role of K19 on processes fundamental to metastasis, we examined the luminal-subtype MCF7 breast cancer cell line which expresses high levels of K19 [27,28]. MCF7 cells maintain polarized epithelial phenotype with intact cell-cell adhesions made of desmosomes and adherens junctions [29][30][31]. Using MCF7 cells with complete ablation of K19 [32] and KRT19 knockout (KO) cells with re-expression of K19, we observed that K19 is required for the epithelial-like cell shape and proper cell-cell adhesion. These events were accompanied by lower levels of plakoglobin but accumulation of E-cadherin in endocytic compartments in the absence of K19. Importantly, while we confirmed the inhibitory role of K19 on cell migration and invasion, K19 was found to be required for cells to grow in low attachment conditions. KRT19 KO cells display an elongated phenotype Under the microscope, MCF7 KRT19 KO cells showed a considerable difference in morphology from their parental counterpart. While parental (P) MCF7 cells were mostly epithelial-like and rounded in shape, KRT19 KO cells exhibited more mesenchymal-like morphology with elongated and spindled shapes (Figure 1a-Figure 1b). Of note, two KRT19 KO clones were used to confirm phenotypes associated with a loss of K19. To quantify the difference in shapes between parental and KRT19 KO cells, cells were sorted into two categories, rounded and elongated, based on their shapes ( Figure 1c). An elongated spindled cell shape with protrusions at cell edges was categorized as elongated while a rounded morphology characterized by a circular cell shape was categorized as rounded. Scoring of cell shapes confirmed that KRT19 KO cells were more elongated than parental cells as the majority of KRT19 KO cells (54.4-72.3%) were elongated while the majority of parental cells (58.4%) were rounded in shape ( Figure 1d). KRT19 KO cells also exhibited decreased minor/major axis ratio (0.39-0.44) compared to parental cells (0.50), further verifying more elongated shape (Figure 1e). Circularities of KRT19 KO cells were also less than that of parental cells ( Figure S1A) and when a cutoff value of 0.48 was used for circularity ( Figure S1B), the result mirrored what was observed in Figure 1d. Of note, while two KRT19 KO clones exhibited subtle differences from each other, both were more elongated than parental cells. Finally, digital holographic microscopy (DHM) was used to quantitate morphologies of parental and KRT19 KO cells (Figure 1f). DHM measured cells based on index of refraction and physical thickness [33,34] and produced 17 parameters per single cell based on individual cell pseudoheight (units in nm) derived from phase maps (Table S1). Optical measurements included pixel mean, standard deviation, and texture parameters, while geometric parameters included roundedness of cells, eccentricity and circularity. A higher eccentricity and a lower circularity frequency confirmed the elongated phenotype of KRT19 KO cells (Figure 1g-Figure 1h). Weakened cell-cell adhesions in KRT19 KO cells In addition to elongated shape, KRT19 KO cells were forming loose contacts between neighboring cells, whereas parental MCF7 cells were in close contacts with their neighbors (Figure 1b). To quantitate the difference, cell-cell adhesions made by subconfluent parental and KRT19 KO cells were examined. 
Cell-cell adhesions were categorized into three different degrees: high indicates a cell attached to its neighboring cells by making contiguous contacts all along adjoining sides; medium indicates a cell attached to its neighbor with contiguous and pointed cell-cell adhesions; and low indicates a cell attached to its neighbor only by pointed cell-cell adhesion (Figure 2a). In the absence of K19, lower percentages of cells exhibited high adhesion (51.9-74.6% vs 88.1% for parental cells), but more cells showed low adhesion (17.9-36.6% vs 5.2% for parental cells), confirming the observation that a loss of K19 resulted in decreased cell-cell adhesion (Figure 2b). Since parental MCF7 cells grow in tight clusters even at low confluency, cell-cell adhesion was also assessed by counting the number of cells in each cluster after passaging (Figure 2c). The number of cells making contacts as a group was fewer in KRT19 KO cells compared to parental cells, as 50.0-52.2% of KRT19 KO cells formed cell clusters with only 2-3 cells while 26.4% of parental cells did so (Figure 2d). In addition, measuring cell-cell contact length and perimeter for each cell showed that KRT19 KO cells exhibited decreased cell-cell contact length/perimeter ratios compared to parental cells (Figure 2e & S2). Finally, disrupting cell-cell adhesion with dispase treatment and mechanical force induction showed that K19 was required for proper cell-cell adhesion, as indicated by a higher number of fragmented monolayers in KRT19 KO cells (6.9-9.8 vs 5.2 fragments in parental cells, Figure 2f & S3). Collectively, these data support the notion that K19 is required for proper adhesion between cells. cells/cluster. Data from three experimental repeats are shown as mean ± SEM. Any statistically significant difference between P and KRT19 KO cells is denoted by * (p < 0.05, Student's t-test) above the cluster of KRT19 KO cells. Chi-square test: p < 0.0001. (e) Ratio of cell-cell contact length to cell perimeter per cell. Data from four experimental repeats normalized to that of the parental control are shown as mean ± SEM. Student's t-test: *p < 0.05; **p < 0.001. (f) Monolayer fragment numbers of P and KRT19 KO cells from the dispase assay. Data from three experimental repeats, each with three replicates, are shown as mean ± SEM. Student's t-test: *p < 0.05. Data from at least three experimental repeats normalized to that of the parental control are shown as mean ± SEM. Student's t-test: *p < 0.05. (f) Co-IP of β-catenin with E-cadherin. IP with anti-β-catenin antibody or IgG control was performed in P and KRT19 KO (KO2) cells. Immunoprecipitates and inputs were subjected to SDS-PAGE and immunoblotting was performed with antibodies against the indicated proteins. Molecular weights in kDa. (g) Signal intensities of E-cadherin in IP from (F) were quantified and normalized to those of β-catenin in IP. Data from at least three experimental repeats normalized to that of the parental control are shown as mean ± SEM. Student's t-test: *p < 0.05. (h) Phase-contrast images of P and KRT19 KO (KO2) cells. Cells were either grown in normal growth condition (Normal media), placed in calcium-free media for 6 h, then left unstimulated (-Calcium) or stimulated (+Calcium) with CaCl2 for 4 h. Arrows indicate high cell-cell adhesions and arrowheads indicate low cell-cell adhesions. Bar, 20 µm.
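The chi-square comparison of adhesion categories between parental and KRT19 KO cells described above can be run on a simple contingency table of counts. The sketch below only illustrates the table layout; the counts are made up and do not reproduce the published data.

```python
# Hedged sketch of the chi-square test on adhesion categories (high/medium/low) for
# parental vs KRT19 KO cells. The counts below are illustrative, not the measured data.
from scipy.stats import chi2_contingency

observed = [
    [88, 7, 5],    # parental: high, medium, low (numbers of scored cells)
    [52, 12, 36],  # KRT19 KO: high, medium, low
]
chi2, p, dof, expected = chi2_contingency(observed)
print(chi2, p, dof)
```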
Decreased cell surface localization of E-cadherin in KRT19 KO cells and defective cell-cell adhesion Consistent with decreased cell-cell adhesion, KRT19 KO cells expressed reduced levels of plakoglobin, a desmosomal and adherens junction component, based on RNA-sequencing analyses (Figure 3a-Figure 3b). Surprisingly, however, a functional enrichment analysis using the Database for Annotation, Visualization and Integrated Discovery (DAVID) showed that many genes upregulated in KRT19 KO cells were associated with functions in cell membranes, cell adhesions, and cell junctions (Figure S4). Indeed, although levels of β-catenin, a K19-interacting component of the adherens junction (Figure S5), remained the same, levels of E-cadherin, a Ca2+-dependent adhesive molecule and a key regulator of cell morphology, were found to be increased by 1. Figure 3g). Indeed, while stimulating Ca2+-deprived parental cells with Ca2+ allowed cells to re-adhere to each other, Ca2+ stimulation had little effect on adherence of KRT19 KO cells (Figure 3h), indicating that K19 is required for the formation and function of calcium-dependent junctional complexes. Defective cell-cell adhesion in KRT19 KO cells was also observed upon serum stimulation (Figure S7). Internalization of E-cadherin into endocytic compartments in KRT19 KO cells E-cadherin constantly undergoes endocytosis and recycling in the absence of stable cell-cell contacts. The fact that the total E-cadherin level is higher while its surface level is lower in KRT19 KO cells indicates that most E-cadherin is localized in endocytic compartments. To determine the exact location of E-cadherin in KRT19 KO cells, we performed co-immunostaining of E-cadherin and endocytic markers. Cells were stimulated with labeled transferrin to mark early/recycling endosomes (Figure 4a) while LAMP1 was used as a marker of late endosomes/lysosomes (Figure 4b). Results show that while a subset of E-cadherin colocalized with labeled transferrin in KRT19 KO cells, very little, if any, E-cadherin colocalized with LAMP1, suggesting that E-cadherin in KRT19 KO cells localized to early and/or recycling endosomes. Indeed, when KRT19 KO cells were treated with the dynamin inhibitor dynasore to inhibit E-cadherin internalization, KRT19 KO cells showed marked improvement in cell-cell adhesion (7.0 vs 2.44 fragments, Figure 4c), suggesting that K19 keeps functional E-cadherin from being internalized (Figure 4d). K19 re-expression rescues defects in KRT19 KO cells To confirm that the altered morphology (Figure 1) and cell-cell adhesion (Figure 2) in KRT19 KO cells were due specifically to the absence of K19, K19 was re-expressed in KRT19 KO cells. Introducing GFP-tagged K19 in KRT19 KO cells reverted the vast majority of cells into a rounded shape (Figure 5a). Quantitation of cell shape, as is done in Figure 1d Figure 3b). Finally, expression of GFP-K19 in KRT19 KO cells increased E-cadherin co-immunoprecipitating with β-catenin, suggesting that re-expression of K19 increased formation of the adherens junction complex (Figure 5f). Altogether, these data confirm that K19 is required for proper cell shape and cell-cell adhesion while maintaining plakoglobin level and the E-cadherin-β-catenin complex. K19 inhibits cell migration and invasion but is required for anchorage-independent growth Cell morphology and adhesion of cancer cells are linked to events during metastasis such as cell migration, invasion, survival and growth in low adherence conditions.
To determine how K19 affects cell migration, wound closure assays were performed. KRT19 KO cells migrated faster to close scrape wounds compared to parental cells as although 45 (Figure 6f). This increased invasiveness in the absence of K19 was reverted upon GFP-K19 overexpression (Figure 6g). Interestingly, however, when cells were placed on low attachment plates to assess growth in suspension, mammosphere formation was compromised in KRT19 KO cells (Figure 7a-Figure 7b). Re-expression of K19 using GFP-K19 confirmed that K19 is required for mammosphere formation on low attachment plates (Figure 7c-Figure 7d). Similarly, growing cells in soft agar also showed that K19 is responsible for the formation of colonies in anchorage-independent conditions, as colony area was decreased in KRT19 KO cells (Figure 7e-Figure 7f) and the reduced formation of colonies was rescued upon GFP-K19 re-expression in KRT19 KO cells (Figure 7g). Taken together, these data demonstrate that K19 hinders cell migration but promotes colony formation in suspension. Discussion In this study, we demonstrate the role of K19 in maintaining the rounded epithelial cell morphology of the MCF7 breast cancer cell line, which is of the estrogen receptor-positive luminal subtype where predominant expression of K19 can be found [35][36][37][38][39]. Interestingly, altered cell shape due to changed K19 expression was also observed in transformed mammary stem/progenitor cells [40] and the triple negative breast cancer cell lines BT549 [23] and MDA-MB-231 [41], suggesting that K19 plays a crucial role in maintaining the architecture of breast cancer cells across different differentiation stages and molecular subtypes. Although the functions of K19 in normal settings have not been fully resolved, K19 is likely involved in maintaining cell architecture through desmosomes as a member of the keratin family of proteins. Still, there are other keratins present in cells to compensate for the loss of K19, and a loss of all keratins did not by itself alter the shape of tumor cells in lung [42]. Therefore, it is rather intriguing that the absence of K19 can have such profound effects on breast cancer cell morphology. Along with decreased cell-cell adhesion, MCF7 cells lacking K19 showed more migration and invasion, suggesting that the maintenance of cell-cell adhesion by K19 helps cancer cells retain epithelial morphology and inhibits cancer cells from leaving the primary tumor site. Such a function would be consistent with the well-known role of keratins during EMT. Of note, vimentin overexpression also caused MCF7 cells to become elongated and more motile [43]. Therefore, K19 may be an integral member of the keratins whose levels relative to vimentin govern the morphology and motility of epithelial cells. Despite faster migration and invasion, KRT19 KO cells were less efficient in forming mammospheres on low attachment plates and colonies in soft agar, conditions where cells were subjected to growth without being able to adhere to extracellular matrix. Anchorage-independent cell growth has been shown to correlate with metastatic potential as it mimics conditions that circulating tumor cells (CTCs) encounter inside the vasculature [44]. Our findings suggest that the strong cell-cell adhesion made by K19 provides survival and growth advantages to cancer cells, thus increasing the metastatic potential of CTCs.
Indeed, it has been shown that clustering of circulating tumor cells confers high metastatic potential [21,45,46], and cell adhesion molecules K14, plakoglobin and E-cadherin have been shown to be required for metastasis [15,21,22]. Therefore, K19 may also be part of the breast cancer cell molecular machinery involved in maintaining CTC clusters during metastasis. As our data from migration, mammosphere and soft agar colony formation assays show, K19 seems to either promote or inhibit metastasis in a stage-specific manner. Future studies using an animal model are needed to determine the exact role of K19 in metastasis in vivo. The fact that K19 was required to form colonies in low attachment conditions may help explain interesting observations that have been made previously in regard to tumor metastasis. While the basal-like subtype of breast cancer is considered to be more invasive and aggressive than the luminal subtype [39,47], luminal-like cells without basal-like traits were fully capable of initiating invasive tumors in immune-deficient mice [48]. In fact, phenotypically pure luminal-like breast cancer cells formed larger and more invasive tumors than basal-like cells [48]. Also, increased levels of KRT19 mRNA in CTCs during metastasis are associated with worse patient survival [49][50][51][52], and K19 has even been shown to be released by breast cancer cells of high metastatic potential [53]. Transwell invasion of (f) parental and KRT19 KO (KO2) cells or (g) KRT19 KO (KO2) cells stably expressing GFP or GFP-K19 in the presence of either 0.1 or 10% serum as chemoattractant. Migrated/invaded cells were identified either by staining nuclei with propidium iodide or using GFP signals under a fluorescent microscope. For (d-g), the number of migrated or invaded cells per high-power field was quantified using the ImageJ software. For (b-g), data from three experimental repeats, each with three replicates, are shown as mean ± SEM. Student's t-test: ns: not significant; *p < 0.05; **p < 0.005. Still, as the lack of K19 expression is also correlated with worse survival of young women with triple-negative breast cancer [54], additional studies are needed to reconcile these seemingly conflicting observations. Differences in the role of K19 in tumor metastasis are likely due to the context-dependent role of K19, as breast cancer is a heterogeneous disease and metastasis itself is a complex process involving multiple factors [2,4,6]. K19 is a stem cell marker in several tissue types, including breast tissue [55][56][57], but its role in cancer stem cells is unclear. As mammosphere formation in low attachment conditions has been linked to cancer stem cell activities due to resistance to anoikis, the requirement of K19 to form mammospheres suggests that K19 plays an active role in maintaining stem-like properties of breast cancer cells. Consistent with our observation, K19-negative mammary progenitor cells showed delayed tumor onset and displayed lower metastatic potential in xenograft assays than K19-positive progenitor cells [26]. As a subpopulation of cancer cells exhibiting stem-like properties is considered to be critical for metastasis [58], detailing the role of K19 in cancer stem cells will be important to study in the future. In the absence of K19, E-cadherin was found in endocytic compartments, indicating that most E-cadherin was internalized in the absence of K19.
Indeed, the use of a dynamin inhibitor to inhibit internalization of cell surface proteins including E-cadherin strengthened cell-cell adhesion of KRT19 KO cells, suggesting that internalization of the adhesion molecule is a culprit in the defective cell-cell adhesion. Still, the mechanism of how K19 affects localization of E-cadherin remains unclear. Without keratins, desmosomes are smaller; consistent with this, plakoglobin levels were dependent on K19. Since a reduction of desmosomes precedes the loss of adherens junctions in tumors [11,59,60], a loss of K19 may internalize E-cadherin by first deregulating desmosomes, which would then trigger the disassembly of adherens junctions. However, plakoglobin is also a part of the adherens junction and K19 interacts with β-catenin [24]. Therefore, K19 may directly stabilize adherens junction components at the cell surface independent of its effect on desmosomes. Future studies will need to elucidate the exact mechanism of how K19 regulates E-cadherin localization. E-cadherin has long been considered a tumor suppressor as it is one key component maintaining the epithelial state [19]. However, recent evidence suggests that E-cadherin is required to promote breast cancer metastasis. While loss of E-cadherin increased invasion, it reduced cell proliferation and survival, CTC number, and metastasis in various models of invasive ductal carcinomas [22]. In addition, knockdown of E-cadherin abrogated mammosphere formation of MCF7 cells [61]. The parallel between K19 and E-cadherin in inhibition of cell invasiveness and promotion of cell proliferation and mammosphere formation [22,32], together with the regulation of E-cadherin localization by K19, suggests that K19 may functionally interact with E-cadherin to promote breast cancer metastasis. While our study did not determine whether the increased cell migration and decreased cell proliferation of KRT19 KO cells are due to decreased cell surface localization of E-cadherin, this study suggests that subcellular localization of E-cadherin may play a critical role in metastasis. Therefore, targeting cell surface E-cadherin may be a promising therapeutic approach for cancer. Conclusions Our study demonstrates that K19 is required to maintain cell morphology and cell-cell adhesion in MCF7 breast cancer cells. Cells lacking K19 show defects in plakoglobin expression, E-cadherin-β-catenin interaction and E-cadherin localization. Importantly, while K19 inhibited cell migration and invasion, it was required to form colonies in low adherent conditions. These data suggest that regulation of cell-cell mechanics by K19 differentially affects processes critical to cancer metastasis. Antibodies The following antibodies: anti-K19 (A-53), anti-K18 (C-04), anti-GAPDH (FL-335), anti-β-catenin (15B8), anti-E-cadherin (67A4), anti-β-actin (C4), anti-mouse IgG, and anti-rabbit IgG were from Santa Cruz Biotechnology (Santa Cruz, CA); anti-plakoglobin (D9M1Q) was from Cell Signaling Technology (Danvers, MA); anti-α-tubulin (11,224-1-AP) was from Proteintech (Rosemont, IL); and anti-LAMP1 (H4A3) and anti-GFP (12A6) were from the Developmental Studies Hybridoma Bank (Iowa City, IA). 15,000 cells were plated in each well of a 6-well plate. 24 h after plating, images of at least five random fields of cells in culture were taken using a phase contrast microscope (Olympus Optical Co., Ltd., Japan). To assess cell shape, cells were categorized as either round or elongated.
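The shape metrics used for this categorization (circularity, defined in the image analysis description below as 4π·area/perimeter², and the minor/major axis ratio reported in the Results) can be computed from a traced cell outline. The sketch below is a minimal illustration with a synthetic outline; real outlines would come from the manual ImageJ tracing described in the methods.

```python
# Sketch of the shape metrics discussed above, computed from a traced cell outline given as a
# list of (x, y) points. The circular test outline is synthetic, only to make the example run.
import numpy as np

def shape_metrics(outline):
    pts = np.asarray(outline, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Shoelace formula for the enclosed area; perimeter as the sum of edge lengths (closed contour).
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    perimeter = np.sum(np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1))
    circularity = 4.0 * np.pi * area / perimeter**2   # 1.0 for a perfect circle
    return area, perimeter, circularity

theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
circle = np.c_[10.0 * np.cos(theta), 10.0 * np.sin(theta)]
print(shape_metrics(circle))   # circularity close to 1 for the circular outline
```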
For cell-cell adhesion, cells were categorized into three types (high, medium, or low) based on their attachments to surrounding cells. To quantify the number of cells per cluster, cells in contact with adjoining cells were counted a day after passaging. A total of three experiments was analyzed. To measure cell-cell contact lengths, contacts between cells were manually traced using the ImageJ software (National Institutes of Health). Minor and major axes as well as circularity were determined by ImageJ after manually tracing the outer edges of each cell. Circularity was calculated using the mathematical formula 4π(area)/(perimeter)², and a circularity of 1 represents a perfect circle (ImageJ). Digital holographic microscope (DHM) 30,000 cells were plated in each 35 mm glass bottom plate. The next day, images were taken using DHM as described previously [34]. The DHM system utilized a 633 nm wavelength HeNe laser to generate reference and object beams and had a lateral resolution of 1.2 µm with a pixel scale of 0.18 µm/pixel. The holograms were captured by a 1.3 MP CMOS camera (Lumenera Corporation, Inc., Ontario, Canada). Detailed information about the setup was published in [34,41,62]. Cell phase-derived pseudoheight maps, calculated directly from phase maps by assuming a cell index of refraction of 1.381, of parental and KRT19 KO2 cells cultured on glass were collected with sample sizes of n = 259 and 173, respectively. Phase images of each individual cell were segmented and 17 phase parameters were extracted (Supplementary Table 1) using an in-house MATLAB code published in [33]. Representative cells were selected by measuring the sum of squared deviations (SSD) of all normalized parameters of each individual cell from the population's mean, and selecting the cells with the lowest measured SSD values [34]. Dispase assay Dispase assays were performed as described previously [63]. At 100% confluency, cells in a 6-well plate were washed with 1X PBS then subjected to 2.5 units/ml of dispase (Stemcell Technologies, Kent, WA). The plate was placed on an orbital shaker to induce mechanical stress at room temperature (RT). After 40 min, the number of fragments was counted by eye. Images of the plate were taken using a ChemiDoc Touch Imager (Bio-Rad, Hercules, CA). Each experiment contained at least three replicates per condition, and every experiment was performed at least three times. For dynasore (Ambeed, Inc, Arlington Heights, IL) treatment, 100 µM of dynasore was added to each well for 2 h before the dispase treatment. Preparation of cell lysates, protein gel electrophoresis, and immunoblotting Cells grown on tissue culture plates were washed with 1X PBS and lysed in cold Triton lysis buffer (1% Triton X-100, 40 mM HEPES (pH 7.5), 120 mM sodium chloride, 1 mM EDTA, 1 mM phenylmethylsulfonyl fluoride, 10 mM sodium pyrophosphate, 1 μg/ml each of chymostatin, leupeptin and pepstatin, 10 μg/ml each of aprotinin and benzamidine, 2 μg/ml antipain, 1 mM sodium orthovanadate, 50 mM sodium fluoride). For immunoblotting, cell lysates were centrifuged at 14,000 rpm for 3 min at 4°C to remove cell debris. Protein concentration was determined using the Bio-Rad Protein Assay (Bio-Rad) with bovine serum albumin (RMBIO) as the standard, and samples were then prepared in Laemmli SDS-PAGE sample buffer.
Aliquots of protein lysate were resolved by SDS-PAGE, transferred to nitrocellulose membranes (0.45 μm) (Bio-Rad, Hercules, CA) and immunoblotted with the indicated antibodies at 1:1000 dilution, followed by horseradish peroxidase-conjugated goat anti-mouse or goat anti-rabbit secondary antibodies. Immunoprecipitation Immunoprecipitation (IP) was performed as described previously [32]. Cells were lysed using 1% Triton lysis buffer. Cell lysates were centrifuged at 14,000 rpm for 3 min at 4°C to remove cell debris, and protein concentrations were measured. Samples were then precleared with prewashed Protein G Sepharose (PGS) beads (GE Healthcare) for 30 min on a shaker at 4°C. Precleared lysates were centrifuged at 14,000 rpm for 3 min at 4°C and the beads were discarded. Input samples were prepared and lysates were incubated with the indicated antibodies, including IgG as a negative control, on a shaker for 4 h at 4°C. Samples were then incubated for 45 min with prewashed PGS beads. Afterward, beads were washed 3 times with the lysis buffer and prepared for protein gel electrophoresis and immunoblotting. Biotin labeling of cell surface proteins Biotin labeling of cell surface proteins was performed as described previously [64]. Cell surface proteins were biotin-labeled using sulfo-N-hydroxysulfosuccinimide-biotin (Thermo Fisher Scientific) following the manufacturer's protocol. Cells were washed with ice-cold 20 mM HEPES, pH 7.5 in 1X PBS then treated with 400 μg/ml sulfo-N-hydroxysulfosuccinimide-biotin prepared in the washing buffer for 40 min on an orbital shaker at 4°C. A duplicate set of cells was kept without biotin as a negative control. Cells were then lysed with prechilled 1% Triton lysis buffer, and cell lysates were centrifuged at 14,000 rpm for 3 min at 4°C to remove cell debris. After measuring protein concentrations, input samples were prepared, and IP was performed with prewashed neutravidin agarose beads (Thermo Fisher Scientific) for 1 h at 4°C. Beads were washed 3 times with the lysis buffer and prepared for protein gel electrophoresis and immunoblotting. Immunofluorescence (IF) staining IF staining of cells was performed as described previously [41]. Cells grown on glass coverslips (VWR) were washed with 1X PBS, fixed in 4% paraformaldehyde in 1X PBS for 35 min, and permeabilized in 0.1% Triton X-100 for 20 min. Samples were blocked in 5% normal goat serum (NGS, RMBIO) in 1X PBS overnight at 4°C, then stained with primary antibodies diluted in blocking buffer (Table 1). Calcium or serum depletion and re-stimulation Calcium [65] or serum [32] depletion and re-stimulation were performed as described previously. 15,000 cells were plated in each well of a 12-well plate. The next day, cells were placed in calcium-free, low-glucose DMEM with L-glutamine (USBiological Life Science, Salem, MA) or 0.1% serum-containing medium for the time indicated in each figure. Cells were then either left unstimulated or stimulated with 5 mM CaCl2 or 10% serum for the times indicated in the figures. Images of either live cells in culture or cells stained with crystal violet were taken using an Olympus CK2 Inverted Trinocular Phase Tissue Culture Microscope (Olympus Optical Co., Japan) equipped with an AmScope 3.7 digital camera (AmScope, Irvine, CA) [41]. For crystal violet staining of cells, a mixture of 0.1% crystal violet and 10% ethanol prepared in 1X PBS was added to cells, which were then placed on a shaker at RT for 15 min. Cells were washed with 1X PBS before imaging.
Wound healing assay Wound healing assays were performed as described previously [66]. Wounds were made using a pipet tip on cells grown to a confluent monolayer in each well of a six-well plate. Cells were washed with the medium and images were taken using the light microscope (Olympus) at 0 and 30 h after the wounds were made. The percentage of wound closure was calculated by measuring the wound area using the freehand tool of the ImageJ software and normalizing the area at 30 h to the area of the initial wound at 0 h. Each experiment contained at least three replicates per condition, and every experiment was performed at least three times. Transwell migration and invasion assays Transwell migration [67] and invasion assays [16] were performed as described previously. Transwell inserts with 8 μm pores (VWR, West Chester, PA) were either left uncoated for migration assays or coated with 40 μl Matrigel (Corning Life Sciences, Corning, NY). Serum-deprived cells in 0.1% serum-containing media were plated in the top chamber for 2 h to allow attachment. 100,000 cells were used for migration assays while 50,000 cells were used for invasion assays. Either 0.1% or 10% serum-containing medium was then added to the bottom chamber. After 48 h, cells were washed, fixed in methanol at -20°C, and stained with propidium iodide. Cotton swabs were used to remove cells from the top chamber. Fluorescence images of migrated or invaded cells were processed using the ImageJ software to quantify the number of cells. All conditions were performed in triplicate and four high-power fields were imaged per replicate. Mammosphere formation assay Mammosphere formation assays were performed as described previously [61,68]. 100,000 cells were plated in each well of an ultra-low attachment plate (Corning Life Sciences, Corning, NY). After 7 days, images were taken using the light microscope (Olympus). The total area of mammospheres was obtained using the ImageJ software. Each experiment contained at least three replicates per condition, and every experiment was performed at least three times. Anchorage-independent growth in soft agar assay Soft agar colony growth assays were performed as described previously [67]. 20,000 cells were plated per well in a 6-well plate in 1 ml of 0.1% agarose in media on top of a 2-ml bottom layer of 0.5% agarose in media. Plates were incubated at 37°C in an incubator for one week. Cells were fed every other day and images were taken using a ChemiDoc Touch Imager (Bio-Rad) or CL1500 Imaging System (Thermo Fisher Scientific). Colony area was measured using the ImageJ software. Each experiment contained at least three replicates per condition, and every experiment was performed at least three times. MCF7 RNA-sequencing and bioinformatic analyses RNA-sequencing of parental and KRT19 KO MCF7 cells was performed previously [32]. The list of genes was filtered based on q-value (less than 0.05) and fold change (greater than 1.5). The 387 genes upregulated in KRT19 KO cells were uploaded into the functional annotation tool (DAVID software) and keywords in functional categories were selected. The top thirteen pathways were ranked based on gene number. Graphs and statistics Data in bar graphs represent the mean ± standard error of the mean. For comparisons between two conditions, Student's t-test was performed to test statistical significance. When comparing values given in percentages, the chi-square test was performed. For experiments involving DHM, all statistical analyses were performed using Systat 13.1 (Systat Software Inc., Chicago, IL).
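The gene-list filtering step used before the DAVID analysis above (q-value below 0.05, fold change above 1.5) can be reproduced in a few lines. The file and column names in this sketch are assumptions about the layout of the differential expression table, not taken from the paper.

```python
# Hedged sketch of the filtering applied to the RNA-sequencing results before DAVID upload.
# The CSV file name and the column names ("gene", "q_value", "fold_change") are assumed.
import pandas as pd

de_table = pd.read_csv("krt19_ko_vs_parental_DE.csv")   # hypothetical file name
upregulated = de_table[(de_table["q_value"] < 0.05) & (de_table["fold_change"] > 1.5)]
upregulated["gene"].to_csv("upregulated_genes_for_DAVID.txt", index=False, header=False)
print(len(upregulated))   # 387 upregulated genes were reported for KRT19 KO cells
```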
For comparisons between two conditions, the nonparametric Mann-Whitney U-test was performed, with significance set at p < 0.05. Before using the test, assumptions of data independence, normality, and homoscedasticity were checked using visual assessment of phase maps (to ensure data were derived from unique cells), the Shapiro-Wilk test, and Levene's test, respectively. Since most datasets were not normally distributed, the nonparametric test was used. Resulting p-values are indicated for select comparisons in Figure 1 and for all comparisons in Supplementary Data Table S1. The mean and 95% confidence intervals are given in Table S1 to facilitate comparisons between the parental and KRT19 KO groups. Representative cell phase maps from digital holographic microscopy were selected based on the minimum sum of squared deviations (SSD) of seventeen normalized phase parameters, as previously described [34]. Briefly, phase parameters from all cell phase maps of either parental or KRT19 KO cells were standardized using the zscore function in MATLAB R2015a (The MathWorks, Natick, MA), and the SSD was calculated for each phase parameter for each group. A perfectly average cell phase map would have SSD = 0; larger values of SSD imply a cell phase map further from average. All characterizations of cell-cell adhesion from phase contrast and immunofluorescence images were independently evaluated and analyzed by at least two researchers.
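The representative-cell selection just described reduces to standardizing the 17 phase parameters within each group and picking the cell with the smallest sum of squared deviations from the group mean. The sketch below illustrates this with a placeholder parameter matrix; the real parameters come from the DHM phase maps.

```python
# Sketch of the SSD-based selection of a representative cell: z-score the phase parameters
# within a group, sum squared deviations per cell, and take the minimum. The `params` array
# here is a random placeholder standing in for the (n_cells x 17) DHM parameter matrix.
import numpy as np

def most_representative(params):
    z = (params - params.mean(axis=0)) / params.std(axis=0, ddof=1)  # per-parameter z-scores
    ssd = np.sum(z**2, axis=1)                                       # SSD from the group mean
    return int(np.argmin(ssd)), ssd

params = np.random.default_rng(1).normal(size=(259, 17))  # placeholder for parental cells
idx, ssd = most_representative(params)
print(idx, ssd[idx])   # index of the most average cell and its SSD value
```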
7,617.4
2020-05-28T00:00:00.000
[ "Biology" ]
Real-time monitoring and gradient feedback enable accurate trimming of ion-implanted silicon photonic devices Fabrication errors pose significant challenges for silicon photonics, promoting post-fabrication trimming technologies to ensure device performance. Conventional approaches involve multiple trimming and characterization steps, impacting overall fabrication complexity. Here we demonstrate a highly accurate trimming method combining laser annealing of germanium-implanted silicon waveguides and real-time monitoring of device performance. Direct feedback of the trimming process is facilitated by a differential spectroscopic technique based on photomodulation. The resonant wavelength trimming accuracy is better than 0.15 nm for ring resonators with 20-μm radius. We also realize operating point trimming of Mach-Zehnder interferometers with germanium-implanted arms. A phase shift of 1.2π is achieved by annealing a 7-μm implanted segment. Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI. Introduction Silicon photonics is rapidly becoming a mainstream technology with significant impact in areas such as data communication [1,2], optical information processing [3,4] and optical sensing [5,6]. It also provides a fruitful platform for fundamental research to demonstrate novel phenomena and functionalities [7][8][9][10][11][12][13]. To facilitate large-scale integration, tens to hundreds of photonic components are to be densely packed onto a chip at a scale of up to several cm² [9,10,13], hence high fabrication precision has long been pursued to improve device performance. Despite high levels of process optimization, a significant challenge remains posed by the requirement that every component completely matches the design. Fabrication errors and variations in silicon wafer uniformity contribute to deviations that need to be corrected after fabrication. For example, components with high quality factor are very sensitive to dimensional variations [14], which could shift the operating wavelength out of the desired bandwidth due to the resulting change in effective index. Active control techniques such as integrated heaters have been developed to compensate fabrication errors by tuning the operating wavelength back to the desired range [15,16]. However, the increase in system complexity and power consumption of such components is a concern as it directly impacts the cost-effectiveness and power efficiency of silicon photonic systems. Additionally, a high density of such tuning elements on a chip increases thermal loading and crosstalk, with potential loss of performance. As an alternative approach, post-fabrication trimming has been proposed and investigated as a way to realize wavelength compensation for fabrication errors on silicon photonic devices. The post-fabrication processing usually tunes the operating wavelength by applying permanent alterations such as compaction, strain and additional claddings to parts or whole sections of target circuits [17][18][19]. Integrated heaters, fabricated in close proximity to the waveguides, were also investigated for resonant wavelength trimming by using dopant diffusion in silicon [20]. A resonant wavelength shift of 0.24 nm was achieved by applying a 20 V bias for 2 hours. Recently the use of germanium (Ge) ion implantation in silicon waveguides was reported as a new method for refractive index tuning which can be
combined with flexible and efficient trimming capability (0.6 nm per micron of implanted length for ring resonators with 10-μm radius) [21,22]. The ion-beam implantation resulted in amorphization and led to a large increase of the refractive index, while recrystallization by annealing of the implanted segments reduced the index back close to its original value [23,24]. The implantation of Ge ions is CMOS compatible, which allows mass production. However, the trimming accuracy of post-fabrication processing methods is largely dependent on precise knowledge of the target device status during the process. Previous studies have employed separate trimming and characterization steps [17][18][19][20][21][22][25,26]. Even when a dose for processing based on calibration results is established, other unknown variations between calibration and production devices may result in deviations. These conditions pose a challenge for accurate post-fabrication trimming using electron beam or rapid thermal annealing, where real-time optical characterization of individual devices is not likely applicable. Here we show that highly accurate trimming can be realized with laser annealing of Ge-implanted waveguide segments and real-time monitoring of device performance. As one of the most essential building blocks for silicon photonic circuits, ring resonators with implanted segments are used to demonstrate precise wavelength trimming of optical resonances. Real-time monitoring and gradient feedback are facilitated by the use of a differential spectroscopic technique, photomodulation, which yields not only the real-time throughput, but also its first derivative with respect to the trimming parameter. The technique makes use of a reversible thermo-optic effect induced by the pump laser, which is used as a local perturbation to provide a dither onto the trimming process. Both broadband and narrowband spectroscopic methods were tested, the latter being more applicable to high-throughput industrial applications. Trimming accuracy better than 0.15 nm was achieved experimentally with the proposed schemes for rings with 20 μm radius, corresponding to 3% of the free-spectral range (FSR). Using photomodulation spectroscopy and 2D mapping we found that the thermo-optic effect from laser heating induced a transient resonant red-shift, which can be overcome by calibration. We also performed laser annealing on Ge-implanted Mach-Zehnder interferometers (MZIs) with real-time monitoring. 2D photomodulation maps scanned on the implanted section of an MZI before and after annealing showed that the phase difference between the two arms was shifted by more than π due to the substantial change of the refractive index of one arm. The developed techniques could find applications in large-scale post-fabrication trimming of silicon photonic devices with high accuracy.
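The trimming efficiency quoted above (0.6 nm of resonance shift per micron of implanted length for 10-μm-radius rings [21,22]) sets a simple budget for how long the implanted segment must be for a given correction. The target shift in the sketch below is an arbitrary example, not a value from the paper.

```python
# Rough budget estimate based on the reported trimming efficiency of 0.6 nm per micron of
# implanted length for 10-um-radius rings [21,22]; the 1.5 nm target is an arbitrary example.
trim_efficiency_nm_per_um = 0.6
target_correction_nm = 1.5
required_length_um = target_correction_nm / trim_efficiency_nm_per_um
print(required_length_um)   # 2.5 um of implanted waveguide would need to be annealed
```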
Device fabrication and laser annealing of Ge-implanted ring resonators with real-time spectrum monitoring Silicon photonic resonators and Mach-Zehnder interferometers were fabricated on an SOI wafer with a 220 nm top silicon layer and a 2 μm buried oxide layer, using electron beam lithography and inductively coupled plasma etching. A 20 nm thick silicon dioxide layer was deposited by Plasma-Enhanced Chemical Vapor Deposition (PECVD) as a protective cladding. The silicon slab thickness and the width of the rib waveguides were 100 nm and 500 nm, respectively. In the optical micrograph image [Fig. 1(a)], a segment on the right side (framed by dashed lines) of a 30-μm-radius ring resonator was partially implanted with Ge ions with an energy of 130 keV and a dose of 1 × 10¹⁵ ions/cm², which increased the refractive index at 1550 nm from 3.46 to 3.96 according to previous studies [23]. The implanted range is denoted as the angle θ of the arc, which is varied for testing. Short tapers (angle θ_tap = 5°) with a tip width of 120 nm were formed at both ends of the implanted region, which is 400 nm wide. The tapers suppress reflection between the silicon part and the Ge-implanted part of the waveguide, as well as potential resonance inside the implanted section. In the annealing setup illustrated in Fig. 1(b), the pump light at 400 nm wavelength was generated as the second harmonic of a femtosecond laser (Coherent Chameleon, pulse duration 200 fs, repetition rate 80 MHz) and was focused on the implanted segment of a ring resonator using a microscope objective (Mitutoyo, 100X, 0.5 NA). The trimming laser had an average power of 4 mW and was focused to a spot of 800 nm, resulting in an intensity of around 8 × 10⁵ W/cm² on the device. Since the pump wavelength was much smaller than the bandgap wavelength of silicon (~1.12 μm), the silicon waveguide became more absorptive; thus the heating efficiency and the trimming effect were stronger compared to the case of using an infrared laser for annealing. Standard fiber-grating couplers were employed for launching the probe as well as for collecting the output light. To characterize the shifting of the spectral resonances while trimming the device, we used a broadband spectroscopy setup. The output from an optical parametric oscillator (Chameleon Compact OPO), providing 200-fs pulses at 1550 nm with a 20 nm bandwidth, was used as the probe, and the spectrum was analyzed using a grating spectrometer equipped with an InGaAs CCD array (Andor). The spectra were normalized to the spectral envelope of the probe. By scanning the microscope objective using a 3D piezo stage, the pump focus was moved along the implanted section of the ring. Figure 1(c) shows the spectra for a ring resonator with 30 μm radius as the pump was scanned along the length of the annealed segment. The resonant wavelength shift, normalized to the free-spectral range, of the mode located around 1550 nm was extracted and plotted in Fig.
1(d) against the annealed arc length. The spectral shift shows a linear trend, with a slope of the fitted line of 0.09 μm⁻¹. Previous studies of similar devices using rapid thermal annealing in an oven [22] showed a slope of 0.17 μm⁻¹, which is almost two times larger than in our current work. This deviation is attributed to a limitation in the available power of the pump laser in our current study, but the capability of tuning over more than one FSR as demonstrated here is sufficient for most applications. The measured propagation loss of ion-implanted waveguides is around 30 dB/mm, which means that an insertion loss of up to 0.3 dB will be added to a ring resonator if the length of an implanted segment is up to 10 μm. Experimental results revealed that the Q-factors of unimplanted and implanted ring resonators with the same radius were similar (less than a 10% decrease in Q-factor was observed). Therefore, the additional loss due to Ge-ion implantation would not cause substantial device performance degradation. On the other hand, annealing at high power causes a temporary spectral offset, as shown in Fig. 1(e), which shows the spectra at the last annealed point and after cooling down of the ring. A blue-shift of 0.1 nm can be seen from the heated state to the cooled state. This shift is attributed to a stationary thermo-optic effect resulting from the laser heating, which produces a red-shift of the resonance, in agreement with previous studies [15,16]. Thus, high annealing powers result in some spurious thermal tuning effects that need to be compensated in the real-time feedback response, for example using calibration. Laser annealing of Ge-implanted ring resonators with real-time gradient feedback While the broadband spectroscopic approach discussed above provides full information on the evolution of the spectral response, industrial applications could benefit from a simpler feedback signal corresponding to the wavelength of interest. We therefore developed another real-time monitoring method for deciding whether the trimmed wavelength has reached a target wavelength. As illustrated in Fig. 2(a), the transmission T of a probe at a single wavelength through a resonator will change with the shifting resonance during an annealing process. Ideally, if T of the through port reaches its minimum, it means the resonance is located at the probe wavelength, which can be set as our target. The use of the absolute transmission T for decision making comes with a number of disadvantages. Generally, the location of the minimum can only be established once the system has already moved past it; therefore the signal strength by itself is not a good feedback indicator. Additionally, noise from the setup and from the environment, laser fluctuations, or drift in the fiber coupling will add spurious variations which complicate a precise absolute measurement of T. Much more reliable measurements can be obtained using differential measurements with respect to an external perturbation, which can be used to establish the relative variation ΔT normalized to T as a gradient feedback at a well-defined modulation frequency of the perturbation. In our work we use the photomodulation response, which provides a derivative T' with respect to the tuning parameter as an indicator of the relative position of the trimmed resonance. The target position of the resonance wavelength is reached when the derivative of the resonance lineshape is zero, as illustrated in Fig.
2(b). Therefore, once T' reaches zero from the negative side, the resonance is aligned with the target wavelength. Taking the spatial derivative directly from the transmission signal versus anneal length would suffer from the same noise sources as the transmission itself. The photomodulation effect from the pump light in combination with lock-in detection allows sensitive extraction of the feedback signal with high noise suppression. As illustrated in Fig. 2(c), an optical chopper at 80 Hz is added to the annealing pump and is synchronized with a lock-in amplifier, which replaces the spectrometer from our previous setup. A narrow-line tunable laser at telecom wavelength was used for the probe light. The system mainly uses the thermo-optic effect from the UV laser to obtain a periodic perturbation of the implanted waveguide's refractive index. Thus, while the annealing produces an irreversible recrystallization of the implanted region, the photomodulation response provides the reversible differential response of the system with respect to the tuning parameter, namely the local refractive index. A high signal-to-noise ratio is obtained using the combination of optical chopper and lock-in detection. The photomodulation signal ΔT reflects the relative position of the probe wavelength to the resonance and can be considered proportional to T' for a very small perturbation. This approximation still holds even though the modulation is done in one direction, because the dependence of T on the index perturbation Δn is linear within the tiny Δn regime. If at the same time the annealing is taking place and shifting the resonance permanently, we will see ΔT gradually change, following the curve of T' in shape. Therefore, ΔT reaching zero from a negative value indicates that the target resonance wavelength has been achieved and the annealing should be stopped. Figure 2(d) gives a typical measured curve of ΔT whilst increasing the annealed length for a 20-μm-radius ring. This curve does show a similar trend to the resonance derivative in Fig. 2(b). Again, the annealing was performed by scanning the focused pump light (average power 4 mW) on the implanted segment with an angular step as small as 0.02°. We set the probe wavelength to 1550.3 nm in order to obtain a target wavelength of 1550 nm after trial tests, which calibrated the additional thermo-optic shift and some delay in the feedback. The trimming laser was blocked as soon as ΔT reached zero, as marked by the blue arrows in Fig. 2(d). The inset shows the corresponding T signal near ΔT = 0. We note that the T signal is ambiguous around the peak between 3.5 and 4.0 μm annealed length, with fluctuations near the minimum which make the judgment very challenging. In comparison, the ΔT response shows a very precise crossing point with a precision better than 50 nm in the annealing length. The same annealing experiment using the drop port outputs yielded very similar results with opposite sign. Before and after annealing, transmission spectra of processed rings were checked with a standard high-resolution swept-frequency testing system (Agilent 8163B Lightwave Multimeter). As depicted in Fig.
2(e), a trimmed resonance is at 1549.93 nm, only 0.07 nm away from the target. The transmission spectrum before annealing is also given for comparison, revealing a 4-nm (FSR ~4.9 nm) wavelength tuning range of the laser trimming with a 7-μm annealed length. The quality factor basically remains unchanged (about 1400). In total, 12 different ring resonators with 20-μm radius were trimmed for the same target wavelength of 1550 nm. Figure 2(f) gives the plot of the variation dλ, normalized to the FSR, between the actual trimmed resonances and the target wavelength of 1550 nm for all the samples, among which there are 9 results (75%) within the standard variation σ = 3% (or 0.15 nm in wavelength, as indicated by the shaded region). It is worth considering how the scanning precision may affect the trimming accuracy. Based on the scanning precision of 50 nm, the trimming efficiency of 0.09 μm⁻¹ and the FSR of the ring resonators, a contributed error of 0.02 nm is obtained, which is an order of magnitude less than the trimming accuracy of 0.15 nm. Hence, there should be other sources of error affecting the accuracy. We think that room temperature variations during annealing and calibration could have a more significant influence on the accuracy of both processing and characterization, explaining a few of the results which have larger errors. Regarding the dynamics of the photomodulation, one might consider that the free-carrier nonlinear effect would be dominant in producing the periodic refractive index perturbation, similar to previous studies conducted with both a pulsed pump and a probe [11,27]. We have chosen a chopping speed of 80 Hz, which corresponds to a period of 12.5 ms, as it was found in experiment that this time is sufficient for heat to accumulate in a single chopping cycle to reach the annealing threshold. Thermal heating accumulates over many individual femtosecond laser pulses, and it was found that the necessary heating required for irreversible annealing took place over a time scale of hundreds of microseconds. In silicon, the free-carrier lifetime is in the range of 10-100 ps [11,28]. The repetition period of our femtosecond pulses is 12.5 ns, which leaves enough time for the excited free carriers to relax before the next pulse arrives. For photomodulation experiments involving a CW probe, the reversible free-carrier nonlinearity and the resultant reduction in T will be averaged over the period of the pulses by at least a factor of 125. If the free-carrier effect were dominant in our photomodulation experiment, one could expect that in the case when synchronized femtosecond pulses are used as a probe, there would be an over 100 times stronger ΔT/T signal, which was actually only on a similar level [11] to what we obtained in this work. On the other hand, the speed for thermal tuning of silicon resonators is in the range of 1-100 μs [15], suggesting that the thermal effect coming from each pulse can be accumulated with our dense pulse train and contribute to photomodulation. Hence, we believe that it was the thermo-optic effect that dominated the photomodulation instead of the free-carrier nonlinear effect.
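The stop rule used in this section, halting the annealing when the lock-in signal ΔT crosses zero from the negative side, can be written as a simple feedback loop. The sketch below is only an illustration of the control logic; the lock-in read-out and stage-stepping functions are hypothetical placeholders for the actual instrument interfaces.

```python
# Hedged sketch of the gradient-feedback stop rule: advance the annealing scan while the
# lock-in signal dT is negative, and stop at the first crossing to zero or positive values.
# `read_lockin_dT()` and `advance_anneal_step()` are hypothetical instrument hooks.
def trim_until_target(read_lockin_dT, advance_anneal_step, max_steps=2000):
    prev = read_lockin_dT()
    for step in range(max_steps):
        advance_anneal_step()           # move the focused pump by one angular step (e.g. 0.02 deg)
        cur = read_lockin_dT()
        if prev < 0.0 <= cur:           # zero-crossing from the negative side -> resonance on target
            return step                 # block the trimming laser here
        prev = cur
    raise RuntimeError("target not reached within the scan range")
```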
Investigation of the laser annealing process by 2D photomodulation mapping To investigate the trimming process in more detail, we performed an experiment where a more detailed photomodulation mapping was done while at the same time the annealing was applied. By using a 2D scan in a raster-like pattern, we obtained simultaneous photomodulation maps of the implanted segment of a ring resonator with 20-μm radius during the laser annealing, as shown in Fig. 3. The pump beam was scanned outwards along the radial direction crossing the waveguide at a certain radial angle, as indicated by the straight grey arrow in Fig. 3(a), and then the next radial scan was repeated with the angle increased by a fixed step of 0.02°. In this way we scanned the waveguide arc from 0° to 20° and covered 2 μm in the radial direction, with the waveguide located in the middle of this 2 µm wide section. Output signals T and ΔT were collected from the drop port at each annealing step simultaneously. As the scan was done at a high power for trimming (4 mW), the annealing of the device accumulated during the scan with increasing angle, while along a single radial path the annealing effect was not strong enough to produce a big change in T because of the limited annealed area. Figures 3(a) and 3(b) show the obtained photomodulation maps for T and ΔT, respectively. The condition ΔT = 0 across the waveguide is framed by dashed lines in the ΔT map and emphasized by the white regions in the color map. The radial profile of this line is bent towards larger arc angle on the waveguide, corresponding to the shift of the feedback signal due to laser heating. Away from the waveguide the pump still produces a measurable ΔT response caused by the thermo-optic effect, but the heating effect is evidently weaker than that on the waveguide. For the same area in the map of the T signal in Fig. 3(a), a decreased output can be seen (shown in the inset) when the pump was on the waveguide, mainly because of some reversible losses induced by the pump. In more detail, what happened on the scanning line (indicated by the straight grey arrows) starting from point 1 where ΔT = 0 was the following: after the previous annealing, ΔT was already at zero at the beginning of this radial scan; however, as the pump beam approached the implanted waveguide, the thermo-optic effect red-shifted the resonance a little and raised ΔT [inset of Fig. 3(b)]. When the pump beam passed and moved away from the waveguide, the red-shift gradually disappeared and ΔT became zero again. This interpretation can also explain the bending direction, as the annealing process looks slightly delayed on the waveguide and thus the process tends to overshoot the optimum, requiring the calibration offset. The map for ΔT/T is also plotted in Fig. 3(c), which resolves another ΔT = 0 area (denoted as ii) corresponding to the valley of the resonance, with a similar bending feature. These results help us understand the interplay between different optical effects induced by the pump light on the Ge-implanted circuit during laser annealing, which can in turn be used to improve the performance of this trimming process. Laser annealing of the Ge-implanted MZI with real-time monitoring We used the same technique of gradient feedback to trim the operating point of MZIs using the setup shown in Fig.
2(b).Figure 4(a) shows a schematic layout of the tested MZI.The inset shows an optical micrograph of the implanted arms with a different color from that of intrinsic silicon waveguides under the microscope.The longer segment is 7 μm in length, while the short segment is 2 μm long.The short segment has the purpose of balancing the loss of the two arms and is not annealed in our study.While slightly reducing the overall device throughput, the loss introduced by the short segment balances the output amplitudes of the two arms to achieve larger extinction ratio, which is important for optical modulation.Multimode interference (MMI) couplers were used for the optical splitter and combiner of the fabricated MZIs.Tapers were used for inputs and outputs of the MMIs to enhance transmission.During annealing, the pump beam was scanned along the 7-μm segment linearly to change the phase difference between the two arms, while Output 1 was coupled to the photodetector and lock-in amplifier for T and ΔT signal collection.Due to the small optical path difference between the two arms, the FSR of the MZI is much wider than the telecom range and we can use ultrafast laser pulses with relatively large (20 nm) bandwidth as the probe for optical characterization.Figures 4(b) and 4(c) depict the respective curves of T and ΔT for an annealing scan covering the whole 7-μm implanted segment plus some of the intrinsic silicon parts on both sides.The average pump power was 2.4 mW.Significant change of T started from position 0 μm and ended at 6.5 μm, which agrees well with the actual implanted length.A gradual increase of the signal is seen at the edges of the implantation area, which is attributed to the convolution with the Gaussian profile of the trimming laser and gradient in the implantation region.Taking into account this initial gradient, we estimate that the ΔT curve starts around −1.0 mV at the beginning of the section and then further decreases showing a minimum value of −1.8 mV at around 3 μm.This minimum indicates the location of the quadrature point of the MZI with the largest derivative, which is of great interest for optical modulation purpose as it can offer the highest modulation efficiency.The small discontinuity of ΔT is attributed to a step error from our slip-stick piezo stage along the line scan.It is well known that the phase-dependent transmission of an MZI output follows the trend of (1 + cos φ), where φ is the phase difference between the two arms.Therefore the derivative of the transmission should exhibit a trend of -sin φ.Since ΔT is proportional to the derivative and its negative maximum (φ = 0.5π) is about −1.8 mV, comparing with the value of ΔT at the start of the implanted segment we can estimate that the change of phase difference Δφ is about 0.72π when the whole implanted section is annealed by the pump.Larger Δφ could be realized by repeating the annealing cycle several times, as is shown in Figs.4(d) and (e).In these diagrams, both the T and ΔT signals were mapped onto the expected behaviour of the MZI using Δφ as a free parameter.A maximum extinction of −15 dB was achieved at the minimum of the transmission, indicating roughly equal amplitudes in both arms of the implanted MZI as was found in earlier work [21].Notably, the sign of the differential ΔT changes as the transmission T crosses the minimum.Ultrafast photomodulation scans showing the value of the differential transmission ΔT/T were taken on the implanted arms before and after annealing and are shown in Figs.4(f) and 4(g), 
respectively. During these measurements a low pump power of 0.5 mW was used to avoid any annealing effect. The flipping of the photomodulation direction in both arms can be clearly observed from the maps, and the implanted segments stand out clearly from their silicon counterparts, showing a higher nonlinearity for the Ge-implanted sections compared to unimplanted silicon. Our results show that the laser annealing technique with real-time monitoring holds promise for trimming the operating point of MZIs.

Conclusion
In conclusion, we have developed accurate trimming techniques for Ge-ion-implanted silicon photonic devices using real-time monitoring and a feedback control loop. Real-time monitoring of the resonant wavelength shift of the ring resonators was achieved using two different methods: monitoring the spectrum with a broadband short laser pulse, or monitoring the derivative of the resonant wavelength position with a narrowband laser, taking advantage of low-noise lock-in detection. In both methods a pump light is used for annealing the implanted waveguide and a probe light is used for device characterization. Calibration is required to remove a small wavelength shift caused by the thermo-optic effect and the measurement feedback delay. An accuracy better than 0.15 nm was achieved for trimming ring resonators with 20 μm radius, which could be further improved with more precise calibration and better temperature control of the silicon chip. We have applied the same phase trimming technique to MZIs and have achieved a maximum change of about 0.72π in the phase difference between the two arms of an MZI by annealing a 7-μm-long implanted segment in a single scan, and up to 1.2π phase change for up to 4 subsequent annealing cycles. This technique is promising for adjusting the operating point of MZI-based optical modulators. The demonstrated methods provide flexible and reliable approaches for trimming silicon photonic devices. Benefitting from the large variety of commercial positioning stages and wafer-scale post-fabrication testing setups, post-processing using laser annealing can be performed accurately, automatically and rapidly. Where active control is necessary, the methods reported in this paper may also help to reduce the power consumption required, by correcting the performance of each device towards its design target. We believe these techniques are very promising and have great potential to be further developed for industrial applications.

Fig. 1. Laser annealing of a Ge-implanted ring resonator with real-time spectrum monitoring. (a) Optical micrograph of the ring with 30 μm radius. The dashed curves schematically indicate the implanted section on the ring. (b) Schematic of the experimental setup for laser annealing of implanted ring resonators with real-time spectrum monitoring. (c) Measured resonant wavelength shift of the ring as a function of annealing length. (d) Extracted resonant wavelength shift in the range from 1540 to 1560 nm normalized to the FSR as the annealing length was varied. (e) Measured spectrum of the last annealed point before and after cooling down.

Fig. 2.
Laser annealing with real-time differential transmission monitoring of Ge-implanted ring resonator.(a) Schematic spectra showing the transmission T of a ring resonance blueshifting from blue, red to black curves.The corresponding arrows indicate the probe output change due to the shifting.(b) derivative T' of the black resonance in (a).(c) Schematic of the experimental setup used for laser annealing of implanted ring resonators with the ability of differential transmission ΔT extraction.(d) Measured signal for ΔT from a 20-μm-radius ring with annealed segment length increasing.Inset shows the corresponding T signal which shares the same labels and units of the axes with the main figure.The two blue arrows mark the ΔT = 0 position at 4.01 μm.(e) Measured transmission spectra of a tested ring resonator before and after laser annealing.(f) Variation between actual trimmed resonances and target wavelength 1550 nm normalized to the FSR of all 12 samples.The obtained standard variation σ = 3% corresponds to 0.15 nm in wavelength. Fig. 3 . Fig. 3. Photomodulation maps of T (a), ΔT (b) and ΔT/T (c) from a Ge-implanted segment of a ring with 20-μm radius.These results were obtained from one scanning run.Inset of (a) and (b) give the profiles along the straight grey arrows across the waveguide, respectively.The units of the vertical axes are the same as that of the color bar.The curved dashed lines outline the position of the waveguide.The thin grey arrows in (a) and (b) indicate a scanning along the radial direction when the probe wavelength is near the resonance peak.Rectangle dashed Frame (i) shows the area of ΔT = 0 around the resonance peak while dashed Frame (ii) shows the ΔT = 0 area around the resonance valley. Fig. 4 . Fig. 4. Laser annealing of the Ge-implanted MZI with real-time monitoring.(a) Experiment schematic of the laser annealing with an illustrated layout of an MZI.The inset shows an optical micrograph of the two implanted arms.(b) Measured transmission (T) signal dependent on the pump spot position during the scanning along the whole 7-μm implanted section.(c) Corresponding measured ΔT signal dependent on the pump spot position.(d) and (e) Resulting values of T and ΔT for four subsequent annealing cycles, labelled 1-4 in (d), with red dot corresponding to balanced working point as determined from minimum in ΔT of (c).(f) and (g) Ultrafast photomodulation maps of the implanted sections on the two arms before and after laser annealing, respectively, showing sign reversal of the differential response.
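As a closing illustration of the MZI numbers discussed above, the quoted values can be checked for self-consistency under the stated (1 + cos φ) transmission model, assuming the phase grows roughly linearly with annealed length; this is only a sketch of the arithmetic, not an analysis from the original work.

import numpy as np

# dT is proportional to -sin(phi); its extreme value (-1.8 mV) occurs at the
# quadrature point phi = 0.5*pi, and dT at the start of the implanted segment
# is about -1.0 mV. With a total phase change of 0.72*pi per scan (from the
# text), the expected position of the quadrature point can be estimated.
amp_mV = 1.8
dT_start_mV = -1.0
phi_start = np.arcsin(-dT_start_mV / amp_mV)      # ~0.19*pi
dphi_total = 0.72 * np.pi                         # one full annealing scan
segment_um = 6.5                                  # length over which T changed

quad_pos_um = (0.5 * np.pi - phi_start) / dphi_total * segment_um
print(f"phi_start ~ {phi_start / np.pi:.2f} pi")
print(f"expected quadrature position ~ {quad_pos_um:.1f} um")   # ~2.8 um, near the observed ~3 um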
6,827.8
2018-09-10T00:00:00.000
[ "Physics" ]
Instrumental evaluation of traction machines sparking In the article, the authors considered the possibility of the sparking monitoring at a commutator of a traction motor under operating conditions. The authors developed a method for the evaluation of sparking level at the commutator by means of the current flowing through separate, isolated parts of the split brush recording. The relationship between an auxiliary commutation current and the sparkling level is revealed. Introduction The importance of brushed direct current traction motors (DC TM) on electrified sections of the railway has not decreased in recent years, as it was expected, but on the contrary began to grow.There have appeared new electric locomotives 2ES4, 2ES5, 2ES6 and others, intended for the replacement of outmoded types of machines constructed on the basis of DС machines [1].At the same time, there are problems related to the commutator unit, monitoring of its quality and magnetic system maintenance at the depot test stations.Tests carried out on the locomotives equipped with DC motor shows that reliability is also affected by design flaws as well as drawbacks associated with control circuits [2]. The failure analysis shows that about 90 % of DC TM damages occur at about 800 thous.km run from the factory repairs, and approximately 30 % of the DC TM fail during the warranty period at the established mileage per one cycle from the beginning of operation (or overall repair) to the next overall repair 1400 thous.km.As experience has shown one of the effective ways to reduce the number of failures is to monitor the DC TM condition after repair and during operation [3].However, if testing is performed at repair depot there are National standards and rules for machines testing, then for the testing directly on the electric locomotive there are neither normative documents for controlling the DC TM condition during operation, nor technical facilities for its implementation (except armature current measurement).The availability of methods and technical facilities for controlling could prevent the DC TM from being installed on the electric locomotive without an appropriate setting of commutation and magnetic system, which initially guarantees a mileage rate under normal operating conditions.Monitoring of the DC TM condition during operation makes possible to regulate the load and apply the technique of thrust redistribution, and effectively prevent the slippage.The presented work is done by Theoretical study The main indicator of the DC TM is the commutator sparking.According to the commutation theory, there will be no sparking if the commutation is linear and the additional commutation current is zero.If the machine has overcommutation or undercommutation, this current differs from zero value.It flows in the section closed with a brush, and flows through the brush in the cross direction [4].Therefore, K.I.Shenfer called it as the cross current of the brush.As a result of the studies, a quantitative relationship was established between the magnitude of the additional commutation current and the sparking energy.The form of the additional commutation current was investigated as on the developed mathematical model, as well as on the real traction motor, that contributed on a measured signal processing technique.The estimation of the armature and excitation currents influence, the armature speed, brush contact area, commutator temperature and other performance parameters of DС traction motor made it possible to create a new method 
for the sparking evaluation which is based on additional commutation current measuring and correlation with sparking level in accordance with National Standard GOST 183-74. The main point of the method involves recording the cross current and identifying the relation with the sparking level of the commutator unit.The application of this method is most convenient when the motor brush is split (for example, TL2K1, NB-514, DPT810U), as for cross current measuring no change in the commutator design is required.Consider the fact that electromagnetic processes in the machine influence the performance of the commutation process, thermal and mechanical, the value and waveform of the additional commutation current should be construed as a universal characteristic of the electric motor condition [5]. To measure the commutation current, a spark monitoring device (SCR) was used, this device consists of a sensor made on the basis of stock-produced split brushes 1 and 2 that are isolated from each other (Fig. 1), a converter device 3 and a signal measuring system consisting of an analog-to-digital converter.The output of signal measuring system is connected to a personal computer.Unlike the well-known devices, for example PKK2, PKK5, this spark monitoring device uses standard composite motor brush in a function of sensitive element, the sparking on this brush-commutator unit is required to be determined.As a primary transformer we use a specially designed current transformer.The brush is connected to this current transformer so that an additional commutation current flows through its primary winding, the frequency of this current depends on the armature speed and the number of commutator segments.The current transformer is mounted on the brush holder.Its installation on brush holder does not cause any difficulty on any direct current traction or auxiliary motor.The secondary winding of the current transformer is connected to an analog-to-digital converter, which in turn is connected to a computing system.As computing system a portable PC with a specialized computing system or a special controller with appropriate memory can be used.The connection diagram is shown on Fig. 1. A main feature of the spark monitoring device design is (Fig. 2): 1) the galvanic isolation of the measuring circuit and DC TM supply circuits, which makes the operation with the spark monitoring device safe; 2) the absence of an additional power source of the sensor installed in the traction motor, which dramatically simplifies the measurement circuit. Experimental study To determine the sparking intensity, the value of the additional commutation current was taken across the load resistor (RT) of the measuring current transformer mounted on the brush holder.According to the energy theory of commutation [3], the excess of the measured current variable component over a certain acceptable value leads to the sparking appearance.It is known that the sparking intensity or sparking level is determined by the power segregated under the brush per length Lbr in the time tk/ϑk,( tk is the segment pitch, ϑk is the commutator peripheral velocity) [4,6]: According to this equation, if the commutation current ik is recorded for a particular electrical machine, the power ΔP segregated under the brush will be known; its magnitude can determine the occurrence and sparking level on the commutator.During the tests carried out on experimental model of the traction motor, the cross current curves (Fig. 
3) flowing between the individual parts of the split brush were recorded.The sparking intensity was determined visually according to National Standard GOST 183-74. Next comes the parallel processing of signals, in which two problems need to be solved: the determination of the sparking level and the degree of commutation adjustment.For spark indication, the effective value of the signal is calculated using the RMS method, then the signal is calibrated by recalculating the data on the calibration curve.The result is displayed on the front panel of the devices on the computer screen as shown in Fig. 4.These data are information on the sparking level on the commutator and are graded in accordance with National Standard GOST 183-74.To adjust the commutation, it is necessary to analyze the array of instantaneous values of the spark monitoring device.From this array, at each two points, the minimum and maximum values of the signal are calculated, after calculating its absolute value is taken and compared between the minimum and maximum.If the maximum value is greater than the minimum value, the signaling information "Overcommutation" is presented on the front panel, if the condition is not met, Virtual instruments panel signals "Undercommutation" (Fig. 4).The difference between the absolute values of the maximum and minimum is displayed on the virtual panel in the form of a pointer device with indication in per unit values.When carrying out a series of tests for each type of traction motor, it is possible to calibrate the indication of this device in millimeters of the diamagnetic gap under the interpoles that significantly simplify the magnetic system tuning.In the suggested study, only a qualitative assessment of the general commutation type is possible.Next, the algorithm implements the test report recording.For this, it is necessary to ensure that only the values specified in National Standard GOST 183-74 are recorded in the test report. Discussion The suggested spark monitoring device on direct current traction motors can be used not only at test stations, but also during operation, which makes it possible to use the information obtained for diagnosing and automated DC TM control not only for new locomotives testing at depot but also for monitoring during operation.The additional commutation current change that measured by the spark monitoring device has complex waveform.Studies show that data obtained from spark monitoring device benefit not only the determination of sparking level according to the cross current magnitude, but also provide other important indicators of the DC TM operation, based on the waveform analysis, frequency characteristics, and others [6]. Conclusion As follows from the provided study, the commutation tests and monitoring technology of traction and auxiliary motors of electric locomotives are developed and adjusted. It should also be noted that the wide use of the spark monitoring system and the diagnosis of traction motors is obstructed by the lack of regulations or rules for instrumental sparking detecting.National Standard GOST 2582-81 in the annex of 1991 permits instrumental indication, but this was not reflected in industry Standards.The most important unit of an electric locomotive is a traction electric motor, nowadays it does not have a regulatory framework for the sparking control during operation and diagnosis organization. Fig. 1 . Fig. 1.Connection diagram of spark monitoring device and motor. Fig. 2 . Fig. 2. 
The exterior of the spark monitoring device and its installation on the brush holder.

Fig. 4. Virtual instruments panel. The pilot installation of the spark monitoring device at the test stations of the Ulan-Ude Locomotive and Railway Car Repair Plant, the Taiga locomotive repair depot of the West Siberian Railroad, and Ural Locomotives shows that the sparking level of the DC TM can be monitored not only in commutation tests but also in other test modes.
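A minimal sketch of the signal processing described above is given below; the calibration table and the synthetic test signal are placeholders, since the actual calibration curves of the device are not reported here.

import numpy as np

def sparking_indicator(samples, calib_rms, calib_level):
    # Effective (RMS) value of the cross-current record, mapped onto a
    # sparking level through a calibration curve graded per GOST 183-74.
    rms = np.sqrt(np.mean(np.square(samples)))
    level = np.interp(rms, calib_rms, calib_level)

    # Compare the absolute maximum and minimum of the record to flag the
    # general type of commutation adjustment.
    peak_max = abs(np.max(samples))
    peak_min = abs(np.min(samples))
    mode = "Overcommutation" if peak_max > peak_min else "Undercommutation"
    imbalance = peak_max - peak_min          # shown as a per-unit pointer value
    return level, mode, imbalance

# Example with synthetic data and a made-up calibration curve.
rng = np.random.default_rng(0)
signal = 0.3 * rng.standard_normal(5000) + 0.05 * np.sin(np.linspace(0, 60 * np.pi, 5000))
print(sparking_indicator(signal, calib_rms=[0.0, 0.2, 0.4, 0.8],
                         calib_level=[1.0, 1.25, 1.5, 2.0]))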
2,375.8
2017-01-01T00:00:00.000
[ "Engineering", "Physics" ]
Targeted surveillance reveals native and invasive mosquito species infected with Usutu virus The emergence of Usutu virus (USUV) in Europe was first reported in Austria, 2001, and the virus has since spread to many European countries. Initial outbreaks are marked by a mass die-off of European blackbirds (Turdus merula) and other bird species. During outbreaks, the virus has been detected in pools of Culex pipiens mosquitoes, and these mosquitoes are probably the most important enzootic vectors. Beginning in 2017, a second wave of blackbird deaths associated with USUV was observed in eastern Austria; the affected areas expanded to the Austrian federal states of Styria in the south and to Upper Austria in the west in 2018. We sampled the potential vector population at selected sites of bird deaths in 2018 in order to identify infected mosquitoes. We detected USUV RNA in 16 out of 19 pools of Cx. pipiens/Cx. torrentium mosquitoes at sites of USUV-linked blackbird mortality in Linz and Graz, Austria. A disseminated virus infection was detected in individuals from selected pools, suggesting that Cx. pipiens form pipiens was the principal vector. In addition to a high rate of infected Cx. pipiens collected from Graz, a disseminated virus infection was detected in a pool of Aedes japonicus japonicus. We show herein that naturally-infected mosquitoes at foci of USUV activity are primarily Cx. pipiens form pipiens. In addition, we report the first natural infection of Ae. j. japonicus with USUV, suggesting that it may be involved in the epizootic transmission of USUV in Europe. Ae. j. japonicus is an invasive mosquito whose range is expanding in Europe. Background Usutu virus (USUV) is a flavivirus (Flaviviridae) in the Japanese encephalitis virus serogroup originating from Africa [1]. In 2001, USUV was first identified in Austria, associated with a large die-off of Eurasian (or common) blackbirds (Turdus merula Linnaeus, 1758) [2], although the initial emergence in Europe may have been earlier [3]. Following the initial introduction, the virus spread to many European countries and is typically associated with the death of certain species of native birds, mainly blackbirds [4][5][6][7]. An observed reduction of bird deaths over time may be attributed to protection by herd immunity [8]. Despite this, there exists evidence of continued low-level virus activity in the years following the initial outbreaks in the form of bird seroconversion and the detection of viral nucleic acid in pools of mosquitoes [9,10]. In 2016, USUV was reported from live and dead birds in Austria, Belgium, Germany, Hungary, France, Germany and the Netherlands [11,12], as well as from human blood donors in Germany in 2017 [13] and in Austria from 2016-2018 [14,15]. Therefore, USUV has established transmission in Europe. The identification of USUV nucleic acid in field-captured mosquito pools suggests that Culex pipiens Linnaeus, 1758 is the principal vector in Europe [16]. In regions where West Nile virus (WNV, Flaviviridae) is endemic, USUV and WNV have been observed to co-circulate in an avian-mosquito transmission cycle [10,16,17]. Experimental vector competence studies have demonstrated that European Cx. pipiens form pipiens populations are competent vectors of USUV [18,19]. However, it is unconfirmed if natural populations of Cx. pipiens are infected with USUV as only pooled adult females were tested in mosquito surveillance efforts and their infection status could not be determined [10,16,17]. 
Beginning in 2016, the presence of viral RNA in blood from human donors and in tissue samples from dead birds signaled increased transmission of USUV in Austria [12,14]. In 2018, bird deaths in Austria increased over the prior year, and multiple USUV-infected blackbirds were confirmed from several sites including Linz, Upper Austria and Graz, Styria. Furthermore, obligatory seasonal blood donation screening in eastern Austria revealed 18 USUV infections among donors in 2018 [15], which is the highest number of human infections reported since the emergence of USUV in Austria in 2001 [2]. Recently, we reported the analysis of integrated human-vector-host surveillance for arboviruses in Austria [20]. Using this model, we performed targeted entomological investigations at sites where cases of blackbird deaths were confirmed to be linked to USUV infection. The goal was to determine the infection status of mosquitoes at sites of virus activity. Results In total, 380 mosquitoes were collected from the two sites (Table 1). In Linz, 37 Cx. pipiens/Cx. torrentium Martini, 1925 were captured, 18 of which were gravid, and seven Aedes japonicus japonicus (Theobald, 1901) were collected ( Table 1). In Graz, two nights of trapping resulted in 315 Cx. pipiens/Cx. torrentium (8 from the light trap, 2 of which were gravid, and all except for 32 of the remaining specimens collected in the gravid trap were gravid), 17 Ae. j. japonicus (10 from the light trap, and 2 of 7 from the gravid trap were gravid), three Aedes vexans (Meigen, 1830) captured in the light trap, and one An. maculipennis (Meigen, 1818) captured in the light trap (Table 1). Mosquitoes were pooled by site and species, and then tested for the presence of viral nucleic acids. Two of the three pools containing seven and 15 Cx. pipiens/Cx. torrentium mosquitoes, respectively, from Linz were positive for USUV nucleic acid (Table 1). Further testing of the individuals' legs and wings revealed that the pool consisted of 2 Cx. torrentium and 5 Cx. pipiens form pipiens; USUV nucleic acid was found in the legs and wings of a single Cx. pipiens form pipiens individual (Table 2). Similarly, pooled bodies and pooled legs and wings from the 7 Ae. j. japonicus specimens captured in Linz were negative for flavivirus nucleic acid ( Table 1). From Graz, 14 of the 16 pools of Cx. pipiens/Cx. torrentium were positive for USUV nucleic acid ( Table 1), all of which contained gravid individuals except for two pools consisting of 25 and 7 non-gravid individuals, respectively. The legs and wings from mosquitoes comprising two USUV-positive pools of 15 gravid Cx. pipiens/Cx. torrentium each were then tested individually. The pools consisted entirely of Cx. pipiens form pipiens, and USUV was detected in the legs and wings of two of the 30 Cx. pipiens form pipiens, indicating a disseminated infection ( Table 2). In addition, USUV nucleic acid was detected in a pool of six Ae. j. japonicus; the legs and wings were tested separately and were positive for USUV nucleic acid, suggesting that the infection was disseminated. Partial sequences within the NS5 gene of six USUV positive mosquito pools were determined, including 2 Cx. pipiens/Cx. torrentium pools from Linz (accession nos. MK121948 and MK121949), 3 Cx. pipiens/Cx. torrentium pools from Graz (accession nos. MK121944, MK121946 and MK121947) and 1 Ae. j. japonicus pool from Graz (accession no. MK121945). 
The sequences were 99.5-100.0% identical to each other and to the USUV sequences obtained from the birds found dead in the corresponding sites, all belonging to USUV cluster "Europe 2". The sequence identities to the previous Austrian strains were between 99.2-100.0%. All mosquito pools tested negative for WNV. Discussion In vector surveys, USUV is most frequently detected in pools of Cx. pipiens/Cx. torrentium [16]. However, in Italy for example, USUV nucleic acid was also identified in pools of the invasive mosquito, Aedes albopictus (Skuse, 1894), at relatively high frequency [21]. Other species of mosquitoes have been occasionally identified to be USUV-positive at a much lower frequency: Anopheles maculipennis (s.l.), Culiseta annulata (Schrank, 1776), Ochlerotatus caspius (Pallas, 1771) and Ochlerotatus detritus (Haliday, 1833) in Italy, and Culex perexiguus (Theobald, 1903) in Spain [16]. However, it is unknown whether these species are competent vectors. The ability to identify naturally infected vectors represents a challenge to the study of the enzootic transmission cycles of arboviruses. Additionally, female Cx. pipiens cannot be separated from Cx. torrentium by morphology, and therefore the detection of arboviral nucleic acid in mixed pools of Cx. pipiens/Cx. torrentium is ambiguous. To address these challenges, we used bird deaths to identify foci of USUV transmission during the most recent outbreak in Austria. We used gravid traps to increase the likelihood that we would sample infected mosquitoes, i.e. those that have already fed upon viremic hosts. We tested for disseminated infection in selected individual mosquitoes by analysing legs and wings separately. This also allowed us to determine the species of mosquitoes that were infected with the virus, particularly to distinguish Culex spp. using molecular tests. We found disseminated infections in Cx. pipiens form pipiens, which others have determined is a competent vector species of USUV [18,19], and thus this is most likely the principal vector involved in USUV transmission. Neither of the Cx. torrentium (n = 2) individuals were positive for USUV, although the number tested was much lower than the number of individual Cx. pipiens form pipiens tested (n = 35). The lower relative abundance of Cx. torrentium at the sites of virus activity here ( Table 1) may suggest that they are not as important as Cx. pipiens form pipiens in enzootic transmission and maintenance of the virus. In addition, we report the first natural infection of Ae. j. japonicus with USUV. In Austria, Ae. j. japonicus was first noted in southern Styria in 2011 near the Slovenian border and has also been reported from multiple countries in central Europe, including Switzerland and Italy [22][23][24][25]. It appears that multiple introductions into Europe have occurred [26] and the population in central Europe is aggressively expanding in range and local abundance [27]. It is a highly invasive mosquito and may displace endemic species where it is introduced [28]. Experimental studies have shown that Ae. j. japonicus is a competent vector of both WNV-lineage 1 in the USA [29,30] and WNV-lineage 2 in Europe [31,32], as well as chikungunya virus and dengue virus [33]. To our knowledge, the vector competence of Ae. j. japonicus for USUV has not yet been established. Despite its wide distribution and high vector competence for many arboviruses, there is only a single report of a field population of Ae. j. 
japonicus being positive for WNV, identified in the USA during the initial outbreak of WNV [34]. Ae. j. japonicus has a strong preference for mammalian hosts [35][36][37], taking blood from many mammal species including humans [38]. Although avian blood meals have not been identified from field specimens, laboratory colonies take blood when offered captive birds [39]. Therefore it is unlikely that Ae. j. japonicus will be an important vector of enzootic transmission of USUV; however, this invasive species may be a bridge vector of USUV and/or WNV. Conclusions Targeted entomological surveillance at foci of USUV-associated bird deaths supports the hypothesis that Cx. pipiens form pipiens is the major vector of USUV in Austria. The surveillance also identified that Ae. j. japonicus, an invasive species, was naturally infected with USUV. Methods Through coordinated surveillance efforts, bird deaths in 2018 were investigated at the University of Veterinary Medicine Vienna [40]. Sites with four or more dead blackbirds testing positive for USUV were selected for targeted entomological surveillance. This included a site in Linz (Upper Austria; 48°17.001'N, 14°16.663'E; 1 trap-night) and a site in Graz (Styria; 47°04.995'N, 15°2 7.865'E; 2 trap-nights). Traps were set between one and three weeks following confirmed USUV-linked bird deaths. To sample the general mosquito population a CDC standard miniature light trap ("light trap") baited with 1 kg of dry ice was used. In order to target the recently-infected mosquito population, an updraft gravid trap using a 10-day-old hay infusion as an oviposition attractant was used (both traps from J.W. Hock Co., Gainesville, FL, USA). Gravid traps baited with grass infusion are known to be effective sampling methods for both Cx. pipiens and Ae. j. japonicus [41]. Traps were set 1 h before sunset and collected 1 h after sunrise. Trap contents were cooled for 2 min at -20°C, and mosquitoes were sorted to species on dry ice using morphological identification keys [42,43]. Mosquitoes were pooled by species, site, Sella stage, and trap-night. Species identifications were confirmed by molecular barcoding: a 684 bp portion of the mitochondrial cytochrome c oxidase 1 (cox1) gene was amplified by PCR (GoTaq® G2 PCR master mix, Promega, Mannheim, Germany) using VF1d and VR1d primers [44], sequenced by the Sanger method and compared to available sequences in Gen-Bank. The legs and wings were removed from some specimens, selected haphazardly, and stored separately to test for a disseminated viral infection. Selected individual specimens identified as Cx. pipiens/Cx. torrentium were identified to species based on amplicon length polymorphism of the Ace2 gene using primers ACEpip, ACEtorr and B1246s according to a published protocol [45]. To differentiate biotypes of Cx. pipiens, a 650 bp portion of the cox1 gene was amplified by PCR (primers COIF and COIR) and then digested with HaeIII restriction enzyme (New England Biolabs, Frankfurt, Germany) according to a published protocol, which reveals a restriction site present in Cx. pipiens form pipiens but not in form molestus [46]. Mosquito pools or mosquito parts were homogenised in buffer on a bead mill (TissueLyser, Qiagen, Hilden, Germany), and nucleic acid was extracted from the cleared homogenate using a commercial kit (QIAamp viral RNA kit, Qiagen). 
Virus nucleic acid was amplified using real-time RT-PCR with a published 'universal' flavivirus primer set (PF1S and PF2Rbis) and SYBR green [47] (Luna®, New England Biolabs). Two virus-specific primer-probe sets were used to identify USUV or WNV nucleic acid [3,48]. USUV-positive samples were further tested with conventional RT-PCR [4]. Amplicons were sequenced by Sanger sequencing (Microsynth Austria GmbH, Vienna, Austria), identified by nBlast search (https://blast.ncbi.nlm.nih.gov/ Blast.cgi), and aligned with published USUV sequences from Austria (GenBank accession nos. MF063042, MF991886 and AY453411) in MEGA v.6 to determine sequence similarity.
3,147.2
2019-01-21T00:00:00.000
[ "Environmental Science", "Biology" ]
Re-injection feasibility study of fracturing flow-back fluid in shale gas mining Fracturing flow-back fluid in shale gas mining is usually treated by re-injecting into formation. After treatment, the fracturing flow-back fluid is injected back into the formation. In order to ensure that it will not cause too much damage to the bottom layer, feasibility evaluations of re-injection of two kinds of fracturing fluid with different salinity were researched. The experimental research of the compatibility of mixed water samples based on the static simulation method was conducted. Through the analysis of ion concentration, the amount of scale buildup and clay swelling rate, the feasibility of re-injection of different fracturing fluid were studied. The result shows that the swelling of the clay expansion rate of treated fracturing fluid is lower than the mixed water of treated fracturing fluid and the distilled water, indicating that in terms of clay expansion rate, the treated fracturing flow-back fluid is better than that of water injection after re-injection. In the compatibility test, the maximum amount of fouling in the Yangzhou oilfield is 12mg/L, and the maximum value of calcium loss rate is 1.47%, indicating that the compatibility is good. For the fracturing fluid with high salinity in the Yanchang oilfield, the maximum amount of scaling is 72mg/L, and the maximum calcium loss rate is 3.50%, indicating that the compatibility is better. Introduction In the process of fracturing in the shale gas mining will continue to produce a large number of fracturing flow-back fluid, with the composition of complex, high viscosity, high organic content, high solid content and low biodegradability and other characteristics [1][2]. If the disposal is improper, there is a risk of environmental pollution. And most of the shale gas mining areas for the lack of water, so most of the shale gas fracturing flow-back fluid are used to re-injection [3]. However, because of the suspended matter and the oil content will clog the stratum. The scaling ion in the fracturing flow-back fluid will deposit in the porous media of the reservoir and block the pore throat. Therefore, it has a strict standard for the treatment of fractured effluent after injection into the stratum [4][5]. At present, the water quality index of re-injection water is controlled according to the suspended matter, oil content and median value of particle size in SY/T 5329-2012 standard, the influence of injected flowback fluid on the swelling of clay and scaling of water to the injection bottom is neglected, which results in the defect of feasibility evaluation of re-injection [6]. Based on the clay swelling method and compatibility experiment, two kinds of fracturing flow-back fluid with different degree of Apparatus and Reagents Main equipment: TDL-80-2P low-speed desktop centrifuge; BT224S analytical balance; 211 pH meter; SHZ-D (Ⅲ) circulating water-type vacuum pump and supporting glass sand core filter device. Main agents: standard montmorillonite; KCl (analytical grade): fracturing back from the Yangzhou oil field and Yanchang oil field; formation water from Yangzhou oil field and Yanchang oil field. Experimental (1) Water quality ion concentration analysis method; According to the relevant methods of "water and waste water detection method" and "oil and gas field water analysis method" (SY/T 5523-2016), the water content of the treated fracturing fluid was measured and analyzed. 
Scaling ions that may cause blockages in the formation and in oilfield pipelines were measured and analyzed. (2) Method of clay swelling rate. The clay swelling rate was determined according to the centrifugal method in the determination method of clay stabilizer for fracturing and acidizing (SY/T 5762-1995). Weigh 0.50 g of bentonite powder, accurate to 0.01 g, into a 10 mL centrifuge tube, add 10 mL of the liquid to be measured, shake well, leave at room temperature for 2 h, then centrifuge at 1500 r/min for 15 min and read out the volume of the bentonite after expansion. The expansion rate was calculated from the expanded volume of bentonite in the water sample relative to that in 3% KCl solution, where V1 is the expanded volume of the clay mixed with 3% KCl solution (mL) and V2 is the expanded volume of the clay mixed with the water sample to be measured (mL). Scale mass was analyzed by the membrane filtration method. The mixed water sample was kept for 72 hours at 40 °C and then filtered through a 0.45 μm filter on a glass sand core filtration apparatus; the formation water had been adjusted to different pH values and filtered through a 0.45 μm filter beforehand. The filter was washed at least 3 times with distilled water or petroleum, dried to constant weight and weighed before and after filtration; the mass difference of the filter is the amount of scale. Analysis of water quality of the fracturing flow-back fluid. According to the SY/T 5329-2012 standard, the fracturing fluid and the treated fracturing fluid were analyzed for water quality. The treatment of the fracturing effluent is mainly carried out by flocculation sedimentation and the oxidation-reduction method. After the fracturing solution is oxidized with Fenton reagent, flocculant and coagulant are added to settle the suspended particles and realize solid-liquid separation [8][9][10] (Table 1 and Table 2). Through the analysis of water quality in Table 1 and Table 2, it can be seen that the salinity of the treated fracturing flow-back fluid from the Yangzhou oilfield is relatively low, at 2628.66 mg/L, whereas the salinity of the treated fracturing flow-back fluid from the Yanchang oilfield is relatively high, at 17187.37 mg/L. Therefore, this paper studied the feasibility of re-injection for two kinds of fracturing flow-back fluid with different salinity. Influence of water quality on clay swelling. According to the test method for determining the clay swelling rate in 2.2.2, the influence of the treated fracturing flow-back fluids with different salinity on clay stability was studied (Table 3 and Table 4). From Tables 3 and 4 it can be seen that the treated fracturing flow-back fluid from the Yangzhou oilfield, with a salinity of 2628.66 mg/L, makes the clay expand more than the 3% KCl solution does: the expansion rate increased from 314.78% to 436.52%. For the treated fracturing flow-back fluid from the Yanchang oilfield, with a salinity of 17187.37 mg/L, the degree of clay hydration is smaller and the expansion rate only increased from 87.27% to 156.36%. The lower the ion content of the fracturing effluent, the larger the expansion volume. This is because bentonite is an expansive material; the minerals in the treated fracturing flow-back fluid, especially Ca2+, Mg2+ and other high-valence ions, diffuse into the clay layers and inhibit further hydration and expansion of the clay [11].
Through the above results can also be obtained, because the salinity of the treated fracturing flow-back fluid is higher than that of the clear water, the re-injection will not make the reservoir clay the expansion, suitable as a re-injection of the formation of water. Effect of pH on Clay Stability According to the method of 2.2.2, the effect of the pH value of the fracturing solution after treatment on the stability of the clay was determined. (Table 5) 5, the Yangzhou oilfield fracturing flow-back fluid with low degree of mineral content, clay swelling volume increases with the increase of pH, the growth rate of the larger; the Yanchang oilfield fracturing flow-back fluid with high degree of mineral content, the clay swelling volume increases with the increase of pH, the growth rate is smaller. This is due to the bentonite belongs to the expansion of the material, the pH value in the fracturing flow-back fluid increases and the corresponding OHconcentration increases and the clay swelling property increases [11][12]. And the low fracturing flow-back fluid with relatively low salinity has relatively large expansion rate, so the growth rate is relatively large. The relative expansion rate of the fracturing flow-back fluid with high salinity is small, so the growth rate is relatively small. When the pH value is between 7.0~7.3, the relative expansion volume increases little. Therefore, in order to protect the reservoir, the pH value of the fracturing flow-back fluid to be re-injected should be controlled between 7.0~7.3. The experimental results show, compared with the treated fracturing fluid, the swelling rate of clay swelling is lower than the mixed water of fracturing flow-back fluid and distilled water. Therefore, the treatment of fracturing flow-back fluid re-injection and pH value control between 7.0~7.3 is more conducive to reservoir protection, and it also shows that the fracturing flow-back fluid is better than clean water re-injection in terms of clay expansion. Effect of mixed treatment of fracturing flow-back fluid and formation water on scaling According to the method of 2.2.3, the influence of the amount of scaling when the fracturing flowback fluid is mixed with formation water is determined. (Table 6 and table 7). Table 6 shows the Yangzhou oilfield after fracturing fluid with low degree of mineralization, and the formation of water compatibility test, the maximum amount of fouling in the Yangzhou oil field is 12mg·L -1 , the maximum value of calcium loss is 1.47%. The compatibility is good. Table 7 shows the Yanchang oilfield after fracturing fluid with high degree of mineralization, and the formation of water compatibility test, the maximum amount of fouling is 72mg·L-1, the maximum calcium loss rate is 3.50%, and the compatibility is better. Conclusions After treatment, the fracturing flow-back fluid will make the clay expand, and the volume of expansion is closely related to the pH value. As the degree of mineralization increases, the rate of expansion of clay will be smaller,And the salinity of fracturing flow-back fluid after treatment is higher than that of the clean water, so the relative expansion rate is lower and the feasibility of the reinjection is better. In the compatibility test, the maximum amount of fouling in the Yangzhou oilfield is 12mg·L -1 , the maximum value of calcium loss rate is 1.47%. 
The compatibility is good. The maximum amount of fouling in the Yanchang oilfield is 72 mg·L-1 and the maximum calcium loss rate is 3.50%, so the compatibility is also good.
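For illustration, the two compatibility figures of merit quoted above can be computed as follows; the exact definitions are not spelled out in the text, so the formulas below (filter mass gain per litre for the scale amount, and the relative drop of Ca2+ against the value expected from simple mixing for the calcium loss rate) should be read as assumptions, and all inputs are made-up examples.

def scale_amount_mg_per_L(filter_mass_before_mg, filter_mass_after_mg, volume_L):
    # Assumed definition: mass gained by the 0.45 um filter per litre of mixed water.
    return (filter_mass_after_mg - filter_mass_before_mg) / volume_L

def calcium_loss_rate_pct(ca_expected_mg_L, ca_measured_mg_L):
    # Assumed definition: relative drop of Ca2+ versus the mixing-rule expectation.
    return (ca_expected_mg_L - ca_measured_mg_L) / ca_expected_mg_L * 100.0

print(scale_amount_mg_per_L(120.0, 132.0, 1.0))    # 12 mg/L, like the Yangzhou maximum
print(calcium_loss_rate_pct(200.0, 197.06))        # ~1.47 %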
2,336.6
2018-02-01T00:00:00.000
[ "Engineering" ]
Long Term Stable ∆ - Σ NDIR Technique Based on Temperature Compensation Featured Application: With the development of industrial precision measurement, highly accurate and stable sensing systems usually require temperature compensation to maintain stability. In this study, the effects of ambient temperature are analyzed and a temperature compensation model is established for the NDIR gas sensing system with ∆ - Σ structure to achieve a long term stable gas measurement. Abstract: For a fast and long term stable Non Dispersive Infrared (NDIR) technology of gas concentration measurement, the temperature compensation is required. A novel proposed ∆ - Σ NDIR system was investigated and built with a closed-loop feedback system to stabilize the signal readings without temperature drift. The modulation of the infrared heater gives a corresponding signal of gas concentration based on our proposed ∆ - Σ conversion algorithm that was affected by the drift of temperatures for the infrared sensor. For our study, a new temperature compensation model was built and verified that formulates the relationship between gas concentration and temperature of sensor. The results show that our proposed ∆ - Σ can measure efficiently with half of the startup time than our previous design and maintain long term stability. Introduction The major techniques for monitoring gas concentration include the Non Dispersive Infrared (NDIR) sensor, and the solid electrolyte type of gas sensor. There are different considerations to choose the NDIR or solid electrolyte type of sensor, which is more affordable than that of NDIR. Nevertheless, the NDIR gas sensor has more technical advantages than the solid electrolyte type, likes long term stability, high accuracy, and low-power consumption [1]. According to Beer-Lambert's law, a non-dispersive infrared spectrometer is capable of measuring the light intensity absorbed by the particular gas. The basic architecture of the NDIR includes an infrared light source, an optical tube containing the target gas, and an infrared sensor with a specific wavelength filter. In general, NDIR instruments use a fixed periodic heating source and two infrared sensors with different filters for two channels. One is the reference channel used to detect the infrared radiation that not be absorbed by gas concentrations. The other channel gives a corresponding reading used to detect the infrared radiation in specific wavelength that will be absorbed by gas concentrations to achieve wavelength selectivity [2]. The Analog-to-Digital Converter integrated circuit (ADC IC) was used to convert the signal readings of these two channels from analog signals into digital signals. The advantage of the dual channel is that it can remove the factors that cause the signal to drift by comparing the differences in light intensity between different channels. Besides multi-filters type sensing, there is a new approach applied in NDIR gas sensor, which comprises of a sensor and a Fabry-Perot tunable filter by modulating two parallel mirrors to achieve selected band. One selects a band tuned to the absorption band of the gas being measured and the other is fixed at a band of reference light [3]. The Δ-Σ architecture we proposed earlier is an NDIR channel with a closed-loop feedback system that amplifies the signal readings of the sensor and compares it with the threshold voltage, then through microprocessor to count and adjust the heating duty of the light source that can quickly stabilize the entire system signal readings. 
However, temperature changes will cause the signal readings drifting especially before the system stabilized [4,5]. Therefore, we recorded the relationship between the signal readings of sensor and its temperature changes to explore the dynamic performance of signal readings. Design of Proposed Δ-Σ Gas Sensing The working principle of a typical NDIR spectrometer is based on the Beer-Lambert's theorem to describe the relationship between the infrared wavelength (λ), gas concentration (C), absorption medium length (d), and intensity of light source (I), shown in Figure 1. The NDIR spectrometer consists of three basic components: an infrared light source; an optical tube containing target gas concentrations; and an infrared sensor with a specific bandpass filter. In the experiment, we used OIR-715 as an infrared light source with a wide wavelength band (visible to 4.4 μm). The component that we use to achieve the NDIR selection feature is the thermopile sensor HTS-E21-F3.91/F4.26 which detects the band of infrared radiation absorbed by CO2 (4.26 μm) and the band non-absorption (3.91 μm). Finally, in order to maintain a stable absorption medium length (d), we installed the infrared source and infrared detector at both ends of the 5 cm chamber [6,7]. In the experiment, we used OIR-715 as an infrared light source with a wide wavelength band (visible to 4.4 µm). The component that we use to achieve the NDIR selection feature is the thermopile sensor HTS-E21-F3.91/F4.26 which detects the band of infrared radiation absorbed by CO 2 (4.26 µm) and the band non-absorption (3.91 µm). Finally, in order to maintain a stable absorption medium length (d), we installed the infrared source and infrared detector at both ends of the 5 cm chamber [6,7]. To build the ∆-Σ modulation architecture, shown in Figure 2. Firstly, the infrared light source emits thermal radiation that is proportional to the fourth power of the temperature of light, T b . The temperature was raised and controlled by the heating power P e as the bottom line in Figure 2. Then, the thermal radiation would travel through the chamber and the radiation intensity was multiplied by the factor of Beer-Lambert's law as ⊗ in the figure. Due to the reading signal being too small to be observed and susceptible to external disturbance. Therefore, we use an integrator to amplify it and high-frequency sampling to filter out the noise, and then quantize it from analog signal into digital signal through a comparator with an adjustable threshold voltage (V th ), shown in middle part of the figure. Finally, the signal is transmitted to the microprocessor which monitors the signal by preset scan time (t s ) meanwhile counts working duty of light source to obtain m (heating duty of light source) and N (total duty of light source) through the ∆-Σ conversion algorithm to obtain the correct gas concentration. In this study, it is practical to develop measurement of gas concentration with long term stability based on temperature compensation. The analytical function for the reading of first-order delta-sigma modulation and sensor's temperature was built to compensate the drift of measurement of gas concentration. To keep good long term stability for measurement, it is also feasible for higher-order delta-sigma architecture with the same approach because it follows the same thermal behavior and phenomenon. The sampling time of NDIR gas concentration is about few seconds and it is quite long enough to adopt the first-order delta-sigma architecture with low noise. 
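As a minimal illustration of the Beer-Lambert attenuation across the 5 cm chamber described above; the absorption coefficient k is a placeholder, not a value from this work.

import numpy as np

def transmitted_intensity(I0, C_ppm, d_cm=5.0, k=1e-6):
    # I = I0 * exp(-k * C * d) across the gas chamber; k is a placeholder.
    return I0 * np.exp(-k * C_ppm * d_cm)

print(transmitted_intensity(1.0, 400.0))    # relative intensity at 400 ppm CO2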
Working Principle Based on ∆-Σ Modulation
Figure 3 shows the measurement results as two curves: one is the sensing voltage of the thermal radiation measured by the sensor and the other is the output signal of the delta-sigma modulation. When the operating state of the light source alters, the output of the modulated signal changes to the corresponding state. In this process, the microprocessor running the delta-sigma algorithm plays an important role. It monitors the change of the electrical signal of the light source over the preset scan time (denoted as t_s) and identifies the current working state of the light source. After N samples, comprising the heating duty (denoted as m) and the cooling duty (denoted as N - m) of the light source, we obtain the total sampling time (N × t_s) and the total heating time (m × t_s). In our study, the signal reading is quantized as Ratio = (m × t_s)/(N × t_s) = m/N. Finally, we can obtain the relationship between the correct gas concentration and Ratio by mathematical derivation [6].
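The duty-counting step just described can be sketched as follows; only the counting of heating periods and the definition Ratio = m/N are taken from the text, while the comparator sign convention and the feedback rule are simplified assumptions.

def delta_sigma_ratio(sensor_voltage, v_th, N):
    # One conversion of N scan periods: count the heating periods m and
    # quantize the reading as Ratio = m / N.
    m = 0
    integ = 0.0
    for _ in range(N):
        heating = integ < v_th                        # assumed feedback decision
        if heating:
            m += 1
        integ += sensor_voltage(heating) - v_th       # first-order accumulation
    return m / N

# Toy example: the heater raises the averaged sensor signal above the threshold,
# so the loop settles at the duty cycle that balances the accumulated error.
print(delta_sigma_ratio(lambda on: 1.2 if on else 0.4, v_th=0.8, N=1000))   # ~0.5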
NDIR Mathematical Model Based on ∆-Σ Modulation with Temperature Drift
In order to find the relationship between Ratio (m/N) and gas concentration (C), we separately analyzed the physical phenomena of the infrared light source and the sensor during the ∆-Σ modulation.
Infrared Light Source
According to heat transfer theory, the temperature of the light source obeys Equation (1) [8]. For our proposed delta-sigma system, the scan time (t_s) is about a microsecond and the time constant of the infrared light source is about 200 ms from the datasheet. It means that dT_b and dt can be replaced by ∆T_b and t_s, respectively, with sufficient accuracy, and Equation (2) can be derived from Equation (1), where H is the heat capacity coefficient, G is the thermal conductivity coefficient, T_a is the room temperature, the subscript b refers to the infrared light source, ε is the emissivity, σ is the Stefan-Boltzmann coefficient, and P_e is the instantaneous power of the infrared light source. In addition, we discuss the working state of the infrared light source in order to describe the temperature behavior and the working duty of the light source more specifically. ∆T_b can be separated into ∆T_r and ∆T_f during the preset scan time (t_s) in Equations (3) and (4): the temperature change of the infrared light source during the heating period, ∆T_r, and the temperature change during the cooling period, ∆T_f. From Equations (3) and (4), the description of the physical behavior of the infrared light source under ∆-Σ modulation is derived, and it yields the formulation relating Ratio to the infrared light source expressed as Equation (5), where T_b(t) is the initial temperature of the light source, T_b(t+N·t_s) is the temperature after the light source has been heated and cooled for N periods, m·∆T_r is the temperature increase while the light source is heated, and (N - m)·∆T_f is the temperature decrease while the light source is cooled. In this work, after the light source has operated for a period of time, the temperature of the light source (T_b) gradually reaches a balance with the room temperature (T_a). At the same time, the initial temperature of the light source (T_b(t)) tends towards the operating temperature of the light source (T_b(t+N·t_s)). Therefore, we can obtain the approximate equation, which shows the relationship between the average operating temperature of the light source and Ratio in the steady state, as Equation (6).
Sensor
The ∆-Σ architecture is based on the net radiation signal received by the sensor, which is used to modulate the heating duty of the infrared light source.
Sensor The Δ-Σ architecture is based on the net radiation signal received by the sensor, which is used to modulate the heating duty of the infrared light source. According to Stefan-Boltzmann's law, the received infrared radiation depends on the temperature of the light source, as in Equation (7), and it is attenuated according to Beer-Lambert's law, which relates it to the gas concentration, as shown in Equation (8). The infrared radiation emitted by the sensor towards the environment is also taken into account when deriving the net radiation of the sensor, and it is related to the sensor temperature as described in Equation (9), where IO is the intensity of the infrared light source before absorption by the gas, and A and B are geometric parameters. The temperature of the sensor is easily affected by heat from the environment, from the PCB (printed circuit board) and from the infrared light source; this is the major factor to be evaluated for the drift of the gas-concentration reading in the Δ-Σ architecture. In order to describe the physical situation more completely and realistically, we also considered the emitting angle of the light source, shown in Figure 4. The net radiation of the sensor can be described as Equation (10). By substituting Equation (6) into Equation (10), we obtain an equation describing the threshold voltage (Vth), which is the key parameter in our proposed Δ-Σ architecture; it can be written as Equation (11), where a and b are the electronic circuit gain and related system parameters and S is the sensitivity of the sensor. Finally, the equation describing the gas concentration (C) is obtained by rewriting Equation (11). These expressions reveal several important factors regarding the effects of the operating time of the system (t) and the temperature Ta, which are investigated carefully in this research.
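The sketch below strings the same chain together numerically: Stefan-Boltzmann emission from the lamp, Beer-Lambert attenuation over the 5 cm path, subtraction of the sensor's own outward radiation, and the inverse step that recovers the concentration from the net signal. The functional form and every constant (the absorption coefficient, the geometric parameters A and B, the sensitivity S, the temperatures) are illustrative assumptions rather than the paper's Equations (7)-(11).

import numpy as np

SIGMA = 5.670e-8     # Stefan-Boltzmann constant (W m^-2 K^-4)
ALPHA = 15.0         # assumed absorption coefficient for CO2 at 4.26 um, per (m * volume fraction)
D = 0.05             # chamber length d = 5 cm
A, B = 1e-5, 1e-6    # assumed geometric parameters of the source and sensor apertures
S_SENS = 50.0        # assumed sensor sensitivity (V per W)

def net_sensor_signal(T_b, T_sensor, concentration):
    """Net radiation seen by the thermopile, converted to a voltage."""
    emitted = A * SIGMA * T_b**4                                 # Stefan-Boltzmann emission
    transmitted = emitted * np.exp(-ALPHA * concentration * D)   # Beer-Lambert attenuation
    outward = B * SIGMA * T_sensor**4                            # sensor's own radiation loss
    return S_SENS * (transmitted - outward)

def concentration_from_signal(v_net, T_b, T_sensor):
    """Invert the model above to recover the gas concentration (volume fraction)."""
    transmitted = v_net / S_SENS + B * SIGMA * T_sensor**4
    return -np.log(transmitted / (A * SIGMA * T_b**4)) / (ALPHA * D)

c = 0.0005                                                       # 500 ppm as a fraction
v = net_sensor_signal(T_b=700.0, T_sensor=300.0, concentration=c)
print(concentration_from_signal(v, 700.0, 300.0))                # recovers ~0.0005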
Dynamic Temperature Compensation In our previous study, we discussed the steady-state system and obtained a good linear transfer function between the Ratio and the CO2 concentration, as shown in Figure 5. In the present study, we analyze the change of Ratio, which shows a dynamic drift before the system reaches the steady state (about 40 min), as shown in Figure 6. The curve in Figure 6 has a rising phase, in which the Ratio increases, and a subsequent falling phase, in which it decreases. To compensate for this dynamic drift, we discuss the physical behavior of the infrared light source and of the sensor during the rising and falling phases. Measurements and Results In our proposed Δ-Σ system, the threshold voltage (Vth) is designed to determine the amount of average thermal radiation that the infrared light source needs to provide and to quantize the signal reading of the sensor. Infrared Light Source (1) The rising phase, caused by the thermal radiation produced by the infrared light source: after the sensing device is turned on, the operating temperature of the infrared light source gradually increases. However, the light source produces less thermal radiation in the beginning (the light intensity is weak), so the sensor tends to receive a smaller amount of thermal radiation. The circuit therefore feedback-controls the light source to increase the heating duty (Ratio = m/N ↑) and to heat more frequently in order to maintain the average amount of thermal radiation. (2) The falling phase, caused by the thermal radiation produced by the infrared light source: as the measurement time increases, the operating temperature of the infrared light source becomes higher than at the beginning. With the heating duty held constant, the light source generates more thermal radiation, which means the sensor receives a larger amount of thermal radiation. The circuit therefore feedback-controls the light source to decrease the heating duty (Ratio ↓) until the light intensity saturates. Sensor (1) The rising phase, caused by the signal-reading drift of the sensor: after the sensing device is turned on, the ambient temperature around the sensor increases because of the heating of the light source and changes of the room temperature. The increase of the ambient temperature results in an increase of the outward radiation of the sensor.
Assuming that the input radiation is constant, the net radiation of the sensor then decreases, which means the circuit feedback-controls the light source to increase the heating duty (Ratio ↑) so that the sensor still receives the average amount of thermal radiation. (2) The falling phase, caused by the signal-reading drift of the sensor: as the measuring time increases, the ambient temperature of the sensor gradually reaches a balance with the room temperature, so the changes of the outward radiation of the sensor slow down. The net radiation of the sensor gradually becomes insensitive to its temperature changes. Therefore, the drifting thermal radiation of the infrared light source is the main reason for the decrease of Ratio in this phase. In order to analyze the changes of the operating temperature of the infrared light source caused by different heating duties, we kept the CO2 concentration at 500-600 ppm and set several values of Vth (0.580, 0.680, 0.684, 0.712 V) to observe how the light source affects the dynamic drift of Ratio. Because the dynamic drift of Ratio is also due to the temperature changes of the sensor, a thermistor was installed at the sensor to record the temperature change during the measurement. There is a clear correlation between the signal reading and the temperature change during the first 40 min, as shown by the drift-of-Ratio region in Figures 6 and 7. As shown in Figure 8, the Ratio changes when Tsensor changes, which allows us to compensate for the dynamic drift of the Ratio through the changes of Tsensor. In this study, we discuss the relationship between Tsensor and the Ratio in the falling phase; it not only shows a regular correlation between Tsensor and the Ratio but also suggests a method of temperature compensation, as shown in Figure 8.
We analyzed the slope in Figure 8 and derived the temperature compensation formula as follows, where Ratio(new) is the temperature-compensated data, Ratio(old) is the uncompensated data, and slope(Vth, Tsensor, Ratio) is a variable coefficient that changes with the operating conditions. Figures 9 and 10 show two results: first, the duration of the dynamic drift after temperature compensation is significantly reduced from 40 min to 18 min, as shown by the drift-of-Ratio region after compensation; second, the Ratio no longer changes with different Tsensor, which achieves the purpose of temperature compensation. The evaluation of the temperature compensation for our proposed delta-sigma NDIR technique after 20 min is compared in Table 1. It shows that the temperature dependence of the CO2 reading after temperature compensation is reduced from an average of 2.589% to 0.115%, which demonstrates the effectiveness of the temperature compensation we built and indicates that our proposed architecture provides a practical NDIR gas-concentration measurement.
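Since the compensation formula itself is not reproduced above, the sketch below assumes the simplest linear form consistent with the description: the compensated Ratio removes the component of the drift that tracks the sensor temperature, using a per-condition slope such as the ones read off Figure 8. The slope table, the reference temperature and all numeric values are hypothetical placeholders, not the paper's fitted coefficients.

def slope_lookup(v_th: float) -> float:
    """Hypothetical per-threshold slope (Ratio change per degree C), e.g. fitted from Figure 8."""
    table = {0.580: 0.0021, 0.680: 0.0017, 0.684: 0.0016, 0.712: 0.0014}
    return table[v_th]

def compensate_ratio(ratio_old: float, t_sensor: float, t_ref: float, v_th: float) -> float:
    """Assumed linear form: Ratio(new) = Ratio(old) - slope(V_th) * (T_sensor - T_ref)."""
    return ratio_old - slope_lookup(v_th) * (t_sensor - t_ref)

# Example: raw Ratio of 0.412 read while the sensor sits 3.5 C above its reference temperature.
print(compensate_ratio(0.412, t_sensor=28.5, t_ref=25.0, v_th=0.680))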
Figure 10. Relationship between Ratio and Tsensor with temperature compensation (curves for Vth = 0.580, 0.680, 0.684 and 0.712 V); after compensation the dynamic drift time is reduced from 40 min to 18 min and the effect of temperature is removed. Conclusions This research reports a new NDIR gas-sensing device based on a Δ-Σ architecture; it uses a closed-loop feedback system to modulate the heating duty of the infrared light source. The average working temperature of the infrared light source shows a slow drift during the start-up period. We found that the dynamic drift of the signal readings is affected by the temperature change of the sensor, which is in turn related to the heat of the infrared light source and to the room temperature. A thermistor was installed to monitor the temperature change of the sensor, and its readings were analyzed together with the signal readings. To develop a long-term stable CO2 concentration measurement technique, a temperature compensation model was built, which describes the relationship between the temperatures and the gas concentration, and it was verified with our proposed Δ-Σ NDIR system to correct the effect of the sensor overheating. The results give a new approach that not only measures the gas concentration quickly but also maintains long-term stability, significantly reducing the dynamic drift time from 40 min to 18 min. Moreover, we obtain long-term stable signal readings that are not affected by temperature changes.
6,736.8
2019-01-16T00:00:00.000
[ "Engineering", "Physics" ]
Prognostic Value of Malic Enzyme and ATP-Citrate Lyase in Non-Small Cell Lung Cancer of the Young and the Elderly Background Lung cancer is the leading cause of death among malignancies worldwide. Understanding its biology is therefore of pivotal importance to improve patients' prognosis. In contrast to non-neoplastic tissues, cancer cells utilize glucose mainly for the production of basic cellular building blocks (i.e. nucleotides, amino acids, fatty acids). In cancer, malic enzyme (ME) and ATP-citrate lyase (ACLY) are key enzymes linking aerobic glycolysis and fatty acid synthesis and may therefore be of biological and prognostic significance in non-small cell lung cancer (NSCLC). Material and Methods ME and ACLY expression was analyzed in 258 NSCLC in correlation with clinico-pathological parameters including patients' survival. Results Although the overall expression of both enzymes correlated positively, ACLY was associated with local tumor stage, whereas ME correlated with the occurrence of mediastinal lymph node metastases. Young patients overexpressing ACLY and/or ME had a significantly longer overall survival, and this proved to be an independent prognostic factor. This contrasts with older NSCLC patients, in whom overexpression of ACLY and/or ME appears to predict the opposite. Conclusion In NSCLC, ME and ACLY show different expression patterns relating to local and mediastinal spread. Most importantly, we detected an inverse prognostic impact of ACLY and/or ME overexpression in young and elderly patients. It can therefore be expected that treatment of NSCLC, especially if targeting metabolic pathways, requires different strategies in different age groups. Introduction Lung cancer is the leading cause of malignancy-related death worldwide [1]. Despite tremendous advances in medical therapies, the prognosis of lung cancer patients remains poor, with a 5-year survival rate ranging between 7.9% and 16.5% [1,2]. By showing that patients benefit from different chemotherapy regimens depending on histological subtype, Scagliotti abolished the dogma of treating non-small cell lung cancer (NSCLC) as an oncologically homogeneous group [3]. Thus, the major NSCLC subgroups, adenocarcinoma (LAC), squamous cell carcinoma (SCC) and large cell carcinoma (LCC), do not only show different histological patterns but also present specific biological and molecular features. A better insight into these distinct characteristics will help to direct personalized therapies. In this context, metabolic changes associated with malignant cellular transformation are of pivotal importance. For decades, it has been known that malignant tumors produce excessive lactate even in the presence of sufficient oxygen (Warburg effect) [4]. Yet, the mechanisms behind this phenomenon are not fully understood. Malignant cells function with metabolic autonomy, and glucose, its metabolites, as well as glutamine are not only energy sources: they also serve as basic building units via the generation of key molecules needed for cellular and thus malignant tumor growth [5,6]. As enzymes of glucose metabolism represent a common downstream endpoint for various tumor driver mutations, they could also be promising targets for new chemotherapeutic agents [7]. Among these enzymes, ATP-citrate lyase (ACLY) and malic enzyme (ME) are two key players: ME serves as a source of reductive equivalents in highly cataplerotic malignant cells, and ACLY builds a physiological shunt between glucose metabolism and fatty acid synthesis [6].
We therefore analyzed the expression patterns of these two enzymes to elucidate their association with clinico-pathological features and their biological impact on patients' survival in NSCLC. Our results clearly show that functional metabolic changes in NSCLC are complex, differ between histological subtypes and predict different outcomes depending on patient age. Ethics statement The study was approved by the University Medical Center Freiburg (Ethics committee University Medical Center Freiburg, EK 10/12). Patient-related data were pseudonymized, and the results obtained in this study did not influence patients' treatment. Archived material was used at least three years after initial diagnosis. By signing the treatment contract with the University Medical Center Freiburg, each patient agrees that his/her pseudonymized tissue(s) may be subject to retrospective research trials not interfering with or influencing current treatment options. The ethics committee of the University Medical Center Freiburg thus approved that no individual study-specific consent of each patient had to be obtained. Cohort 258 patients suffering from NSCLC were included in this study. Patients underwent surgical treatment between 1990 and 2007 (Department of Thoracic Surgery, University Medical Center Freiburg; S1 Dataset) and had not received neoadjuvant therapy. Fixation, grossing and paraffin embedding were performed according to routine protocols. All cancer cases were reclassified according to the current WHO classification [8], and staging was reassessed in concordance with the latest UICC classification [9]. Tissue microarrays (TMA) were constructed with a core diameter of 2 mm. From all specimens, three TMA cores were taken from different sites to avoid bias from intratumoral heterogeneity. A TMA of 36 corresponding non-neoplastic lung tissues served as a control set (S1 Table: summary of clinico-pathological data). For both immunohistochemical stains, ACLY and ME, the protocols were validated for specific staining by omission of the primary antibodies. These validation procedures did not show unspecific chromogen reactions. Enzyme expression was considered positive if specific cytoplasmic staining was detected. For ACLY, specific nuclear positivity was also assessed. Immunohistochemical scoring followed previously described protocols [10,11] and was evaluated in analogy to internationally accepted scoring of predictive markers [12,13]. Staining intensity was evaluated semi-quantitatively using a 4-tiered scoring system (Fig 1). The percentage of positive tumor cells was determined by considering all positive tumor cells in relation to their absolute number; percentage figures were rounded to the next decimal. Nuclear and cytoplasmic expression of ACLY was evaluated separately. Statistics For all statistical analyses, the mean values of the three TMA cores of each case were used. Differences in enzyme expression were evaluated by non-parametric tests. Survival analysis included Kaplan-Meier curves and log-rank tests. For multivariate analyses, Cox regression models were used. All statistical analyses were performed using the SPSS 21.0 software suite. The level of significance was set to 5% (i.e. p < 0.05). The overall level of significance was adjusted for multiple testing using the Benjamini-Hochberg method [14] (S2 Table). ACLY and ME are upregulated in NSCLC ACLY expression in tumor tissue was detected in both the cytoplasm and the nucleus (Fig 1), whereas ME was detectable only in the cytoplasm (Fig 1).
Immunohistochemical enzyme expression was significantly higher in tumor cells (ACLY nucl.: 19.92 ± 21.57; ACLY cytopl.: 22.23 ± 21.57; ME: 52.49 ± 37.86) than in corresponding non-neoplastic lung tissue (ACLY nucl.: 16.57 ± 16.75; ACLY cytopl.: 11.43 ± 20.97; ME: 9.53 ± 10.50; p < 0.001). ME but not ACLY is differentially expressed in histological NSCLC subtypes As the differentiation between LAC and SCC of the lung is of therapeutic importance, we analyzed expression patterns in relation to these two histological NSCLC subtypes. ME expression was higher in SCC compared to LAC (p < 0.001). ACLY did not show a significant correlation with histological subtype. Furthermore, a significantly higher expression of ME, but not of ACLY, was found in smokers compared to non-smokers (p = 0.012). ME but not ACLY expression is associated with mediastinal metastatic events To investigate the relationship of ME and ACLY expression with systemic tumor expansion, we separately analyzed NSCLC of nodal-positive patients in correlation with the location of lymph node metastases. The presence of mediastinal lymph node metastases was significantly correlated with higher expression of ME in the primary tumor tissue (p = 0.041) but not of ACLY (cytoplasmic: p = 0.511; nuclear: p = 0.446). This association was particularly strong in LAC (ME: p = 0.030). Overexpression of either ME, ACLY or both is an independent prognostic factor To avoid statistical bias, the mean values of ME and ACLY expression were used for dichotomization. In the overall analysis, no significant correlation of ME and ACLY overexpression with patients' survival was found. Similar results were obtained in subgroup analyses according to smoking habits, sex, histological grading, and pT- or pN-stage. Age stratification was performed using the median age (65 years) at NSCLC diagnosis. No significant correlation between age and pT, pN or overall UICC stage was detected. In young patients, nuclear ACLY overexpression proved to be associated with a favorable overall survival (p = 0.029), while this was not the case in patients older than 65 years (p = 0.626). ME overexpression in these two age subgroups only showed a statistical trend in older patients (p = 0.093). Young patients with overexpression of ME or nuclear ACLY or both in their tumors had a significantly longer overall survival compared to those without overexpression of these enzymes (p = 0.007; Fig 2). Multivariate analysis, which included UICC stage, the only additional prognosticator in this patient group, proved this to be an independent prognostic factor (p = 0.002; Table 1). On the other hand, overexpression of either or both of the two enzymes resulted in shorter overall survival in older patients (Fig 2, p = 0.058). Discussion Due to its high incidence and mortality, lung cancer still remains one of the major health burdens worldwide. It is therefore of pivotal importance to better understand its biology in order to develop new suitable treatment options. The metabolic switch of tumor cells to aerobic glycolysis is a well-known event. On the one hand, it serves to facilitate the uptake and incorporation of nutrients into basic cellular building blocks. On the other hand, it results in the production of lactate, which facilitates metastasis formation and therapy resistance [15,16]. In several malignant tumors it has been shown that ACLY is not only elementary for de-novo fatty acid synthesis [17], but also its rate-limiting step [18][19][20][21].
As one of the key enzymes of de-novo fatty acid synthesis, ACLY generates cytosolic acetyl-coenzyme A (acetyl-CoA) [22,23] and oxaloacetate. The latter is reduced to malate by malate dehydrogenase. The cytosolic isoform of malic enzyme converts malate into pyruvate [24]. Pyruvate that is not shuttled into the mitochondrion to generate oxaloacetate is further converted into lactate [6]. ACLY and ME expression may therefore be altered in malignant tumor cells compared to non-neoplastic tissues. Comparing non-neoplastic lung and NSCLC tissue, we found a statistically significant increase of both ACLY and ME expression within neoplastic cells. This is in concordance with other findings concerning altered carbohydrate metabolism in cancer [10,11,[25][26][27]. In our cohort, ME and ACLY expression, nuclear as well as cytoplasmic, showed a positive correlation. The fact that the correlation coefficients are rather small may reflect complex interrelations between several metabolic enzymes as well as the high histomorphological heterogeneity of NSCLC. The comparison of the immunohistochemical expression patterns of ME and ACLY further supports this statement, as only ME expression significantly differed between LAC and SCC. This kind of differential expression pattern in NSCLC subtypes has also been shown for other enzymes related to altered cancer cell metabolism [10,11,28]. Since SCC is often found in heavy smokers and LAC is a so-called typical non-smoker carcinoma, the significantly higher expression of ME in tumors of patients with a smoking history is not surprising. This can be due to different hypoxic states of the tumors and/or patients, leading to different metabolic states in NSCLC, and most probably in SCC compared to LAC, too. The majority of smoking-associated NSCLC possess p53 mutations, and thus the frequency of p53 mutations in SCC is higher compared to LAC [29]. Recently, Jiang could verify that ME expression is regulated by p53: according to their findings, p53 is responsible for downregulating ME expression [30]. These findings are in good concordance with ours, namely that ME is not only overexpressed in NSCLC compared to tumor-free lung tissue but also that ME expression is higher in SCC compared to LAC and in smokers compared to non-smokers. Striking to us was that ACLY and ME revealed different expression patterns depending on local or mediastinal tumor spread. While ACLY was negatively correlated with local tumor extension measured by pT stage as well as metric tumor size, ME only showed a significant correlation with mediastinal metastatic events in comparison to hilar lymph node metastases (pN1 vs pN2/pN3). Changes in tumor metabolism therefore seem to be complex and may differ not only between histological subtypes but also according to local or systemic tumor spread. Furthermore, recent research has detected additional functions of ACLY besides its involvement in glucose metabolism: ACLY is also a key player in histone acetylation. These findings suggest a link between growth-factor-driven changes in cancer metabolism and gene expression, which is realized by ACLY [31]. (Fig 2: Overexpression of ACLY and ME in correlation with patients' overall survival. A) Overexpression of ACLY is associated with a better outcome in young patients but not in older patients. B) ME overexpression shows a statistical trend towards a poorer overall survival in older patients. C) Young patients whose tumors revealed either ACLY and/or ME overexpression had a significantly longer overall survival compared to those without overexpression of either enzyme, whereas in older patients overexpression of either ACLY and/or ME may be associated with poorer overall survival.)
In analogy to Wellen, Londono Gentile recently published that ACLY at least in part regulates DNA methyltransferase-1 (DNMT1) [32]. Different localizations of ACLY may therefore reflect different activities within the cell, i.e. cytoplasmic ACLY is predominantly involved in cancer cell metabolism, while nuclear ACLY is predominantly involved in the regulation of gene expression [31,33]. For this reason, we assessed the different localizations of ACLY, i.e. nuclear and cytoplasmic, separately. In concordance with our results, Migita showed that ACLY was significantly higher expressed in LAC compared to non-neoplastic lung tissue [22]. In their publication, ACLY was also a prognostic factor, as they showed that high levels of ACLY were associated with a poorer outcome [22]. We could not reproduce this finding of Migita; in contrast to their results, nuclear ACLY overexpression was of significant benefit in the young patients of our cohort. Compared to Migita, we included not only LAC but also SCC as well as LCC, and the immunohistochemical analysis of ACLY expression was assessed not only for staining intensity but also according to the fraction of positive cells and with regard to the subcellular localization of ACLY. In addition to these analytical differences, Migita investigated the expression of phosphorylated ACLY, whereas our antibody is directed against ACLY regardless of its phosphorylation status. Furthermore, in our detailed subgroup analyses we could show that patient age may also influence NSCLC biology. While high levels of either ACLY and/or ME were a good prognostic factor in patients younger than 65 years, a statistical trend to the opposite was detected in elderly patients. This may indicate different changes in tumor metabolism and enzyme function. Several metabolic changes are implicated with older age, such as increasing hypoxia and a higher incidence of diabetes mellitus type 2. In a large cohort study, type 2 diabetes mellitus was associated with a higher risk of developing several types of cancer, including lung carcinoma [34]. In this context, both ACLY and ME have been shown to be important for glucose-related insulin secretion [24]. Given that the incidence of type 2 diabetes, as well as generalized hypoxia due to impaired lung function, is higher in older patients, our finding of a changing impact of ACLY and ME in NSCLC is functionally supported. The diversity of NSCLC in different age populations is known for genetic aberrations and incidences of driver mutations [35,36]. Thus, our findings that changes of metabolic enzymes can have a different influence on cancer biology in NSCLC further support the notion that lung cancer arising in young patients may be biologically and genetically different from that of older patients. In vitro and in vivo studies indicate that new pharmacological agents inhibiting ACLY can lead to a significant decrease in cellular and tumor growth [22,33,37,38]. Since our results show that patients older than 65 years of age with overexpression of ACLY and/or ME tend to have a poorer prognosis, they suggest that older patients may profit most from these ACLY inhibitors. Concluding, the expression patterns of the metabolic enzymes ACLY and ME have a different biological impact on survival in NSCLC patients.
While in young patients overexpression of either ACLY or ME is indicative of a favorable overall survival, it tends to have the opposite effect in older patients. With the development of new inhibitory drugs directed against ACLY, our results support new treatment options with a special focus on aged NSCLC patients. Supporting Information S1
3,712.8
2015-05-11T00:00:00.000
[ "Medicine", "Biology" ]
Advances in Interdisciplinary Researches to Construct a Theory of Consciousness The interdisciplinary researches for a scientific explanation of consciousness constitute one of the most exciting challenges of contemporary science. However, although considerable progress has been made in the neurophysiology of states of consciousness such as sleep/waking cycles, the investigation of the subjective and objective nature of consciousness contents still raises serious difficulties. Based on a wide range of analyses and experimental studies, approaches to modeling consciousness currently focus on both philosophical, non-neural and neural approaches. Philosophical and non-neural approaches include the naturalistic dualism model of Chalmers, the multiple drafts cognitive model of Dennett, the phenomenological theory of Varela and Maturana, and the physics-based hypothesis of Hameroff and Penrose. The neurobiological approaches include the neurodynamical model of Freeman, the visual system-based theories of Lamme, Zeki, Milner and Goodale, the phenomenal/access hypothesis of Block, the emotive somatosensory theory of Damasio, the synchronized cortical model of Llinas and of Crick and Koch, and the global neurophysiological brain model of Changeux and Edelman. There have also been many efforts in recent years to study artificial intelligence systems such as neurorobots and some supercomputer programs, based on the laws of computational machines and on the laws of the processing capabilities of biological systems. This approach has proven to be a fertile physical enterprise to check some hypotheses about the functioning of brain architecture. Until now, however, no machine has been capable of reproducing an artificial consciousness. Introduction By separating the immaterial mind and the material body, Descartes (1596-1650) traced a dividing line between two incommensurable worlds: the spiritual and the material world [1]. From this ontological dualism was born an epistemological dualism according to which matter must be known by science and the mind by introspection. During the 18th century, the definition of the mind started to emerge together with that of consciousness, recognized as the instrument of knowledge of the world which surrounds us as well as of our interiority. Hume (1711-1776), for example, defined the mind "as an interior theatre on which all that is 'mental' unravels in a chaotic way in front of an interior eye whose eyelids would not blink" [2]. On the contrary, the materialist philosophers of the 18th century gave up the Cartesian immaterial substance, but they kept the metaphor of the body-machine and extended this metaphor to all human functions, including thought. Far from showing the immateriality of the soul, indeed, the Cartesian cogito would rather prove, for La Mettrie (1709-1751), Diderot (1713-1784) and Holbach (1723-1789), that matter can think. In this line of thought, the transformist/proto-evolutionist and Darwinian theories of the 18th and 19th centuries, by introducing the idea of a biological origin of man, opened the vast field of investigation of 'living matter' and the problem of the 'when' and 'how' of the mind [3].
However, it was necessary to await the rise of psychology in the mid-nineteenth century for consciousness to become the central object of a new discipline claiming to be a science and asserting its independence from philosophy. This introspective approach to the mind dominated the investigations of researchers such as Hermann von Helmholtz (1821-1894), Wilhelm Wundt (1832-1920) and William James (1842-1910) [4][5][6]. At the end of the 19th century, in his founding text for a phenomenology, Husserl (1859-1938) exposed his thesis on the nature of consciousness [7]. Following Brentano's work, he adopted the concept of intentionality and gave it a central role concerning the transcendental ego. He taught that consciousness is never empty: it always aims at an object and, more generally, at the world. The ego always carries in it the relationship with the world as intentional aiming. The early 20th century saw the eclipse of consciousness from scientific psychology [8,9]. Without denying the reality of subjective experiences, the strategy of the behaviorists was to set consciousness aside from the direct field of investigation and to regard the brain as a "puzzle-box". They then put "between brackets" the unresolved problems which encumbered psychology in order to study stimuli and responses, i.e. behaviors, which reduces the analysis to rigid relations between inputs (stimuli) and outputs (movements). In the 1960s, the grip of behaviorism weakened with the rise of cognitive psychology [10,11]. However, despite this renewed emphasis on explaining cognitive capacities such as memory, perception and language comprehension, consciousness remained a largely neglected topic until the major scientific and philosophical researches of the 1980s. Research on the correlations between identifiable mental activities and objectivable cerebral activities succeeded the traditional questioning on the relation between the mind and the brain. These questions were approached by neurophilosophers, neurobiologists and researchers in artificial intelligence as familiar with the epistemological and ontological questions as with the methodological and empirical ones. My purpose here is to review the main attempts to provide an adequate explanatory basis for consciousness.
Difficulties in Philosophically Tackling the Problem of Consciousness Traditionally, it was supposed in philosophy of mind that there was a fundamental distinction between the dualist philosophers, for whom there are two kinds of phenomena in the world, the mind and the body, and the monist philosophers, for whom the world is made of only one substance. Although dualism can be a dualism of substance, as Descartes thought, the majority of dualist philosophers currently adopt a dualism of property, which admits that matter can have both mental and physical properties. Likewise, although monism can be an idealism (in the sense of the philosopher Berkeley), practically all monist philosophers at the present time are materialists. Dualism of property asserts the existence of conscious properties that are neither identical with nor reducible to physical properties, but which may nevertheless be instantiated by the very same things that instantiate physical properties. The most radical dualism is "psychological parallelism", which seeks to account for the psychophysiological correlations by postulating that the mental state and the cerebral state correspond to each other without acting on one another [12]. A less radical version, "epiphenomenalism", recognizes the existence of causal influences of the brain on mental states, but not the reverse [13]. In contrast to dualism, monism claims that everything that exists must be made of matter. A modern version of materialism is physicalism. One type of physicalist materialism is the radical current of thought called "eliminativism", which rejects the very notion of consciousness as muddled or wrong-headed. It affirms that mental states are only temporary beliefs destined to be replaced by neurobiological models of research [14]. Another form of physicalist materialism is the thesis of strong identity. It affirms the existence of an internal principle of control (the mental principle) which is nothing other than the brain: the mental is reducible to biological properties, which themselves are reducible to physics. However, the thesis most commonly adopted is that of functionalism. Functionalism has been proposed to answer an intriguing question: how can one explain, if the theory of identity is admitted, that two individuals can have different cerebral states and nevertheless have, at one precise time, exactly the same mental state? The response of the functionalists is as follows: what is identical in the two different cerebral occurrences of the same mental state is the function. Whatever the physical constitution of the cerebral state, several mental states are identical if their causal relations are the same. For the functionalists, mental states are thus defined by their functional role inside the mental economy of the subject [15].
To accept traditional dualism is to agree to make a strict distinction between mental and physical properties. In other words, it is to give up the unified neuroscientific theory which one can hope to obtain one day. But to accept the solutions recommended by physicalist materialism is worse still, because those end up denying the obvious fact that consciousness has internal subjective and qualitative states. For this reason, certain philosophers have tried to solve the problem either by adopting a mixed theory, both materialistic and dualistic, or by denying the existence of subjective states of consciousness. David Chalmers, for example, adopted the first solution. Chalmers is known for his formulation of the distinction between the "easy" problems of consciousness and the single "hard" problem [16,17]. The essential difference between the easy problems and the hard (phenomenal) problem is that the former are at least theoretically answerable via the standard strategy of functionalism. In support of this, in a thought experiment, Chalmers proposed that if zombies, complete physical duplicates of human beings lacking only qualitative experience, are conceivable, then they must be logically possible, and subjective personal experiences are not fully explained by physical properties alone. Instead, he argued that consciousness is a fundamental property, ontologically autonomous of any known physical properties. Chalmers therefore described the mind as having "phenomenal" and "psychological" aspects. He proposed that the mind can be explained by a form of "naturalistic dualism". That is, Chalmers accepted the analysis of the functionalists while introducing the concept of irreducible consciousness into his system. According to him, since the functional organization provides the elements of mental states in their non-conscious forms, consciousness must be added to this organization.
Other philosophers, on the contrary, claimed that closing the explanatory gap and fully accounting for subjective personal experiences is not merely hard but rather impossible. This position was most closely associated, for example, with Colin McGinn, Thomas Nagel [18,19] and Daniel Dennett [20]. In particular, Dennett argued that the concept of qualia is so confused that it cannot be put to any use or understood in any non-contradictory way. Having related consciousness to properties, he then declared that these properties are actually judgements of properties. That is, he considered judgements of properties of consciousness to be identical to the properties themselves. Having identified "properties" with judgements of properties, he could then show that since the judgements are insubstantial, the properties are insubstantial and hence the qualia are insubstantial. Dennett therefore concluded that qualia can be rejected as non-existent [20]. For Dennett, consciousness is a mystifying word because it supposes the existence of a unified centre piloting thoughts and behaviors. For him, the psyche is a heterogeneous unit which combines a series of mental processes that one knows little about, such as perception, production of language, learning, etc.: we attribute, to others and to ourselves, "intentions" and a "consciousness" because our behaviors are finalized [21,22]. In other writings, Dennett defended an original thesis on free will [23]. For that, he rested on evolutionary biology, on cognitive science, on the theory of cellular automata and on memetics [24,25]. Taking the opposite course to the argument of those who say that evolutionary psychology, together with memetics, necessarily implies a world deprived of any possibility of choice, Dennett estimated on the contrary that there is something special in the human subject. According to him, the theory of evolution supports the selective advent of avoiding agents. These agents have the capacity to extract information from the environment to work out strategies in order to avoid risks and to choose judicious behaviors. Now, the performance of this capacity is much more important for man than for the animal, because each human individual memorizes an important quantity of social and cultural information. It is therefore absurd to conceive a linear determinism similar to that of the "demon" of Laplace. For Dennett, free will must rather be conceived as the chaotic determinism of a vast neuromimetic network, in which the information received as input is combined according to its respective weights to give completely unforeseeable and non-reproducible, but non-random, outputs.
Difficulties in Scientifically Tackling the Problem of Consciousness One of the difficulties in scientifically tackling the problem raised by consciousness comes from the obvious fact that it is the originating principle from which are generated the categories of the interpersonal world and the subjectivity of each personal world. Consciousness is not a substance but an act, in itself non-objectivable; it escapes any representation. What is disconcerting about consciousness, notes Edelman, is that it does not seem to arise from behavior. It is, quite simply, always there [26]. Another difficulty comes from the fact that consciousness is a Janus Bifrons, in the sense that it has both an ontology from the first-person perspective and an ontology from the third-person perspective, irreducible to one another [27]. It has a first-person perspective because mental states are purely subjective interior experiences of each moment of the life of humans and animals. The principal sorts of first-person data include visual, perceptual, bodily and emotional experiences, mental imagery and occurrent thoughts. But consciousness also has an ontology from the third-person perspective, which concerns the behavior and the brain processes of conscious systems [22]. These behavioral and neurophysiological data relevant to the third-person perspective provide the traditional material of interest for cognitive psychology. The principal sorts of third-person data include perceptual discrimination of external stimuli, integration of information across sensory modalities, automatic and voluntary actions, levels of access to internally represented information, verbal reportability of internal states and differences between sleep and wakefulness. The problem of scientifically approaching the phenomenon of consciousness is even more complex because first-person experience is common to qualia (singular 'quale'), intentionality and self-consciousness. Introduced by Lewis [28], the term qualia is either used in contemporary usage to refer to the introspectively accessible phenomenal aspects of our mental lives [29], or used in a more restricted way so that qualia are intrinsic properties of experiences that are ineffable, non-physical and given incorrigibly [30], or still used by some philosophers such as Whitehead, who admits that qualia are fundamental components of physical reality and described the ultimate concrete entities in the cosmos as being actual "occasions of experience" [31]. Experiences with qualia are involved, for example, in seeing green, hearing loud trumpets or tasting liquorice, and in bodily sensations such as feeling a twinge of pain, feeling an itch, feeling hungry, etc. They are also involved in felt reactions, passions or emotions such as feeling delight, lust, fear or love, and in felt moods such as feeling elated, depressed, calm, etc. [32,33]. Intentionality and qualia necessarily coexist in the generation of conscious states, but the aspect "qualia" may be distinguished from the aspect "intentionality" insofar as the perception of an object, the evocation of a memory or an abstract thought can be accompanied by different affective experiences (joy or annoyance, well-being or discomfort, etc.) [34].
Neurophysiological Studies of Neural Networks and Neuro-Mental Correlations There are two common, but quite distinct, usages of the term consciousness, one revolving around arousal or states of consciousness and another around the contents of consciousness or conscious states. States of consciousness are states of vigilance (i.e., the continuum of states which encompasses wakefulness, sleep, coma, anesthesia, etc.) with which attention is associated. They are cyclic and can last several hours. In contrast, conscious states (a percept, a thought, a memory or subjective experiences such as qualia) are neither cyclic nor reproducible and can last only a few minutes, seconds or sometimes milliseconds. Conscious states are situated in a spatiotemporal context. They may refer to the past or the future but are always experienced in the present. They involve conscious representations and, more or less explicitly, global models of self, alter-ego and world. States of consciousness and contents of consciousness therefore present very unequal difficulties to neuroscientific investigation [35][36][37]. Neurophysiological Studies of States of Consciousness Considerable progress has been made during the last decade in the neurophysiology of states of consciousness. In particular, impressive progress has been made in the neurophysiological investigation of states of vigilance. Notably, the molecular mechanisms of distinct sleep/wake cycles have been thoroughly studied [38]. The neuroanatomical systems, the cellular and molecular mechanisms, and the principal types of neurotransmitters involved in these mechanisms have for the most part been identified. In particular, an important line of research has investigated arousal in altered states of consciousness, for instance in and after epileptic seizures, after taking psychedelic drugs, during global anesthesia or after severe traumatic brain injury. These studies demonstrate that a plethora of nuclei with distinct chemical signatures in the thalamus, midbrain and pons must function for a subject to be in a sufficient state of brain arousal to experience anything at all. In particular, sleep/wake cycles essentially depend on an anatomical system which comprises structures of the brainstem, thalamic and hypothalamic nuclei, and the nucleus of Meynert. Awakening into the vigilant state correlates with a progressive increase in regional cerebral blood flow, first in the brainstem and thalamus, then in the cortex, with a particularly important increase in prefrontal-cingulate activation and functional connectivity [38,39]. Anesthesia, sleep, the vegetative state and coma are all associated with modulations of the activity of this thalamocortical network. Although most dreams occur in paradoxical sleep, the neurobiological mechanisms of wakefulness and paradoxical sleep are identical at the thalamocortical level; the only difference between the two states is at the brainstem level. What differentiates these states is the relationship with the environment: wakefulness brings into play motor and sensory interactions with the external world, while paradoxical sleep is relatively independent of the external world. Thus, dreaming may be considered to be an incomplete form of consciousness, uncontrolled by the environment, mainly reflecting internal factors [38].
Neurophysiological Studies of Contents of Consciousness Contrary to the study of consciousness states, the study of consciousness contents raises very many problems. Many difficulties are due to the brevity and poorly reproducible nature of subjective experiences. In addition, the mechanisms of conscious thoughts often result from processes that are at the same time conscious and unconscious, which coexist and even interact (language, for example, brings into play the unconscious use of linguistic routines). To explain the contents of consciousness from the third-person perspective, we actually need to specify the neural and/or behavioral mechanisms that perform the functions [27]. The availability of behavioral data is reasonably straightforward, because researchers have accumulated a rich body of behavioral data relevant to consciousness. But the body of neural data obtained to date is correspondingly more limited because of technological limitations [33,40]. To study neural mechanisms, researchers currently use a variety of neuroscientific measurement techniques, including brain imaging via functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), single-cell recording through insertion of electrodes, and surface recording through electroencephalography (EEG) and magnetoencephalography (MEG) [27,41,42]. However, though these approaches seem quite promising, many experimental findings proved not to be univocal and must be compared and integrated with findings obtained from other approaches. For example, when one sees a face, there is much activity (for example, on the retina and in the early visual cortex) that seems explanatorily redundant for the formation of the conscious percept of the face [37]. To specify precisely the neuronal basis of such a conscious perception, psychologists have perfected a number of techniques (masking, binocular rivalry, continuous flash suppression, motion-induced blindness, change blindness, inattentional blindness) [43]. In this design, one keeps as many things as possible constant, including the stimuli, while varying the conscious percept, so that changes in neural activation reflect changes in the conscious percept rather than changes in the stimuli. For instance, a stimulus can be perceptually suppressed for seconds of time: the image is projected into one of the observer's eyes but is invisible, not seen. In this manner, the neural mechanisms that respond to the subjective percept rather than to the physical stimulus can be isolated, permitting the footprints of visual consciousness to be tracked in the brain. In some perceptual illusion experiments, on the contrary, the physical stimulus remains fixed while the percept fluctuates. A good example is the Necker cube, whose 12 lines can be perceived in one of two different ways in depth.
Contrary to the techniques used for studying third-person experiences, methodologies for investigating first-person experiences (in particular qualia) are relatively thin on the ground, and formalisms for expressing them are even thinner. The most obvious obstacle to the gathering of first-person data concerns the privacy of such data. Indeed, first-person data are directly available only to the subject having the experiences. To others, these first-person data are only indirectly available, mediated by observation of the subject's behavior or brain processes. In practice, the most common way of gathering data about the conscious experiences of other subjects is to rely on their verbal reports. However, verbal reports have some limits. Some aspects of conscious experience (e.g. the experience of emotion) are very difficult to describe. Moreover, verbal reports cannot be used at all in subjects without language, such as infants and animals. In these cases, one needs to rely on other behavioral indicators. For example, if an individual presses one of two buttons depending on which of two ways the creature perceives a Necker cube at a given time, this button pressing is a source of first-person data. A second obstacle is posed by the absence of general formalisms with which first-person data can be expressed. Researchers typically rely either on simple qualitative characterizations of the data or on simple parameterizations of them. These formalisms suffice for some purposes, but they are unlikely to suffice for the formulation of systematic theories [27,33]. Approaches to Researching Animal Consciousness There is now abundant and increasing behavioral and neurophysiological evidence consistent with, and even suggestive of, conscious states in some animals. This is due to the fact that human studies involving the correlation of accurate reports with neural correlates can provide a valuable benchmark for assessing evidence from studies of the behavior and neurophysiology of some animals [44]. Relevant forms of report included analysis of responses to binocular rivalry in the case of primates, vocalization in the case of birds such as African grey parrots, and coloration and body patterning in the case of cephalopods such as Octopus vulgaris. Rhesus macaque monkeys, for example, were trained to press a lever to report perceived stimuli in a binocular rivalry paradigm. The results from these studies were consistent with evidence from humans subjected to binocular rivalry and magnetoencephalography. They suggested that consciousness of an object in monkeys involves widespread coherent synchronous cortical activity. Likewise, similarities have been found among the functional circuitry underlying the organization and sequencing of motor behaviors related to vocalization in birds and mammals capable of vocal learning. Much of the neural basis for song learning in some birds was found to reside in an anterior forebrain pathway involving the basal ganglia, in particular a striatal neuronal area resembling that present in the mammalian striatum. These homologies are strongly suggestive of neural dynamics that support consciousness in birds. The case of cephalopod mollusks is much less clear. Indeed, a cephalopod such as Octopus possesses a large population of sensory receptors (they communicate with a nervous system containing between 170 and 500 million cells) and numerous nucleus-like lobes in its brain. Its optic lobe, containing as many as 65 million neurons, plays a critical role in higher motor control and in the establishment of
Moreover, a number of other lobes appear to be functionally equivalent to vertebrate forebrain structures, though their organization bears little resemblance to the laminar sheets of mammalian cortex. Recently, laboratory tests and observations in a natural environment have shown that Octopus is heavily dependent on learning and might even form simple concepts. This strongly suggests that cephalopod mollusks have a certain form of primary consciousness [45]. Approaches to Building Artificial Intelligence Systems Artificial Intelligence has not been sparing of metaphors concerning the functioning of the human mind [46]. In 1950, Alan Turing tackled the problem of computationalism by proposing his famous test to establish whether a machine can be considered as intelligent as a human [47]. So far, however, no computer has given responses totally indistinguishable from human responses. It appears that this computational cognitivism is limited insofar as it is founded on the formal character of calculation (in this view, to think is to process data, i.e. to calculate, to manipulate symbols). This approach is not very different from the one in which computer simulations and mathematical models are used to study systems such as stomachs, planetary movements, tornadoes, and so on. In contrast, connectionism equates thinking with the operation of a network of neurons and argues that every cognitive operation is the result of countless interconnected units interacting among themselves, with no central control. Connectionist networks are in general adaptive and allow the study of learning mechanisms [48,49]. Such an approach is based on the view that the nervous system itself computes [50]. In this area, a line of research has developed that consists in setting equations to work in Darwinian competition with one another, by means of evolutionary algorithms inspired by the modeling of certain natural systems (for example, the competition between social insects in the construction of a nest) [51][52][53]. These systems, which are generic population-based metaheuristic optimization algorithms, are able to solve problems using mechanisms inspired by biological evolution, such as reproduction, mutation, recombination and selection. The principle on which they are founded consists in initializing a population of individuals in a way that depends on the problem to be solved (the environment), and then evolving this population from generation to generation using operators of selection, recombination and variation.
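As an illustration of the generational loop just described, the following is a minimal, self-contained sketch of an evolutionary algorithm. The toy fitness function (counting ones in a bit string), the population size and the mutation rate are arbitrary choices made for this example and are not taken from the works cited above.

```python
import random

# Toy problem: maximize the fitness of a bit string (the count of ones).
GENOME_LENGTH = 20
POPULATION_SIZE = 30
MUTATION_RATE = 0.02
GENERATIONS = 50

def fitness(genome):
    return sum(genome)

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def select(population):
    # Tournament selection: the fitter of two random individuals survives.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def recombine(parent1, parent2):
    # Single-point crossover.
    point = random.randint(1, GENOME_LENGTH - 1)
    return parent1[:point] + parent2[point:]

def mutate(genome):
    # Each bit flips independently with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for generation in range(GENERATIONS):
    population = [mutate(recombine(select(population), select(population)))
                  for _ in range(POPULATION_SIZE)]

print("Best fitness:", max(fitness(g) for g in population))
```

The same skeleton of initialization, evaluation, selection, recombination and variation underlies most of the population-based metaheuristics mentioned above; what differs between them is mainly how individuals are encoded and how the variation operators are defined.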
A number of cyberneticians also try to approach this difficult problem with multi-agent systems (MAS), sometimes described as "massively multi-agent" systems [54][55][56][57][58][59][60]. An agent-based model is a class of computational models for simulating the actions and interactions of autonomous agents (either individuals or collective entities such as organizations or groups) with a view to assessing their effects on the system as a whole. Such a model simulates the simultaneous operations and interactions of multiple agents located in an environment made up of objects that are not agents and do not themselves evolve, in an attempt to re-create and predict the appearance of complex phenomena. The agents can thus substitute for the programmer and even produce unexpected results. In this vein, a novel area of great interest is the construction of robotic organisms. One essential property of neurorobots is that, like living organisms, they must organize the unlabeled signals they receive from the environment into categories. This organization of signals, which in general depends on a combination of sensory modalities (e.g. vision, sound, taste or touch), is a perceptual categorization that, as in living organisms, makes object recognition possible on the basis of experience but without a priori knowledge or instruction. Like the brain, neurorobots operate according to selectional principles: they form categorical memory, associate categories with innate values, and adapt to the environment [61]. An important problem arises, however, with these new lines of research: any constructive and exhaustive approach to artificial consciousness must define a system which, like human primary consciousness, has access to the meaning of its own knowledge. To investigate this question, as Owen Holland put it, the system must be able simultaneously to produce an intentional representation and to perceive within its own organization this intentionally generated form; it must be self-aware [62]. A question then arises: when will a machine become self-aware? Although any answer to this question is hazardous, one can at least identify a plausible necessary precondition without which a machine could not develop self-awareness. This precondition derives from the assertion that, to develop self-awareness, a neural network must be at least as complex as the human brain. Recently, an enormous project (the Human Brain Project) set itself the objective, believed to be achievable in as little as ten years, of simulating the functioning of the mammalian neocortex by means of one of the fastest supercomputer architectures in the world, IBM's Blue Gene platform [63]. For the moment, a single cortical column has been modelled, consisting of approximately 60,000 neurons and 5 km of interconnections, on the basis of about 15,000 experiments carried out in the laboratory. This Blue Brain project will eventually have to reproduce the equivalent of the million functional units that make up the neocortex. It should be noted, however, that this project addresses only a necessary but not a sufficient condition for the development of an artificial consciousness. Even if the machine becomes as skilled as humans in many disciplines, one cannot assume that it has become self-aware. The existence of a powerful computer equipped with millions of gigabytes is not in itself sufficient to guarantee that the machine will be a self-aware intelligence. Moreover, it remains to define the criteria that would make it possible to recognize that an artificial entity has a conscious state. Indeed, the problem is not to know whether a machine suffers, but whether it behaves "as if" it suffered [64,65]. Explanatory Theories of Consciousness Explanatory theories of consciousness should be distinguished from experimental approaches to the phenomena of consciousness. While the identification of correlations between aspects of brain activity and aspects of consciousness may constrain the specification of neurobiologically plausible models, such correlations do not by themselves provide explanatory links between neural activity and consciousness. Models of consciousness are valuable precisely to the extent that they propose such explanatory links. Today, one can roughly classify the various approaches to modeling consciousness into two categories: theories that relate certain functional modes of the brain to conscious activity, and theories that tie the structure of neural networks to conscious activity.
Theories Making Correspond Certain Modes of the Brain to the Conscious Activity In this category, one finds mainly the phenomenological approaches of Francisco Varela and Humberto Maturana, the physics-based hypothesis of Stuart Hameroff and Roger Penrose, and the neurodynamical model of Walter J. Freeman. Model of Varela and Maturana.Varella, and his mentor Humberto Maturana, developed a model based on the notion of autopoiesis [66][67][68][69][70]. Based on cellular life, autopoiesis attempts to define, beyond the diversity of all living organisms, a common denominator that allows for the discrimination of the living from the non-living.Inside the boundary of a cell, many reactions and many chemical transformations occur but, despite all these processes, the cell always remains itself and maintains its own identity.The consequence is that the interaction between a living autopoietic unit and a component of its environment is only dictated by the way in which this component is 'seen' by the living unit.To characterize this very particular nature of interaction, Maturana and Varela used the term of "cognition".In their theory, the environment has its own structural dynamics but it does not determine the changes in organism.Although it can induce a reaction in the organism, the accepted changes are determined by the internal structure of the organism itself.The consequence is that the environment brings to life the organism and the organism creates the environment with its own perceptory sensorium.It should be emphasize that this thinking is close to certain european philosophies, in particular to that of Merleau-Ponty [71].To express this process of mutual calling into existence, this co-emergence, this equivalence between life and cognition, Varela and Maturana used the word of "enaction".For Varela, the mind as a phenomenology in action, viewed from either the first-or the third-person perspective, can only be described as a behavior literally situated in a specific cycle of operation [72].For him, the mind is not in the head, it is in a non-place of the co-determination of inner and outer [73].There is no separation between the cognitive act and the organic structure of life, they are one ; the traditional cartesian division between matter and mind disappears at the level of human cognition at which the notion of consciousness appears.To signify that human consciousness has its counterpart in the organic structure, that there is no consciousness outside the reality of bodily experience, Varela used the term of "embodiment" [74].We are therefore global dynamic processes, in dynamical equilibrium, emerging and acting from interactions of constituents and interactions of interactions.In this thesis, the brain level is only considered to contribute to properties of conscious experiences.At the phenomenal level, the constitution of conscious moments implies a high temporal integration of multiple contents emerging in a transitory way.According to Varela, because of its biophysical organization (its organizational closure), the brain belongs to multistable dynamical systems, in which eigenbehaviors are constrained by a landscape of multiple non-stable attractors.There are however some methodological problems in this theory.The first is the old problem that the mere act of attention to one's experience transforms that experience.This is not too much of a problem at the start of investigation, because one has a long way to go until this degree of subtlety even comes into play, but it may 
eventually lead to deep paradoxes of observership. The second problem is that of developing a language or a formalism in which phenomenological data can be expressed. Indeed, the notorious ineffability of conscious experience plays a role here, because the language one has for describing experiences is largely derivative of the language one has for describing the external world. The third difficulty lies in the failure, or at least the limitations, of incorrigibility: judgments about one's own experience could be wrong. Model of Hameroff and Penrose. For Stuart Hameroff and Roger Penrose, neurons belong to the world of classical physics, are computable, and cannot as such explain consciousness. They therefore proposed a new physics of objective reduction, which appeals to a form of quantum gravity to provide a useful description of fundamental processes at the quantum/classical borderline [75][76][77][78]. Within this scheme, consciousness occurs if an appropriately organized system is able to develop and maintain quantum coherent superposition until a specific "objective" criterion (a threshold related to quantum gravity) is reached; the coherent system then self-reduces (objective reduction). This type of objective self-collapse introduces non-computability, an essential feature of consciousness which, in their view, distinguishes our mind from classical computers. Objective reduction is taken to be an instantaneous event (the climax of a self-organizing process in fundamental space-time) [31]. In this model, quantum-superposed states develop in microtubule subunit proteins (tubulins) within brain neurons. They recruit more superposed tubulins until a mass-time-energy threshold (related to quantum gravity) is reached. At this point, self-collapse, or objective reduction, abruptly occurs. The pre-reduction, coherent superposition phase is equated with pre-conscious processes, and each instantaneous (and non-computable) objective reduction, or self-collapse, with a discrete conscious event. Sequences of objective reductions give rise to a "stream" of consciousness. Microtubule-associated proteins can "tune" the quantum oscillations of the coherent superposed states. The objective reduction is thus self-organized, or "orchestrated". Each orchestrated objective reduction event selects (non-computably) microtubule subunit states which regulate synaptic/neural functions using classical signaling. The quantum gravity threshold for self-collapse is relevant to consciousness because macroscopic superposed quantum states each have their own superposed space-time geometries. When these geometries are sufficiently separated, their superposition becomes significantly unstable and reduces to a single universe state. Quantum gravity thus determines, non-computably, the limits of this instability. In sum, each orchestrated objective reduction event is a self-selection of space-time geometry, coupled to the brain through microtubules and other molecules. This orchestrated objective reduction is claimed to provide a completely new and uniquely promising perspective on the hard problem of consciousness [77,78]. The model of Hameroff and Penrose has received serious criticism, notably from philosophers such as Rick Grush and Patricia Churchland [79]. These authors pointed out that microtubules are found in all plant and animal cells, and not only in brain neurons. They also noted that some chemicals known to destroy microtubules do not seem to have any effect on consciousness and that, conversely, anaesthetics act without affecting the
microtubules.Another objection addresses one of the strengths of Penrose and Hameroff's model, which is, according to its authors, that it can account for the unity of consciousness.Indeed, if this impression of human consciousness unity should prove to be an illusion, then explanations based on non-locality and quantum coherence would become irrelevant. Model of Freeman.The work of Walter J. Freeman was based mainly on electrophysiological recording of the olfactory system of awake and behaving rabbits [80][81][82][83][84]. Freeman found that the central code for olfaction is spatial.Although this had been predicted by others on the basis of studies in the hedgehog, certain aspects of his results were surprising.For example, Freeman showed that the information is uniformly distributed over the entire olfactory bulb for every odorant.By inference every neuron participates in every discrimination, even if and perhaps especially if it does not fire, because a spatial pattern requires both black and white.He discovered that the bulbar information does not relate to the stimulus directly but instead to the meaning of the stimulus.This means that the brain does not process information in the commonly used sense of the word.When one scans a photograph or an abstract, one takes in its import, not its number of pixels or bits.The brain processes meaning as in this example.He also found that the carrier wave is aperiodic: it does not show oscillations at single frequencies, but instead has wave forms that are erratic and unpredictable, irrespective of odorant condition.In the theory of Freeman, the chaotic activity distributed among the neuronal populations is the carrier of a spatial pattern of amplitude modulation that can be described by the local heights of the recorded waveform.When an input reaches the mixed population, an increase in the nonlinear feedback gain will produce a given amplitude-modulation pattern.The emergence of this pattern is considered to be the first step in perception: meaning is embodied in these amplitude-modulation patterns of neural activity, whose structure is dependent on synaptic changes due to previous experience.Through the divergence and convergence of neural activity onto the entorhinal cortex, the pulse patterns coming from the bulb are smoothed, thereby enhancing the macroscopic amplitude-modulation pattern, while attenuating the sensory-driven microscopic activity.Thus, what the cortex "sees" is a construction made by the bulb, not a mapping of the stimulus.Hence, perception is an essential active process, closer to hypothesis testing than to passive recovery of incoming information.The stimulus then confirms or disconfirms the hypothesis, through state transitions that generate the amplitude-modulation patterns.Finally, through multiple feedback loops, global amplitude-modulation patterns of chaotic activity emerge throughout the entire hemisphere directing its subsequent activity.These loops comprise feedforward flow from the sensory systems to the entorhinal cortex and the motor systems, and feedback flow from the motor systems to the entorhinal cortex and from the entorhinal cortex to the sensory systems.Such global brain states emerge, persist for a small fraction of a second, then disappear and are replaced by other states.It is this level of emergent and global cooperative activity that is crucial for consciousness.Freeman tackled also the enigmatic problem of the nature of the free will.He proposed that the man, entirely engaged in a project and in 
permanent relation with other people and the world, makes his decisions in real time with his whole body. The will that is perceived as conscious is informed of these decisions only after a slight delay. It does not decide a behavior already in progress; it only acts to modulate the various aspects of the voluntary decision and to legitimate it with regard to the whole set of meanings that constitute the subject. There are, however, some problems with Freeman's interesting account [85]. The mechanisms of origin and control of gamma oscillations are not yet entirely clear. As predicted by Freeman, during gamma oscillations an average lead/lag relation exists between local excitatory and inhibitory cells. However, recent analyses of the cellular dynamics concluded that recurrent inhibition from fast-spiking inhibitory cells is largely responsible for maintaining the rhythmic drive, while the role played by excitatory processes in modulating or driving the oscillations remains undetermined. Since cortical networks form a dilute yet massively interconnected network, a satisfactory explanation of gamma activity should account for the onset and offset of gamma activity in relation to events at more distant sites and at larger scales in the brain. Without clarification of these mechanisms it remains difficult to define the link between gamma activity and the storage, retrieval and transmission of information, and between thermodynamic and cognitive or informational perspectives. Theories Binding the Structure of Neural Pathways to Conscious Activity In this second category of models, a first strategy consists in focusing on the visual system. This approach has been mainly pursued by Viktor Lamme, Semir Zeki, David Milner and Melvyn Goodale. A second strategy, based on a more global neurophysiological approach, has been mainly developed by Ned Block, by Francis Crick and Christof Koch, and by Rodolfo Llinas, Antonio Damasio, Jean-Pierre Changeux and Gerald Edelman. In spite of some "overwhelming commonalities" among these theories, as Chalmers put it [17], nearly all of them give a major role to interactions between the thalamus and the cortex. First Strategy Focusing on the Visual System Model of Lamme. The Local Recurrence theory of Viktor Lamme distinguishes between three hierarchical types of neural processes related to consciousness [86][87][88]. The first stage involves a "feedforward sweep" during which information is fed forward from striate visual regions (i.e., V1) toward extrastriate areas as well as parietal and temporal cortices, without being accompanied by any conscious experience of the visual input. The second stage involves "localized recurrent processing", during which information is fed back to the early visual cortex. It is this recurrent interaction between early and higher visual areas that leads to visual experience. The third and final stage consists of "widespread recurrent processing", which involves global interactions (as in the global workspace of Changeux) and extends toward the executive (i.e., the frontoparietal network) and language areas. This final step also involves the attentional, executive, and linguistic processes that are necessary for conscious access and reportability of the stimulus. An interesting aspect of this theory is that it offers an explanation for the difference between conscious and unconscious perception in mechanistic rather than architectural terms.
Another interesting aspect of this theory, although provocative, is that consciousness should not be defined by behavioral indexes such as the introspective reports of the subject.Instead, one should rely on neural indexes of consciousness, one of which is neural recurrence.Indeed, Lamme is concerned with the question of defining phenomenological consciousness when a report is impossible.However, one main difficulty with this theory is that it fails to take into account the recurrent connections that exist between regions that are not associated with consciousness (for instance between the area V1 and the thalamus).Although it remains possible that consciousness involves local recurrence between some cerebral components, this processus cannot then be considered as a sufficient condition for consciousness since it requires the involvement of additional mechanisms for explaining why it only applies to a restricted set of brain regions. Model of Zeki. In the microconsciousness theory of Semir Zeki, it is assumed that instead of a single consciousness, there are multiple consciousness that are distributed in time and space [89,90].This theory reflects the large functional specialization of the visual cortex.For example, while the perception of colors is associated with neural activity in area V4, motion perception reflects neural activity in MT/V5.Zeki took these findings as evidence that consciousness is not a unitary and singular phenomenon, but rather that it involves multiple consciousness that are distributed across processing sites, which are independent from each other.Another evidence in favor of this hypothesis is that the conscious perception of different attributes is not synchronous and can respect a temporal hierarchy.For example, psychophysical measures have shown that color is perceived a few tens of milliseconds before motion, reflecting the fact that neural activity during perception reaches V4 before reaching MT/V5.One main difficulty with this theory is that any processing region in the brain should, at first glance, constitute a neural correlates of consciousness in the multiple-consciousness theory.As such, it remains unclear why conscious perception is not associated with activity in most brain regions, including the cerebellum and subcortical regions, especially those con-veying visual signals (e.g., the lateral geniculate nucleus).Another difficulty for this hypothesis is that visual regions can lead to the binding of several attributes in the absence of consciousness.This has been shown, for instance, in a patient with a bilateral parietal damage, suggesting that the binding mechanisms that are supposed to lead to microconsciousness can in fact operate in the absence of consciousness. 
Model of Milner and Goodale.The duplex vision theory proposed by David Milner and Melvyn Goodale postulates that visual perception involves two interconnected, but distinctive pathways in the visual cortex, namely, the dorsal and the ventral stream [91,92].After being conveyed along retinal and subcortical (i.e., geniculate) structures, visual information reaches V1 and then involves two streams.The ventral stream projects toward the inferior part of the temporal cortex and serves to construct a conscious perceptual representation of objects, whereas the dorsal stream projects toward the posterior parietal cortex and mediates the control of actions directed at those objects.The two streams also differ at the computational and functional levels.On the one side, the ventral stream conveys information about the enduring (i.e., long-lasting) characteristics that will be used to identify the objects correctly, and subsequently to link them to a meaning and classify them in relation to other elements of the visual scene.On the other side, the dorsal stream can be regarded as a fast and online visuomotor system dealing with the moment-to-moment information available to the system, which will be used to perform actions in real time.Recent evidence with visual masking has revealed unconscious neural activity in ventral regions, including the fusiform face area.This type of evidence is problematic for the duplex vision hypothesis since it predicts that conscious perceptions should be proportional to neural activity in the ventral system.Such a possibility of unconscious ventral processing can be nevertheless accommodated by assuming a threshold mechanism, as proposed in the model of Zeki [90].However, including this threshold leads the theory to lose it former appeal, since consciousness is "only partially correlated" with activity in the ventral system. 
Second Strategy Based on a Global Neurophysiological Approach Model of Block.One of the most influential issue in recent years has been the potential distinction between phenomenal and access consciousness proposed by Ned Block [93][94][95].According to Block: "phenomenal consciousness is experience; the phenomenally conscious aspect of a state is what it is like to be in that state.The mark of access-consciousness, by contrast, is availability for use in reasoning and rationally guiding speech and action".In short, phenomenal consciousness results from sensory experiences such as hearing, smelling, tasting, and having pains.Block grouped together as phenomenal consciousness the experiences such as sensations, feelings, perceptions, thoughts, wants and emotions, whereas he excluded anything having to do with cognition, intentionality, or with properties definable in a computer program.In contrast, access consciousness is available for use in reasoning and for direct conscious control of action and speech.Access consciousness must be "representational" because only representational content can figure in reasoning.Examples of access consciousness are thoughts, beliefs, and desires.A point of controversy for this attempt to divide consciousness into phenomenal and access consciousness is that some people view the mind as resulting from fundamentally computational processes.This view of mind implies that all of consciousness is definable in a computer program.In fact, Block felt that phenomenal consciousness and access consciousness normally interact, but it is possible to have access consciousness without phenomenal consciousness.In particular, Block believed that zombies are possible and a robot could exist that is "computationally identical to a person" while having no phenomenal consciousness.Similarly, Block felt that one can have an animal with phenomenal consciousness but no access consciousness.If the distinction of Block between phenomenal and access consciousness is correct, then it has important implications for attempts by neuroscientifists to identify the neural correlates of consciousness and for attempts by computer scientists to produce artificial consciousness in man-made devices such as robots.In particular, Block suggested that non-computational mechanisms for producing the subjective experiences of phenomenal consciousness must be found in order to account for the richness of human consciousnesss, or for there to be a way to rationally endow man-made machines with a similarly rich scope of personal experiences of "what it is like to be in conscious states".However, many advocates of the idea that there is a fundamentally computational basis of mind felt that the phenomenal aspects of consciousness do not lie outside of the bounds of what can be accomplished by computation.Indeed, some of the conflict over the importance of the distinction between phenomenal consciousness and access consciousness centers on just what is meant by terms such as "computation", "program" and "algorithm", because one cannot know if it is within the power of "computation", "program" or "algorithm" to produce human-like consciousness. 
Model of Llinas. Rodolfo Llinas proposed a model in which the notion of emergent collective activity plays a central role [96][97][98]. He suggested that the brain is essentially a closed system capable of self-generated activity based on the intrinsic electrical properties of its component neurons and their connectivity. For Llinas, consciousness arises from the ongoing dialogue between the cortex and the thalamus. The hypothesis that the brain is a closed system followed from the observation that the thalamic input from the cortex is larger than that from the peripheral sensory system, suggesting that iterative recurrent thalamocortical activity is the basis of consciousness. A crucial feature of this proposal was the precise temporal relations established by neurons in the cortico-thalamic loop. This temporal mapping was viewed as a functional geometry and involved oscillatory activity at different scales, ranging from individual neurons to the cortical mantle. Oscillations that traverse the cortex in a highly spatially structured manner were therefore considered candidates for producing a temporal conjunction of rhythmic activity over large ensembles of neurons. Such gamma oscillations were believed to be sustained by a thalamo-cortical resonant circuit involving pyramidal neurons in layer IV of the neocortex, thalamic relay neurons, and reticular-nucleus neurons. In this context, functional states such as wakefulness, sleep and the other sleep stages are prominent examples of the breadth of variation that self-generated brain activity can yield. Since no gross morphological changes occur in the brain between wakefulness and dreamless sleep, the difference must be functional. Llinas therefore proposed that the conscious changes observed across the sleep/dream/waking cycle rest on pairs of coupled oscillators, each pair connecting the thalamus and a cortical region. He also suggested that temporal binding is generated by the conjunction of a specific circuit, involving specific sensory and motor nuclei projecting to layer IV and feedback via the reticular nucleus, and a nonspecific circuit, involving non-specific intralaminar nuclei projecting to the most superficial layer of the cortex with collaterals to the reticular and non-specific thalamic nuclei. Thus, the specific system was supposed to supply the content that relates to the external world, and the nonspecific system was supposed to give rise to the temporal conjunction, or context. Together, they were considered to generate a single cognitive experience.
Model of Crick and Koch.In their framework, Francis Crick and Christof Koch tried to find the neural correlates of consciousness and suggested the existence of dynamic coalitions of neurons, in the form of neural assemblies whose sustained activity embodies the contents of consciousness [99][100][101].By cortex they meant the regions closely associated with it, such as the thalamus and the claustrum.Crick and Koch began their theory with the notion of an "unconscious homunculus", which is a system consisting of frontal regions of the brain "looking at the back, mostly sensory region".They proposed that we are not conscious of our thoughts, but only of sensory representations of them in imagination.The brain presents multiple rapid, transient, stereotyped and unconscious processing modules that act as "zombie" modes.This is in contrast to the conscious mode, that deals more slowly with broader, less stereotyped thoughts and responses.In this model, the cortex acts by forming temporary coalitions of neurons which support each other activity.The coalitions compete among each other, and the winning coalition represents what we are conscious of.These coalitions can vary in how widespread they are over the brain, and in how vivid and sustained they are.Moreover, more than one coalition can win at a given time.Especially, there might be different coalitions in the back and in the front of the cortex, where the coalitions in the front represent feelings such as happiness.An important point in this model is that consciousness may arise from certain oscillations in the cerebral cortex, which become synchronized as neurons fire 40 times per second.Crick and Koch believed the phenomenon might explain how different attributes of a single perceived object (its color and shape, for example), which are processed in different parts of the brain, are merged into a coherent whole.In this hypothesis, two pieces of information become bound together precisely when they are represented by synchronized neural firing.However, the extent and importance of this synchronized firing in the cortex is controversial.In particular, it remains a mystery: why should synchronized oscillations give rise to a visual experience, no matter how much integration is taking place?It should be added that, in this model, the neurons that are a part of the neural correlate of consciousness will influence many other neurons outside this correlate.These are called the penumbra, which exists of synaptic changes and firing rates.It also includes past associations, expected consequences of movements, and so on.It is by definition not conscious, but it might become so.Also, the penumbra might be the site of unconscious priming. 
Model of Damasio. Antonio Damasio explored mainly the complexity of the human brain, taking into consideration emotion, feeling, language and memory. According to him, the most important concepts for core consciousness are emotion, feeling, and the feeling of a feeling [102,103]. The substrate for the representation of an emotional state is a collection of neural dispositions in the brain which are activated as a reaction to a certain stimulus. Once this occurs, it entails modification of both the body state and the brain state. This process starts as soon as the organism perceives, in the form of simple or complex sensory messages, proprioceptive or behavioral indicators signaling danger or, on the contrary, well-being. For Damasio, the emergence of feeling is based on the central role played by the proto-self, which provides a map of the state of the body [102]. The neuronal patterns which constitute the substrate of feeling arise from two classes of changes: changes related to body state, and changes related to cognitive state. Thus, a feeling emerges when the collection of neural patterns contributing to the emotion leads to mental images. Feelings correspond to the perception of a certain state of the body, to which the perception of the corresponding state of mind is added, i.e. thoughts that the brain generates taking into account what it perceives of the state of the body. The notion of feeling is based on the organism detecting that the representation of its own body state has been changed by the occurrence of a certain object [102]. Consciousness is thus built on the basis of emotions transformed into feelings and the feeling of feelings; it constantly mobilizes and gathers, in a workspace, a certain amount of information necessary for strategies of survival and decision making. In this theory, consciousness is defined explicitly as a state of mind in which there is knowledge of one's own existence and of the existence of surroundings. According to Damasio, over the course of Darwinian evolution the emotions generate three levels of consciousness: the protoself, the core self, and the autobiographical self [102,104]. The protoself is the most primitive and most widespread form of self among living species. It is constituted by special kinds of mental images of the body produced in body-mapping structures, below the level of the cerebral cortex. It results from the coherent but temporary interconnection of various cerebral maps of reentrant signals that represent the state of the organism at a given time. The protoself is an integrated collection of separate neural patterns that map, moment by moment, the most stable aspects of the organism's physical structure. The first product of the protoself is primordial feelings. Whenever one is awake there has to be some form of feeling. On a higher level, the second form of the self, the core self, is about action. Damasio said that the core self unfolds in a sequence of images that describe an object engaging the protoself and modifying it, including its primordial feelings. These images are now conscious because they have encountered the self. On a still higher level there is the autobiographical self, constituted in large part by memories of facts and events about the self and about its social setting. It develops during social interactions to form what Damasio called "the wide consciousness". That is, the protoself and the core self constitute a "material me", whereas the autobiographical self constitutes a "social me". Our sense of person and identity is
therefore located in the autobiographical self. All emotions engage structures related to the representation and/or regulation of the state of the organism, for example the insular cortex, the secondary somatosensory cortex, the cingulate cortex, and nuclei in the brainstem tegmentum and hypothalamus [105]. These regions share a major feature in that they are all direct and indirect recipients of signals from the internal milieu, viscera and musculoskeletal frame. Damasio considered that no one can make decisions that are independent of the state of one's body and one's emotions. The influences a person undergoes are so numerous that the assumption of a linear determinism governing them is not defensible. Until then, most theories had addressed emotion as a consequence of a decision rather than as a reaction arising directly from the decision itself. On the contrary, Damasio proposed that individuals make judgements not only by assessing the severity of outcomes and their probability of occurrence, but also, and primarily, in terms of their emotional quality [106]. Damasio's key idea was that decision making is a process influenced by signals arising in bioregulatory processes, including those that express themselves in emotions and feelings. Decision making is not mediated by the orbito-frontal cortex alone, but arises from large-scale systems that include cortical and subcortical components. Like Dennett and Freeman, therefore, Damasio asserted that individuals are under the influence of a chaotic causality of processes that are unpredictable, indescribable and non-reproducible, but not random. Two main criticisms have been made of this theory [107]. First, Damasio tried to give an account of the mind as a set of unconscious mapping activities of the brain, and this did not presuppose, or at least did not obviously presuppose, that these activities are conscious. But it is hard to understand any of these divisions of the self (protoself, core self and autobiographical self) without supposing that they are already conscious. Second, Damasio stumbled over dreaming. Although phenomenal consciousness can be very vivid in dreams even when the rational processes of self-consciousness are much diminished, Damasio described dreams as mental processes unassisted by consciousness. He claimed that wakefulness is a necessary condition for consciousness. He described dreaming as paradoxical since, according to him, the mental processes in dreaming are not guided by the regular, properly functioning self of the kind we deploy when we reflect and deliberate. However, contrary to this point of view, it should be noted that dreaming is paradoxical only if one adopts a model of phenomenal consciousness based on self-consciousness (on knowledge, rationality, reflection and wakefulness).
Model of Changeux. The model of Changeux, developed with Stanislas Dehaene and collaborators [108,109], is founded on the idea that cerebral plasticity is mainly ensured by a vast set of interconnected neurons organized along the lines of the "global workspace", historically related to Baars's cognitive model proposed in the 1980s [110]. Recall that the global workspace model of Baars describes a process involving a pooling of information within a complex system of neural circuits, in order to solve problems that none of the circuits could solve alone. This theory makes it possible to explain how consciousness is able to mobilize many unconscious processes, autonomous and localized in various cerebral regions (sensory, perceptual, or involved in attention, evaluation, and so on), in order to treat them in a flexible and adjusted way [111]. Based on this workspace theory, Changeux proposed that neurons with long-distance connections, distributed across the brain, are capable of interconnecting multiple specialized processors. He also proposed that, when presented with a stimulus at threshold, workspace neurons can broadcast signals at the scale of the whole brain in an all-or-none manner, being either highly activated or totally inactivated [108,[112][113][114]. Workspace neurons thus allow many different processors to exchange information in a global and flexible manner. These neurons are assumed to be the targets of two different types of neuromodulatory inputs. First, they display a constantly fluctuating spontaneous activity, whose intensity is modulated by activating systems from cholinergic, noradrenergic and serotoninergic nuclei in the brainstem, basal forebrain and hypothalamus. These systems modify the state of consciousness through different levels of arousal. Second, their activity is modulated by inputs arising from the limbic system, via connections to the anterior cingulate and orbitofrontal cortices, and by the direct influence of dopaminergic inputs [115]. In this global workspace model, highly variable networks of neural cells are selected and their activity is reinforced by environmental stimuli. These reinforced networks can be said to 'represent' the stimuli, though no particular network of neural cells exclusively represents any one set of stimuli. The environment does not imprint precise images in memory. Rather, working through the senses, the environment selects certain networks and reinforces the connections between them. These processes connect memory to the acquisition of knowledge and the testing of its validity. Every evocation of memory is a reconstruction on the basis of relatively stable physical traces stored in the brain in latent form [116].
Changeux and Dehaene distinguished three kinds of neuronal representations: percepts; images, concepts and intentions; and prerepresentations. Percepts consist of correlated neuronal activity that is determined by the outer world and disintegrates as soon as external stimulation ends. Images, concepts and intentions are activated objects of memory, which result from activating a stable memory trace (similar to the "remembered present" of Edelman). Prerepresentations are multiple, spontaneously arising and unstable; they are transient activity patterns that can be selected or eliminated [117]. In this line of thought, thoughts can be defined in terms of "calculations on mental objects" [118]. To be capable of being mobilized in the conscious workspace, a mental object must meet three criteria: 1) the object must be represented as a firing pattern of neurons; 2) the active workspace neurons must possess a sufficient number of reciprocal anatomical connections, particularly in prefrontal, parietal and cingulate cortices; 3) at any moment, workspace neurons can sustain only a single global representation, the rest of the workspace neurons being inhibited. This implies that only one active cortical representation will receive the appropriate top-down amplification and be mobilized into consciousness, the other representations remaining temporarily unconscious [119]. When a new situation arises, selection would occur from an abundance of spontaneously occurring prerepresentations, namely those that are adequate to the new circumstances and fit existing percepts and concepts. Free will is based on this adaptation process, called 'resonance', because planning and decision making result from this selective adaptation [117]. As in evolution, this could be a random recombination of representations. The meaning of the representations involved would be given by their proper functions. Plans generated in this way would be intelligible because they could be appropriate to the situation. Naturally, linguistically coded representations could also be of central importance here. It follows from this global neurophysiological model that, as Changeux put it, the identity between mental states and physiological or physicochemical states of the brain is entirely legitimate, and that it suffices for man to be a neuronal man [118,120]. One important aspect that follows from this theory is that once a set of processors has started to communicate through workspace connections, this multidomain processing stream becomes more and more automatized through practice, resulting in direct interconnections that no longer require workspace neurons or involve consciousness. Another important aspect of this theory is that information computed in neural processors that do not possess direct long-range connections with workspace neurons will always remain unconscious. This global workspace model has, however, been considered by some authors as a restrictive theory of access consciousness that sacrifices some phenomenological aspects of consciousness. In other terms, it has been criticized for confounding the subjective experience of consciousness with the subsequent executive processes that are used to access its content.
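Before turning to Edelman's model, the all-or-none workspace competition described above can be illustrated with a deliberately simple sketch. The processor names, activation values and threshold below are invented for the example and are not part of the Dehaene-Changeux simulations; the point is only the winner-take-all logic by which a single representation is globally broadcast while the others are inhibited.

```python
# Toy winner-take-all "ignition": several specialized processors propose
# representations with some activation strength; at most one representation
# crosses threshold, is amplified and broadcast, and the rest are inhibited.
IGNITION_THRESHOLD = 0.6  # illustrative value, not taken from the cited models

proposals = {
    "visual_face":   0.72,
    "auditory_word": 0.55,
    "motor_plan":    0.31,
}

def workspace_step(proposals, threshold):
    winner = max(proposals, key=proposals.get)
    if proposals[winner] < threshold:
        return None, {name: 0.0 for name in proposals}   # nothing ignites
    broadcast = {name: (1.0 if name == winner else 0.0)  # all-or-none broadcast
                 for name in proposals}
    return winner, broadcast

winner, broadcast = workspace_step(proposals, IGNITION_THRESHOLD)
print("Mobilized into the workspace:", winner)
print("Broadcast state:", broadcast)
```

The sketch captures criterion 3 above (a single global representation at a time, the rest inhibited); a realistic model would of course add recurrent dynamics, top-down amplification and neuromodulatory gating.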
Model of Edelman. To explain primary consciousness, Edelman argued that living organisms must organize the unlabeled signals they receive from the environment into perceptual categories. He therefore started his model by proposing a hypothesis that describes the fundamental process of categorization. This highly articulated hypothesis is mainly based on the theory of neuronal group selection, which involves a basic process termed 'reentry', a massively recursive signaling activity between neuronal groups. It is also based on the fundamental concept of "degeneracy", whereby elements that are structurally different can perform the same function. Edelman's first idea was to use the binding hypothesis imagined by Francis Crick for visual perception [22] to explain how categories pass from the form, color, texture and movement of external stimulations to the constitution of objects. His theory is based on the concept of the neuronal map. A map is a sheet of neurons whose points are systematically connected, on the one hand, to points on sheets of receiving cells (skin, retina of the eye, and so on) carrying incoming signals and, on the other hand, to points located on other maps carrying reentrant signals that function in both directions. His second idea was to accept that the brain, genetically equipped at birth with a superabundance of neurons, develops through the death of a number of these neurons in a process of Darwinian selection. Lastly, his third idea was that parallel reentries of signals occur between the maps to ensure an adaptive response. It is the ongoing recursive dynamic interchange of signals occurring in parallel between maps that continually interrelates these maps to each other in space and time. Categorization, therefore, is ensured by the dynamic network of a considerable number of entries and reentries selected by a multitude of maps [26,121,122]. Although distributed over many surfaces of the brain, the inputs, at once connected among themselves and connected to reentrant maps, can thus act on one another in a mechanism of functional segregation that finally yields a unified representation of the object. In other words, categorization is a global mapping which does not need an a priori program, a "homunculus", to produce it. For Edelman, this mechanism is a plausible working hypothesis because his research group has shown that brain-based devices are capable of perceptual categorization and conditioned behavior [123]. Over the last 14 years, a series of robotic organisms (the Darwin series) have indeed been used to study perception, operant conditioning, episodic and spatial memory, and motor control through the simulation of brain regions such as the visual cortex, the dopaminergic reward system, the hippocampus, and the cerebellum [124]. However, to reach the major stage of conscious perceptual experience, Edelman thought that at least six further conditions should be met [26,122,125,126]. Firstly, the brain must have memory. Memory, according to Edelman, is an active process of recategorization on the basis of a former categorization. A perceptual category, for example, is acquired through experience by means of reentrant maps. But if there is a new perceptual input, by seeing a table again for example, one recategorizes this input by improving the preceding categorization, and so on; one reinvents the category continuously. In other terms, memory generates "information" by construction: it is creative but non-replicative. Secondly, the
brain must have a system of learning. Learning implies changes of behavior that rest on the control of categorization by positive or negative values; an animal can choose, for example, what is light or dark, hot or cold, and so on, to satisfy its needs for survival. Thirdly, although self-awareness appears only with higher-order consciousness, the brain must have a system of diffuse feelings which enables it to discriminate between self and non-self. Fourthly, the brain must be able to categorize the succession of events in time and to form prelinguistic concepts. Edelman thought that the categorization of successive events and the formation of these concepts have a common neurobiological substrate. Fifthly, there must be continuous interactions between conditions 1, 2, 3 and 4 so that a particular memory system for values is formed. Sixthly, the brain also needs reentrant connections between this particular memory system and the anatomical systems. That is, according to Edelman, to formulate a model of primary consciousness it is necessary to have the notions of perceptual categorization, concept formation, and value-category memory in hand. Like Changeux, Edelman considered that it is the thalamo-cortical system, called the "dynamic core", which contributes directly to conscious experience [125,126]. It is a dense fibre network interconnected in both directions, in which connections are formed unceasingly and enter into competition so that the most effective fibres win out (according to a process of Darwinian competition). This reentrant dynamic core is thus a process, not a substance. It is perpetually rebuilt (over periods of a few hundred milliseconds), widely distributed in space yet formed nevertheless of unified and highly integrated elements. As a spatial process, it ensures the coherence of conscious states; it is the principal dynamic driving core of the constitution of the self. The key structures for the emergence of consciousness are the specific thalamic nuclei, modulated by the reticular nucleus in their reentrant connectivity to the cortex, the intralaminar nuclei of the thalamus, and the grand system of corticocortical fibers. What emerges from these interactions is the ability to construct a scene. This activity is influenced by the animal's history of rewards and punishments accumulated during its past. It is essential for constructing, within a time window of fractions of a second up to several seconds, the particular memory of animals, that is the "remembered present" [126]. Higher-order consciousness is consciousness of being conscious, self-awareness. It is accompanied by a sense of self and the ability to construct past and future scenes in the waking state. It is the activity of the dynamic core which converts the signals received from outside and inside the body into a self, what Edelman called "phenomenal transformation" [125]. This higher consciousness requires, at a minimum, a semantic capability and, in its most developed form, a linguistic capability which makes it possible to symbolize the relations of past, present and future and to form representations. On the other hand, Edelman asserted that the linkage proceeds from the underlying neural activity to the conscious process and not the reverse, but that consciousness is not directly caused by the neural processes; it accompanies them. Rather, it is the neural processes and the entire body that are causal. Consciousness is "reincorporated" in the body, insofar as it is one of the modulations by which the body expresses
its decisions; it amplifies the effects of the body to produce free will [125]. Since the body "is decided" by many nonlinear and chaotic determinisms (described by Alain Berthoz [127]), consciousness is thus also "decided" by these complex bodily determinations. However, consciousness adds to the body an additional level of complexity, because it is able to make distinctions within a vast repertoire of information. What makes a state of human consciousness informative is precisely the fact that a person is able to distinguish it from billions of different possible states of affairs. The intentional grasp by which consciousness reduces uncertainty is always singular, and it gives its subjective and private character to the lived mental experience of each individual at every moment. Like Dennett, Freeman and Damasio, Edelman thus accepted that free will is a nonlinear, chaotic, generally circular causality which results from the complex interactions between the body and the brain in local environmental contexts. The model of Edelman offers an interesting attempt at unifying the hard and easy problems. Indeed, since the reentrant dynamic core theory considers consciousness as an emergent property of any system sharing specific core mechanisms of differentiated integration, the most reliable index of consciousness should reflect the quantification of this core mechanism. However, a conceptual difficulty of this hypothesis lies in the paradox of trying to prove that neural indexes are more respectable because they supposedly probe phenomenal and not access consciousness. Indeed, neural indexes of any sort have to be validated by confronting them at some point with some kind of report, hence with access and not phenomenal consciousness. Even if one of these neural indexes turns out to be perfectly correlated with consciousness, and thus becomes a perfectly reliable measure of consciousness, one might still ask whether we have made any progress.
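The intuition invoked above, that a conscious state is informative because it is discriminated from an enormous repertoire of alternatives, can be given a standard information-theoretic gloss. The following is an illustrative note, not a formula taken from Edelman's or Tononi's publications. Selecting one outcome out of N equally likely alternatives generates

\[
I = \log_2 N \ \text{bits},
\]

so distinguishing one state among a billion alternatives (N = 10^9) corresponds to roughly \(\log_2 10^9 \approx 29.9\) bits. Measures of "differentiated integration" of the kind alluded to above, such as Tononi's integrated-information proposals, build on this same logarithmic notion of discrimination within a repertoire of states.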
Conclusions A great variety of specific theories have been proposed, whether cognitive, phenomenal, physics-based, neural or grounded in Artificial Intelligence, to explain consciousness as a natural feature of the physical world. The most prominent examples of philosophical and non-neural approaches include the naturalistic dualism of David Chalmers, the multiple drafts cognitive model of Daniel Dennett, the phenomenological theory of Francisco Varela and Humberto Maturana, and the physics-based approaches of Stuart Hameroff and Roger Penrose. The major neurophysiological theories include the emergent spatiotemporal patterns of Walter Freeman, the visual-system-based approaches of Viktor Lamme, Semir Zeki, David Milner and Melvyn Goodale, the phenomenal/access consciousness model of Ned Block, the synchronous cortico-thalamic rhythms of Rodolfo Llinas, the synchronized cortical firing oscillations of Francis Crick and Christof Koch, the emotive somatosensory processes of Antonio Damasio, the neuronal global workspace of Jean-Pierre Changeux and Stanislas Dehaene, and the reentrant cortical loops of Gerald Edelman and Giulio Tononi. It is foreseeable that the range of these models will broaden in the future. For example, it is now supposed that the consciousness associated with dreaming is a reconstruction at the moment of waking, related to the release of specific neuromodulators over very short intervals of time [128]. Such a hypothesis might generate new models. On the other hand, new lines of Artificial Intelligence research have attempted to test hypotheses about the functioning of brain architecture and to simulate consciousness using both the methods and laws of informational machines and the processing capabilities of biological systems. For example, series of brain-based robots have been tested over the last decade to study perception, operant conditioning, episodic and spatial memory, and motor control. Moreover, a supercomputer program, the Human Brain Project, has been given the objective of attempting to reproduce an artificial consciousness in the future. One can be skeptical, like Paul Ricoeur, about the possibility of obtaining a true theory that achieves a complete synthesis between a "neuro" discourse and a "psychological" discourse. The body is this object which is both me and mine, but there is no passage from one discourse to the other [120]. Any attempt to conflate the figures of the body-as-object and the lived body of consciousness, even if founded on the principles and methods of the most innovative neuroscientific approaches, is obviously impossible for ontological reasons. However, as Edelman says, to deplore that one cannot build a scientific model of consciousness without being able to explain theoretically the quality of a quale (what one feels, for example, on seeing the red of a flower) is a non-problem; it is a problem of the same order as wanting to answer the famous philosophical question: "why is there something rather than nothing?"
[46]. The subjective aspect of consciousness is a pure givenness of the real world. It is a pure sensing, given all at once from within itself [129], which cannot be analyzed without immediately causing it to disappear. It remains nonetheless true that the neurobiological understanding of the relationships between the brain and the objective aspects of consciousness is the great challenge that will animate research in the coming decades. Certainly, the phenomenological model of Varela is compatible with some experimental neurobiological approaches. Likewise, the physics-based model of Hameroff and Penrose could open onto research into mind-brain correlations via the study of the signaling mechanisms of synaptic function. Moreover, Artificial Intelligence has proven to be a fertile enterprise for testing hypotheses about the functioning of brain architecture, although no machine has yet been capable of reproducing an artificial consciousness. However, only advances in neuroscience seem actually capable of taking into account the constraints imposed by a wide range of complex properties and of providing an adequate explanatory basis for consciousness. Of these properties, several stand out as particular challenges to theoretical effort: the property of intentionality, the problem of explaining both the first-person and third-person aspects of consciousness, and the question of the plausible necessary preconditions for developing self-awareness. As the present analysis and the myriad of neurophysiological data already available attest, the states (arousal) of consciousness, the behavior, the brain processes of conscious systems, and a number of subjective contents of consciousness (some qualia) are objectifiable and can be studied experimentally. In spite of the innumerable difficulties that will still be met in studying and understanding a phenomenon that is the originating basis of the categorical representations of the intersubjective and subjective world, it is therefore possible that one day a satisfying naturalistic explanation of consciousness will be built, linking molecular, neuronal, behavioral, psychological and subjective data in a unified, coherent scientific theory.
17,500
2011-11-16T00:00:00.000
[ "Philosophy" ]
Determining the Capacity Limit of Inverter-Based Distributed Generators in High-Generation Areas Considering Transient and Frequency Stability

The responses of an inverter-based distributed generator (IBDG) to abnormal voltage and frequency are different from those of a conventional generator owing to the difference in the operating modes. In particular, the momentary cessation (MC) mode deteriorates the transient stability of normal power systems by ceasing to provide active and reactive power to the grid. However, in a high-generation area, where a significant amount of generation is concentrated and where transient instability exists under a contingency, MC operation is conducive to the transient stability because the electrical output of the critical generators increases to cover the local loads under this condition. On the other hand, frequency instability can occur if a sizeable portion of the IBDG output is lost owing to the operating modes. To ensure transient and frequency stability, this study analyzed the effects of operating modes and generator tripping on the high-generation area. A method for determining the capacity limit of the IBDGs in the high-generation area was then developed to ensure power system stability. The effectiveness and feasibility of the proposed method were verified by conducting a case study on the Korean power system.

I. INTRODUCTION

Renewable energy generators are increasingly preferred over coal-fired power plants owing to environmental concerns such as the rising levels of greenhouse gases and pollution. Unlike large-scale conventional generators, individual photovoltaic (PV) and wind turbine generators are generally integrated into the distribution system because they have a relatively small capacity. The power generation characteristics of inverter-based distributed generators (IBDGs) such as PV and wind turbine systems differ from those of conventional generators owing to uncertain electricity generation and the use of power conversion systems. These characteristics affect the power system stability [1]-[3]. The overall system inertia is reduced as the number of IBDGs increases because an IBDG has very low or no rotational inertia. Various studies have been conducted to investigate the influence of IBDGs on the power system stability. The effects of distributed generators that are not connected through power electronics on the transient and small-signal stability of a power system have been analyzed [4]; that study focused on the effects of power system stabilizers and exciters. The effects of high PV penetration on the steady-state and transient stability have been investigated [5]. The correlation between the bus voltage magnitude and the PV penetration level was studied to investigate the steady-state stability, and the positive and negative influences of PV penetration on the transient stability were examined. In [6], the effects of synchronous, asynchronous, and IBDG units on the voltage and transient stability were analyzed based on a time-domain simulation; the IBDGs were found to negatively affect the rotor angle stability. The rotor speed deviation of a conventional generator was analyzed to assess the transient stability [7]. This work considered various levels of IBDG penetration under varying contingencies and demonstrated the importance of branch representation in the distribution system.
The concept of a virtual synchronous generator has been proposed to improve the frequency stability in a power system with high IBDG penetration [8]. In [9], a key performance indicator was proposed to measure the frequency stability limit in a system with high penetration of power-electronic-interfaced generators; that study improved the frequency nadir (NADIR) using the fast frequency response of wind turbine generators. The effects of the penetration level of micro-grids, including IBDGs, on the frequency stability were investigated to determine the maximum allowable penetration of micro-grids in large-scale power systems [10]. In addition, the influence of the variable generation of a PV system on the frequency stability was analyzed by long-term dynamic simulations [11]. However, none of these studies considered the effect of inverter operating modes on the power system stability. The Canyon 2 Fire, Blue Cut Fire, Angeles Forest disturbance, and Palmdale Roost disturbance are representative cases that show the negative effects of inverter operating modes on the power system stability [12]-[14]. The first two fire events caused transmission system failures in Southern California and resulted in the loss of a significant amount of PV generation because of the inverter operating modes, particularly the momentary cessation (MC) mode. Similarly, the Angeles Forest disturbance and Palmdale Roost disturbance led to the loss of a significant amount of PV output in Southern California in 2018; these effects were also induced by the MC of the PV systems. MC is the temporary cessation of energizing an electric power system (EPS), while remaining connected to the area EPS, in response to a disturbance in the applied voltage or system frequency, with the capability of immediately restoring the output when the applied voltage and system frequency return to the defined ranges [15]. According to [15], an IBDG will enter the MC mode if its terminal voltage drops below the MC voltage, which is 0.5 pu, and IBDGs will trip if they remain in the MC mode for more than 1 s. The actual setting can differ from the value specified in [15], because the value is flexible. Recently, some studies have been performed on the effect of MC. A model of an IBDG with operating modes was introduced for reliability studies [16]-[19]. In [20], the influence of the MC capability on the New England power system was investigated by a time-domain simulation involving composite load models with IBDGs and a system protection model; this work reported the sensitivity to voltage tripping and the system response under different MC voltage levels. The effect of the MC mode of IBDGs on the power system transient stability was analyzed in [21], and an MC influence index based on the critical MC operating point was proposed to evaluate the severity of the transient effects. However, few studies have investigated the effect of MC on power system stability in high-generation areas. A high-generation area represents the sending end, where a significant amount of generation is concentrated, resulting in transient instability under a transmission system contingency. This area includes critical generators in terms of transient stability. Therefore, special protection schemes (SPSs), such as tripping of generators and shedding of loads, are usually implemented in the high-generation area to ensure power system stability [22]-[24].
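To make the MC behaviour quoted above concrete, the sketch below implements the voltage-based mode logic as a small state machine: a unit enters MC when its terminal voltage falls below the MC threshold (0.5 pu by default, per the standard cited in the text), trips if it stays in MC for more than 1 s, and restores output once the voltage recovers. This is an illustrative sketch only; the class and attribute names are ours, and it is not the dynamic model used later in the paper.

```python
from enum import Enum

class Mode(Enum):
    NORMAL = 0                # injecting active and reactive power
    MOMENTARY_CESSATION = 1   # output ceased, ready to restore
    TRIPPED = 2               # disconnected, stays off

class IBDG:
    def __init__(self, v_mc=0.5, trip_delay=1.0):
        self.v_mc = v_mc              # MC voltage threshold [pu]
        self.trip_delay = trip_delay  # maximum time in MC before tripping [s]
        self.mode = Mode.NORMAL
        self.t_in_mc = 0.0

    def update(self, v_terminal, dt):
        """Advance the operating mode by one time step of length dt [s]."""
        if self.mode is Mode.TRIPPED:
            return self.mode
        if v_terminal < self.v_mc:
            self.mode = Mode.MOMENTARY_CESSATION
            self.t_in_mc += dt
            if self.t_in_mc > self.trip_delay:
                self.mode = Mode.TRIPPED
        else:
            # voltage back within range: output is restored immediately
            self.mode = Mode.NORMAL
            self.t_in_mc = 0.0
        return self.mode
```

Raising v_mc toward 0.9 pu makes more units enter MC (or trip) for the same fault, which is exactly the lever the paper later exploits in the high-generation area.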
The remainder of this paper is organized as follows: Section II investigates the effects of generator tripping and MC of IBDGs on the transient stability of the power system in the high-generation area using the single-machine equivalent (SIME) method, as described in [25]- [29]. Section III analyzes the effects of generator tripping and MC on the frequency stability of the power system by comparing the NADIRs. In Section IV, a method to determine the capacity limit of IBDGs considering both transient and frequency stability is introduced to ensure power system stability. Section V presents a case study conducted to verify the feasibility and effectiveness of the proposed method. Section VI concludes the paper. II. EFFECTS OF GENERATOR TRIPPING AND MC OF IBDGS ON TRANSIENT STABILITY IN HIGH-GENERATION AREAS This section analyzes the effects of generator tripping and MC of IBDGs on the transient stability of the power system based on the SIME method, which is a hybrid-direct method for transient stability assessment considering accuracy and flexibility [25]- [29]. To assess the transient stability, the generators in a system are divided into critical and noncritical groups in accordance with the SIME rules [27], [29]. In actual large-scale power systems, the critical generators are fewer than non-critical generators because only the generators located near the fault area are generally considered as critical. Furthermore, the MC capability affects a critical generator more than a non-critical generator because the IBDGs located near the critical group will enter the MC mode. Therefore, the one-machine infinite bus (OMIB) system reflecting the effect of MC is more strongly affected by the critical generators. The elements of the OMIB system in SIME are as follows. Here, δ is the center of angle (COA) of the OMIB system; δ C and δ N are the COAs of the critical and non-critical groups, respectively; P m and P e are the mechanical power and electrical power of the OMIB system, respectively; M is the inertia coefficient of the OMIB system; M C and M N are the inertia coefficients of the critical and non-critical groups, respectively; P mi and P mj are the mechanical power of machines i and j, respectively; P ei and P ej are the electrical powers of machines i and j, respectively. The transient stability can be assessed by comparing the maximum deceleration area A dec and the acceleration area A acc of the OMIB system [25]. The areas represent the release of stored and accumulated kinetic energies, respectively. If A acc is greater than A dec , the system will be unstable; otherwise, the system can return to a stable condition after the contingency. A. EFFECT OF CRITICAL GENERATOR TRIPPING ON SIME The tripping of generators realized by the SPS is usually for restoring the system stability after a contingency. The OMIB system in SIME is affected more strongly by the tripping of the critical generator than by the tripping of the non-critical generator because the number of critical generators is lower than the number of non-critical generators in a large-scale power system. 
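The OMIB elements whose symbols are listed above are not reproduced in this extraction; the sketch below follows the standard SIME aggregation given in the cited references [25]-[29], with our own variable names and consistent per-unit quantities assumed. The second function is only a rough illustration of the equal-area comparison between A_acc and A_dec described in the text.

```python
import numpy as np

def omib_aggregate(delta, M, Pm, Pe, critical):
    """Aggregate per-machine angles, inertias and powers into the one-machine
    infinite bus (OMIB) equivalent of SIME.  `critical` is a boolean mask
    selecting the critical group C; the rest form the non-critical group N."""
    C, N = critical, ~critical
    M_C, M_N = M[C].sum(), M[N].sum()
    delta_C = (M[C] * delta[C]).sum() / M_C      # centre of angle of group C
    delta_N = (M[N] * delta[N]).sum() / M_N      # centre of angle of group N
    delta_omib = delta_C - delta_N
    M_omib = M_C * M_N / (M_C + M_N)             # equivalent inertia coefficient
    Pm_omib = M_omib * (Pm[C].sum() / M_C - Pm[N].sum() / M_N)
    Pe_omib = M_omib * (Pe[C].sum() / M_C - Pe[N].sum() / M_N)
    return delta_omib, M_omib, Pm_omib, Pe_omib

def first_swing_margin(delta, Pm, Pe, i_clear):
    """Rough equal-area check on an OMIB trajectory: accelerating area built
    up to fault clearing (index i_clear) versus decelerating area afterwards.
    A positive margin (A_dec > A_acc) suggests first-swing stability."""
    delta = np.asarray(delta)
    Pa = np.asarray(Pm) - np.asarray(Pe)         # accelerating power
    A_acc = np.trapz(Pa[:i_clear + 1], delta[:i_clear + 1])
    A_dec = np.trapz(-Pa[i_clear:], delta[i_clear:])
    return A_dec - A_acc
```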
When the critical generators are tripped for ensuring transient stability, the elements of the OMIB system are changed as follows: Here, P m and P e are the mechanical and electrical powers of the OMIB system after the tripping of the critical generator, respectively; k is the number of critical machines excepting the tripped generators; M is the inertia coefficient of the OMIB system after the tripping of the critical generator; M T is the inertia coefficient of the tripped generator. The above equations can be used to construct the P-δ curve of the OMIB system considering critical generator tripping, as shown in Fig. 1. The generator tripping realized by the SPS operation is activated to ensure transient stability of the power system after the fault is cleared. Thus, the acceleration area is not affected by the tripped generators. However, the mechanical power of the OMIB system is reduced in the postfault period owing to the tripped critical generators, and the maximum deceleration area increases in the transient state. The direction and length of the red line in Fig. 1 depend on how the tripped critical generators affect the OMIB system. Therefore, the deceleration area can also change because of this effect. B. EFFECT OF MC OF IBDG ON SIME IN HIGH-GENERATION AREA Because the MC is activated when the terminal voltage of the IBDG drops severely owing to a large disturbance, most IBDGs located near the fault area will enter the MC mode [12]- [15]. In the MC mode, the IBDGs temporarily cease to energize the power grid, and they may get tripped if they remain in the MC mode for more than 1 s [15]. The highgeneration area is the generation side with large power generation; this area is susceptible to the transient stability problem under a transmission system fault. An SPS operation, such as generator tripping, is generally applied in such cases to prevent transient instability. Fig. 2 schematically illustrates the condition where the IBDGs in the high-generation area enter the MC mode or are tripped by the transmission system fault. The power transfer capability of transmission systems from a sending-end bus to a receiving-end bus decreases when the transmission line (TL) is tripped by a fault. Thereafter, the power system stability is affected by this decrease as well as by the fault. The electrical output of a critical generator in the highgeneration area is significantly affected by the MC operation and IBDG tripping, as shown in Fig. 3. If the IBDGs enter the MC mode or trip owing to severe voltage drop, the amount of electricity generated by the IBDGs in the area is lost. The critical generators supply more electric power to cover the local loads under this condition. Here, P e is the electrical output of a critical generator when the IBDGs enter the MC mode or trip; P trans is the active power transferred through the transmission systems to the remaining area; P load,IBDGs represents the local loads considering the generation of IBDGs. Note that P load,IBDGs is positive in the direction of delivery from the critical generator to the local loads with IBDGs in the high-generation area. According to (8), the electrical output of a critical generator increases as P load,IBDGs increases when the IBDGs enter the MC mode or trip. This implies that MC operation and IBDG tripping in the high-generation area will increase the electrical output of the critical generators, thereby improving the transient stability, as shown in Fig. 4. 
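The modification of the OMIB elements after critical-generator tripping described in Section II.A can be sketched as follows. This is a simplified reading in which the tripped units simply stop contributing inertia, mechanical power and electrical power to the critical group, so M_T disappears from M_C; the acceleration area built up before clearing is unaffected, while the post-fault P_m of the equivalent drops and the available deceleration area grows, as in Fig. 1. Names are ours, not the authors'.

```python
import numpy as np

def omib_after_trip(delta, M, Pm, Pe, critical, tripped):
    """OMIB equivalent once the SPS has tripped part of the critical group.
    `tripped` is a boolean mask over all machines; only the remaining units
    enter the aggregation."""
    keep = ~tripped
    d, m, pm, pe, c = delta[keep], M[keep], Pm[keep], Pe[keep], critical[keep]
    MC, MN = m[c].sum(), m[~c].sum()
    M_eq = MC * MN / (MC + MN)
    delta_eq = (m[c] * d[c]).sum() / MC - (m[~c] * d[~c]).sum() / MN
    Pm_eq = M_eq * (pm[c].sum() / MC - pm[~c].sum() / MN)
    Pe_eq = M_eq * (pe[c].sum() / MC - pe[~c].sum() / MN)
    return delta_eq, M_eq, Pm_eq, Pe_eq
```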
Unlike the SPS, the effect of MC on the transient stability appears at the same time as the occurrence of a fault because the operating modes of the DG depend on the terminal voltage. More specifically, in the during-fault period, the dominant factor affecting the power system stability is the fault, and the output of the IBDG decreases to almost zero, regardless of the MC function because of the maximum converter current limit. Thus, the MC capability significantly affects the transient stability in the post-fault period. Consequently, if the IBDGs are in the high-generation area, the deceleration area increases because of MC and IBDG tripping, leading to a positive effect of the MC function on the transient stability. The MC voltage is 0.5 pu, which is defined in [15]. In this case, the IBDGs should inject active and reactive current into the power grid unless the voltage falls below 0.5 pu due to the contingency. Because the operating mode is determined by the transient voltage level, the number of IBDGs that enter the MC mode or trip is closely related to the MC voltage. When the MC voltage increases from 0.5 to 0.9 pu, which is the continuous operating point, more IBDGs enter the MC mode even if the same fault occurs. In the high-generation area, the MC capability has a positive effect on the transient stability. Therefore, setting the MC voltage to 0.9 pu rather than 0.5 pu is advantageous for ensuring transient stability. III. EFFECTS OF GENERATOR TRIPPING AND MC OF IBDGS ON FREQUENCY STABILITY This section describes the negative effect of critical generator tripping and MC of IBDGs on the frequency stability of the power system. A. EFFECT OF GENERATOR AND IBDG TRIPPING ON FREQUENCY STABILITY The NADIR and rate of change of frequency (ROCOF) are generally used as metrics for evaluating the frequency stability [9], [30], [31]. The ROCOF of a generator and system inertia can be obtained as follows: Here, H sys and S n,sys are the inertia constant and rated apparent power of the power system, respectively; f i is the frequency of generator i, and f n is the normal frequency; S ni and H i are the rated apparent power and inertia constant of generator i, respectively; P mi and P ei are the mechanical and electrical powers of generator i, respectively. The ROCOF of a system can be calculated according to (9), (10), and (11), as follows [9]: Here, P sys (t) is the change in the system active power; E kin,sys and E kin,lost are the kinetic energy before generator tripping and the kinetic energy lost because of generator tripping of the system, respectively. Under severe contingency, the critical generators will be tripped by the SPS operation; the IBDGs can also trip owing to a large voltage drop. In this case, the system frequency is affected, which changes the NADIR and ROCOF, as shown in Fig. 5. To avoid load shedding, the criterion of the NADIR limit can be determined as expressed in (13). The load shedding frequency is generally defined in the grid codes. Here, f NADIR is the minimum value of the frequency in the time-domain, and f UFLS is the load shedding frequency to balance the power generation and load. Because the NADIR depends on the number of critical generators and IBDGs that are tripped following the contingency, the installed capacity of the IBDGs significantly affects the frequency stability. If several IBDGs are tripped due to a large disturbance, the under-frequency relay (UFR) will be operated because the NADIR decreases significantly. 
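The frequency metrics defined above can be illustrated with the standard swing-equation relations that are consistent with the symbols listed (kinetic energy as the product of inertia constant and rated apparent power, and a ROCOF proportional to the lost power over the remaining stored energy). The function names and the numerical example are ours, not values from the paper.

```python
def system_kinetic_energy(H, S_n):
    """Total stored kinetic energy E_kin = sum_i H_i * S_ni  [MW*s]."""
    return sum(h * s for h, s in zip(H, S_n))

def rocof(delta_P, f_n, E_kin_sys, E_kin_lost=0.0):
    """System ROCOF [Hz/s] immediately after losing delta_P [MW] of generation,
    with E_kin_lost the kinetic energy of the tripped units."""
    return delta_P * f_n / (2.0 * (E_kin_sys - E_kin_lost))

def nadir_ok(f_nadir, f_ufls):
    """Criterion (13): the frequency nadir must stay above the load-shedding
    threshold, otherwise under-frequency load shedding is initiated."""
    return f_nadir >= f_ufls

# Illustrative numbers only: losing 2500 MW on a 60 Hz system storing
# 150000 MW*s of kinetic energy gives a ROCOF of 0.5 Hz/s.
print(rocof(2500.0, 60.0, 150000.0))
```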
In addition, if the MC voltage changes from 0.5 to 0.9 pu, the frequency stability deteriorates because more IBDGs are tripped under the same contingency. Therefore, the capacity limit of the IBDGs should be determined considering the frequency stability, owing to the negative effect of IBDG tripping on the frequency stability.

B. EFFECT OF REPETITIVE MC ON FREQUENCY STABILITY

Because the MC depends on the transient voltage, the IBDG can enter the MC mode repeatedly, so that the active and reactive current outputs can vary continuously between zero and a maximum value. The variation in the IBDG output affects the total electrical power provided by the generators. Thus, the fluctuation in the IBDG output affects the system frequency, as shown in Fig. 6. When the MC voltage is set close to the continuous operating voltage and the IBDGs are not tripped by the MC, this effect is intensified because several IBDGs enter the MC mode repeatedly owing to the voltage disturbance.

IV. DETERMINING THE CAPACITY LIMIT OF IBDGS CONSIDERING BOTH TRANSIENT AND FREQUENCY STABILITY

As discussed in Sections II and III, the IBDGs in the high-generation area have both positive and negative effects on the power system stability. More specifically, they positively affect the transient stability owing to MC and IBDG tripping, and they negatively affect the frequency stability because IBDG tripping decreases the NADIR and the repetitive MC disturbs the system frequency. Therefore, to ensure power system stability, a method to determine the IBDG capacity limit in the high-generation area is introduced in this section. Fig. 7 shows a flowchart for calculating the IBDG capacity limit. Here, S_IBDG is the installed capacity of the IBDGs, and V_MC is the MC voltage. To accommodate as many IBDGs as possible in the high-generation area, it is preferable to set V_MC close to 0.9 pu instead of 0.5 pu: because more IBDGs in the high-generation area enter the MC mode in the transient state, the transient stability is improved under the same contingency. The transient stability can be assessed in a few seconds because it is determined by the first swing, whereas assessing the frequency stability requires more time because of the time taken to identify the NADIR. Even an S_IBDG that ensures transient stability cannot ensure frequency stability if too many IBDGs are tripped. Therefore, a frequency stability analysis must be performed after the transient stability analysis to confirm the NADIR. If f_NADIR is lower than f_UFLS, the power system should not accommodate S_IBDG because load shedding will be initiated to prevent the power system from collapsing. S_IBDG is determined by following the proposed method until f_NADIR is greater than f_UFLS.

V. CASE STUDY OF KOREAN POWER SYSTEM

The analysis of the effect of MC and generator tripping was performed in the Gangwon area (eastern provinces of Korea), which is the high-generation area of the Korean power system. At peak demand, the total load consumption in Gangwon is estimated to be approximately 3.9 GW in 2024. In addition, there are thirty-seven generators, including large-scale thermal and nuclear power plants. In 2024, the total installed capacity of generators in service is expected to be approximately 21.7 GVA, and the amount of generation at peak demand is estimated to be approximately 18.1 GW.
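The procedure of Fig. 7, which is applied to this study area below, amounts to a simple search loop: fix V_MC (preferably near 0.9 pu), increase the installed IBDG capacity, run a transient-stability assessment and then a frequency-stability assessment, and keep the largest capacity for which both pass. A hedged sketch follows, with the two assessments left as stubs for the time-domain simulations actually used in the paper; the function names are ours.

```python
def capacity_limit(run_transient_ok, run_frequency_ok,
                   s_start=0.0, s_step=0.1, s_max=10.0, v_mc=0.9):
    """Return the largest IBDG capacity [GVA] for which both the transient
    check (first-swing stability of the OMIB) and the frequency check
    (f_NADIR >= f_UFLS) pass.  The two callables wrap the simulations."""
    s_ibdg, limit = s_start, s_start
    while s_ibdg <= s_max:
        if not run_transient_ok(s_ibdg, v_mc):
            break
        if not run_frequency_ok(s_ibdg, v_mc):
            break
        limit = s_ibdg
        s_ibdg += s_step
    return limit

# Example with trivial stand-in checks (purely illustrative thresholds):
limit = capacity_limit(lambda s, v: s <= 3.3, lambda s, v: s <= 3.3)
```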
Because the generation is much greater than the load, a significant portion of the generated power is transferred from Gangwon to the Metropolitan area through high-voltage transmission systems, such as the Gangneunganin-Singapyeong TLs and high-voltage DC transmission systems. A large-scale blackout can occur if the Gangneunganin-Singapyeong TLs, which are the tie lines between Gangwon and the Metropolitan area, are tripped following a contingency. Therefore, the critical generators are tripped using the SPS to ensure power system stability under this contingency. The total amount of tripped generation is approximately 2.5 GW. Fig. 8 shows the configuration of the study area. A simulation was performed using PSS/E (Version 34.5.0). Aggregated IBDG models were connected to the distribution systems in Gangwon, which is the generation side. Economic dispatch was utilized to balance the supply and demand considering the increased IBDG capacity. In addition, the IBDGs were set to the voltage control mode and operated at unity power factor. The distributed energy resource generator/converter model provided by PSS/E was used to consider the MC and tripping of the IBDGs in the time-domain simulation; the parameters can be found in [19]. The transient and frequency stabilities were evaluated under the Gangneunganin-Singapyeong TLs contingency, which is a severe event in the study area. Fig. 9 shows the procedure of the time-domain simulation. A three-phase fault occurs at the sending-end bus of the Gangneunganin-Singapyeong TLs; this is normally cleared within five cycles. Thereafter, the Hanul and Sinhanul NPs are tripped through the SPS to ensure transient stability.

A. EFFECT OF IBDG IN HIGH-GENERATION AREA ON TRANSIENT STABILITY

In the time-domain simulation, the generators in Gangwon are classified as the critical group during the Gangneunganin-Singapyeong TLs contingency. Fig. 10 shows the angles of the OMIB system corresponding to each MC voltage under the contingency. In this case, the total installed capacity of the IBDGs in the high-generation area is 2.0 GVA, and the SPS is applied to ensure transient stability. When the MC voltage is set close to 0.5 pu, the transient stability deteriorates because many IBDGs remain in service in the high-generation area even when the critical tie lines are tripped. Therefore, in this case, if the MC voltage is set to less than 0.6 pu, the power system will become unstable even if the critical generators are tripped for transient stability. To connect as many IBDGs as possible in the high-generation area, the MC voltage is set to 0.9 pu, and the capacity limit of the IBDGs in terms of transient stability is determined. Fig. 11 shows the angles of the OMIB system corresponding to each installed capacity of the IBDGs in the high-generation area under the contingency. Because non-critical generators are turned off owing to the increased IBDG capacity, the elements of the OMIB system change to balance the supply and demand; thus, the initial angles of the OMIB system differ from each other. When the installed capacity of the IBDGs increases in the high-generation area, the transient stability deteriorates because the surplus electricity generated by the IBDGs remains on the generation side while the power transfer capability of the transmission system is decreased by the fault. Based on the simulation results, the capacity limit of the IBDGs in the high-generation area was determined to be 3.3 GVA through the transient stability assessment.
B. EFFECT OF GENERATOR AND IBDG TRIPPING ON FREQUENCY STABILITY

Time-domain simulations were conducted for each IBDG capacity in terms of the frequency stability, following the method established in Section IV, to determine the capacity limit of the IBDGs in the high-generation area. Fig. 12 presents the system frequencies corresponding to each installed IBDG capacity. Table 1 lists the simulated NADIRs corresponding to each IBDG capacity. Despite being subjected to the same SPS operation, the NADIR decreases as the IBDG capacity increases. This is because the amount of total generation loss is affected by the tripping of the IBDGs. If the total installed capacity of the IBDGs in the critical area is 3.3 GVA, as determined by the transient stability assessment, the NADIR remains greater than the load shedding frequency. Therefore, the capacity limit of the IBDGs in the high-generation area considering transient and frequency stability is determined to be 3.3 GVA.

FIGURE 13. System frequencies corresponding to each case specified in Table 2.

C. DISCUSSION

Tripping several critical generators to increase the capacity of the IBDGs in the high-generation area may ensure transient stability but not frequency stability, as shown in Fig. 13 and Table 2. In all the cases, transient stability is ensured under the contingency. However, in case 4, the frequency drops below the UFR operating point, and the system therefore cannot accommodate the IBDGs. In this case, the frequency will drop continuously if a stronger SPS is not applied to the system. The ROCOFs in each case are different owing to the shutting down of non-critical generators to accommodate the increased IBDG capacity. Case 4 has the highest installed IBDG capacity and a large amount of tripped generation; therefore, its ROCOF value is the highest.

VI. CONCLUSION

This study analyzed the effects of critical generator tripping and the operating modes of IBDGs on the transient and frequency stability of a power system in a high-generation area. In general, the MC capability degrades the transient stability, whereas the MC of the IBDGs located in the high-generation area was found to have a positive effect, similar to the case of critical generator tripping by the SPS. In contrast to this positive effect on transient stability, the MC had a negative effect on the frequency stability, as indicated by the NADIR and ROCOF, because several IBDGs and critical generators were lost owing to MC operation and the SPS scheme. Therefore, this study proposed a method to determine the capacity limit of the IBDGs in the high-generation area considering both the positive and negative effects. The feasibility and effectiveness of the proposed method were validated through a case study conducted on the Korean power system. The capacity limit of the IBDGs in Gangwon, where a large number of generation sources are concentrated and where transient instability exists under TL contingency, was determined using the proposed method. Finally, we confirmed the effects of MC and generator tripping on the stability of the Korean power system.

BYONGJUN LEE (Senior Member, IEEE) received the B.S. degree in electrical engineering from Korea University, Seoul, South Korea, in 1987, and the M.S. and Ph.D. degrees in electrical engineering from Iowa State University, Ames, in 1991 and 1994, respectively. From 1995 to 1996, he was a Senior Researcher at Mitsubishi Electric Corporation, Kobe, Japan.
Since 1996, he has been a Professor with the School of Electrical Engineering, Korea University.
6,168.4
2020-02-17T00:00:00.000
[ "Engineering" ]
Probabilistic Voxel-FE model for single cell motility in 3D

Background: Cells respond to a variety of external stimuli regulated by environmental conditions. Mechanical, chemical and biological factors are of great interest and have been deeply studied. Furthermore, mathematical and computational models have grown rapidly over the past few years, permitting researchers to run complex scenarios while saving time and resources. Usually these models focus on specific features of cell migration, making them suitable only for studying restricted phenomena. Methods: Here we present a versatile finite element (FE) cell-scale 3D migration model based on probabilities that depend in turn on ECM mechanical properties and on chemical, fluid and boundary conditions. Results: With this approach we are able to capture important outcomes of cell migration such as velocities, trajectories, cell shape and aspect ratio, cell stress and ECM displacements. Conclusions: The modular form of the model will allow us to constantly update and redefine it as advances are made in clarifying how cellular events take place.

Background

Cell motility has gained increasing prominence due to its major role in several physiological and pathological processes, e.g., morphogenesis, the inflammatory response, wound healing and tumor metastasis [1]. The way cells migrate and respond to their 3D micro-environment is a multiscale process that results from the integrated effect of the properties of the tissue extracellular matrix (ECM) and the sub-cellular constituents of the cell, mediated by the cytoskeleton (CSK). This integration process depends on multiple mechanical, chemical and biological factors [2][3][4]. For instance, the influence of ECM stiffness and topography (Durotaxis) has been widely investigated [5][6][7][8], showing that cells prefer to migrate toward stiffer zones of the ECM, where focal adhesions are more stable, allowing them to exert higher forces [9,5,10]. Cells also respond to spatial chemical gradients (Chemotaxis) in the surrounding fluid or tissue [11,12], moving towards or away from the source of chemical variation. Variations in potential gradients (Galvanotaxis), fluid conditions and ligand adhesion gradients (Haptotaxis) are additional cues for cell migration guidance currently under study [13][14][15][16][17]. In fact, over the past few years, immense progress has been made in understanding cell migration, largely thanks to the active interaction between experiments and mathematical and computational modeling [18]. Owing to the complexity of cell motility, models are taking a leading role in future developments, permitting researchers to run complex biophysical and biochemical scenarios without the difficulties, time and resource consumption inherent to in vitro investigations. Many of these studies have usually focused on 2D migration, not only for simplicity but also because of the lack of high-quality data on cell movement in 3D. This deficiency is, however, increasingly being overcome, especially by recent advances in microfluidic technologies, which allow high-resolution imaging and provide enormous flexibility in controlling the critical biochemical and biomechanical factors that influence cell behavior [19,20]. Hence, the number of 3D migration models has been gradually increasing, although they focus on different aspects of cell motility. Some of them predict individual cell migration [21][22][23], while others simulate collective behavior [24,25].
In addition, different levels of detail are described, with time and length-scales varying significantly. Rangarajan and Zaman [18] reviewed some type of models according to their main assumptions and grouped them in: (i) Force based dynamics models, (ii) Stochastics, (iii) Multi-Cell Spheroid migration, (iv) Monte Carlo studies. In the former ones, migration dynamics are accounted for by the traction forces at both the front and rear end of the cell and forces due to viscous drag and cell protrusion into the ECM [21]. Imbalances of these forces produce cell migration. The drawback of these models is that they only predict migration of single cells, not taking into account changes in cell shape or ECM properties due to degradation. On the other hand, stochastic models of persistent random walks are able to predict population behavior [26,22]; however, they don't include dynamic effects such as traction or drag, nor incorporate the ECM properties. Multi-cell spheroid migration models are mainly based on pressure gradients produced by proliferation and death of cells [27]. Combining random walks, pressure and chemotactic activity of cell aggregates make these models suitable to study tumours, but fail to take into account mechanical cues such as ECM density, porosity or stiffness. Finally, Monte Carlo models using square lattices and a set of simple rules allow faster simulations thus providing long-term migration patterns [28,29]. The main handicap is the qualitative nature of the studied parameters such as cell-matrix interface, cell polarization or ECM mechanical effects. In this work we develop a probabilistic FE 3D migration model for individual cells, presenting features from several of the previous mentioned types. With this model we are able to study the influence of multiple external stimuli (namely ECM stiffness, chemistry, flow and boundary conditions) estimating important features of cell migration such as: velocities, trajectories, cell shape and aspect ratio, cell stress, ECM displacements etc. Finally, we qualitatively and quantitatively compare our results with recent experiments, finding a good agreement and showing the consistency and the adaptability of the model to simulate different conditions. Therefore, the final goal of this work is to provide a versatile and modular tool capable of predicting migration phenomena under different environmental stimuli, reducing the number and helping in the design of new experiments. Methods The macroscale conditions evaluated at the cell surface influence its behavior, changing its morphology and thus determining the migration. With this in mind, several approaches could have been valid to model cell motility in 3D or other related phenomena, such as the classical FEM [30] or the more specific surface finite element method (SFEM) [31]. However, for simplicity and due to the advantages of lattice-based models, a FE approximation using voxels was chosen for the simulations as described below. Numerical implementation This work describes a probabilistic voxel-FE model for 3D migration at the cell-scale level, influenced by chemical and flow conditions coming from a microfluidics simulation and the mechanical conditions of the environment. For this purpose, the ECM as well as the embedded cell are discretized with voxels, each of them corresponding with the component of a three-dimensional mathematical matrix of data (M) which contains relevant information for the simulation. 
For instance, M stores the centroid of each voxel and whether a specific component corresponds to ECM or cell, therefore determining its mechanical properties. Also, this matrix M includes the flow and chemical conditions interpolated from the microfluidic simulation, therefore containing all the necessary input factors used in the probability/cell-dynamics functions. At this point it is useful to present the iterative scheme ( Figure 1) which can be described as follows: (i) mechanical, chemical and flow conditions are collected from the corresponding FE analysis. These data serve as input for (ii) the cell-dynamics functions which determine the probability of whether an ECM-type voxel becomes a cell-type voxel or vice versa. (iii) A random-number generator checks the probability corresponding to each voxel so the cell shape is updated. Note that only ECM voxels in contact with the cell may become cell, and that only voxels of the cell surface may become ECM. It is also important to clarify, that the cell-voxel distribution (cell shape) is essential for the mechanical analysis since the cell forces are the only ones taken into account. Hence, the mechanical problem is computed at each step, whereas the fluid chemical analysis is computed only once at the beginning. This choice saves computational time and it is justified by the fact that the cell volume is much smaller than the problem domain (collagen). Therefore, assuming steady state at the microdevice, it is considered that the cell shape does not affect the fluid-chemical analysis carried out in the first step. Nevertheless, to test this simplification, a specific fluid-chemical simulation with a random cell shape embedded in a porous matrix was performed. The results confirmed that its effects on the stationary solution are negligible (Additional file 1). Hence, the fluid-chemical conditions are considered constant through the simulation. Mathematical modeling So far the general iteration scheme has been described, but not how the fluid chemical and mechanical problems are solved. As explained below, these problems are computed separately although interacting via changes in cell shape and position which depend, through the probability functions, on several environmental input factors as described in next section. Modeling chemotaxis and flow through a porous medium-A complete 3D microfluidic device is simulated, the geometry and boundary conditions of which are taken from a recent experiment [13]. This device consists of two channels separated by a region containing single cells suspended in collagen I gel (Figure 2). Applying a hydrostatic pressure gradient across the gel region a consistent flow field is generated. In addition, different chemical concentrations are established up and downstream, generating a linear chemical gradient, which, although difficult to obtain experimentally, is useful in the simulations to test the model. Note that this gradient is different from the gradient in the experiments [13], which is autocrine and arises from cells secreting chemoattractant. Finite element software (COMSOL Multiphysics) is used to compute the flow through collagen and the transport of diluted species: (1) where c is the concentration of the diluted species, D is the diffusion coefficient, R is a production or consumption rate expression (for simplicity, 0 in the simulations) and u is the solvent velocity field. 
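Before turning to the flow equations, the iterative scheme of Figure 1 described above (probability evaluation, random-number check, shape update) can be summarized in a minimal sketch. The array and function names are ours, and this is only an illustration of the bookkeeping, not the ABAQUS/MATLAB implementation used in the paper; the periodic wrap of np.roll at the domain edges is ignored for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Voxel-aligned fields playing the role of the data matrix M
shape = (100, 100, 40)                 # 300 x 300 x 120 um at 3 um voxels
is_cell = np.zeros(shape, dtype=bool)  # True where a voxel belongs to the cell
conc = np.zeros(shape)                 # chemical concentration (from the fluid step)
flow_dir = np.zeros(shape + (3,))      # unit flow direction per voxel

def neighbours(mask):
    """ECM voxels face-adjacent to the cell (candidates for addition)."""
    grown = np.zeros_like(mask)
    for ax in range(3):
        grown |= np.roll(mask, 1, axis=ax) | np.roll(mask, -1, axis=ax)
    return grown & ~mask

def surface(mask):
    """Cell voxels with at least one ECM face-neighbour (candidates for removal)."""
    eroded = mask.copy()
    for ax in range(3):
        eroded &= np.roll(mask, 1, axis=ax) & np.roll(mask, -1, axis=ax)
    return mask & ~eroded

def step(is_cell, p_add, p_remove):
    """One migration step: p_add / p_remove map a voxel index to a probability
    computed from the cell-dynamics functions (mechanics, chemistry, flow)."""
    add_cand = np.argwhere(neighbours(is_cell))
    rem_cand = np.argwhere(surface(is_cell))
    for idx in add_cand:
        if rng.random() < p_add(tuple(idx)):
            is_cell[tuple(idx)] = True
    for idx in rem_cand:
        if rng.random() < p_remove(tuple(idx)):
            is_cell[tuple(idx)] = False
    return is_cell
```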
The flow in porous media is governed by a combination of the continuity equation and the momentum balance equation, which together form the Brinkman equations: (2) (3) In these equations, μ denotes the dynamic viscosity of the fluid, u is the velocity vector, ρ is the density of the fluid, P is the pressure, e_p is the porosity, κ is the permeability of the porous medium, and Q_br is a mass source or mass sink (which has been considered 0 in all the simulations). The influence of gravity and other volume forces can be accounted for via the force term F, although these forces, as well as the inertial term (u · ∇)u/e_p, are neglected in the current simulations. With all this, and assuming incompressible flow (∇ · u = 0), equations 1, 2 and 3 are drastically simplified. The values of the main parameters are listed in Table 1. Since the purpose of this work is to study the migration of a single cell, whose volume is negligible in comparison with the whole microdevice domain, the steady-state simulation is performed only once, without considering the embedded cell body. The results from a central box-like region are then extracted to compute the mechanical analysis and the cell migration (Figure 2, right). Note that no chemical species secreted by the cell are considered here for simplicity. Hence, the chemical concentration and flow direction at each point of the box-like domain remain unaltered regardless of cell position in the subsequent steps of the migration simulation. As pointed out before, the effect of a 3D body embedded in the centre of the gel was analysed to support this assumption, finding that its influence was practically null except at points very close to the body surface (Additional file 1).

Modeling mechanotaxis-The steady-state solution of the fluid simulation in the small box-like domain is extracted, interpolated onto an organized mesh and stored in M. Specifically, the domain is discretized with voxels of 3 μm, some of which are assigned to model cell behavior (from now on called cell-voxels), forming an initially spherical shape embedded in the ECM (Figure 2, right). This size is adequate to roughly mimic cell-like morphologies without increasing the computational cost too much. Smaller sizes, which would improve the accuracy of the cell surface, would produce an excessively refined mesh of the domain, which would in turn lead to heavier and slower simulations. For simplicity, the ECM is considered linear elastic, whereas cell-voxels have their own mechanical properties. In a similar fashion to previous work [22], the mechanosensing behavior of each cell-voxel is simplified to two springs, representing the actin stiffness (K_act) and the passive components (K_pas) of the cytoskeleton, and an active actuator representing the myosin machinery (AM), each of them assumed to act independently in the x, y, z directions (Figure 3). The stress exerted by this actuator depends upon the sliding between actin filaments and myosin arms (ε_c), which is limited by a maximum contraction parameter (ε_min). This sliding depends in turn on the cell strain (ε_cell) and therefore on the ECM stiffness. Hence, the cell stress transmitted to the matrix by each voxel in each direction "i" can be expressed as a function of the cell strain: (4) The main difference with respect to the approach used in [22] is that the polarization term is not explicitly included in the stress tensor (which is now isotropic), since the polarization direction emerges from the cell morphology.
Also note that in the probability functions (explained in next section) only one value of stress is used, in particular the volumetric stress of each voxel . In the present model, three different zones of the cell body are considered: cortex, cytoplasm and nucleus (Figure 3, right). In a first approach, the only difference between the cortex zone and the cytoplasm is the exertion of higher stress, therefore assigning higher σ max to the cortex-voxels (2.5 kPa compared with 1.5 kPa at the cytoplasm). This is a first approximation to reflect the higher forces exerted by the cells at their perimeter, mainly due to the increased presence of focal adhesions [32][33][34][35][36]. On the other hand, the nucleus presents no contractile behavior, so only its passive resistance (K pas ) is considered (acto-myosin actuator and actin branch are therefore disabled in the corresponding voxels). All of these parameters are listed in Table 1. The mechanical problem is computed at each step, taking into account the redistribution of voxels belonging to each zone of the cell or to the ECM. To solve that, a user-subroutine of the software ABAQUS together with a MATLAB script are employed. Once the FE subroutine computes the mechanical equilibrium at each step, the script comes into action to compute the probabilities of voxel addition/removal according with the mechanical, flow and chemical conditions. In this process, the cell shape is updated as well as all the necessary variables of M. These data act as an input for the FE subroutine in the next step, repeating the process until the end of the simulation. Note that the mechanical analysis only corresponds to the cell-matrix interactions, and not to the flow-ECM or flow-cell interactions which are not considered in this first approach. Probability functions: external stimuli and cell dynamics determine cell migration-In this model, four different factors are considered to account for the mechanical, chemical and flow conditions surrounding the cell and driving cell migration. Namely these factors are: cell stress magnitude, maximum stress direction, chemical concentration at the ECM and flow direction. The volumetric cell stress (σ v ) due to cell contraction is computed at each voxel following the previous mechanosensing model [22]. Here, the maximum stress direction (d Δσ ) is defined as the direction in the cell body where the cell is exerting maximum stress. In other words, it is the direction joining the cell centroid (computed geometrically) with the element of maximum stress ( Figure 4). The chemical concentration (C c ) is a scalar field coming from the fluid chemical analysis, having each voxel an associated value. Similarly, d f stores the flow direction corresponding to each voxel of the ECM. To define the addition/removal of voxels depending on the stimuli, these factors are introduced into the cell-dynamics or probability functions following the classical cumulative distribution [37]: (5) where * represents addition (+) or removal (−) of voxels. p 0 and p max are the minimum/ maximum values bounding the probability. k 0 is a temporal rate affecting all the factors and dt is the time step. In addition λ's are sensitivity constants permitting to control the weight of each factor (F). All these parameters are adjusted to obtain cell speeds within a biological range. In addition, the values of these parameters are held constant during the simulation. Their values are listed in Table 2. 
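Equation (5) is not reproduced in this extraction. One plausible realisation consistent with the description, a cumulative-exponential form bounded between p_0 and p_max and driven by the weighted stimuli, is sketched below; the particular functional form, parameter values and names are our assumptions, not the published expression.

```python
import math

def voxel_probability(factors, weights, p0=0.0, p_max=1.0, k0=1.0, dt=5.0):
    """Cell-dynamics probability p* for adding (+) or removing (-) one voxel.
    `factors` holds the stimuli F in [0, 1] (stress magnitude, stress
    direction, chemistry, flow); `weights` holds the sensitivities lambda."""
    drive = sum(lam * f for lam, f in zip(weights, factors))
    # cumulative-exponential saturation between p0 and p_max (assumed form)
    return p0 + (p_max - p0) * (1.0 - math.exp(-k0 * dt * drive))

# e.g. a stiff-ECM voxel well aligned with the maximum-stress direction,
# up a chemical gradient and downstream of the flow (illustrative values):
p_add = voxel_probability(factors=[0.8, 0.9, 0.3, 0.6],
                          weights=[0.5, 0.5, 0.3, 0.3], k0=0.2, dt=5.0)
```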
On the other hand, the F's are variable parameters describing the environmental conditions; they differ for each voxel and depend on the aforementioned stimuli. Each F ranges from 0 to 1, and they are described in the subsequent sections. A sensitivity analysis of the cell-dynamics functions was performed to study the global influence of each separate factor (Additional file 1). The parameter representing the cell stress magnitude (F_σ) measures the stress borne by a specific voxel compared with the maximum possible cell stress (σ_max) (eq. 6), whose value comes intrinsically from the mechanosensing model. The probabilities of adding/removing voxels increase with the stress to reflect the fact that cells embedded in stiffer substrates exert higher forces and move at faster speeds [2,5,8,38]. This parameter also takes into account the voxel orientation. When adding a voxel, θ represents the angle between the direction of the possible new voxel (relative to the current voxel) and the direction of the voxel with maximum cell stress (Figure 4). In contrast, when removing a voxel, θ stands for the angle between the direction of maximum stress and the direction connecting the current voxel centroid with the cell centroid. Using this criterion, the probabilities of adding/removing voxels in the direction where the cell exerts maximum stress are higher/lower, so the cell body tends to polarize, as suggested by experiments [10]. The alignment with stress is included additionally and separately through the parameter F_Δσ in order to independently control the weights of the stress magnitude and stress gradient factors (eq. 6): (6) To further clarify this point, a simple 2D representation of the voxel addition process is shown in Figure 4. When checking a specific voxel of the cell surface (current), the corresponding value of stress and the positions of its neighbours (possible new cell-voxels) are used to compute p+. In the illustration, the top voxel (which is currently part of the ECM) may become cell because θ_1 is lower than 90°, so the stress and alignment factors take positive values depending on the stress and the alignment. On the other hand, the voxel on the right is unlikely to appear, since θ_2 is higher than 90°, so the alignment factors are 0 and hence so is the addition probability. Taking all this into account, the cell tends to migrate to stiffer zones of the ECM (higher cell stress) and in the direction of maximum stress. It is well known that cells sense the ECM interstitial flow and respond to the concentration of a wide variety of chemical species [11][12][13][39]. To reflect this, both factors are included in the probability functions. The necessary inputs come from the fluid-chemical analysis previously described. The parameter representing the chemical concentration (F_C) compares the chemical gradient between adjacent voxels (ΔC), normalized by the maximum concentration of a particular species (C_max). (7) With this definition, voxels tend to be added in the direction of maximum chemical concentration, appearing at a faster rate the more pronounced the gradient is. Similarly, voxels tend to be removed more readily at positions of lower concentration. In sum, the cell body advances in the direction of the chemical gradient. Obviously, in the case of repellent species, F_C could easily be reversed to account for the opposite effect. The dependence of cell migration on flow conditions has recently been investigated [13]. It was found that small populations of cells tend to migrate downstream and parallel to the flow direction.
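Since Eqs. (6) and (7) are not reproduced in this extraction, the sketch below encodes the described behaviour rather than the exact published expressions: F_σ normalises the voxel stress by σ_max, the alignment factor vanishes when the candidate direction makes an angle larger than 90° with the maximum-stress direction (the Figure 4 construction, with a cosine used as a simple monotone choice), and F_C normalises the local concentration difference by C_max. All names are ours.

```python
import numpy as np

def f_sigma(sigma_v, sigma_max):
    """Stress-magnitude factor: voxel volumetric stress normalised by the
    maximum stress the mechanosensing model can produce."""
    return float(np.clip(sigma_v / sigma_max, 0.0, 1.0))

def f_align(candidate_dir, max_stress_dir):
    """Alignment factor: positive when the candidate voxel lies within 90 deg
    of the maximum-stress direction, zero otherwise."""
    c = np.dot(candidate_dir, max_stress_dir) / (
        np.linalg.norm(candidate_dir) * np.linalg.norm(max_stress_dir))
    return max(c, 0.0)

def f_chem(c_here, c_neighbour, c_max):
    """Chemical factor: concentration difference towards the candidate voxel,
    normalised by the maximum concentration of the species."""
    return float(np.clip((c_neighbour - c_here) / c_max, 0.0, 1.0))
```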
Actually, very high flow velocities acting on isolated cells or blocking of some specific receptors may reverse this response, although these effects are not considered here for simplicity. The flow parameter F F is then defined as: (8) where φ establishes the alignment of the voxel with the flow direction array at a specific position. Therefore, φ is also calculated following the procedure shown in Figure 4, but using d F instead of d Δσ . Results and discussion It has been shown that multiple combined factors drive cell migration through 3D ECMs, the properties of which influence the cell-matrix interactions and determine cell movements and orientation. This model focuses on three of these factors: fluid flow, chemistry and mechanical conditions. First, flow and chemical conditions of a real 3D microfluidic device [13] are simulated obtaining pressure distribution, chemical gradients and stream lines through a collagen ECM (porous matrix). Then, since the distance magnitudes that a single cell is able to migrate in a few hours (simulated time) are much shorter than the microdevice size, a central region of the gel is selected to compute the mechanical analysis. Hence, this section is divided in three main parts. The first one summarizes the results from the microfluidic system simulation, showing the flow velocity field, the streamlines and the pressure gradient across the gel. The second part shows the effect of the ECM stiffness on the cell stress distribution and cell morphology. Finally, the results focus on cell migration, describing trajectories, speeds and directionality for different situations. Specifically, input factors (mechanics, flow or chemistry) are activated or deactivated in different combinations, thus altering the probability functions, and boundary conditions such as gradient directions are varied. Microfluidic simulation A full 3D microfluidic device is simulated with the conditions described in the FE analysis Methods section. The fluid passes by two input channels and flows through a porous medium (collagen gel) transporting a certain diluted specie, and achieving its peak speed (2.96 μm/s) at the central zone of the gel, between the micropilars, where the cross section is smaller ( Figure 5A). The velocity field matches quantitatively the results obtained both computational and experimentally by Polacheck et al. [13], which found a maximum speed of about 3 μm/s. The pressure drop presents a linear decrease through the gel and constant values at the inlet (40 Pa) and outlet (0 Pa) ( Figure 5B). Similarly, the chemical concentration at the gel decreases linearly from a normalized value of 1 mol/m3 at the inlet, to 0 mol/m 3 at the outlet (not shown). which allows testing the migration model using this additional factor. Future development of the model could incorporate the transport of different species or autocrine gradients produced by the cell, although they were not considered in the present simulation. Effects of ECM stiffness To test the direct effects of ECM stiffness on cell morphology and stress distribution, a boxlike domain (300 × 300 × 120 μm) with constrained displacements at the boundaries (far enough from the cell to avoid influencing the mechanosensing process described in the methods section) and different ECM stiffness conditions was used. Up to 10 simulations were performed for each set of conditions with mechanical stimulus acting alone (flow and chemical inputs deactivated). 
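For completeness, the flow factor F_F of Eq. (8), whose exact expression is likewise not reproduced here, can be sketched in the same style: the angle φ is evaluated with the same construction as in Figure 4, so voxels pointing downstream (aligned with the local flow direction d_F) are favoured and voxels pointing upstream contribute nothing. This is a hedged stand-in, not the published formula.

```python
import numpy as np

def f_flow(candidate_dir, flow_dir):
    """Flow factor: alignment (angle phi) between the candidate-voxel direction
    and the local flow direction; zero when the voxel points upstream."""
    cos_phi = np.dot(candidate_dir, flow_dir) / (
        np.linalg.norm(candidate_dir) * np.linalg.norm(flow_dir))
    return max(cos_phi, 0.0)
```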
These simulations presented some differences due to the stochastic nature of the model, but overall all the results were consistent. For clarity, only one simulation of each set of conditions is presented. For all the cases shown here, the cell was assumed to have an initially spherical shape of ~30 μm of diameter and started the simulation in the domain centre ( Figure 2). Time simulated was 500 min (100 steps) which is in the usual range of cell migration experiments [8,13]. Model parameters were adjusted to predict speeds similar to migrating fibroblasts observed in experiments [5,8,38,40,41]. First, the cell is embedded in a homogeneous ECM with constant elastic modulus of 50 kPa. This value is larger than the modulus corresponding with the 2 mg/ml collagen gel used in the simulated microdevice [13]. Nevertheless, we used this higher value to show the effects of stress saturation with stiffness, as we explain later. With no stiffness anisotropy, the ECM displacements are homogeneously distributed, pointing radially to the cell centroid. Similarly, the cell stress is mostly homogeneous, with higher values at the cortex zone (~1.2 kPa) and slightly lower ones in the cytoplasm (Figure 6, left). These values are in the order of magnitude of cell stresses found in experiments [32][33][34][35]. In addition, considering the surface of each voxel face (9 μm 2 ), the magnitude of cell forces would be in the correct range (up to few hundreds of nN) of experimental data [42][43][44]. Note how the nucleus (assumed passive), is being stretched by the surrounding contracting elements. With such homogeneity, the chance of adding/removing elements at the cell surface is similar in all directions (see methods) and consequently, the cell migrates in a random fashion (Figure 6, middle). Also note that the migration speed depends on the ECM stiffness through the probability functions since higher stiffness lead to higher cell stress (until saturation) and thus to higher migration speeds. In this case, results show ~0.4 μm/min of mean speed and ~0.024 μm/min of effective speed (Figure 6, right). Mean speed is calculated as the average cell speed at each step, whereas the effective speed takes into account only the initial and final cell location at a certain time. Low effective speed reflects high randomness. Secondly, two cases with different stiffness conditions are simulated. In case 1, the elastic modulus of the ECM increases linearly with x-coordinate, whereas in case 2, the increase is exponential ( Figure 7A). The cell centroid at each step is tracked and the 3D and x-y projected trajectories are shown in Figure 8A. Overall, in both cases, cell migration pathways were random with a higher net advance in the direction of the gradient stiffness (xdirection). However, cell response was different, moving slightly faster but much more directed in case 2, especially during the first steps. In this case, the stiffness variation (and thus, cell stress) between the front and the back part was very pronounced. According with the probability functions, this corresponds with much higher probability of voxel appearance in + x-direction and of voxel removal in −x-direction, resulting in fast forward advance. This was reflected on the mean and effective speeds of cell migration ( Figure 8B). For short times, the mean speeds were similar in both cases (~0.3 μm/min), but the effective speed was much higher in case 2 (0.25 μm/min compared with 0.04 μm/min in case 1), as expected from the trajectory analysis. 
However, in the long term, both cases 1 and 2 presented similar mean (~0.42 μm/min) and effective (~0.06 μm/min) speeds, and the trajectories were mostly random. This is due to the dependence of cell stress on ECM stiffness. According to the mechanosensing model, cell stress increases with ECM stiffness, swiftly for compliant substrates but saturating for higher rigidities (Figure 7B). As stated before, pronounced differences between front and rear stress would cause fast and straight movements, whereas small differences would lead to random-like migration. In case 1, the cell moved through stiffnesses of 45-65 kPa, always close to the saturation zone, which explains its non-directional motion. On the other hand, in case 2 the cell started in a compliant zone (1 kPa) but quickly found much stiffer surroundings (100 kPa), which highly increased cell stress, decreasing front-rear differences and thus producing stochastic migration. Figure 9 shows the stress distribution for both cases at t = 80 min, which is approximately the time at which the cell arrived at a much stiffer zone, reaching force saturation and thus migrating more randomly. In case 1, cell stress is homogeneously distributed, although the voxels with higher stress corresponded to surface (cortex) elements preferentially oriented in the +x-direction. The cell shape is mainly regular but generally polarized with the gradient direction, and the ECM displacements point radially to the cell centroid. In case 2, however, there exists a clear gradient of cell stress following the ECM stiffness. The cell shown in Figure 9 presents a shape which is broader at the front, exerting higher stress, and very thin at the rear. Nevertheless, due to the pronounced stiffness gradient, displacements are much higher at the rear and the ECM is mainly stretched in the x-direction. Overall, the cell aspect ratio or shape factor (major axis divided by minor axis) (Figure 10A) was similar for both cases, as was the spreading area (Figure 10B), with case 2 presenting slightly higher values. This likely happens for the same reasons explained above. The probability functions tend to saturate at high stresses and hence the voxel appearing/disappearing probability is high in all directions. Therefore the aspect ratio is noisy and relatively low, from roundish shapes to somewhat elongated (2:1) cells.

ECM degradation

The matrix metalloproteinases (MMPs) are a family of ECM-degrading enzymes which play a major role in cell behaviors such as migration, differentiation or angiogenesis. In fact, localized matrix degradation is thought to contribute to cellular invasiveness in physiological and pathological situations [45]. This degradation modifies the morphology and mechanical properties of the ECM, therefore affecting cell behavior. Computational modeling of such a complex phenomenon requires specific and focused research [29]. Nevertheless, the possibility of ECM degradation was added into the code for possible future development. As a first approximation, a very simple rule was incorporated: whenever an ECM-voxel (i) is in contact with the cell perimeter, it becomes degraded, losing a certain percentage (d) of its original Young's modulus (a minimal sketch of this rule is given below). To test the effect of such a simplification, case 1 (linear stiffness gradient in the x-direction) was computed again activating ECM degradation (using d = 0.01). Results after 80 minutes of simulated time show that both the effective and mean speeds increase when the ECM is degraded (Figure 11, left).
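The degradation rule just described amounts to a per-step, contact-triggered reduction of the local Young's modulus. The Python sketch below shows one possible reading of that rule; the voxel arrays and the 6-connectivity test are assumptions made for illustration, not the model's actual data structures.

import numpy as np

def degrade_ecm(young_modulus, original_modulus, is_cell, d=0.01):
    """young_modulus: 3D array of current ECM voxel stiffness (modified in place).
    original_modulus: stiffness of each voxel at t = 0.
    is_cell: boolean 3D array marking voxels currently occupied by the cell.
    d: fraction of the ORIGINAL Young's modulus lost per step by ECM voxels touching the cell."""
    touches_cell = np.zeros_like(is_cell)
    for axis in range(3):                     # 6-connectivity: shift the cell mask along each axis
        for shift in (-1, 1):
            touches_cell |= np.roll(is_cell, shift, axis=axis)   # wrap-around at the box edges is ignored here
    contact = touches_cell & ~is_cell         # ECM voxels lying on the cell perimeter
    young_modulus[contact] = np.maximum(young_modulus[contact] - d * original_modulus[contact], 0.0)
    return young_modulus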
The reason is that the degradation of the ECM mechanical properties (lower E) decreases the probability of adding cell elements at the trailing edge. Thus, the cell tends to migrate faster, leaving a degraded path on its way (Figure 11, right). Further development of a degradation model might be interesting in the future, although the degradation option was deactivated in the main simulations for simplicity, to isolate the effects of the rest of the phenomena.

Migration

To study the resulting patterns depending on input environmental factors, by activating/deactivating mechanics, flow or chemistry and using different combinations of gradient directions, 500 min (100 steps) of cell migration were simulated. Five specific cases were distinguished (Figure 12): (A) only mechanical inputs activated, applying a linear stiffness gradient (same as case 1 in the previous section) in the x-direction; (B) migration driven only by fluid flow in the x-direction; (C) flow and a chemical gradient both applied in the x-direction; (D) flow applied in the x-direction with a stiffness gradient in the y-direction; (E) flow and a chemical gradient applied in the x-direction and a stiffness gradient acting in the y-direction. The bottom panel of Figure 12 shows the 3D trajectories and the x-y projection. Mean and effective velocities at the end of the simulation are plotted for each condition. Although the mean or averaged speed (V_m) was similar for all the cases (~0.4 μm/min), the effective speed (V_eff) was strongly influenced by the boundary conditions. For each case, the directionality of migration was determined as the angle of each turn in the track relative to the x-direction. The results reflect the sensitivity of the model when applying single or combined factors. Stiffness or flow gradients acting alone (cases A, B) produced more random migration with ~40% of backward movements, which is reflected in effective speeds under 0.1 μm/min. Introducing a second factor in the x-direction (case C), even when another gradient was acting in the y-direction (case E), substantially decreased the randomness. In these cases, only ~10% of the turns went away from the "correct" path, overall achieving effective speeds of ~0.25 μm/min. Interestingly, in case D, where the gradients are applied in the x- and y-directions, the effective speed (~0.16 μm/min) was greater than in cases A or B, probably because random deviations were combined with either the direction of the stiffness or the flow gradient.

Modeling a porous ECM

So far, all the simulations have considered a continuum matrix through which the cell is able to migrate, completely neglecting morphological or geometrical effects of the ECM. In this section, a porous mesh is simulated to compute cell migration through the matrix pores. The domain size is the same as in the previous simulations (300 × 300 × 120 μm with voxels of 3 μm), but the mesh is generated randomly, obtaining a porosity of ~0.9 and an average pore size of ~20 μm (Figure 13A). This pore size is large, especially for physiological matrices; however, since we are not introducing hindrance or other phenomena related to cell advance through small pores, a larger pore size is more suitable for studying morphological changes of the cell body. The cell is initially placed at the domain centre (note that the cell's volume is taken into account when building the mesh) (Figure 13B).
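One simple way to generate a random voxelized pore structure with roughly the stated porosity and pore size is to carve spherical voids out of a solid voxel grid until the target void fraction is reached. The Python sketch below illustrates that idea under assumed names and parameters; the paper does not specify the actual meshing procedure, so this is not the method used there.

import numpy as np

def random_porous_mesh(shape=(100, 100, 40), voxel=3.0, porosity=0.9, pore_radius_um=10.0, seed=0):
    """Return a boolean voxel grid where True marks solid ECM voxels.
    shape: grid size in voxels (300 x 300 x 120 um domain with 3 um voxels).
    pore_radius_um: roughly half the ~20 um average pore size quoted in the text."""
    rng = np.random.default_rng(seed)
    solid = np.ones(shape, dtype=bool)
    zc, yc, xc = np.indices(shape)
    r_vox = pore_radius_um / voxel
    target_solid = (1.0 - porosity) * solid.size
    while solid.sum() > target_solid:
        centre = rng.uniform(0, np.array(shape))                       # random pore centre inside the domain
        dist2 = (zc - centre[0])**2 + (yc - centre[1])**2 + (xc - centre[2])**2
        solid &= dist2 > r_vox**2                                      # carve out one spherical pore
    return solid

mesh = random_porous_mesh()
print("porosity:", 1.0 - mesh.mean())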
The ECM is still considered linear elastic for simplicity, with a homogeneous Young's modulus of 5 kPa, and the cell behavior follows the mechanosensing model. In addition, the flow field in the x-direction is interpolated from the microfluidic simulation. The observed cell behavior was similar to that found in previous simulations using continuum ECMs, presenting, however, some peculiarities. The developed stress was similar to previous cases (~1-1.3 kPa), although ECM displacements were significantly higher (up to 0.9 μm) due to the pores (Figure 14). Interestingly, the cell tends to adhere to the pore surface, where the stiffness (and therefore the stress) is higher (Figure 14, bottom left). Moreover, the cell contracts its body toward that surface, presenting high displacements at the non-adhered voxels (Figure 14, bottom right). Mean and effective speeds were similar and high (above 0.35 μm/min), indicating a directional migration. In fact, both the trajectory and the angle distribution confirm that the cell moved mainly in the x-direction, adhering to the pore surfaces but following the flow lines (Figure 15, right plots). Cell shape factor and spreading area present noisy behavior due to the irregular ECM geometry, although the values are similar to those obtained in a continuum domain.

Discussion

In this work, a phenomenological probabilistic voxel FE model for single cell migration in 3D has been described. Through a set of probability functions and by combining different software packages, the model is able to compute cell migration taking into account different environmental factors evaluated at the cell surface, such as mechanical properties of the ECM, chemical gradients, flow and boundary conditions, capturing important migration-related features such as cell speed, cell stress, ECM displacements, spread area, cell aspect ratio, etc. To study the fluid-chemical environment, a full 3D microfluidic device whose geometry and conditions were taken from a recent experiment [13] is simulated, in which the fluid passes through the input channels and flows through a porous medium. On the other hand, to analyze the mechanical environment, the mechanical equilibrium is solved by using a specific mechanosensing model. The macroscopic behavior of the cell emerges naturally from the definition of probabilities at each voxel (based on the conditions at the macroscale), allowing study at the micro and cell scales. Overall, the model predicts cell migration toward stiffer zones of the ECM [5][6][7][8], downstream and parallel to the flow [13,39] and oriented with chemical gradients [11,12]. The parameters of the dynamic functions were adjusted to obtain migration speeds in the range 0-1 μm/min [5,8,38,40,41] and cell stresses of the order of a few kPa, as reported experimentally [32][33][34][35]. In addition, the effects of combined factors were investigated, confirming that the model responds accordingly in a random but controlled fashion. This approach brings together features from different kinds of existing migration models. For instance, similarly to force-based dynamic approaches, the mechanical equilibrium is locally established taking into account the cell contraction depending on ECM conditions, following a mechanosensing model [22]. Note that although this approximation is sensitive to external loads (e.g. hydrostatic pressure or ECM pre-strains), only the stress and strain caused by cell contraction are taken into account.
Additionally, a 3D lattice is used, as in Monte Carlo studies, which usually permits faster simulations at the expense of quantitative results. Nevertheless, since the cell body is discretized with voxels, this drawback is avoided and the model is able to study different aspects of cell migration both qualitatively and quantitatively. Obviously, this simplification implies other disadvantages, such as a loss of accuracy at the cell surface. In fact, it is important to note the trade-off between voxel and cell sizes. The number of voxel elements must be large enough to represent the cell perimeter but small enough to maintain a reasonable computational cost. The expected cell speed should also be taken into account. For instance, to simulate the migration of a slow cell, the global size of the ECM could be decreased and smaller elements could be used to increase the accuracy. Hence, in terms of computational cost, the best case would be a large and slow cell, and the worst a fast, small cell (e.g. a bacterium). Unfortunately, a mathematical law to define the optimal voxel size does not exist, although we found that one tenth of the global cell size was overall a good choice. Finally, this approach is based on probabilities. However, unlike purely stochastic models, ECM properties or cell stress can be included to drive migration. In fact, this first approach focuses on fluid direction, chemical gradients and mechanical cues as the main inputs driving cell migration through the probability functions. It is worth mentioning that the initial cell shape (assumed spherical at t = 0) would only affect the first migration steps. For instance, an initially elongated or polarized cell would steadily reorient according to the external inputs due to the probability functions, and therefore the general trend would be maintained. These tunable functions allow controlling the relative weight of each input parameter (by varying the corresponding λ's), as well as including new factors that affect cell migration. For instance, some experiments [13,39] suggest that cells polarize with the interstitial flow direction and migrate downstream due to a flow-induced gradient of an autocrine chemotactic signal that is detected by specific chemokine receptors. When those receptors are blocked or when the cell population grows (thus disrupting the signalling processes), the migration trend is reversed. This effect could be easily introduced in the model by simply switching the values of F_F or including a signalling function regulating that specific parameter. Also, the model predicts increasing migration speed (higher probabilities) with ECM stiffness, not considering hindrance or drag effects that may appear in dense ECMs. To account for the biphasic behavior of cell speed versus ECM stiffness, as found in experiments and used in previous models [21,22,40,46,47], F_σ could be modified so that the probability of adding/removing voxels decreased as a function of drag (σ_v/(σ_max·f(drag))), or a specific F_drag with negative values could be defined (one possible form of such a modification is sketched below). Adding new input factors or enhancing current assumptions is thus possible and easy, although increasing complexity may complicate the interpretation of the results. Nevertheless, with the activation/deactivation of input factors, the model serves as a suitable platform for investigating a wide variety of migration-related phenomena. In fact, in a future development, it will be possible to delve further into some important aspects which are now oversimplified.
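To make the suggested biphasic modification concrete, the short Python sketch below shows one way a stress-based factor could be penalized by a drag term so that the resulting probability first rises and then falls with ECM stiffness. The functional form, parameter names and values are illustrative assumptions, not the functions actually used in the model.

import numpy as np

def f_sigma_with_drag(sigma_v, stiffness_kpa, sigma_max=1.3, drag_scale=50.0):
    """Illustrative biphasic factor: a saturating stress term divided by a drag penalty.
    sigma_v: volumetric stress borne by the voxel (kPa).
    stiffness_kpa: local ECM stiffness, used here as a proxy for matrix density/drag."""
    stress_term = min(sigma_v / sigma_max, 1.0)       # saturating stress contribution
    drag_term = 1.0 + stiffness_kpa / drag_scale      # grows with denser, stiffer matrices
    return stress_term / drag_term                    # rises, then falls, with stiffness

# With stress roughly proportional to stiffness before saturation, the factor peaks
# at intermediate stiffness and decays for very dense matrices:
for E in (1, 5, 20, 50, 100, 200):
    sigma = min(0.02 * E, 1.3)                        # crude stress-vs-stiffness proxy
    print(E, round(f_sigma_with_drag(sigma, E), 3))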
For instance, ECM degradation could be easily included in the model to study differences between proteolytic and non-proteolytic migration. Additionally, the ECM architecture could be further explored, studying the effects of porosity and pore size, including features of contact guidance or even reconstructing the geometry from real images. Furthermore, in these kinds of environments, blebbing migration usually plays an important role as an alternative mode of migration [48]. Although the current model is based on the mechanosensing assumption (which implies cell-matrix adhesions), internal pressure driving independent cell protrusions could also be incorporated. Another simplification is the assumption of a constant difference of maximum stress between the cortex and the cytoplasm. However, the complex reality could be better represented by making the maximum stress magnitude dependent on myosin activation or protein concentration along the different cell parts. Similarly, the stiffness of the active cell components (K_act) could be made dependent on actin polymerization and cytoskeletal reorganization. These and other phenomena could be incorporated to better reflect the dynamics of cell migration. Nevertheless, it is important to bear in mind the main handicap when working at different scales (microdevice vs. gel vs. cell), which is the computational cost. To address this, different FE software (COMSOL Multiphysics), including a specific microfluidics module, is used, and the steady-state solution of the fluid-chemical problem is computed. Then, this solution is interpolated onto a finer mesh of the central part of the porous gel, where the mechanical analysis and cell migration are computed. Since the model simulates single cell motility, the cell volume does not affect the macro-scale results of the fluid-chemical simulation, and thus it can be neglected, permitting the streamlines and chemical gradient to be considered constant during the simulation. In spite of this assumption, the scripts require up to 30 GB of RAM, too much for a common personal computer. Furthermore, in the case of extending the model to compute collective cell migration, the mentioned simplification would not be valid, thus making a new approach necessary and considerably increasing the computational cost. With all this, another limitation of the current model is the extensive use of commercial software (ABAQUS, MATLAB, COMSOL), which restricts the sharing possibilities, although it is intended to remove this dependence in the near future by creating specific hand-coded routines.

Conclusions

In sum, this work establishes a methodology for testing and designing new experiments, being particularly useful for simulating ongoing microfluidic systems and for studying several basic biological functions such as cell migration, angiogenesis or organ formation. With all this, what has been developed is not just a migration model but a workbench for investigating the cell response to a wide variety of external stimuli. Furthermore, with its modular form, the model can be constantly updated and redefined as advancements are made in clarifying how cellular events take place.

Supplementary Material

Refer to the Web version on PubMed Central for supplementary material.

Figure 1. Scheme of the iterative loop. At each temporal step, the fluid, chemical and mechanical conditions determine the probability of adding/deleting voxels to/from the cell. At the end of the step, the cell shape is updated.
Note that, to save computational time, the chemistry and flow conditions are considered constant throughout the simulation, performing the corresponding FE analysis only once at the beginning and not at each time step.

Cell material is modeled using two springs in parallel, representing the actin stiffness (K_act) and the passive components (K_pas) of the cytoskeleton, in series with an active actuator representing the myosin machinery (AM). The left plot shows the stress exerted by the AM as a function of the sliding between actin filaments and myosin arms (ε_c). Cell-voxels (right) are divided into three zones: cortex (light gray), cytoplasm (medium gray) and nucleus (dark gray). The nucleus plays only a passive role and is modeled as an elastic material. The cortex and cytoplasm, however, present a contractile behavior depending on ECM stiffness, following the mechanosensing model.

Voxel addition example taking only the stress direction and magnitude into account. When checking a specific voxel (current element), the volumetric stress that it bears (σ_v) and the angle (θ) that its neighbours form with the direction of maximum stress (d_Δσ, red arrow) determine the probability of appearance (p+). In the illustration, the top voxel (currently part of the ECM) would have a higher probability than the right one of becoming cell, since θ1 is lower than 90 degrees whereas θ2 is higher. Note that this is a simplified 2D scheme. In 3D, 6-connectivity is used to compute the voxel addition.

Left plot shows a cut of the cell body. Cell stress is distributed homogeneously (red cell-voxels) along the cell surface and slightly decreases in the cytoplasm zone. Note that the plot only represents the active stress exerted by the cell elements and not the stress transmitted to the ECM or the nucleus. The nucleus is considered a passive material, thus appearing in blue. ECM displacements are distributed homogeneously, pointing radially to the cell centroid (left legend and white arrows). The middle plot shows the cell migration trajectory. Having no guidance, the cell moves randomly, which is reflected in the low effective speed.
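The voxel-addition rule in the caption above combines a stress magnitude with an alignment angle; the flow factor F_F mentioned earlier reuses the same angular procedure with d_F in place of d_Δσ. The Python sketch below gives one plausible reading of that rule for the six face-neighbours of a voxel; the weighting function and names are assumptions for illustration only, since the paper's own equation is not reproduced here.

import numpy as np

# 6-connectivity: the six face-neighbour directions of a voxel.
NEIGHBOUR_DIRS = np.array([[1, 0, 0], [-1, 0, 0],
                           [0, 1, 0], [0, -1, 0],
                           [0, 0, 1], [0, 0, -1]], dtype=float)

def appearance_probabilities(sigma_v, d_ref, sigma_max=1.3):
    """Illustrative p+ for each neighbour of the current element.
    sigma_v: volumetric stress borne by the current element (kPa).
    d_ref: reference direction (d_dsigma for mechanics, d_F for flow), any nonzero vector."""
    d_ref = np.asarray(d_ref, float) / np.linalg.norm(d_ref)
    stress_term = min(sigma_v / sigma_max, 1.0)        # saturating stress weight
    cos_theta = NEIGHBOUR_DIRS @ d_ref                 # cos(theta) for each neighbour
    alignment = np.clip(cos_theta, 0.0, None)          # theta < 90 deg favoured, >= 90 deg gets zero
    return stress_term * alignment

# Neighbours aligned with the maximum-stress (or flow) direction get the highest p+:
print(appearance_probabilities(1.0, d_ref=[1.0, 0.0, 0.0]).round(3))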
Flame-Retardant Performance of Transparent and Tensile-Strength-Enhanced Epoxy Resins

In this study, a flame-retardant additive with 9,10-dihydro-9-oxa-10-phosphaphenanthrene-10-oxide (DOPO) groups, denoted DSD, was successfully synthesized from DOPO, 4,4′-diaminodiphenyl sulfone (DDS), and salicylaldehyde. The chemical structure of DSD was characterized by FTIR–ATR, NMR, and elemental analysis. DSD was used as an amine curing agent, and transparent, tensile-strength-enhanced epoxy resins named EP–DSD were prepared via thermal curing reactions among the diglycidyl ether of bisphenol A (DGEBA), 4,4′-diaminodiphenylmethane (DDM), and DSD. The flame retardancy of the composites was studied by the limiting oxygen index (LOI) and UL-94 test. The LOI values of EP–DSD composites increased from 30.7% for a content of 3 wt % to 35.4% for a content of 9 wt %. When the content of DSD reached 6 wt %, a V-0 rating under the UL-94 vertical test was achieved. SEM photographs of char residues after the UL-94 test indicate that an intumescent and tight char layer with a porous structure inside was formed. The TGA results revealed that EP–DSD thermosets decomposed ahead of time. The graphitization degree of the residual chars was also investigated by laser Raman spectroscopy. The measurement of tensile strength at the breaking point shows that the loading of DSD increases the tensile strength of epoxy thermosets. Py-GC/MS analysis shows the presence of phosphorus fragments released during EP–DSD thermal decomposition, which could act as free radical inhibitors in the gas phase. Owing to the promotion of the formation of intumescent and compact char residues in the condensed phase and the nonflammable phosphorus fragments formed from the decomposition of DOPO groups, EP–DSD composites displayed obvious flame retardancy.

Introduction

Epoxy resins are used in a broad variety of fields, such as electrical, electronics, and adhesive applications, as well as casting and construction, because of their high adhesion, outstanding electrical and mechanical properties, and good alkali and moisture resistance [1][2][3]. However, the flammability and low transparency of epoxy resin composites are major disadvantages in their application, especially when used in circuit boards and coatings [4,5]. Several approaches have been utilized to enhance the thermal properties of epoxy resins, the most widely used of which is the addition of flame-retardant additives, such as halogenated compounds, nitrogenous compounds, organophosphorus compounds, and inorganic materials [6][7][8][9]. Nanocomposites have taken a particular place in the preparation of transparent and flame-retardant coatings due to their thermal, light-transmittance, and mechanical properties [10,11]. Morteza et al. prepared a new class of transparent epoxy-based nanocomposite coatings containing starch-modified nano-zinc oxide (ZnO-St), which can be applied as top-coats because of their transparency.

The intermediate imine coded as DS was synthesized by a typical condensation reaction; 4,4′-diaminodiphenyl sulfone (0.04 mol, 9.92 g) and THF (100 mL) were fed into a 250 mL three-necked flask equipped with a magnetic stirrer and a reflux condenser, and the mixture was then heated to 40 °C. Subsequently, salicylaldehyde (0.08 mol, 9.77 g) dissolved in THF (60 mL) was added dropwise over 30 min under the protection of nitrogen. After this, the solution was heated to reflux temperature and maintained for 6 h.
After cooling, the orange precipitate was filtered, washed with THF thoroughly, and dried at 60 °C for 18 h in a vacuum oven (17.62 g, yield: 96.5%). The DSD was synthesized by an addition reaction. DS (0.02 mol, 9.13 g) and THF (80 mL) were added to a three-necked flask (250 mL) equipped with a thermometer, a reflux condenser, and a stirrer. Under the protection of nitrogen, DOPO (0.04 mol, 8.64 g) dissolved in THF (80 mL) was added for 1 h. With stirring, the reaction solution was heated and maintained at reflux temperature for 12 h. Then, the reaction solution was cooled, filtered by vacuum filtration, washed thoroughly with anhydrous alcohol and distilled water, and dried at 100 °C for 24 h until reaching a constant weight. A white solid was obtained (16.97 g, yield: 95.3%).

Preparation of Flame-Retardant Epoxy Resin

Flame-retardant epoxy resin was prepared via thermal curing reactions among DGEBA, DDM, and DSD. The formulations of the modified epoxy resin are listed in Table 1, where the sum of the phenolic and amine protons is equal to the number of epoxy groups in DGEBA. DSD and DDM were blended at 90 °C for 30 min under a mechanical stirrer. When the mixture became transparent and uniform, the DGEBA was introduced and mixed for 5 min under a vacuum pump. Then, the mixture was poured into PTFE molds and cured at 120 °C for 2 h, followed by heat maintenance at 160 °C for another 2 h. Finally, the modified epoxy resin samples of a fixed size were cooled to room temperature slowly. The neat epoxy resin samples were also prepared under similar processing conditions, but without DSD.
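As a quick consistency check of the quantities quoted above, the reagent masses and yields can be recomputed from the molar amounts; the molecular weights used below (DDS 248.3, salicylaldehyde 122.1, DOPO 216.2, DS 456.5, DSD 888.9 g/mol) are our own estimates for the stated structures, so treat this as an illustrative calculation rather than data from the paper.

# Molecular weights in g/mol (estimated; DS = DDS + 2 salicylaldehyde - 2 H2O, DSD = DS + 2 DOPO).
MW = {"DDS": 248.3, "salicylaldehyde": 122.1, "DOPO": 216.2, "DS": 456.5, "DSD": 888.9}

print("DDS mass:", 0.04 * MW["DDS"])                          # ~9.93 g (paper: 9.92 g)
print("salicylaldehyde mass:", 0.08 * MW["salicylaldehyde"])  # ~9.77 g
print("DOPO mass:", 0.04 * MW["DOPO"])                        # ~8.65 g (paper: 8.64 g)
print("DS mass:", 0.02 * MW["DS"])                            # ~9.13 g
print("DS yield:", 17.62 / (0.04 * MW["DS"]))                 # ~0.965 (paper: 96.5%)
print("DSD yield:", 16.97 / (0.02 * MW["DSD"]))               # ~0.955 (paper: 95.3%)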
Characterization

The chemical structure of DSD was characterized by FTIR-ATR using a PerkinElmer Spectrum 2 (PerkinElmer, Akron, OH, USA) equipped with an attenuated total reflectance (ATR) accessory. 1H-NMR and 31P-NMR were performed on an Avance spectrometer (Bruker, Rheinstetten, Germany) at room temperature, and DMSO-d6 was used as the solvent. TGA was performed on a TG 209 F1 thermal analyzer (NETZSCH, Selb, Germany), in both nitrogen and air atmosphere, with an operating temperature range of 30 to 800 °C and a heating rate of 10 °C/min. Py-GC/MS was conducted on a Frontier PY-2020iD pyrolyzer connected to a Shimadzu GC-MS QP-2010 Ultra (SHIMADZU, Kyoto, Japan). The chromatographic separation was carried out in a capillary column (HP DB-5MS, 30 m length, 0.25 mm diameter, 0.25 μm film thickness, Agilent, Valtbrone, Germany), and the carrier gas (helium) flow rate was 1 mL/min. The temperature of the chromatographic column was first held for 3 min at 40 °C, then heated to 300 °C at a rate of 15 °C/min, and finally kept at 300 °C for 10 min. The limiting oxygen index (LOI) was measured on an oxygen index flammability gauge (ATS FAAR, Milan, Italy) based on ASTM D2863. The specimens were 130 mm in length, 6.5 mm in width, and 3 mm in thickness. The UL-94 test was carried out according to the UL-94 standard, with specimen dimensions of 130, 12.7, and 3 mm. SEM was used to investigate the internal and external morphology of the residue chars, which showed a raised appearance in the UL-94 test, and was carried out on a TM-1000 scanning electron microscope (Hitachi, Tokyo, Japan) at 300 times magnification. The residue chars were stripped off with a needle and treated with gold sputtering on a copper station. Raman spectra were measured on an SPEX-1430 laser Raman spectrometer (SPEX, Metuchen, NJ, USA), with an argon laser and an excitation wavelength of 633 nm. The tensile strength was tested on an H10K-S material universal testing machine (Tinius Olsen, Philadelphia, PA, USA), according to ASTM D638-08, at a stretching speed of 5 mm/min. The specimens were 75 mm long, 10 mm wide, and 2.1 mm thick, with a gage length of 30 mm. The results were averaged from eight tests.

Characterization of DSD

The synthesis route of DSD is illustrated in Scheme 1. The intermediate DS was synthesized through aldimine condensation between DDS and salicylaldehyde. DSD was prepared from the addition reaction between the C=N groups of DS and the P-H groups of DOPO. The structure of DSD was characterized by FTIR and NMR. The FTIR spectra of DOPO and DSD are shown in Figure 1. In the FTIR spectrum of DOPO, the absorption peak at 2436 cm−1 was assigned to the stretching vibration of P-H. The peaks at 1591 and 1475 cm−1 were attributed to the stretching vibrations of P-CAr, and stretching vibration peaks at 1278 and 1230 cm−1 (P=O), 1045 and 1014 cm−1 (P-O-C), and 1195 and 900 cm−1 (P-O-CAr) could be observed. In the DS FTIR spectrum, the peak at 1628 cm−1 could be attributed to the stretching vibration of C=N, suggesting that DS was synthesized successfully. In the FTIR spectrum of DSD, absorption peaks were detected at 3269 and 1512 cm−1 (N-H), 1332 cm−1 (C-N), 1591 and 1475 cm−1 (P-CAr), 1285 and 1228 cm−1 (P=O), 1042 and 1004 cm−1 (P-O-C), and 1195 and 919 cm−1 (P-O-CAr). In addition, the peak of P-H, which was observed at 2436 cm−1 in DOPO, was not detected in the spectrum of DSD.
This phenomenon means that the P-H bond in the molecule of DOPO completely participated in the addition reaction with the C=N bond in DS. The 31P-NMR spectrum of DSD is presented in Figure 2. Obviously, there was a double peak between 28.51 and 31.17 ppm, corresponding to the two phosphorus atoms in DSD. It was considered that the bulky sulphone and rigid DOPO group existed in the molecular structure of DSD, and their steric hindrance effects caused the unequal phosphorus peaks [2,31]. Elemental analysis was also executed to confirm the chemical structure of DSD. Elemental analysis values were as follows: C, 66.29%; H, 4.60%; N, 2.99%; and P, 7.04%. Calculated values were as follows: C, 67.56%; H, 4.31%; N, 3.15%; and P, 6.97%. It is clear that the elemental analysis values of DSD were in accordance with the calculated values. All of these results indicated that DSD was synthesized successfully.

Flame-Retardant Properties of Epoxy Thermosets

The flame-retardant properties of EP and EP-DSD thermosets were evaluated by the LOI and UL-94 test. The relevant data from these tests are listed in Table 2. As shown in Table 2, the LOI value of neat EP was only 24.8%; it continued burning with fire drippings after the first ignition, and the drippings could ignite the cotton. In contrast, the EP/DSD composites showed improved flame retardancy, and the LOI values of the EP-DSD thermosets increased from 30.7% to 35.4% as the DSD content increased from 3 to 9 wt %. When the loading of DSD was 6 wt % or more, the composites achieved V-0 ratings in the UL-94 tests. These results demonstrate that DSD significantly improved the flame retardancy of epoxy thermosets with high efficiency. The combustion of neat EP and EP-DSD thermosets during the UL-94 test was recorded by a digital camera. The relevant screenshots are shown in Figure 3.
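The calculated composition quoted above can be reproduced from a molecular formula of C50H38N2O8SP2 for DSD (our own assignment, taken as DS plus two DOPO units); the short Python check below is illustrative only.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06, "P": 30.974}
FORMULA = {"C": 50, "H": 38, "N": 2, "O": 8, "S": 1, "P": 2}   # assumed formula of DSD

molar_mass = sum(ATOMIC_MASS[e] * n for e, n in FORMULA.items())
for element in ("C", "H", "N", "P"):
    weight_percent = 100 * ATOMIC_MASS[element] * FORMULA[element] / molar_mass
    print(element, round(weight_percent, 2))   # C 67.56, H 4.31, N 3.15, P 6.97 -- matches the calculated values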
Obviously, the neat EP combusted vigorously with flaming drips after the first ignition, which showed that it was highly flammable. In regard to EP-DSD thermosets, the combustion behaviors differed from those of neat EP, and they all extinguished spontaneously. With the increasing of the DSD content, the burning times after the first and second ignition decreased gradually. Moreover, the pyrolytic gases were found to be continuously ejected from the interiors of samples during combustion, and this ejection of gas could even blow the flame out at times. Some studies have also reported this phenomenon, called "blowing-out", which helps to reduce the combustion time and improve the flame-retardancy of thermosets [32].

Morphology and Structure of Residual Char after the UL-94 Test

To know more about the combustion behavior of neat EP and EP-DSD thermosets in the UL-94 test, the photographs recorded by a digital camera and SEM micrographs of char residues are shown in Figure 4. As presented in Figure 4a, neat EP possessed a loose and cotton-like char layer with distinct cracks after the UL-94 test. Unlike neat EP, the residual char of EP-DSD thermosets maintained a cuboid morphology, which the UL-94 test required for samples, and the char layer of EP-DSD thermosets was relatively tight and smooth. Furthermore, for EP-DSD thermosets, it should be noted that there were several raised stomates marked with the blue box on the surface of residual char, where pyrolytic gases were found to be ejected during the UL-94 test. It is possible that the loading of DSD enhanced the strength of the char layer, and the strength was strong enough to maintain the shape of raised stomates due to the high internal pressure caused by the accumulation of pyrolytic gases during combustion. The SEM micrographs of char residues corresponding to the red box in Figure 4a are shown in Figure 4b,c. It is clear that the char residue of neat EP displayed a discontinuous morphology with some cracks. For EP-DSD thermosets, the external char layer showed a tight and intumescent morphology with small holes, and the internal char layer possessed a smooth morphology with some honeycombed cavities. It is presumable that this tight and intumescent char layer with a honeycombed cavity inside acted as an insulation layer, and could block the fluxion of air and heat transfer during combustion [33]. In addition, this insulation layer could also prevent the pyrolytic gases generated inside the samples from escaping. As the amount of pyrolytic gases increased, the inside pressure increased. Under sufficient pressure, the pyrolytic gases broke through the char layer and blew out the flame due to the quenching effect of phosphorus radicals and the diluting effect of nonflammable gases. All of these effects mean that the EP-DSD thermosets achieved a short after-flame time.

Morphology of Residual Char after Thermal Degradation under Nitrogen Atmosphere

To learn more about the morphology of the char layer, neat EP and EP-DSD thermosets (12.7 mm × 12.7 mm × 3.0 mm) were heated to 600 °C at a heating rate of 10 °C/min under nitrogen atmosphere in a muffle furnace, and naturally cooled to room temperature.
A digital photograph of epoxy thermosets before and after thermal degradation is presented in Figure 5. Figure 5a presents digital photos of neat EP and EP-DSD thermosets, and all EP thermosets exhibited a transparent appearance. As a part of the curing agent, DSD could participate in the curing action with DGEBA and DDM. After curing, EP-DSD thermosets could be considered as a whole in the chemical structure. Unlike some inorganic materials, such as montmorillonite (MMT) and nanofillers, DSD is not dispersed in EP, so the presence of stacks and phase separation can be avoided in EP-DSD thermosets [11,34]. He et al. have reported a flame-retardant EP/MMT composite with an opaque and dark brown appearance, which resulted from the visible MMT particles in the EP composite [33]. In addition, when heated to melting in the preparation of EP-DSD composites, the mixture of DDM, DSD, and DGEBA also presented a transparent liquid. It could be deduced that the absence of stacks derived from completely participating in the curing action and good compatibility with the EP matrix, resulting in the transparent appearance of EP-DSD thermosets [35].
As shown in Figure 5b,c, the neat EP presented a fragile and lamellar char layer, while EP-DSD thermosets showed an intumescent and three-dimensional structural morphology. The surface char layers were carefully cut open by a razor blade, showing that the interior of the residual chars for EP-DSD thermosets was hollow and presented a huge cavity, which could gather a certain amount of pyrolytic gases. This indicates that the loading of DSD could enhance the char layer cohesion, which acted like a more efficient physical barrier to pyrolysis gas mass transfer. When heated, with the increasing amount of pyrolytic gases generated and accumulated under the char layer, an intumescent morphology with a porous structure inside was finally exhibited. For further investigation, the residual chars for neat EP and EP-DSD9 thermosets after thermal degradation under nitrogen atmosphere were examined by laser Raman spectroscopy.
The corresponding Raman spectra of the residue chars are presented in Figure 6. Both spectra displayed two peaks, at 1355 cm−1 (D band) and 1582 cm−1 (G band). Generally, the D band corresponds to carbons in a disordered arrangement, while the G band relates to carbons in graphite layers [9,36,37]. The intensity of the D and G bands was represented by the relevant peak area, and the ratio of the intensities of the D and G bands (ID/IG) of neat EP and EP-DSD9 thermosets was calculated as the ratio of peak areas. As shown in Figure 6, the ID/IG values of neat EP and EP-DSD9 thermosets were 1.68 and 1.33, respectively. This means that the residual char of EP-DSD9 thermosets possessed a higher graphitization degree, and thus had a higher heat stability.

Figure 6. Raman spectra of neat EP and EP-DSD9.
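The ID/IG ratio quoted here is the ratio of the D- and G-band areas; one common way to obtain those areas is to fit the first-order Raman region with two Gaussian (or Lorentzian) bands and integrate them. The sketch below, using scipy's curve_fit with assumed variable names, shows that kind of two-band fit; it is an illustration, not the exact fitting procedure used in the paper.

import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a_d, c_d, w_d, a_g, c_g, w_g):
    """Sum of a D band and a G band, each modeled as a Gaussian."""
    d = a_d * np.exp(-0.5 * ((x - c_d) / w_d) ** 2)
    g = a_g * np.exp(-0.5 * ((x - c_g) / w_g) ** 2)
    return d + g

def id_over_ig(shift_cm1, intensity):
    """Fit D (~1355 cm-1) and G (~1582 cm-1) bands and return the ratio of their areas."""
    p0 = [intensity.max(), 1355, 60, intensity.max(), 1582, 50]   # initial guesses
    popt, _ = curve_fit(two_gaussians, shift_cm1, intensity, p0=p0)
    a_d, _, w_d, a_g, _, w_g = popt
    area_d = a_d * abs(w_d) * np.sqrt(2 * np.pi)                  # Gaussian area = amplitude * width * sqrt(2*pi)
    area_g = a_g * abs(w_g) * np.sqrt(2 * np.pi)
    return area_d / area_g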
Thermal Stability of EP and EP/DSD Thermosets

The thermal stability of neat EP and EP/DSD thermosets was investigated by TGA under air and nitrogen atmosphere, and the TG and DTG curves of the epoxy thermosets are shown in Figures 7 and 8, respectively. The values of the initial degradation temperature for 5% weight loss (T5%), the maximum weight-loss rate (Vmax), the temperature at Vmax (Tmax), and the char yield at Tmax and 700 °C (CYTmax and CY700) are listed in Tables 3 and 4. As Figure 7 shows, there were two rapid descents in the TG curve, which corresponded to the two peaks in the DTG curve. These indicated that there were two weight-loss processes in the thermal degradation of all EP thermosets under air atmosphere. However, it was clear that the TG and DTG curves of EP-DSD thermosets were different from those of neat EP. Although the loading of DSD did not change the two-stage thermal degradation of EP thermosets under air atmosphere, it advanced the decomposition time. As shown in Table 3, compared with neat EP, the loading of DSD resulted in a lower T5% for EP/DSD thermosets, which decreased from 369.7 °C for neat EP to 346.4 °C for EP-DSD9. Besides, with the increasing of DSD content, higher char yields at Tmax and 700 °C could be observed when compared with neat EP. This suggests that the introduction of DSD could reduce the weight loss and promote the formation of char residues, and hence improve the thermal stability.

Table 3. TGA data of epoxy thermosets under air atmosphere.

Under nitrogen atmosphere, the TG curves of all EP thermosets displayed a single decomposition process, as shown in Figure 8, which corresponded to the only peak in the DTG curve. With the increase of DSD content from 0 wt % to 9 wt %, the char residue at 700 °C gradually increased from 16.58% to 23.45%. Similar to the results in air atmosphere, with the dosage of DSD, the char residual weights were much larger than those of neat EP, proving that an enhanced char-forming performance was exhibited under nitrogen atmosphere. In addition, the T5%, Tmax, and Vmax of EP-DSD thermosets were lower than those of neat EP with the increasing content of DSD. In summary, EP-DSD thermosets possessed a higher char yield than neat EP in both air and nitrogen atmosphere. The loading of DSD lowered the T5%, Tmax, and Vmax of the thermosets. These changes indicated that the loading of DSD contributed to char formation in combustion, which means a better enhancement of the flame retardancy of epoxy thermosets.
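All of the quantities reported in Tables 3 and 4 (T5%, Tmax, Vmax and the char yields) can be read off a TG curve; the following Python sketch shows one straightforward way to extract them from temperature/mass arrays. The array names and the interpolation details are assumptions made for illustration.

import numpy as np

def tga_metrics(temperature_c, mass_percent):
    """temperature_c: increasing temperatures (deg C); mass_percent: residual mass in % (starting at 100)."""
    t = np.asarray(temperature_c, float)
    m = np.asarray(mass_percent, float)
    t5 = float(np.interp(95.0, m[::-1], t[::-1]))    # temperature at 5% weight loss (mass crosses 95%)
    rate = -np.gradient(m, t)                        # DTG: weight-loss rate in %/deg C
    i_max = int(np.argmax(rate))
    t_max, v_max = t[i_max], rate[i_max]             # temperature and value of the maximum weight-loss rate
    cy_tmax = m[i_max]                               # char yield at Tmax
    cy_700 = float(np.interp(700.0, t, m))           # char yield at 700 deg C
    return {"T5%": t5, "Tmax": t_max, "Vmax": v_max, "CY_Tmax": cy_tmax, "CY_700": cy_700}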
Py-GC/MS Analysis of DSD, EP, and EP-DSD Thermosets

As shown in Figure 9, there were 14 kinds of pyrolysis products for DSD. However, the main pyrolytic products of DSD were benzenamine (c, relative area: 21.98%), diphenyl (e, relative area: 8.29%), dibenzofuran (f, relative area: 11.18%), o-phenylphenol (g, relative area: 36.07%), and fluorene (h, relative area: 1.17%). In consideration of the molecular structure and their relative areas, it is reasonable that the breaking of the C-N and C-P bonds in DSD resulted in the main pyrolytic products above, and the diphenyl (e), dibenzofuran (f), and o-phenylphenol (g) were decomposition products of the DOPO groups. This suggests that phosphorus fragments existed in the gas phase [38]. Figure 10 shows the pyrograms of neat EP and EP-DSD thermosets; they were similar, but the relative areas for the EP-DSD thermosets were larger. It should be noted that diphenyl (e), dibenzofuran (f), and o-phenylphenol (g) were also observed in the EP-DSD pyrograms, which related to decomposition of the DOPO groups, indicating that the phosphorus fragments released in the decomposition of EP-DSD thermosets could inhibit the free radical reaction in the flame in the gas phase.

Mechanical Properties of Neat EP and EP-DSD Thermosets

When used as a curing agent, the influence of DSD on tensile properties was investigated by measurement of the tensile strength and elongation. The results are exhibited in Figure 10. Neat EP displayed outstanding tensile properties, with a tensile strength of 77.29 MPa and an elongation of 8.39% at the breaking point. The addition of DSD improved the tensile strength of EP-DSD composites from 84.37 MPa for EP-DSD3 to 88.83 MPa for EP-DSD9. Cross-linking degree values were calculated and are listed in Table 5. They increased from 1958 mol/m3 for neat EP to 2105 mol/m3 for EP-DSD9, and thus raised the Tg from 171.2 to 176.5 °C, consistent with the increase in tensile strength.
This could be attributed to the larger cross-linked network structure, which was formed by the cross-linking of the OH and -NH- groups in DSD with DGEBA and DDM. However, the cross-linked network structure and the rigid DOPO group constrained the molecular chain motion [39], and the elongation of EP-DSD thermosets decreased with the increasing dosage of DSD.

Conclusions

In this study, a novel DOPO-based flame retardant (DSD) was successfully synthesized, and its chemical structure was confirmed by FTIR, 1H-NMR, and 31P-NMR. TGA analysis showed that the loading of DSD improved the thermal stability of EP composites, lowered the T5% and Tmax values of EP thermosets, and increased the char residue to 3.98% and 23.45% under air and nitrogen atmosphere, respectively. The results from the LOI and UL-94 tests revealed that the addition of DSD improved the flame retardancy of EP composites. Digital photos and SEM micrographs illustrated that a compact and stable intumescent char layer with a honeycombed cavity inside was formed after combustion, which was responsible for the enhanced flame-retardant properties.
Furthermore, the I_D/I_G values of the neat EP and EP-DSD9 thermosets, calculated from the Raman spectra, indicated that the residual char of the EP-DSD9 thermoset possessed a higher degree of graphitization and therefore higher thermal stability. Py-GC/MS analysis showed that phosphorus fragments were released during the decomposition of the EP-DSD thermosets, which could inhibit the free-radical reactions in the gas phase. The formation of a denser cross-linked network, arising from the reaction of the -OH and -NH- groups in DSD with DGEBA and DDM, resulted in the enhanced tensile strength of the EP-DSD composites.
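The cross-linking degree values quoted above are reported in mol/m³, the units obtained when cross-link density is estimated from the rubbery-plateau storage modulus. The excerpt does not state which relation was actually used, so the expression below is only the standard rubber-elasticity form, given as an assumption:

```latex
% Standard rubber-elasticity estimate of cross-link density from DMA data
% (assumed form; the excerpt does not specify the calculation method used).
\nu_e = \frac{E'_r}{3 R T_r}
% \nu_e : cross-link density (mol/m^3)
% E'_r  : storage modulus in the rubbery plateau (Pa), read well above T_g
% R     : gas constant, 8.314 J/(mol K)
% T_r   : absolute temperature at which E'_r is read (K)
```

For orientation, ν_e ≈ 2000 mol/m³ at a read-off temperature of roughly 480 K corresponds to a rubbery modulus on the order of 24 MPa, a plausible magnitude for a densely cross-linked epoxy, so the reported trend of higher ν_e accompanying higher Tg and tensile strength is internally consistent.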
8,943.2
2020-02-01T00:00:00.000
[ "Materials Science" ]
METROLOGICAL ANALYSIS OF DIFFERENT TECHNIQUES FOR MEASURING INTERFACE TENSION BETWEEN TWO FLUIDS BASED ON SPINNING DROP METHOD . The spinning drop method foundations of measuring interface tension between two immiscible liquids are considered. Different techniques of the spinning drop method and their metrology evaluation are compared. The dimensionless parameters of spinning drop are calculated using the fourth-order Runge–Kutta procedure and they are approximated by the seventh-order polynomial dependence. The relative errors of the different techniques and the approximate dependence are obtained. modeling processes; processing interpretation measured data. Introduction Interface tension (IT) at the interface of two insoluble liquids is a significant parameter of the technological processes where surface characteristics at the interface are essential. This is especially important in the oil production methods with the help of reservoir pressure maintenance using surfactants (SAA) [3]. It should also be noted that IT can vary in the range of 0.01÷20 mN/m. Measurement of such IT values is usually carried out with the help of the devices that implement the spinning drop method (SD) [5]. The essence of the SD method consists in the following: a horizontally placed glass tube is filled with such a heavier fluid under study as aqueous surfactant solution; after that a drop of such a lighter fluid under investigation as oil is injected into this fluid; then the tube is revolved around its horizontal axis with a certain angular velocity  . Both the appropriate SD dimensions (for example, its largest diameter, length, and volume) and the density difference of the interfacial fluids are measured depending on the selected techniques for determining IT; the IT values  [4,[6][7][8] are calculated with the help of the corresponding dependencies [4,[6][7][8]. Among such dependencies, regardless of the date when their authors published them, the following are wide spread now B. Vonnegut's dependence [1]: where r dimensionless parameter which is determined on the basis of the appropriate J. Slattery's Theoretical Part Let us conduct theoretical calculation of the SD geometrical dimensions in order to evaluate method errors of the abovementioned techniques. Let us consider the horizontal rotating tube, inside of which there is fluid 2 with higher density same time, we neglect the gravitational force, which allows us to suggest that the rotation axes of the tube 1 and drop 3 coincide. Fig. 1. Rotating tube with investigated heavier and lighter fluids Then the pressure 1 A P inside the drop in pt. А is as the following: ydistance from pt. А to the х axis. Correspondingly, the pressure outside the drop in pt. А is as follows: Hence, the pressure difference along the interface of two fluids in pt. А is as the following: In case there is gravitational force, the drop rotation axis shifts in relation to the tube rotation axis by the value which is equal to , where ggravitational acceleration, dynamic viscosity of the heavier fluid. However, the SD form doesn't change therewith. On the other hand the pressure difference along the interface in pt. A will be as follows: Rcurvature radii of the drop surface in pt. А in the plane of fig. 1 and in the plane that is perpendicular to the plane of Fig. 1 respectively [2]. Besides, the pressure difference 0 P  along the interface on the level of the horizontal rotation х axis in pt. 
О will be as the following [1]: where 0 Rcurvature radius of the SD interface surface in pt. О (Fig. 1). Then, when we take into account dependencies (8) and (9), dependence (7) will be as the following: Equation (10) Having multiplied both the left and the right parts of (12) by a , we will obtain an equation in a dimensionless form that describes the SD surface:   When solving (13) and (14) for different specified values of 0 / Ra at the moment when the angle reaches  = 90°, we find the corresponding SD geometrical parameters. The initial boundary conditions are the following: (15) and the final boundary conditions are as follows: RR (16) When the final conditions of (16) are reached, there isn't any further increase in the parameters according to (16) and the SD surface becomes strictly cylindrical, i. e. Results and Discussion Some of the results of the SD dimensionless parameters ( The results of such error calculation are provided in table 3. (18), have a small method error in the indicated range of values / 2R l . However, when implementing B. Vonnegut and J. Slattery's techniques there is a necessity to measure the largest SD radius 2R , which is significantly influenced by the optical zoom factor Ì of the tube with the fluids under study that can vary in the range from 1.332 to 1.34 [1]. Calculation of a certain Ì value depends on many factors and it can lead to significant additional errors of the obtained results. Therefore, it is advisable to use the techniques that do not involve measurement of the largest SD diameter 2R (S. Torza and H. Princen's techniques and approximate dependence (18)). However, S. Torza and H. Princen's techniques are characterized by significant method errors. Therefore, it is recommended to use approximate dependence (18) given that modern means for IT  measurement are equipped with computer aids. This allows to easily develop the appropriate software that would consider dependence (18).
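For orientation, the simplest of the dependences compared above is Vonnegut's relation, which treats a sufficiently elongated spinning drop as a cylinder; the sketch below shows that calculation together with a generic fourth-order Runge–Kutta step of the kind used to integrate the drop-profile equation. The numerical values and the profile function are illustrative placeholders, not data from the paper.

```python
import numpy as np

def vonnegut_ift(delta_rho, omega, radius):
    """Vonnegut's approximation: sigma = delta_rho * omega^2 * r^3 / 4.
    Valid when the spinning drop is long enough to be nearly cylindrical
    (length at least ~4x its diameter).
    delta_rho : density difference between the two fluids, kg/m^3
    omega     : angular velocity of the tube, rad/s
    radius    : radius of the cylindrical part of the drop, m
    """
    return delta_rho * omega**2 * radius**3 / 4.0

def rk4_step(f, x, y, h):
    """One classical fourth-order Runge-Kutta step for dy/dx = f(x, y),
    as would be used to integrate the dimensionless drop-shape equation."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Illustrative numbers only (aqueous surfactant solution vs. oil drop):
delta_rho = 150.0            # kg/m^3
omega = 2 * np.pi * 100.0    # 100 revolutions per second, in rad/s
radius = 0.5e-3              # 0.5 mm cylindrical drop radius
sigma = vonnegut_ift(delta_rho, omega, radius)
print(f"Vonnegut IFT estimate: {sigma * 1e3:.2f} mN/m")  # ~1.9 mN/m
```

The example lands inside the 0.01–20 mN/m range quoted in the introduction; the more elaborate techniques discussed in the paper refine this cylinder approximation for drops that are not long enough for Vonnegut's relation to hold.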
1,243.2
2016-08-01T00:00:00.000
[ "Physics" ]
BRST Ghost-Vertex Operator in Witten's Cubic Open String Field Theory on Multiple $Dp$-branes The Becchi-Rouet-Stora-Tyutin (BRST) ghost field is a key element in constructing Witten's cubic open string field theory. However, to date, the ghost sector of the string field theory has not received a great deal of attention. In this study, we address the BRST ghost on multiple $Dp$-branes, which carries non-Abelian indices and couples to a non-Ablelian gauge field. We found that the massless components of the BRST ghost field can play the role of the Faddeev-Popov ghost in the non-Alelian gauge field, such that the string field theory maintains the local non-Abelian gauge invariance. In a recent study [1], we extended the Witten open string field theory [2,3] on a single D25brane to a cubic open string field theory on multiple Dp-branes, p = −1, 0, · · · , 25. On multiple Dp-branes, both the string field and gauge parameters carry non-Abelian group indices. We expect that the Faddeev-Popov ghost [4] structure originates from the low-energy sector of the BRST ghost field. Siegel [5,6] pointed out that the massless component of the BRST ghost field could be the Faddeev-Popov ghost of gauge theory. However, his discussion was limited to the U (1) gauge field [7], which is the low-energy sector of open string field theory on a single D25-brane. In non-Abelian gauge theory, which describes the low-energy sector of an open string on multiple Dp-branes, the Faddeev-Popov ghost field interacts with the non-Abelian gauge field. Therefore, it is necessary to examine cubic string coupling to confirm that the Faddeev-Popov ghost structure is consistent with the non-Abelian gauge symmetry of the low-energy sector of strings on multiple Dp-branes. For this purpose, we construct the BRST ghost-vertex operator for Witten's cubic open string field theory on multiple Dp-branes. Because the BRST ghost fields transform nontrivially under conformal transformation, it is not easy to find a propagator on the string world sheet to evaluate the Polyakov string path integral, which leads us to the cubic string vertex in the ghost sector. In this study, we construct the three-string vertex operator in the ghost sector explicitly for Witten's cubic open string field theory for multiple Dp-branes. In the next section, we define Witten's cubic open string field theory on multiple Dp-branes with the overlapping functions of the string and BRST ghost coordinates. We could directly convert the overlapping functions into their Fock space representations [8][9][10][11][12][13] to obtain the vertex operators. However, this procedure requires inverting infinite-dimensional matrices that could possibly not be uniquely defined. Moreover, the obtained vertex operators could possibly not correctly reproduce the scattering amplitudes represented by the Polyakov string path integrals. These problems were identified earlier by Cremmer and Gervais [14]. To avoid these problems, we apply a different strategy, called the Mandelstam procedure [15][16][17][18], advocated for the light-cone string field theory by Mandelstam where Q is the BRST operator. Here, the star product * with the BRST ghost coordinates is defined as This action is invariant under the BRST gauge transformation On multiple Dp-branes, both string field Ψ and gauge parameter field can carry U (N ) group indices. A. 
BRST ghost coordinates of open string field theory The ghost part of the Polyakov string path integral and action are given by The ghost coordinates b and c are Grassmann-odd fields on the two-dimensional space of conformal dimensions (2, 0) and (−1, 0). To evaluate the Polyakov string path integral and construct the vertex operators, we must know the propagators of the ghost fields on the string world sheet. However, it is difficult to construct the propagators of the ghost fields of open strings directly on the string world sheet. We know that on the complex plane (closed string), the holomorphic and anti-holomorphic parts of the propagator are given as Thus, we must identify how the BRST ghost fields of an open string transform under conformal transformation. To understand the conformal transformation of open string BRST ghost fields, it is convenient to consider an open string as a folded closed string [19]. The folding condition is as follows (on a cylindrical surface): which are equivalent to b n =b n , c n =c n . Considering these folding conditions, we can define the ghost fields of open strings as follows: If the folding condition is imposed, the string world sheet of the closed string, which is a cylindrical surface for the free string, is folded into a strip, which corresponds to the string world sheet of an open string. The conformal mapping from the complex plane onto the cylindrical surface (the world sheet of a free closed string) is given by Under this conformal mapping, the b-c ghost fields transform as It follows from the conformal transformation of the b-c ghost coordinates and folding construction that the free propagator of the b-c ghost fields of an open string on a strip must be given as Details of the construction of Green's function and the calculations of the Neumann functions of the b-c ghost fields are given in the Appendix. III. CONSTRUCTION OF VERTEX OPERATORS FOR BRST GHOST Assuming that we found Green's function of the ghost fields on the string world sheet, in this section, we construct the vertex operator for the BRST ghost fields. The world sheet of the three-open-string interaction is displayed in Fig. 1. [20] 0 We can use Green's function for the ghost field on the string world sheet as G(η r , ξ r ; η s , ξ s ) = δ rs n≥−1 cos nη r cos nη s + n,m≥−1Ḡ rs nm e |n|ξr+|m|ξs cos nη r cos mη s . With the boundary values of the ghost fields fixed by {b (r) , c (r) , r = 1, 2, 3}, we can evaluate the Polyakov string path integral, which defines the scattering amplitude as where ∂M (r) and r = 1, 2, 3 are the temporal boundaries of the string world sheet. Using Green's function, Eq. (15), we can rewrite F ghost as Now, we rewrite this expression of F ghost into an operatorial form. It is useful to consider a set of anticommuting operators We can construct a coherent state for these anticommuting operators with a set of eigenvalues θ and χ as follows: By comparing the coherent states in Eq. (19) with F ghost , the first line of Eq. (17) must correspond to the normalization factor of the corresponding coherent state. Using the coherent state r |b (r) , c (r) , we can rewrite the Polyakov string path integral in the ghost sector in an operatorial form: Here,b (r) 0ĉ IV. FADDEEV-POPOV GHOST OF NON-ABLEIAN GAUGE FIELD THEORY AND BRST GHOSTS IN OPEN STRING THEORY We expand the string state in the ghost sector to identify the Faddeev-Popov ghost in the asymptotic region (in cylindrical space) as follows: Note that the component fieldsη 1 (x) and χ 1 (x) are massless. 
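Because the displayed equations are not reproduced in this excerpt, it may help to recall the standard b–c ghost conventions that the discussion presupposes. The expressions below are the usual textbook forms (conformal weights, plane propagator, and primary-field transformation law), stated as background rather than taken from the paper itself:

```latex
% Standard b-c ghost system (textbook conventions, stated as assumptions):
% conformal weights h_b = 2, h_c = -1, and the propagator on the complex plane
\langle b(z)\, c(w) \rangle = \frac{1}{z - w}, \qquad
\langle \tilde b(\bar z)\, \tilde c(\bar w) \rangle = \frac{1}{\bar z - \bar w}.

% Under a conformal map z \to z'(z) the ghosts transform as primary fields:
b(z) = \left(\frac{\partial z'}{\partial z}\right)^{2} b'(z'), \qquad
c(z) = \left(\frac{\partial z'}{\partial z}\right)^{-1} c'(z').

% Mode expansions used when matching open-string (folded closed-string) oscillators:
b(z) = \sum_{n} \frac{b_n}{z^{\,n+2}}, \qquad
c(z) = \sum_{n} \frac{c_n}{z^{\,n-1}}.
```

These are the conventions in which the folding conditions b_n = b̄_n, c_n = c̄_n quoted above are normally written, and they fix how the open-string ghost propagator on the strip is obtained from the plane propagator by the conformal maps discussed in the Appendix.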
From the kinetic term of the string field Ψ gh |Q|Ψ gh , the component fields obtain their kinetic terms χ 1 ∂η 1 +η 1 ∂χ 1 . If we collect the massless component field terms in the vertex, Eq. (23), we obtain As we demonstrate in the Appendix, Thus, the BRST ghost field terms in Eq. (25) reduce to the following term: To calculate the gauge-ghost field coupling, we choose the string state as follows: Now the gauge-ghost field coupling terms can be written as It can be further rewritten as Here, we make use of the previous results on open string Neumann functions [1] From calculations ofḠ rs 11 given in the Appendix, we havē ,Ḡ 21 00 = 0, where g = 2 6 3 5 5. This is the cubic interction part of the Faddeev-Popov ghost action V. CONCLUSIONS AND DISCUSSIONS In this study, we investigated the BRST ghost coordinate fields for open strings on multiple Dpbranes. On multiple Dp-branes, the string fields carry non-Abelian group indices and the massless components become non-Abelian gauge fields. In string field theory, the local gauge symmetry appears to be fixed covariantly; hence, the role of the ghost field, the massless component of the Abelian gauge symmetry is maintained in the low-energy region. However, the coupling constants of the ghost gauge fields and those of the cubic gauge fields do not agree. This could be due to the conic singularity of Witten's string field theory. It has also been noted before that the coupling of cubic gauge fields and that of quartic gauge fields are not in agreement with each other [22,23]; this was in a study of Witten's cubic string field theory using the level truncation method. This motivates us to study the ghost sector of cubic string field theory in a proper-time gauge [20,[24][25][26][27][28][29]. In a proper-time gauge, the coupling of cubic gauge fields and that of quartic gauge fields agree with each other [28]. The second issue this paper brings is the extension of this work to closed string field theory. It would be interesting to determine if the Kawai-Lewellen-Tye (KLT) relation [30] holds for the ghost sector, such that the relationship between the general and gauge covariances could be understood at a deeper level. This extension would complete our recent work on closed cubic string field theory [21]. I implicitly assume the Siegel gauge, which fixes the BRST invariance in a way compitable with the Lorentz gauge for gauge field. It may be interesting to examine the effect of choosing different gauge such as the Schnabl gauge [31,32], which simplifies the star product drastically. This work could also shed light on the double copy theory [33][34][35][36], which is based on the proposal "gravity = gauge × gauge". A classical solution to Einstein gravity could possibly be obtained as a product of two copies of the non-Abelian gauge theory. In this approach, the ghost sector could help us understand the relationship between the general covariance of gravity theory and local gauge invariance of non-Abelian gauge theory. These issues will be discussed in subsequent papers. (A3) For Witten's BRST ghost field, the conformal mapping from the string world sheet, described by ζ r , r = 1, 2, 3, onto the upper half of the complex plane, denoted by z r , is defined by the following two consecutive mappings: where the local coordinates of the three patches are given as ζ r = ξ r + iη r , r = 1, 2, 3. At the interaction point, B is mapped to the origin of the disk and the external strings are located at e − 2πi 3 , 1, and e 2πi 3 , respectively. 
In a compact form, Then, each local coordinate patch on the unit disk is mapped onto the upper half plane by the following conformal transformation: The external strings are mapped to three points on the real line: Z n = tan n−2 3 π , n = 1, 2, 3, or explicitly, The Schwarz-Christoffel mapping from the local coordinate patch on the string work sheet to the upper (lower) half complex plane is expressed by series expansions [21] , c Here, we note that as z r → Z r and z s → Z s , Thus, a comparison of Eq. (B1) with Eq. (A2) yields To be explicit, •Ḡ rs n0 , n ≥ 0: Differentiating Eq. (A1) and Eq. (A2) with respect to ζ r , Taking the limit where z s → Z s (ω s → 0) (the leading term is proportional to 1/ω s ), Performing a contour integral around ω r = 0 (z r = Z r ), for n ≥ 0, Performing a contour integral dω r dω s (ω r ) −n−3 (ω s ) −m around ω r = 0 (z r = Z r ) and (B12) Note that this equation does not determine when m = 1,Ḡ rs n1 .
2,790.2
2022-02-11T00:00:00.000
[ "Physics" ]
Comparison between percutaneous and laparoscopic microwave ablation of hepatocellular carcinoma Abstract Background Based on patient and tumor characteristics, some authors favor laparoscopic microwave ablation (LMWA) over the percutaneous approach (PMWA) for treatment of hepatocellular carcinoma (HCC). We compared the two techniques in terms of technique efficacy, local tumor progression (LTP) and complication rates. Study design A retrospective comparative analysis was performed on 91 consecutive patients (102 HCC tumors) who underwent PMWA or LMWA between October 2014 and May 2019. Technique efficacy at one-month and LTP at follow-up were assessed by contrast-enhanced CT/MRI. Kaplan–Meier estimates and Cox regression were used to compare LTP-free survival (LTPFS). Results At baseline analysis, LMWA group showed higher frequency of multinodular disease (p < .001) and average higher energy delivered over tumor size (p = .033); PMWA group showed higher rates of non-treatment-naïve patients (p = .001), patients with Hepatitis-C (p = .03) and BCLC-A1 disease (p = .006). Technique efficacy was not significantly different between the two groups (p = .18). Among effectively treated patients, 75 (83 tumors) satisfied ≥6 months follow-up, 54 (57 tumors) undergoing PMWA and 21 (26 tumors) LMWA. LTP occurred in 14/83 cases (16.9%): 12 after PMWA (21.1%) and 2 after LMWA (7.7%). At univariate analysis, technique did not correlate to LTPFS (p = .28). Subgroup analysis showed a trend toward worse LTPFS after PMWA of subcapsular tumors (p = .16). Major complications were observed in six patients (6.6%), 2 after PMWA and 4 after LMWA (3.2% vs 14.3%, p = .049). Conclusions Technical approach did not affect LTPFS. Complications were reported more frequently after LMWA. Despite higher complication rates, LMWA seems a valid option for treatment of subcapsular tumors. Introduction Hepatocellular carcinoma (HCC) is the fifth most common cancer worldwide and the third most common cause of cancer-related death [1] and therapeutic approach is often challenging. In a struggle toward less invasive therapies, thermal ablation has been recognized by EASL-EORTC guidelines as elective curative treatment and as a bridge to transplant of early HCC in patients who are not candidates for resection due to poor liver function, portal hypertension or elevated bilirubin [2]. Thermal ablation techniques are also strongly recommended for treatment of very early HCCs (single lesion, up to 2 cm) even in patients amenable for surgery [3]. In the past decade, microwave ablation (MWA) has emerged as a novel promising technique versus the conventionally used radiofrequency ablation (RFA) [4]. The main advantages are more predictable and larger ablation zones over a shorter time with comparable complication rate [5,6]. A meta-analysis even suggested better outcomes after MWA of larger neoplasms [7]. As for RFA, MWA has been used in both percutaneous and intraoperative approaches [8,9]. To date, few studies have compared laparoscopic and percutaneous ablation of HCCs, mostly using RFA [10,11] and no comparative data are available regarding the use of MWA. No clear indications are available regarding tumor or patient characteristics to guide the choice toward percutaneous or laparoscopic approach. The decision is therefore left to local multidisciplinary teams (MDT), resulting in great heterogeneity based on local expertise. 
The aim of this retrospective study was to investigate differences between percutaneous and laparoscopic MWA in a cohort of patients with early stage HCC in terms of safety, technique efficacy and local tumor progression. Study design A retrospective analysis was performed on 91 consecutive patients with HCC who underwent MWA between October 2014 and May 2019. Following EC approval, data regarding patients and procedures were collected from an Institutional database (IRCCS Ospedale San Raffaele, Milan and Fondazione Poliambulanza Istituto Ospedaliero, Brescia). The population under study had to fulfill the following inclusion criteria: clinical and imaging evidence of HCC (radiological diagnosis of tumors on pre-operative dynamic contrast-enhanced CT or MRI with a liver-specific acquisition protocol); disease stage 0, A, B deemed amenable of curative treatment (ablation alone or ablation combined with surgery); ablation within one month of last imaging. Patients were divided into those who underwent percutaneous ablation (PMWA group) and those treated with laparoscopic microwave ablation (LMWA group). Tumors undergoing effective treatment and imaging follow-up of at least 6 months were included in the survival analysis. Preoperative planning The indication for ablation was decided during weekly MDT, which included at least one radiologist, hepatobiliary surgeon, hepatologist, medical oncologist and radiation oncologist. Approach, percutaneous or laparoscopic, was agreed upon based on evaluation and consensus of both the interventional radiologist (IR) and surgeon on a case-by-case basis. The intraoperative approach was generally favored in presence of one or more of the following conditions: multifocal disease, sub-diaphragmatic location, subcapsular location, proximity to high-risk areas [12] (adjacent to large vessels or extrahepatic organs). Ablation technique Percutaneous approach was free-hand ultrasound-guided, either under deep sedation or general anesthesia as per institutional protocol. Laparoscopic procedures were performed under general anesthesia using two 12-mm trocars for the laparoscope and ultrasound probe. The needle was inserted percutaneously through a different access to target the liver lesion under intraoperative US guidance. All ablations were performed by experienced IRs (>100 ablation procedures) using a 2450 MHz/100 W Microwave generator (Emprint, Medtronic). Ablation protocol (power and time) was tailored to tumor size according to manufacturer instructions (Instructions for Use, Emprint TM percutaneous Antenna with Thermosphere TM technology, Ablation Zone Charts, R0065469). Follow-up Institutional follow-up defines CT/MRI using a liver-specific acquisition protocol 1 month after the procedure, then every 3 months for the first year and every 6 months thereafter. Technique efficacy was defined as absence of pathological enhancement at the ablation zone (residual tumor) on imaging at 1 month after ablation; Local Tumor Progression (LTP) was defined as appearance of foci of vital disease at any of the follow-up time points [13]. Complications and side effects were defined according to the SIR classification system [14] based on a combination of outcome, clinical severity, effect on hospitalization and presence of long-term sequelae based on clinical and radiological follow-up. 
Data considered For each patient, data regarding gender, age, history of previous HCC and liver interventions, Child-Pugh class, etiology of cirrhosis, BCLC stage, number of lesions and all imaging (pre- and post-procedure) were collected. For each tumor, data regarding tumor size, location, amount of delivered energy over tumor size (W·mm/s), technique efficacy and LTP were collected. Actual tumor size was re-determined on the day of the procedure by real-time US. Statistical Analysis Continuous variables were reported as mean and standard deviation (SD), categorical variables as frequencies. Statistical analyses were performed using commercially available software (SPSS v25, IBM). The distribution of categorical tumor- and patient-related variables between the two study groups was assessed with chi-square analysis; continuous variables were compared with the Mann-Whitney U-test. LTP-free survival (LTPFS) was analyzed with Kaplan-Meier curves, comparing variables with log-rank analysis. Since the occurrence of events did not reach 50% of the study sample, median LTPFS was not obtained. Univariate and multivariate Cox proportional hazards regression models were also performed. Hazard ratios (HR) and 95% confidence intervals (95% CI) were calculated. The significance level for all analyses was set at 0.05. Five patients underwent ablation of two tumors in a single session (one in the PMWA group, four in the LMWA group), and three patients underwent ablation of three tumors in a single session (one in the PMWA group, two in the LMWA group). Eight patients in the LMWA group underwent concomitant hepatic resection. Technique efficacy At one-month follow-up, 94/102 tumors were effectively treated (technique efficacy rate = 92.2%); of these, 34/35 were effectively treated laparoscopically (97.1%) and 60/67 percutaneously (89.6%), with a statistically non-significant trend toward better results in the LMWA group (p = .18). Tumors which did not achieve effective treatment were retreated: four cases were re-ablated (three in the PMWA group, one in the LMWA group) and four cases were converted to TACE (two in the PMWA group, two in the LMWA group). Local tumor progression free survival Among patients with imaging-proven technique efficacy at 1 month, 75 patients with 83 tumors satisfied imaging follow-up of at least 6 months. Mean follow-up time across the two groups was 18.2 months (SD 10.7; range: 6-55 months). Of these, 57 tumors (54 patients) were in the PMWA group (follow-up 6-55 months, mean 18.9, SD 11.3) and 26 tumors (21 patients) in the LMWA group (follow-up 6-40 months, mean 16.8, SD 9.5). The overall LTP rate was 14/83 (16.9%), with 1- and 2-year LTPFS of 95.3% and 74.9%, respectively. Twelve LTPs occurred in the PMWA group (21.1%) and two in the LMWA group (7.7%). LTPs were re-ablated in five cases (four in the PMWA group, one in the LMWA group). The remaining cases showed disease progression outside the ablation zone, were not deemed fit for re-ablation and thus required different therapeutic approaches (in the PMWA group six cases underwent TACE and two cases refused further treatment; one case in the LMWA group was scheduled to receive sorafenib). Univariate Cox regression analysis (Figures 1 and 2; Table 3) showed that the operative approach was not correlated to LTPFS (PMWA: mean time to LTP 14.9 months; LMWA: mean time to LTP 14 months; p = .26).
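The survival statistics described above (Kaplan–Meier estimates, log-rank comparison and Cox proportional-hazards regression) were run in SPSS by the authors; the sketch below is an independent re-implementation using the lifelines Python package on a hypothetical per-tumor table. The column names (time_months, ltp_event, laparoscopic, subcapsular, size_mm, energy_per_size) are invented for illustration and are not the authors' variable names.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-tumor table; columns: time_months, ltp_event (1 = LTP),
# laparoscopic (1 = LMWA, 0 = PMWA), subcapsular, size_mm, energy_per_size.
df = pd.read_csv("ablation_followup.csv")

pmwa = df[df["laparoscopic"] == 0]
lmwa = df[df["laparoscopic"] == 1]

# Kaplan-Meier estimates of LTP-free survival for each approach
kmf_p = KaplanMeierFitter().fit(pmwa["time_months"], pmwa["ltp_event"], label="PMWA")
kmf_l = KaplanMeierFitter().fit(lmwa["time_months"], lmwa["ltp_event"], label="LMWA")

# Log-rank comparison of the two curves
lr = logrank_test(pmwa["time_months"], lmwa["time_months"],
                  event_observed_A=pmwa["ltp_event"],
                  event_observed_B=lmwa["ltp_event"])
print("log-rank p =", lr.p_value)

# Multivariate Cox proportional-hazards model with the covariates named in the text
cph = CoxPHFitter()
cph.fit(df[["time_months", "ltp_event", "laparoscopic",
            "subcapsular", "size_mm", "energy_per_size"]],
        duration_col="time_months", event_col="ltp_event")
cph.print_summary()  # hazard ratios with 95% confidence intervals
```

This is only a sketch of the analysis pipeline; clustering of multiple tumors within the same patient, which the study does not model explicitly, would need additional handling (for example, a cluster-robust variance estimate).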
In a multivariate model comprising technique, subcapsular location of tumors, energy delivered and tumor size (Table 4), subcapsular location was the only independent predictor of worse LTPFS (HR = 4.727, p = .009). Since no deaths occurred during the follow-up period, overall survival was not analyzed. Subgroup analysis Considering PMWA and LMWA separately (Table 3), subcapsular tumors had worse LTPFS than deep tumors in the PMWA group (subcapsular: mean time to LTP 15.1 months; deep: mean time to LTP 14.5 months; p = .005) (Figure 3), whereas no statistically significant difference was seen in the LMWA group (subcapsular: mean time to LTP 17 months; deep: mean time to LTP 11 months; p = .9). Comparing the two techniques according to a per-single-variable stratification, no significant differences in LTPFS were observed. Complications and side effects No fatal events occurred. All complications were major, with a significantly different distribution between the two groups: 2 in the PMWA group (bilio-bronchial fistula, hematoma) and 4 in the LMWA group (pneumothorax, respiratory failure, hematoma, thrombosis of the right main portal branch) (3.2% vs 14.3%, p = .049). Pneumothorax and bilio-bronchial fistula required percutaneous drainage, while respiratory failure was managed conservatively with noninvasive ventilation. Both cases of hematoma were self-limited but required prolonged hospitalization with conservative treatment. Portal thrombosis required anticoagulant treatment (LMW heparin 100 IU/kg bid); although no recanalization was seen on follow-up imaging, a cavernoma developed with preserved parenchymal portal vascularization. Regarding side effects, postablation syndrome, i.e., unexplained fever after ablation [13], was reported after a single laparoscopic procedure. Discussion Based on the data collected in our cohort, despite differences in patient and tumor characteristics between the two study groups, the two approaches did not significantly differ in terms of technique efficacy and local tumor control. Complications were more frequently reported following LMWA. On the other hand, LMWA seems to offer greater reliability in the treatment of subcapsular tumors. Already at baseline analysis, some substantial differences between the two groups were observed, which are most likely linked to decision strategies within the MDTs. The first was the higher frequency of multinodular disease in the LMWA group; this is mostly because MDTs tend to agree on a combined ablative/surgical treatment of multinodular HCC with curative intent. The second finding at baseline was the higher number of non-treatment-naïve patients in the PMWA group; most patients with a previous treatment history for HCC, in particular hepatic resection, were deemed not amenable for surgery during multidisciplinary discussions and were scheduled to receive the less invasive approach. Tumor size did not significantly differ between the two groups. Tumor size does not affect the choice of approach and therefore, rightly so, did not show any significance. Despite a general preference for LMWA over PMWA when dealing with tumors in difficult areas of the liver, no significant differences were observed in variables regarding tumor location. In our study, technique efficacy of the two different approaches was comparable and consistent with what is reported in the literature [15]. The choice of the technical approach did not impact significantly on LTPFS.
However, better local tumor control at follow-up was observed following LMWA of subcapsular tumors, consistent with previously published literature [10]. Traditionally, thermal ablation is performed using a percutaneous approach with results, in terms of local disease control, comparable with resection [16,17]. However, tumors located closer to the liver surface are known to exhibit a higher tendency to recurrence [18], as confirmed by our data. In these cases, literature suggests an intraoperative approach, either during laparotomy or laparoscopy, may be applied to improve outcome. An interesting finding was the average higher energy delivered over tumor size observed in the LMWA group, particularly in subcapsular tumors. A better visualization and monitoring of the ablation zone through laparoscopic guidance enables the IR to radically ablate tumors which would go untreated with the percutaneous approach due to their position. Even though the number of tumors treated in the laparoscopic group was nearly half (36 vs 78), procedure-related complications reached statistical significance for the LMWA approach over PMWA. Less technical invasiveness and lower amounts of energy delivered per tumor in PMWA may explain these differences in the number of complications recorded. Massive thrombosis of the right main portal branch was diagnosed one-month after LMWA effectively treated a cranially located tumor in proximity of the middle hepatic vein. The thrombosis did not respond to medical treatment with heparin, and at the three-month follow-up, collateral revascularization of the liver parenchyma was observed. This complication is particular and has been previously described in literature [19] associated to tumors adjacent to portal vessels. This type of late complication has also been described in an experimental setting [20] as potential consequence of reduced portal flow following intra-procedural CO 2 insufflation of the peritoneal cavity [21] which leads to an increased degree of thermal injury even in vessels not immediately surrounding the treated lesion [20]. This particular case may have been also influenced by the high energy delivered due to tumor size (>3 cm) which was far above the mean. Respiratory failure with hypercapnic acidosis is also a potential complication related to laparoscopic surgery due to the synergistic effect of anesthesia-induced respiratory depression and laparoscopic CO 2 insufflation of the peritoneum [22]. Post-ablation syndrome (PAS) [23], i.e., hyperthermia due to release of inflammatory mediators in response to tissue necrosis, occurred in one patient after treatment of a multifocal disease (3 tumors). This finding is consistent with what suggested by Andreano et al. [24] that total volume of ablation correlates with occurrence of this side effect. Both complications observed in the PMWA group occurred after treatment of subcapsular tumors located in segment VII. This is consistent with literature which suggests that percutaneous ablation of tumors located closer to vulnerable structures, such as the diaphragm in this case, are associated to a higher rate of complications [12]. Interestingly, no complications in the LMWA group occurred after treatment of subcapsular tumors, as laparoscopyinduced pneumoperitoneum allows isolation of the tumor from surrounding tissues and permits direct surgical hemostasis and repair. 
Limitations of the present study are linked to the limited sample size of LMWA compared to PMWA, non-consistent follow-up time for LTPS which led to exclusion of a number of patients and the retrospective nature of the study. In conclusion, LMWA and PMWA are both safe and effective options for treatment of HCC. Accurate case-to-case discussion during MDT meetings is necessary in order to evaluate the best treatment option and achieve comparable results between approaches. LMWA showed a tendency toward better outcome in the treatment of subcapsular tumors. However, given the greater risk of complications, it should be performed by expert interventionalists. Disclosure statement No potential conflict of interest was reported by the author(s).
3,665.2
2020-01-01T00:00:00.000
[ "Medicine", "Engineering" ]
CAPITAL STRUCTURE AND DIVIDEND POLICY : The Capital Structure describes the combination of various components on the right side of the balance sheet. In general, it is a combination of debt and equity where the composition of the company's debt and capital will determine the dividend policy of the company's value. The value of the company reflects the present value of the expected income in the future. The financial management function can maximize the value of the company. Optimal capital structure will also affect dividend policy and high dividend payout will be responded by investors as a sign that the company is in good condition. A high dividend policy will also be responded to by an increase in share prices which will increase the value company. INTRODUCTION Recall the balance sheet of a company. The left side is an asset, called the asset / business structure . The right side is debt and equity called the financial structure (financial structure). The capital structure is defined as the composition and proportion of longterm debt and equity (preferred stock and common stock) determined by the company. Thus, the capital structure is financial structure reduced by short-term debt. Short-term debt is not taken into account in the capital structure because this type of debt is generally spontaneous (changes according to changes in sales levels). Meanwhile, long-term debt is fixed over a relatively long period of time (more than one year) so that financial managers need to think more about its existence. That is the main reason why the capital structure consists only of term debt length and equity. For that reason, the cost of capital only considers long-term funding sources (not including short-term debt). In increasing its shareholders, the company does not only want to get high profits, but also wants to be able to increase the value of the company. According to Manopo and Arie (2016: 468), the importance of company value makes investors more selective in investing in providing credit to companies. The value of the company will give a positive signal in the eyes of investors to invest in a company, while for creditors the value of the company reflects the company's ability to pay off its debts so that creditors do not feel worried about giving loans to the company. the, In the company 's balance sheet (balance sheet ) which consists of the asset side that reflects the financial structure. The capital structure itself is part of the structure which can be interpreted as a permanent expenditure that reflects the balance between long-term debt and own capital. Capital structure is a consideration or comparison between the amount of long-term debt and own capital. The capital structure is the mix (proportion) of the company's long-term permanent funding represented by debt, equity preferred stock and common stock. LITERATURE REVIEW Basic Concepts of Capital Structure The company's capital structure on basically, the combination of the various components on the right-hand side of the balance sheet. In general, it is a combination of: debt and equity. In general, research is concerned with what factors influence this combination. On the other hand, textbooks usually discuss the optimal cost of capital, which is the weighted average of the capital components, which is often referred to as the weighted average cost of capital (WACC). 
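Since the weighted average cost of capital is central to the argument that follows, a small numerical illustration may help. The figures below are invented for the example and are not taken from the text; the formula is the standard after-tax WACC definition.

```python
def wacc(equity_value, debt_value, cost_of_equity, cost_of_debt, tax_rate):
    """Weighted average cost of capital.
    WACC = E/(E+D) * r_e + D/(E+D) * r_d * (1 - T),
    where the debt term is tax-adjusted because interest is normally deductible.
    """
    total = equity_value + debt_value
    return ((equity_value / total) * cost_of_equity
            + (debt_value / total) * cost_of_debt * (1.0 - tax_rate))

# Illustrative only: 60% equity at 12%, 40% debt at 7%, 25% corporate tax.
print(f"WACC = {wacc(600, 400, 0.12, 0.07, 0.25):.2%}")  # -> WACC = 9.30%
```

The example shows the mechanics behind the statement above: changing the debt/equity mix changes the weights (and, through risk, potentially the component costs), which is exactly the lever the capital-structure theories discussed below disagree about.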
1 The relative levels of equity and debt affect risk and cash flows, and therefore, the amount investors are willing to pay for a company or for an interest in it. Capital structure refers to the amount of debt and/or equity used by a company to fund its operations and finance its assets. The company's capital structure is usually expressed as a ratio of debt to equity or debt to equity. According to Riyanto, the capital structure is a comparison between long-term debt and own capital used by the company, according to Riyanto, capital structure is a combination of all items that enter the right side of the company's balance sheet. Understanding the capital structure is distinguished from the financial structure where the structure is a permanent learning that reflects between long-term debt with own capital . while the financial structure reflects the balance of the entire forest (both long term and short term). Capital structure is a company's long-term permanent funding represented by debt, preferred stock and equity shares normal. company in general consists of several components, namely: 1. Foreign capital or term debt long foreign or long-term capital is debt with a long term, generally more than ten years. This longterm debt is generally used to finance expansion. Company (expansion) or modernization of the company, because the capital requirements for these purposes include a large amount big. The capital structure is basically a permanent financing consisting of own capital and foreign capital where own capital consists of various types of shares and retained earnings. The use of foreign capital will cause a fixed burden and the amount of use of this foreign capital . Determine the amount of financial leverage used as a company. Capital alone Own capital is capital that comes from the owner company and embedded in the company for an indefinite period of time. Own capital comes from internal and external sources . Internal sources are obtained from the profits generated by the company while external sources come from company owners. Capital Structure Theories Understanding the basics of capital structure theory is very important, because the selection of the financing mix is the core of the overall business strategy. The capital structure is the mix of permanent (long -term) funding sources used by the company. The aim of capital structure management is to create a mix of permanent sources of funds in such a way as to be able to maximize share prices and so that the objective of financial management is to maximize the value of the company. The ideal funding mix that management always strives for is called the optimal capital structure . One topic of endless debate in financial management is the relationship between capital structure and the cost of capital. Can a company influence its share price and overall cost of capital for better or worse by changing its mix of funding sources? According to the economic logic that Very In simple terms, it makes sense for a company to try to minimize its cost of capital and maximize its share price so that the company's value is maximized. A determination of the optimal level of financial leverage or the optimal composition of funding by minimizing the company's cost of capital is tantamount to maximizing the company's market value. Therefore, the implications of capital restructuring to achieve optimal funding composition need to be observed. 2 Some Structure Theories as follows: 1. 
Traditional Approach Theory This theory states that there is an optimal capital structure. That is, the structure has an influence on the company, in which the structure can change in order get optimal company value. 2. Modigliani and Miller Approach theory liyani and Miller Modigliani and Miller approach theory with tax states that structure is irrelevant to firm value. The value of two identical companies will remain the same and will not influence the financial choice adopted to finance the assets. Modigliani and Miller then put forward several assumptions to construct their theory among them ie no agency, no cost, no tax, no bankruptcy fees. Investors can benefit from the same interest rates from companies and investors have similar information about management about the company 's future prospects front. Theory trade-offs This theory determines the optimal structure by taking into account several factors, such as taxes, agency fees or financial difficulties, but still maintaining a metric assumption that is information and market efficiency so, business actors will think of saving on tax and cost difficulties finance. 4. The packing order theory states that the profit level of a company that has been high will have a higher level of debt small. 5. The theory of information asymmetry and signaling of the capital structure states that parties related to the company do not have the same information about the presence of company risk according to the signaling of capital development with the use of debt in that structure is a signal conveyed from managers to market managers managers ensure the company's prospects well with increasing stocks and consuming to investors. 6. Agency theory This theory states that the structure is created to reduce conflict between groups that have interests. Dividend Policy Dividend policy is part or all of the company's profit in running the business which is distributed to shareholders. In the distribution of dividends in a company must determine decisions that must be taken through dividend policy. Dividend policy is an integral part of the company's funding decisions . Dividend policy concerns the use of profits that are the rights of shareholders (wiagustini.2014:255). Dividend policy often creates a conflict of interest between the shareholders. Dividend policy is a difficult decision for the company's management, because the distribution of dividends on the one hand will meet the expectations of investors to get a return as a profit from the investment they make. Dividend policy as a mechanism for reducing agency problems can be used effectively simultaneously with the use of debt in the capital structure as an effort to improve lender monitoring. This dividend policy can be used to reduce agency problems. Bonding can be done through a dividend policy (reducing free cash flow ), while monitoring can be done by involving lenders in reducing the behavior of opportunistic managers. Various costs that can arise due to agency problems are: The monitoring expenditures by the principal, are expenditures paid by the principal to measure, observe and control the agent 's behavior so that it does not deviate. These costs arise due to an imbalance of information between the principal (owner) as the supervisor and the agent (manager) as the executor. 
The bonding expenditures by the agent, are expenses paid by the agent (manager) to create an organizational structure that minimizes unwanted manager actions, thereby guaranteeing that principals will provide compensation if they have carried out orders principals. The residual loss, costs incurred because of the opportunity cost, because the agent cannot make decisions without principals approval . This is motivated by the potential for differences in the interests of principals and agents, so that the prosperity of principals decreases. This condition occurs when agents carry out decisions from principals but their welfare does not increase, and vice versa. decisions of principals are not implemented but can improve the welfare of agents, so that principals must bear the cost of losses ( residual) . loss) A company's dividend policy is very important and requires consideration of the following factors: 1. Companies must protect the interests of investors. Therefore , the company's financial policies must be able to convince and guarantee the achievement of objectives for its shareholders. 2. Dividend policies affect the company's financial and capital budgeting programs . 3. Dividend policy affects the company 's cash flow /low liquidity automatically will limit the distribution of dividends. 3 4. Based on the important influence of dividend policy both from companies and investors, what can be said to be profitable companies are companies that are able to pay dividends (Sari and Sudjarni , 2015). Dividend policy is very important for companies to be able to determine whether the profits earned by the company should be distributed to shareholders or will be retained to be able to help support the growth of the company. Dividend policy in a company will determine the distribution of dividends to shareholders share. Relationship between Capital Structure and Dividend Policy Dividend policy has an influence on the level of use of a company's debt. A stable dividend policy makes it imperative for companies to provide a certain amount of funds to pay these fixed dividends. Therefore, the capital structure has an influence on the dividend policy applied by the company. Research on capital structure and dividend policy which Optimization is needed to increase shareholder value and prosperity. Optimal capital structure will also affect dividend policy and high dividend payout will be responded by investors as a sign that the company is in good condition. A high dividend policy will also be responded to by an increase in stock prices which will increase the value of the company. The capital structure directly to agency costs is positively significant. This result is consistent with the prediction of the hypothesis which states that capital structure has an effect on agency power. The greater the use of debt, the greater the consequence the interest expense and the greater the probability that there will be a decrease in income. This condition can cause the threat of bankruptcy ( financial distress). If the company goes bankrupt, bankruptcy costs will arise, which are caused by, among others: the existence of being forced to sell assets below market prices, company liquidity costs, damage to fixed assets that takes time before they are sold and so on. The agency theory of Jensen and Meckling (1976) which states that the greater the use of debt in the capital structure can reduce agency costs. 
This is due to the existence of this debt, it requires managers to act very carefully, especially in managing finances, among others, by reducing free cash flow and can encourage them to work more efficiently, considering that compensation for the use of debt is the company's obligation to return principal and interest from loans the. CONCLUSIONS Capital structure is a comparison between long-term debt and own capital used by the company , according to Riyanto, capital structure is a combination of all items that enter the right side of the company's balance sheet. Dividend policy is a difficult decision for company management, because dividend distribution on the one hand will meet investors' expectations of getting a return as a profit from the investment they make. Capital structure has an influence on the dividend policy applied by the company. Research on optimal capital structure and dividend policies is needed to increase shareholder value and prosperity. Optimal capital structure will also affect dividend policy and high dividend payout will responded by investors as a sign that the company is in good condition. The high dividend policy will also be responded to with an increase in stock prices that will increase the value of the company.
3,354.4
2023-02-27T00:00:00.000
[ "Business", "Economics" ]
A Simple but Highly Effective Approach to Evaluate the Prognostic Performance of Gene Expression Signatures Background Highly parallel analysis of gene expression has recently been used to identify gene sets or ‘signatures’ to improve patient diagnosis and risk stratification. Once a signature is generated, traditional statistical testing is used to evaluate its prognostic performance. However, due to the dimensionality of microarrays, this can lead to false interpretation of these signatures. Principal Findings A method was developed to test batches of a user-specified number of randomly chosen signatures in patient microarray datasets. The percentage of random generated signatures yielding prognostic value was assessed using ROC analysis by calculating the area under the curve (AUC) in six public available cancer patient microarray datasets. We found that a signature consisting of randomly selected genes has an average 10% chance of reaching significance when assessed in a single dataset, but can range from 1% to ∼40% depending on the dataset in question. Increasing the number of validation datasets markedly reduces this number. Conclusions We have shown that the use of an arbitrary cut-off value for evaluation of signature significance is not suitable for this type of research, but should be defined for each dataset separately. Our method can be used to establish and evaluate signature performance of any derived gene signature in a dataset by comparing its performance to thousands of randomly generated signatures. It will be of most interest for cases where few data are available and testing in multiple datasets is limited. Introduction In recent years, DNA microarray technology has been increasingly used in oncology. It has provided insight into the biological mechanisms underlying tumour formation and identified new therapy targets [1,2]. However, most studies performed in this field identify gene sets, or so-called signatures, which can be used to improve diagnosis and risk stratification [3,4,5,6]. These signatures can be acquired through supervised analysis methods [7]. Both patient microarray and clinical data are directly used to find the genes that correlate with tumour type or patient outcome [8,9,10,11,12]. Also biology-based signatures can be used for patient prognosis, which are usually derived from in vitro microarray data [2,13,14,15]. Though the performance of these classifiers can be very high in the dataset studied, application of these signatures in other datasets is often limited and data reproduction is not straightforward [16]. Furthermore, signatures identified in comparable studies show little overlap in gene content [1,10,17,18,19,20]. Michiels et al. [4] showed that identified gene lists were highly variable within one dataset and depended on the patients included in the training set. Further, they demonstrated that several published gene classifiers did not classify patients better than by chance. They stress that validation is an important issue in microarray research. Fan et al. [21] repeated and extended these analyses 5 years later and made similar conclusions. Moreover Boutros et al. [19] amongst other showed that the use of different statistical procedures could identify multiple highly prognostic signatures from one dataset [22,23]. An extensive analysis of the effect of different statistics on ranked gene lists showed large variability [24]. 
A major challenge with DNA microarray technology is to take account of variability across a very large number of parameters [1]. This variability arises from several sources: the biological samples, hybridisation protocols, scanning, and image and statistical analysis [7]. In a recent review, Dupuy et al. [1] demonstrated that proper methodology in pre-processing and statistical analysis is essential in these sorts of studies. They found that a large subset of published microarray studies show flaws in the applied analysis; serious mistakes are made in the selection of genes and inadequate control of multiple testing is performed. The issue of multiple testing is crucial, as microarrays monitor the expression of thousands of genes, while the number of samples is relatively small. Statistical significance of the differences in gene expression patterns for different patient groups or tumour types is often determined with traditional statistical testing procedures, such as the two-sample t-tests or Wilcoxon rank sum tests [1,7,20]. These procedures are challenged with serious multiplicity and without employment of a correction for multiple testing, the number of false positives will be extremely high. Various methods have been developed to overcome this problem of identifying differentially expressed genes and are used to create gene signatures [11,19,20,25]. More importantly multiple testing is often not considered in evaluating the prognostic power of signatures. Once a signature is created, its prognostic power is determined with traditional survival statistics and standard cut-off values for significance. We hypothesise that this can lead to high numbers of false prognostic signatures when the number of evaluated datasets is limited. Therefore we sought to develop a simple method to take into account the high-dimensionality of microarrays in the phase of evaluating signature prognosticity. To quantify the problem of multiple testing we have developed a method to test batches of random signatures in microarray datasets. We show that the average chance that a random signature produces a prognostic result in one dataset is approximately 10% but can range from 1% to ,40%. Increasing the number of datasets reduces this false positive rate significantly. As a result of this high degree of variability amongst datasets, we developed a method that can be used to determine an appropriate threshold level of significance that must be reached for a given signature. This is done by testing a set of randomly chosen signatures along with the signature of interest within the dataset under investigation. Results In order to assess the potential for identifying prognostic gene signatures by chance alone in microarray based datasets a method was developed to test the prognostic value of batches of randomly generated signatures. Six different publicly available microarray datasets with follow-up data were used (Table 1). These six datasets differ in number of patients, number of measured genes, number of reporters measured per gene, as well as platform and type of cancer. For each dataset separately 5 batches of 10,000 random signatures were generated and tested. In each batch the number of genes (UnigeneIDs) in a gene set was predefined. The number of genes (UnigeneIDs) in the five batches were 10, 25, 50, 100 and 200 respectively. For example, the first batch included 10,000 random signatures, each consisting of ten genes. 
For each signature a patient score was derived, defined as the average of the expression of the genes in a signature (equation 1). Each signature score was then tested for prognostic value by ROC analysis and determination of the AUC. Figure 1A-F shows the distribution of the AUCs for the first batch of 10,000 random signatures for the different datasets. To define a reasonable cut-off value for the AUC values, we first searched for AUCs used in published gene signatures. However, the majority of studies do not evaluate gene signatures using the AUC. Most gene signatures are evaluated with Kaplan-Meier survival curves and log-rank tests. Kaplan-Meier survival analyses and ROC analyses are linked; a high AUC corresponds to a good separation into distinct survival groups. To be able to define a cut-off, we calculated the AUCs for the different gene sets evaluated in the review by Ntzani et al. [26]. The calculated AUCs as well as additional information are provided in Supplementary File S1. Based on these calculations we chose the cut-off values AUC ≤0.4 and ≥0.6. In Figure 1G the percentages of signatures that passed the criteria for the different batches of signatures are given. These percentages range from 1% to ∼40%, dependent on the dataset and the number of genes in the signatures. Table 2 provides the average and standard deviation as well as the maximum and minimum AUC for the analyses with the gene sets consisting of ten genes. These data show that the larger the standard deviation, the higher the chance that a randomly generated signature is considered prognostic. Further, the maximum and minimum AUC show that very high signature performances can be found at random. Sampling 10,000 gene sets is a small number compared to the total number of possible gene sets. In order to show that 10,000 random gene sets are sufficient to estimate the AUC distribution, we tested batches of 1,000,000 signatures consisting of 10 genes in the six datasets. The AUC distributions for this permutation study were similar to the distributions for the batches of 10,000 gene sets (Table 3). From the differences between the six datasets (Table 1), it could be that the number of patients, the number of genes (UnigeneIDs) and the number of reporters measured per gene influence the probability that a randomly chosen signature is considered prognostic. To further investigate the impact of these parameters, the Miller dataset was used. To determine the influence of patient number, the dataset was split in halves and in quarters. For these partial datasets the same five batches of 10,000 random signatures were tested. The influence of the number of genes was tested by splitting the dataset in half, this time based on genes rather than patients. Again, five batches of 10,000 random signatures were tested. To investigate the influence of the number of reporters measured per gene, a further set of five batches of 10,000 random signatures was tested on the dataset, considering only genes with more than one reporter. This was repeated, but with only one reporter measurement considered per gene. Of these parameters only patient number influenced the false discovery rate. Results of the analysis to determine the influence of the number of genes and the number of reporters measured per gene are given in Supplementary File S1 and Figures S1A and S1B. Influence of patient numbers It has already been reported in previous studies [26,27] that the number of patients influences the false discovery rate.
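The per-signature ROC analysis and the cut-off counting described above can be sketched as follows; the outcome labels, the matrix sizes and the 10-gene signature size are illustrative assumptions rather than values from the study.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical dataset: expression matrix plus a binary outcome (1 = died from disease).
n_genes, n_patients = 5000, 200
expression = rng.normal(size=(n_genes, n_patients))
event = rng.integers(0, 2, size=n_patients)

def auc_of_random_signature(signature_size):
    genes = rng.choice(n_genes, size=signature_size, replace=False)
    score = expression[genes, :].mean(axis=0)   # signature score per patient (equation 1)
    return roc_auc_score(event, score)          # AUC of the score against the outcome

aucs = np.array([auc_of_random_signature(10) for _ in range(10_000)])

# A random signature is counted as 'prognostic' when its AUC is <= 0.4 or >= 0.6.
fraction_prognostic = np.mean((aucs <= 0.4) | (aucs >= 0.6))
print(f"fraction of random 10-gene signatures passing the cut-off: {fraction_prognostic:.1%}")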
The Miller dataset was split into two and four groups, respectively, to confirm the importance of this factor. The same five batches of 10,000 random signatures that were tested on the whole dataset were tested on these subgroups (Figure 2A). Indeed, the number of prognostic signatures increases dramatically when the size of the patient group decreases. To characterize the relationship between patient number and the probability that a randomly chosen signature is considered prognostic, additional analyses were performed for the batch of 10,000 runs with ten genes. The dataset was split into three, five and ten groups, respectively. Figure 2B shows the distribution of the AUCs for the batch of 10,000 random signatures consisting of ten genes for the different dataset sizes. It is clear that the smaller the dataset, the wider and flatter the distribution becomes. Figure 2C presents the number of prognostic signatures as a function of dataset size. Effect of filtering One of the parameters that could account for differences in the number of prognostic signatures for a given dataset is filtering. To briefly explore the influence of filtering, two simple filtering methods were applied to the Miller dataset. After this filtering, again five batches of 10,000 signatures were tested. The first filtering procedure was to only consider reporters that had no absent calls in the patients. This very stringent filtering resulted in a reduction in the number of reporters from ∼45,000 to ∼7,300 (approximately 5,000 unique UnigeneIDs). The second filtering method, often used in microarray-based studies, consists of simply applying a threshold to the fold change. To show the effect of this step on the number of false positives a two-fold threshold was applied: only genes that show at least a two-fold change across the patients are considered. This reduced the number of reporters from ∼45,000 to ∼23,000. The results for these analyses show that the two filtering methods have different effects (Figure S1C). Fold change filtering did not influence the probability that a randomly chosen signature is considered prognostic; rather, it provided similar results to those of the non-filtered analysis. Filtering for absent reporters, on the other hand, introduced a signature size dependency for the false positive rate. A small signature size resulted in a false positive rate of ∼10%, whereas large signatures had a false discovery rate of only ∼0.5%. The average, however, stands at 5-6%, similar to the non-filtering and fold change filtering analyses. Signature testing procedure To demonstrate that this random signature method can be used with all sorts of signature evaluation methods, two additional evaluation procedures were tested in the Miller dataset. In the previous analyses the signature score was used as a continuous variable. Here we selected 10,000 random samples of 10, 25, 50, 100 and 200 genes. In the first setup the signature score was used to median dichotomize the patients. In the second setup these gene subsets were used in a K-nearest neighbor classification (KNN) combined with leave-one-out cross-validation (LOOCV). Both procedures result in patient classification into two groups, which were then coupled to outcome and evaluated by the AUC. Similar AUC distributions are obtained with these different signature evaluation procedures, although the exact distribution characteristics differ slightly (Table 4). The numbers of random gene sets passing the criteria are comparable (Figure S1D).
Evaluating signatures by random testing To show that the random signature testing method is a valuable tool in microarray-based studies, several published gene signatures were tested. In short, the suggested procedure to test a signature in a dataset is as follows (Figure 3). For the signature of interest the AUC was calculated in the dataset; additionally, the AUC distribution for batches of random signatures of a similar size to the signature of interest was computed. The signature AUC was then compared to the random signature AUC distribution with a Z-test to assess whether the signature of interest performed better than could be expected by chance. The Wound signature [13], the 'invasiveness gene signature' (IGS) [10] and two early hypoxia signatures [15] are recently published gene signatures. For the Wound and IGS signatures it was previously shown that these signatures had high prognostic value in different datasets and cancer types [10,13,28]. The two early hypoxia signatures, however, were only evaluated in one dataset [15]. These signatures were evaluated in the three breast cancer datasets [8,29,30] with the signature score (details are provided in Supplementary File S1). For the Miller dataset Kaplan-Meier survival analyses were also performed, since the two early hypoxia signatures were previously tested in this dataset. The results of the Kaplan-Meier survival analyses and the random signature testing are given in Table 5 and Figure S3. From the Kaplan-Meier survival analyses all four signatures seemed to have a high prognostic value (p-values log-rank test <0.05). However, the random signature testing procedure indicated that the two early hypoxia signatures did not perform better than chance in that dataset. Testing the four signatures in the other two breast cancer datasets indeed showed that the two early hypoxia signatures did not have prognostic value (p-values log-rank test >0.05). For the Wound and IGS signatures both evaluation procedures indicated that the performance of these signatures is high in the different datasets and that this is unlikely to be due to chance. Discussion We assessed six patient microarray datasets spanning different cancer types, numbers of patients and arrays to evaluate the effect of false positives on gene signature evaluation. Different-sized batches of 10,000 random signatures were tested in all datasets. With the given threshold, the average chance that a randomly generated signature was considered prognostic was approximately 10%, but ranged from 1% to ∼40%. Testing batches of random signatures in different datasets revealed that the AUC distribution varied widely between datasets. Choosing an arbitrary cut-off value for significance is therefore clearly not suited for gene signature evaluation; rather, a dataset-based cut-off value should be considered. The random testing method we propose here can be applied to calculate the level of AUC necessary to reach significance beyond random for a given signature size in a given dataset. Figure 3. Workflow of the signature testing procedure. A systematic overview of the proposed signature testing procedure is depicted here. First, the performance of the signature of interest is determined. A batch of random gene sets with the same size as the signature is subsequently tested. Signature performance is then compared to the AUC distribution of the random gene sets with a Z-test to assess whether the signature performs better than random.
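A minimal sketch of the Z-test step in this workflow is given below; the signature AUC and the parameters of the random-signature AUC distribution are made-up placeholder values, not results from the paper.

import numpy as np
from scipy import stats

# Hypothetical inputs: the AUC of the signature of interest in one dataset, and the AUCs
# of a batch of random signatures of the same size tested in that same dataset.
signature_auc = 0.71
random_aucs = np.random.default_rng(2).normal(loc=0.50, scale=0.05, size=10_000)

# Z-test of the signature AUC against the random-signature AUC distribution.
z = (signature_auc - random_aucs.mean()) / random_aucs.std(ddof=1)
p_value = 2 * stats.norm.sf(abs(z))  # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.3g}")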
doi:10.1371/journal.pone.0028320.g003 The random testing procedure can also be used to directly test whether the performance of a certain signature could be due to chance. A schematic overview is given in Figure 3. A batch of random signatures with the same size as the signature of interest can be tested along with the original signature. The AUC distribution of the random signatures can then be used to statistically test whether the original signature performs better than random. An equivalent permutation-based validation step was used by Boutros et al. [19] to evaluate their signature; this step provided significant information on the prognostic performance of the gene set. We have shown that proper validation is absolutely essential in gene signature research. This supports several previous studies, which have argued that signature performance is often overestimated due to improper validation in a large number of studies [1,26]. For several analyses, the maximum and minimum AUC were also calculated. We show that random signatures can have very high performances (AUC >0.9), which further supports this observation. A method to overcome this multiple testing problem is validation in multiple independent datasets. We have shown that testing random signatures in two datasets dramatically decreased the chance that a random signature is called prognostic (Figure S2); if the chance of passing the cut-off in a single dataset is roughly 10%, requiring the criterion to be met in two independent datasets reduces it to the order of 1%. However, it is not always possible to validate a gene signature in multiple datasets. In oncology most microarray studies focus on breast and lung cancer, and for these sites there are many public datasets available that can be used for validation. This technique is therefore not primarily meant for these cancer types, but rather for tumour types where only few data, in terms of the number of samples and number of datasets, are available; for those cases this technique would be valuable. When the performance of four published signatures was compared in one patient microarray dataset with Kaplan-Meier curves, all signatures seemed to have prognostic value. However, applying the random testing procedure in that dataset already indicated that two out of four signatures did not perform better than chance. Testing the four signatures in multiple datasets indeed showed that these two signatures did not show prognostic value in the other datasets. From the analyses on all six datasets, several parameters could influence the number of false positives. To assess the effect of these variables, several parameters were manipulated in one of the datasets. However, of the tested parameters, only patient number influenced the false positive rate dramatically. The need for large patient groups to obtain reliable results has already been recognised in other studies. Ntzani et al. [26] evaluated 84 microarray studies and concluded that small studies often give inflated, over-promising results. Zien et al. [27] assessed the influence of the number of samples in a different way: a simulation model was applied in which specificity and sensitivity were measured depending on changes in sample size and technical and biological variability. They showed that with small sample sizes, sensitivity and specificity were highly dependent on the biological and technical variance, whereas larger sample sizes led to quite robust results that were less dependent on biological and technical variance. Moreover, Popovici et al. [23] tested the effect of training set size on the performance of the trained marker in a validation dataset.
Overall signature performance improved in the validation data, and better concordance between training and testing results was observed when the training dataset size increased. Testing batches of randomly generated gene sets in different gene expression microarray datasets showed that the use of an arbitrary cut-off value for the evaluation of signature significance is not suitable. Further, it is important to use the same signature evaluation procedure for the random gene sets as for the signature of interest, since the AUC distribution can differ when a different method is used. Thresholds should be defined for single datasets separately in order to obtain reproducible results. This permutation method can be used to establish and evaluate the signature performance of any derived gene set within single or multiple datasets by comparing its performance to the performance distribution of thousands of randomly generated signatures. However, it will be of most interest for cases where limited data are available. Random signature testing A method to test the prognostic value of random gene signatures of a predefined size on a microarray dataset was developed in Matlab (Matlab 7.1, The Mathworks, Natick, MA, USA). Unless indicated otherwise, analyses were performed using this program. The program creates a user-specified number of random gene sets, consisting of a user-specified number of genes. For a given dataset all genes on the respective microarray were used to create the random signatures. This batch of random signatures was then tested on a dataset by means of a signature score calculation. Datasets Patient microarray and clinical follow-up data were collated to test the random gene sets. Datasets are publicly available in the microarray databases Gene Expression Omnibus (GEO: http://www.ncbi.nlm.nih.gov/projects/geo/) and Stanford Microarray Database (SMD: http://genome-www.stanford.edu/microarray) and elsewhere. Accessory clinical and follow-up data were either given with the datasets or provided by the authors on request. Table 1 provides an overview of the datasets and the databases where these are accessible. Data filtering and pre-processing are explained in Supplementary File S1. Signature score calculation. Expression data of the genes in a signature were extracted from the dataset. The following step was used to calculate a signature score for each patient included in the dataset. This score was defined as the average expression value of the genes in the signature (equation 1): Score_m = (1/N) Σ_i exp_i,m, where Score_m is the signature score of sample m; N, the number of genes in the signature; exp_i,m, the gene expression of gene i in sample m. When a gene was represented by more than one reporter on an array, the expression of the reporters was averaged before signature calculation. The signature scores for each patient were then coupled to the survival data of the patients. The signature score was used to median dichotomize the patient cohorts. In a second setup the expression of the genes in the signature was used for K-nearest neighbor classification (KNN) combined with leave-one-out cross-validation (LOOCV). With this method one patient is withheld and the class membership of this patient is predicted using the KNN model (knnclassify function in Matlab) built on the remaining patients. The event parameter of the survival data was used as the training class. This procedure was repeated for each patient, resulting in a class prediction for the whole cohort.
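The two evaluation setups described here (median dichotomization of the signature score, and KNN with leave-one-out cross-validation) can be sketched as follows; the original work used Matlab's knnclassify, so this Python version, including the data shapes and the choice of k, is only an illustrative stand-in.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(3)

# Hypothetical inputs: expression of the signature genes (patients x genes) and the event indicator.
X = rng.normal(size=(100, 10))
event = rng.integers(0, 2, size=100)

# Setup 1: median-dichotomize the patients on the signature score.
score = X.mean(axis=1)
high_score_group = (score >= np.median(score)).astype(int)

# Setup 2: predict the event with KNN under leave-one-out cross-validation.
knn = KNeighborsClassifier(n_neighbors=3)   # k is an assumption; the text does not state a value
loocv_prediction = cross_val_predict(knn, X, event, cv=LeaveOneOut())

Both resulting group labels (high_score_group and loocv_prediction) can then be evaluated against outcome with the AUC, as described in the Analysis section.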
Analysis The signature scores, median dichotomized groups or KNN classifications were evaluated with the area under the curve (AUC) of the receiver operating characteristic (ROC) curve. Definitions for the AUC calculations are as follows: true positive, a patient in the high score group that died from disease; false positive, a patient in the high score group that is alive; true negative, a patient in the low score group that is alive; false negative, a patient in the low score group that died from disease. A signature score was considered prognostic when the AUC was ≤0.4 or ≥0.6. This cut-off value was based on the AUCs of several published gene signatures evaluated in the study of Ntzani et al. [26] (further details are given in the results section and Supplementary File S1). File S1 The supplementary material contains a supplementary materials and methods section and a supplementary results section. The supplementary materials and methods give a more detailed description of the data analyses. The supplementary results describe the analyses performed to check the influence of several parameters that had minimal to no effect on the random signature AUC distribution. Additional tables are also included.
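For the dichotomized groups, the AUC can be computed directly from these four counts, since the ROC curve of a binary classification has a single operating point and its area equals (sensitivity + specificity)/2. The sketch below is illustrative only; the function name and example values are not taken from the paper.

def dichotomized_auc(high_score_group, died):
    # Counts following the definitions above (1/True = high score group, 1/True = died from disease).
    tp = sum(1 for h, d in zip(high_score_group, died) if h and d)
    fp = sum(1 for h, d in zip(high_score_group, died) if h and not d)
    tn = sum(1 for h, d in zip(high_score_group, died) if not h and not d)
    fn = sum(1 for h, d in zip(high_score_group, died) if not h and d)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return 0.5 * (sensitivity + specificity)

# Toy example: a score is called 'prognostic' under the rule above when the AUC is <= 0.4 or >= 0.6.
print(dichotomized_auc([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0]))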
5,507.6
2011-12-07T00:00:00.000
[ "Biology", "Computer Science", "Medicine" ]
Foreign Language Learning Process at an Early Age and Its Impact on the Native Language Education Perhaps the most complex question risen among linguists, psychologists and philosophers is how a child learns foreign language? Considering that language learning is natural and that babies are born with the ability to learn it since learning begins at birth, still Language learning (be it native or foreign) is a process that is not simple and short. It takes time, patience and self-discipline. Independent from some internal and external factors that are found inside and outside of every learner and which differ from each and every person this process has its pros and cons. A foreign (English) language learning at an early age has evolved considering modern technologies and methodologies used by individual learners and teachers. The earlier the language is learnt the more fluent the speaker is, but what happens to the mother tongue? Is the child well understood by the community, school teachers and friends? What is the progress of that child at school, what are psychological effects of technology used in the process of learning a language, what is the best age to learn a foreign language? , etc. These and many other questions will be discussed in this paper. The findings of this paper are assumed to also identify teachers’ perceptions about the main challenges they face during the classroom management with foreign language speakers in the classroom, the strategies they use, parents’ attitude toward this and also to find out some steps that parents and native language teachers should take to improve the situation. Introduction In today's world children are surrounded by screens all around them: television sets, computers, tablets, and phones either to watch or play with. And some children even have access to their own tablet and phone, starting at a young age. All these devices help children in a way or another how to behave, learn new games, new languages etc. Children are those who learn a new language quicker and easier as their brain is fresh and has enough space to accumulate things. Language learning through technology has its pros and cons, as its use in an uncontrolled time can harm them a lot without being noticed by their parents. The earlier the language is learnt the more fluent the speaker is, but what happens to the mother tongue? Is the child well understood by the community, school teachers and friends? What is the progress of that child at school, what are psychological effects of technology used in the process of learning a language, what is the best age to learn a foreign language? , etc. But in order to properly discuss the topic of this paper, first we should understand what bilingualism is! Bilingualism refers to the ability to use two languages in everyday life. Bilingualism is common and is on the rise in many parts of the world, with perhaps one in three people being bilingual or multilingual (Wei, 2000). Definitions of bilingualism range from a minimal proficiency in two languages, to an advanced level of proficiency which allows the speaker to function and appear as a native-like speaker of two languages. A person may describe themselves as bilingual but may mean only the ability to converse and communicate orally. Others may be proficient in reading in two or more languages (or bi-literate). A person may be bilingual by virtue of having grown up learning and using two languages simultaneously (simultaneous bilingualism). 
Or they may become bilingual by learning a second language sometime after their first language. This is known as sequential bilingualism. To be bilingual means different things to different people. 1 Having a child is a miracle that God blesses every parent, it is a mission given to us to complete in the best way possible. Considering that since the period of pregnancy when the child is in the mother's womb until the age of 6 the child receives what is served to it by the parents / guardians, the family members and the society in general But, not always parent unintentionally manage to do the best. Trying to make them happy parents sometimes do mistakes that will always regret. Or in the contrary, by putting limits on our children we give them lifelong happiness. Early childhood is the most important period of an individual's development and the greatest attention must be paid at this age. The way we feed children, protect them, the way we communicate, socialization, creation of skills and habits, etc. are some of the factors that directly affect their future. Parents, family members and the society that surrounds them are role model and the best lesson on how to behave with themselves and others. Children learn from what they see and experience. Nowadays, we are witness that almost every child, every young boy or girl, elderly , etc. use a mobile phone as a toy and as a tool to relax and have fun, thus causing Society in general face great difficulties leaving traces in the overall social development. One of the factors that is negatively affecting the development of children is the uncontrolled use of technology especially in early ages. Doing so very soon they start speaking foreign language, respectively English instead of their mother tongue making their parents proud. The question is: Should the parents really be proud? The next we seen on those children is wearing glasses.-Should parents worry about this? Parents themselves do not understand the needs and demands of their children? -Worried about? Teachers and schoolmates do not completely understand those children and as a result children come home crying for not being understandable because they speak half English half Albanian ( in our case-Kosovar children) -What to do? Bad results at school-Who to blame? Methodology, techniques and instruments In order to answer all these above mentioned questions and many others, there is a research done with parents, teachers of kindergarten and lower secondary school psychologists by distributing a questionnaire (as an instrument) and observing a class management in 10 schools of Peja, Gjakova and Prishtina municipality. The questionnaire contains 23 open and close ended questions and comments at the end of it given by each respondent. The research was done in those institutions that are claimed to have pupils/children with such problems. Research questions 1. How much will the use of a foreign language in early childhood affect the adequate acquisition of the mother tongue? 2. Will synchronization of foreign language and mother tongue at an early stage increase the confusion of children's expression? Hypotheses H0 = The use of a foreign language in early childhood does not have a major impact on the adequate acquisition of the mother tongue H1 = The use of a foreign language in early childhood has a great impact on the adequate acquisition of the mother tongue. 
H2 = Synchronization of foreign language and mother tongue at an early stage greatly increases the confusion of children's expression. The results in the above table show that all independent variables have a significance level lower than 0.05. Based on this, the null hypothesis (H0) is rejected and the alternative hypothesis (H1) is accepted. The interpretation of this hypothesis is: the use of a foreign language in early childhood has a great impact on the adequate acquisition of the mother tongue (two-tailed p = .000, N = 10; correlation significant at the 0.01 and 0.05 levels, two-tailed). The correlation table gives us a mathematical value measuring the strength of the linear correlation between two variables. As can be seen in the table, the correlation coefficient indicates a positive correlation between the thinking confusion of bilingual children and fighting bilingualism in the context of children at an early stage, and we can say that this correlation is significant at the 0.05 level (p = 0.000). Based on the research question (will synchronization of foreign language and mother tongue at an early stage increase the confusion of children's expression?) and the results in the table, with a significance lower than 0.05 (p = 0.000), the interpretation of hypothesis H2 would be: synchronization of foreign language and mother tongue at an early stage greatly increases children's confusion of expression. Conclusion In this article we have tried to analyze the use of technology in early childhood and its impact on the native-language education process. The research was run in preschool institutions having children who speak a foreign language. Their teachers shared with us their experience of the pros and cons of technology use in early childhood. We also consulted different books such as The Bilingual Edge (King & Mackey, 2009), and articles such as The Power of the Bilingual Brain (TIME Magazine; Kluger, 2013), that have touted the potential benefits of early bilingualism. According to these and many other resources, one of the most important benefits of early bilingualism is often taken for granted: bilingual children will know multiple languages, which is important for travel, employment, speaking with members of one's extended family, maintaining a connection to family culture and history, and making friends from different backgrounds. However, beyond the obvious linguistic benefits, our research has shown that parents need to realize the importance of teaching their child to play and of limiting the use of technology. Even though the science of bilingualism is still a new field, and definitive answers to many questions are not yet available, researchers are doing their best to raise parents' awareness of the pros and cons of technology use in early childhood. Our research concluded that, although the use of technology develops a child's communication skills, develops intelligence, and increases self-esteem and self-confidence by giving the opportunity to become bilingual, bilingual children are more likely to have language difficulties, delays, or disorders. Therefore parents are advised to introduce the second language and the use of technology after the children have mastered their mother tongue fluently. Technology use makes children confused as they mix words from two languages in the same sentence, which is known as code mixing. This, in turn, causes them to be misunderstood.
According to Pearson (2008), code mixing is a normal part of bilingual development, as children are doing exactly what they hear on their technology devices. Technology use makes children smarter as they will learn multiple languages, which is important for travel, employment, speaking with members of one's extended family, maintaining a connection to family culture and history, and making friends from different backgrounds. Technology also fosters open-mindedness, offers different perspectives on life, and reduces cultural ignorance; it likewise gives more job opportunities and greater social mobility.
2,479.4
2021-10-09T00:00:00.000
[ "Linguistics", "Education" ]
Wave propagation in a rotating disc of polygonal cross-section immersed in an inviscid fluid Abstract In this paper, the wave propagation in a rotating disc of polygonal cross-section immersed in an inviscid fluid is studied using the Fourier expansion collocation method. The equations of motion are derived based on the two-dimensional theory of elasticity under the plane-strain assumption for a rotating disc of polygonal cross-section composed of homogeneous isotropic material. The frequency equations are obtained by satisfying the boundary conditions along the irregular surface of the disc using the Fourier expansion collocation method. The triangular, square, pentagonal and hexagonal cross-sectional discs are computed numerically for copper material. Dispersion curves are drawn for the non-dimensional wave number and the relative frequency shift for the longitudinal and flexural (symmetric and anti-symmetric) modes. This work may find applications in navigation and rotating gyroscopes. Introduction The rotating disc of polygonal cross-section is an important structural component in the construction of gyroscopes used to measure the angular velocity of a rotating body. The wave propagation in a disc in contact with a fluid finds many applications in the field of structural acoustics, the acoustic microscopic wave interaction in geophysics, the characterisation of material properties of thin metal wires, optical fibres and reinforcement filaments used in epoxy, metal and ceramic matrix composites, and the non-destructive evaluation of solid structures. The characteristics of wave propagation in a rotating disc of polygonal cross-section immersed in fluid have a wide range of applications in the fields of machinery, submarine structures, pressure vessels, chemical pipes and metallurgy. The most general form of harmonic waves in a hollow cylinder of circular cross-section of infinite length has been analysed by Gazis (1959) in two parts; he presented the frequency equation in part I and detailed numerical results in part II. Nagaya (1981, 1983a, 1983b) discussed wave propagation in a thick-walled pipe, a bar of polygonal cross-section and a ring of arbitrary cross-section based on the two-dimensional theory of elasticity; the boundary conditions along both the outer and inner free surfaces of the arbitrary cross-section are satisfied by means of the Fourier expansion collocation method. Venkatesan and Ponnusamy (2002) obtained the frequency equation of the free vibration of a solid cylinder of arbitrary cross-section immersed in fluid using the Fourier expansion collocation method; the frequency equations, obtained for longitudinal and flexural vibrations, are studied numerically for elliptic and cardioidal cross-sectional cylinders and are presented both in tabular and in graphical forms. Later, Ponnusamy (2011) studied wave propagation in a thermoelastic plate of arbitrary cross-section using the Fourier expansion collocation method. Ponnusamy and Selvamani (2012, 2013) investigated the dispersion analysis of generalised magneto-thermoelastic waves in a transversely isotropic cylindrical panel and wave propagation in a magneto-thermoelastic cylindrical panel, respectively, using Bessel functions. Yazdanpanah Moghadam, Tahani, and Naserian-Nik (2013) obtained an analytical solution for a piezolaminated rectangular plate with arbitrary clamped/simply supported boundary conditions under thermo-electro-mechanical loadings.
Sinha, Plona, Sergio, and Chang (1992) have discussed the axisymmetric wave propagation in a cylindrical shell immersed in fluid, in two parts. In part I, the theoretical analysis of the wave propagating modes is discussed and in part II, the axisymmetric modes excluding tensional modes are obtained theoretically and experimentally and are compared. Berliner and Solecki (1996) have studied the wave propagation in fluid-loaded transversely isotropic cylinder. In that paper, part I consists of the analytical formulation of the frequency equation of the coupled system consisting of the cylinder with inner and outer fluid and part II gives the numerical results. Loy and Lam (1995) discussed the vibration of rotating thin cylindrical panel using Love's first approximation theory. Bhimaraddi (1984) developed a higher order theory for the free vibration analysis of circular cylindrical shell. Zhang (2002) investigated the parametric analysis of frequency of rotating laminated composite cylindrical shell using wave propagation approach. Body wave propagation in rotating thermoelastic media was investigated by Sharma and Grover (2009). The effect of rotation, magneto field, thermal relaxation time and pressure on the wave propagation in a generalised viscoelastic medium under the influence of time harmonic source is discussed by Abd-Alla and Bayones (2011). The propagation of waves in conducting piezoelectric solid is studied for the case when the entire medium rotates with a uniform angular velocity by Wauer (1999). Roychoudhuri and Mukhopadhyay (2000) studied the effect of rotation and relaxation times on plane waves in generalised thermo-viscoelasticity. Dragomir, Sinnott, Semercigil, and Turan (2014) studied the energy dissipation and critical speed of granular flow in a rotating cylinder and they found that the coefficient of friction has the greatest significance on the centrifuging speed. One-dimensional analysis for magneto-thermo-mechanical response in a functionally graded annular variable-thickness rotating disc was discussed by Bayat et al. (2014). In this paper, the wave propagation in rotating disc of polygonal cross-section immersed in an invicid fluid is analysed. The boundary conditions along irregular surfaces have been satisfied by means of Fourier expansion collocation method. The frequency equations of longitudinal and flexural modes are analysed numerically for triangular, square, pentagonal and hexagonal crosssections, and the computed non-dimensional wave number and relative frequency shift are plotted in graphs. Formulation of the problem We consider a homogeneous, isotropic rotating elastic disc of polygonal cross-section immersed in an inviscid fluid. The elastic medium is rotating uniformly with an angular velocity �� ⃗ Ω = Ω � ⃗ n, where n is the unit vector in the direction of the axis of rotation. The system displacements and stresses are defined by the polar coordinates r and θ in an arbitrary point inside the disc and denote the displacements u r in the direction of r and u θ in the tangential direction θ. The in-plane vibration and displacements of rotating polygonal cross-sectional disc is obtained by assuming that there is no vibration and a displacement along the z axis in the cylindrical coordinate system (r, , z). 
The two-dimensional stress equations of motion and the strain-displacement relations in the absence of body forces for a linearly elastic medium are given in Equation 1, where σ_rr, σ_θθ and σ_rθ are the stress components, e_rr, e_θθ and e_rθ are the strain components, ρ is the mass density, Ω is the rotation, t is the time, and λ and μ are the Lamé constants. The displacement equation of motion has an additional term arising from the time-dependent centripetal acceleration Ω × (Ω × u), where u is the displacement vector and Ω = (0, Ω, 0) is the angular velocity; the comma notation used in the subscripts denotes partial differentiation with respect to the variables. The stress-strain relations are σ_rr = λ(e_rr + e_θθ) + 2μe_rr, σ_θθ = λ(e_rr + e_θθ) + 2μe_θθ and σ_rθ = 2μe_rθ (Equation 2), and the strains e_ij are related to the displacements by e_rr = u_r,r, e_θθ = r⁻¹(u_r + u_θ,θ) and e_rθ = u_θ,r − r⁻¹(u_θ − u_r,θ) (Equation 3), in which u_r and u_θ are the displacement components along the radial and circumferential directions, respectively. Substituting Equations 3 and 2 in Equation 1, the displacement equations of motion, Equations 4a and 4b, are obtained. Solutions of solid medium Equations 4a and 4b are coupled partial differential equations in the two displacement components. To uncouple Equations 4a and 4b, we follow Mirsky (1965) by assuming the vibration and displacements along the axial direction z to be zero, and assume solutions of Equations 4a and 4b in terms of displacement potentials, where ε_n = 1/2 for n = 0, ε_n = 1 for n ≥ 1, i = √−1 and ω is the angular frequency. The solution of Equation 7 for the symmetric mode is written in terms of the Bessel function of the first kind J_n, where (α_1 a)² and (α_2 a)² are the roots of Equation 6; the solution for the antisymmetric mode is obtained by replacing cos nθ by sin nθ. Solving Equation 7, we obtain the symmetric mode, and the antisymmetric mode is obtained from Equation 11 by replacing sin nθ by cos nθ, with (α_2 a)² given by the sum of a rotation-dependent term and c². If (α_i a)² < 0, i = 1, 2, then the Bessel function of the first kind is to be replaced by the modified Bessel function of the first kind, I_n. Solution of fluid medium In cylindrical coordinates, the acoustic pressure and the radial displacement equation of motion for an inviscid fluid are of the form given by Achenbach (1973), in which ρ_f is the density of the fluid and c_f is the acoustic phase velocity of the fluid; the fluid potential satisfies an equation of the same form as Equation 6, and its solution again involves the Bessel function of the first kind, taking the same form as the solution for the solid potentials. If (α_3 a)² < 0, the Bessel function of the first kind is to be replaced by the modified Bessel function of the second kind, K_n. By substituting the expression for the displacement vector in terms of the fluid potential together with Equation 17 in Equation 13, we can express the acoustic pressure on the disc. Boundary conditions and frequency equations In this problem, the vibration of a polygonal cross-sectional rotating disc immersed in fluid is considered. Since the boundary is irregular, it is difficult to satisfy the boundary conditions along both the inner and outer surfaces of the disc directly. Hence, along the same lines as Nagaya (1983a, 1983b), the Fourier expansion collocation method is applied to satisfy the boundary conditions, which are given in Equation 19, where x is the coordinate normal to the boundary, y is the tangential coordinate to the boundary, σ_xx is the normal stress, σ_xy is the shearing stress and ( )_l represents the value at the lth segment of the boundary.
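Returning to the solutions for the solid and fluid media above, the switch between the ordinary and modified Bessel functions can be sketched as follows; the helper name, its arguments and the sample values are illustrative assumptions, not notation from the paper.

import numpy as np
from scipy.special import jv, iv, kv

def radial_factor(n, alpha_sq, x, fluid=False):
    # Radial Bessel-type factor of order n for a squared argument alpha_sq = (alpha_i * a)**2.
    # For alpha_sq >= 0 the Bessel function of the first kind J_n is used; for alpha_sq < 0 it is
    # replaced by the modified Bessel function: I_n in the solid and K_n in the fluid, as in the text.
    if alpha_sq >= 0:
        return jv(n, np.sqrt(alpha_sq) * x)
    if fluid:
        return kv(n, np.sqrt(-alpha_sq) * x)
    return iv(n, np.sqrt(-alpha_sq) * x)

# Example: the n = 2 mode evaluated at the boundary coordinate x = 1.0.
print(radial_factor(2, 4.0, 1.0), radial_factor(2, -4.0, 1.0), radial_factor(2, -4.0, 1.0, fluid=True))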
The first and last conditions in Equation 19 are due to the continuity of the stresses and displacements of the disc and fluid on the curved surface. If the angle l between the normal to the segment and the reference axis is assumed to be constant, the transformed expression for the stresses are given by Polygonal cross-sectional disc with fluid and without rotation The wave propagation in a polygonal cross-sectional disc is obtained by substituting the rotation speed Ω = 0 in the corresponding equations and expressions in the previous section, the problem is reduced to wave propagation in polygonal cross-sectional disc immersed in fluid. Substituting Equations 2, 3 in Equation 1 along with Ω = 0, we get the displacement equations as follows: Equations 30a and 30b are coupled partial differential equation with two displacements components. To uncouple Equations 30a and 30b, use Equation 4 in Equations 30a and 30b, we obtain a differential equations of the form u , rr + r −1 u , r − r −2 u + r −2 + 2 u , + r −2 + 3 u r, By applying the same procedure as discussed the previous sections, and using the boundary conditions given in Equation 19, we obtain the frequency equations for polygonal cross-sectional disc immersed in fluid as where (31) b 11 = 2 n (n − 1) J n 1 ax + 1 ax J n+1 1 ax cos 2 − l cos n −x 2 1 a 2 + 2cos 2 − l J n 1 ax cos n b 12 = 2n (n − 1) J n 2 ax + 2 ax J n+1 2 ax cos 2 − l cos n +2 n (n − 1) − 2 ax 2 J n 2 ax + 2 ax J n+1 2 ax sin 2 − l sin n b 13 = 2 J 1 n 3 ax cos n b 21 = 2 n (n − 1) − 1 ax 2 J n 1 ax + 1 ax J n+1 1 ax sin 2 − l cos n +2n 1 ax J n+1 1 ax − (n − 1) J n 1 ax cos 2 − l sin n Polygonal cross-sectional disc without fluid and rotation The free vibration of homogeneous isotropic disc of polygonal cross-section without fluid medium and without rotation can be recovered from the present analysis by omitting the fluid medium and setting Ω = 0, along win the corresponding equations and solutions in the previous sections, then the problem of rotating disc of polygonal cross-section immersed in fluid is converted into a problem of a two-dimensional vibration analysis of polygonal cross-sectional disc. The frequency equations of an polygonal cross-sectional disc without fluid and rotation is obtained as Stress-free (Unclamped edge), which leads to Rigidly fixed (Clamped edge), implies that By applying the same procedure as discussed in the previous section, the stresses are transformed for a disc without fluid and rotation is given as follows. Substituting Equations 9-12 in Equation 40, the boundary conditions are transformed for stress-free disc as follows: where in which b 22 = 2n (n − 1) J n 2 ax − 2 ax J n+1 2 ax sin 2 − l cos n +2 2 ax J n+1 2 ax − n (n − 1) − 2 ax 2 J n 2 ax cos 2 − l sin n b 33 = 0 b 31 = nJ n 1 ax − 1 ax J n+1 1 ax cos n b 32 = nJ n 2 ax cos n Similarly, the frequency equations for the antisymmetric mode are obtained by replacing cos n by sin and sin by cos n in the above corresponding equations. Relative frequency shift Relative frequency shift plays an important role in construction of rotating gyroscope, acoustic sensors and actuators. The frequency shift of the wave due to rotation is defined as Δ = (Ω) − (0). 
Ω being the angular rotation; the relative frequency shift is given by k 1 n = 2 n (n − 1) J n 1 ax + 1 ax J n+1 1 ax cos 2 − i cos n −x 2 1 a 2 + 2cos 2 − i J n 1 ax cos n k 2 n = 2n (n − 1) J n 2 ax + 2 ax J n+1 2 ax cos 2 − i cos n +2 n (n − 1) − 2 ax 2 J n 2 ax + 2 ax J n+1 2 ax sin 2 − i sin n l 1 n = [2n (n − 1) − 1 ax 2 ]J n 1 ax + 2 1 ax J n+1 1 ax sin 2 − i cos n +2n 1 ax J n+1 1 ax − (n − 1) J n 1 ax cos 2 − i sin n that as the modes are increases the non-dimensional frequency also increases, whereas the values in the hexagonal modes are almost exponentially increasing with increasing aspect ratio. The amplitude of the all modes of vibrations exhibits high energy in the absence of fluid and rotational parameter ( Figure 1). Triangular and pentagonal cross-sections In triangular and pentagonal cross-sectional discs, the vibrational displacements are symmetrical about the x axis for the longitudinal mode and anti-symmetrical about the y axis for the flexural mode since the cross-section is symmetric about only one axis. Therefore, n and m are chosen as 0, 1, 2, 3, … in Equation 24 for longitudinal mode and n, m = 1, 2, 3, … in Equation 25 for the flexural mode. Square and hexagonal cross-sections In the case of longitudinal vibration of square and hexagonal cross-sectional disc, the displacements are symmetrical about both major and minor axes since both the cross-sections are symmetrical about both the axes. Therefore, the frequency equation is obtained by choosing both terms of n and m is chosen as 0, 2, 4, 6, … in Equation 24. During flexural motion, the displacements are antisymmetrical about the major axis and symmetrical about the minor axis. Hence, the frequency equation is obtained by choosing n, m = 1, 3, 5, … in Equation 25. Dispersion curves The Non -dimensional wave number flexural (symmetric and antisymmetric) modes merges for 0 ≤ ≤ 0.2 and begin starts increases monotonically. Also, in pentagon and hexagonal cross-sections in Figures 8 and 9, the relative frequency shift gets oscillating trend in the frequency range 0 ≤ ≤ 0.6. The merging of curves between the vibrational modes shows that there is energy transportation between the modes of vibrations by the effect of fluid interaction and rotation. The relative frequency shift profile is highly dispersive in trend for pentagon and hexagonal cross-section than in triangle and square cross-section. Non -dimensional wave number Conclusions In this paper, an analytical method for solving the wave propagation problem of rotating polygonal cross-sectional disc immersed in an inviscid fluid has been presented. The general frequency equation has been obtained using Fourier expansion collocation method. The frequency equations have been derived for the two cases: (i) Polygonal cross-sectional disc with fluid and without rotation (ii) Polygonal cross-sectional disc without fluid and rotation and are analysed numerically for different cross-sections. Numerical calculations for non-dimensional wave number and relative frequency shift have been carried out for triangular, square, pentagonal and hexagonal cross-sectional rotating disc immersed in fluid. From the dispersion curves it is observed that the non dimensional wave number and relative frequency shift are quite higher in all the cross-sections. The effect of rotation and fluid medium on the different cross-sectional discs is also observed to be significant and more in pentagon and hexagonal cross-sections. 
This method is straightforward and the numerical results for any other polygonal cross-section can be obtained directly for the same frequency equation by substituting geometric values of the boundary of any cross-section analytically or numerically with satisfactory convergence.
4,160.8
2015-02-03T00:00:00.000
[ "Engineering" ]
CONSTRUCTION OF MINIMUM CONNECTED DOMINATING SET IN A GRAPH AND ITS APPLICATION IN WIRELESS SENSOR NETWORKS The minimum connected dominating set is largely applied in Wireless Sensor Networks (WSN), where it acts as a virtual backbone. Algorithms for finding such a minimum connected dominating set were studied. A new algorithm for constructing a routing in a WSN using one of these algorithms is proposed and implemented using the Java language. The sample outputs are also included. I. INTRODUCTION A backbone in a Wireless Sensor Network (WSN) reduces the communication overhead, increases the bandwidth efficiency, decreases the overall energy consumption and thus increases network operational life. The nodes in a wireless sensor network forward data towards a sink via other nodes. The limited resources on the nodes require minimum energy to be spent in this energy-consuming task. This necessitates a virtual backbone that can minimize the number of hops required to reach the sink, assuming that all nodes have equal transmission range. In the wireless domain, this backbone is a minimum connected dominating set (MCDS). A subset S of the vertices of a graph G such that every vertex outside S is one hop away from an element of the subset forms a dominating set S. A connected dominating set (CDS) C of G is a dominating set S in which all the elements are connected, i.e. it induces a connected graph. The nodes in C are called dominators and the other nodes, which are one hop away from C, are dominatees. To minimize the number of hops, the minimum CDS is chosen as the backbone. The backbone is the smallest CDS and every node is adjacent to this virtual backbone. Once data is received by a dominator, it is relayed through the MCDS towards the sink for minimum-hop communication. Since the nodes have equal transmission range, the CDS has to be determined for a unit disk graph. AUXILIARY DEFINITIONS In order to develop the algorithm, we state some definitions and introduce some terminology relevant to the paper. 1. D is a dominating set in G and D induces a connected subgraph of G (Fig. 1). 2.4. Convex hull: the convex hull of a set of points X in a real vector space is the minimum convex set containing X. It is also called the convex envelope and is denoted by CH(X). It is represented by the sequence of vertices of the line segments forming the boundary of the convex polygon. A POWER AWARE MINIMUM CONNECTED DOMINATING SET FOR WIRELESS SENSOR NETWORKS. A backbone in a wireless sensor network (WSN) reduces the communication overhead, increases the bandwidth efficiency, decreases the overall energy consumption and thus increases network operational life. The nodes in a wireless sensor network forward data towards a sink via other nodes. The limited resources on the nodes require minimum energy to be spent in this energy-consuming task. This necessitates a virtual backbone that can minimize the number of hops required to reach the sink, assuming that all nodes have equal transmission range. In the wireless domain, this backbone is a minimum connected dominating set (MCDS). Finding a Routing in WSN Using Power Aware Minimum Connected Dominating Set Here, an algorithm for finding a routing in a WSN using a power-aware minimum connected dominating set is presented. The proposed algorithm is divided into two phases. In the first phase a random network is generated, and in the second phase routing is achieved using the algorithm. Generation of Random Network Random graphs were introduced by Erdős and Rényi in the late fifties.
The random graph model generates a graph that has a number of nodes which are connected randomly by undirected edges. A random graph is obtained by starting with a set of n vertices and adding edges between them at random. Here, random nodes are initially placed in the specified area m x m and at each iteration edges are created randomly between those nodes. MCDS Construction A dominating set consisting of a minimum number of nodes is constructed. This phase consists of the following steps: 1. An arbitrary number, say id, is assigned to each node in the graph G(V,E). 2. Each node is assigned the white color. 3. The node u with maximum degree is taken from G(V,E) and colored black, i.e. dominator. 4. All the neighbor nodes of the node u are colored gray, i.e. dominatees. 5. Repeat steps 3-4 till all the nodes in the graph G(V,E) are colored either black or gray. PHASE-1 (G(V,E)) Explanation of Algorithm Phase-I: Each node in the graph is assigned an arbitrary number as id. Each node is assigned the white color in the beginning. A node x ∈ G(V,E) such that x has maximum degree is determined; if two nodes have the same (maximum) degree, then the node having the minimum id is chosen. Let that node be u. Color node u black, i.e. dominator, and add this node to the list of the dominating set D. All the neighbors of node u, i.e. dominatees, are colored gray so that they are not considered for the dominating set. The same is repeated for the remaining uncolored graph until all the nodes get colored. The algorithm given above can be understood with the help of the graph shown in Figure 4. In this graph initially all nodes are considered white nodes. Select the node which has maximum degree. Node 8 and node 12 both have the maximum degree, i.e. five. According to this algorithm, in case of a tie in degree the lowest id is considered first, so node 8 is selected and colored black and all its neighboring nodes, i.e. 6, 7, 9, 10 and 11, are colored gray (node 8 is selected as a dominating node). Similarly, in the next step node 12, which has maximum degree, is considered and colored black, and its neighbours, i.e. 9, 11, 13, 14 and 15, are colored gray. Fig 3.3. Nodes 8 and 12 selected as the dominating set. In the next step the same procedure repeats and node 1, which has maximum degree with the lowest id, is selected and colored black, and all its neighbors 2, 3 and 5 are colored gray. Phase II. Determination of Connectors In this phase, a set of connectors B is found such that all the nodes in the dominating set D get connected. Let a black node be a node in D and a dark gray node represent a node in B; a node in B is connected by at most K nodes in the graph G(V,E). The set of dark gray nodes for a given D can be found using a Steiner tree. It is a tree interconnecting all the nodes in D by adding new nodes between them. The nodes that are in the Steiner tree but not in set D are called Steiner nodes. In the MCDS, the number of Steiner nodes should be minimum. After these steps the CDS is constructed, which will consist of black and dark gray nodes. Let the constructed CDS be the set F. This involves the following steps, repeated until all black nodes are connected. All the black nodes and all the dark gray nodes form the connected dominating set. So the final CDS covers nodes 1, 2, 4, 8, 9, 11, 12 and 16, as shown in Fig 3.9. Fig 3.9: CDS nodes are 1, 2, 4, 8, 9, 11, 12 and 16. Fig 3.10. {1, 2, 4, 9, 8, 11, 12} is the final MCDS backbone. For this CDS graph, select a node with the minimum degree among the black nodes and dark gray nodes and delete it.
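A minimal sketch of the phase I colouring procedure, under one reading of the steps above (maximum degree counted among the still-uncoloured nodes, ties broken by the smaller id), is given below; networkx and the example graph are illustrative choices, since the original work was implemented in Java.

import networkx as nx

def greedy_dominating_set(G):
    # Phase I sketch: colour all nodes white; repeatedly pick the white node with the most
    # white neighbours (ties broken by the smaller node id), colour it black (dominator)
    # and colour its white neighbours gray (dominatees), until no white node remains.
    colour = {v: "white" for v in G.nodes}
    dominating_set = []
    while any(c == "white" for c in colour.values()):
        white_nodes = [v for v in G.nodes if colour[v] == "white"]
        u = min(white_nodes,
                key=lambda v: (-sum(1 for w in G[v] if colour[w] == "white"), v))
        colour[u] = "black"
        dominating_set.append(u)
        for w in G[u]:
            if colour[w] == "white":
                colour[w] = "gray"
    return dominating_set

G = nx.erdos_renyi_graph(16, 0.25, seed=1)   # a random network, as generated in phase I
print(greedy_dominating_set(G))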
Node 16 is deleted because the neighbourhood of node 16 is a subset of N(12) ∪ N(11). The degree of node 16 is also less than the degree of the other CDS nodes. So the final CDS found after the pruning process covers nodes 1, 2, 4, 8, 9, 11 and 12. This final CDS is minimum and is known as the MCDS. Algorithm proposed In this project we propose an algorithm to find a routing in a WSN. This algorithm consists of two phases. In phase I, a random network with n nodes which are randomly placed in a specified area m x m is generated. In phase II, by applying the proposed algorithm, the minimum connected dominating set is found, using which routing is achieved for the network. Phase I A random network with n nodes which are randomly placed in the specified area m x m is generated. Phase III: Pruning This is the pruning phase. In this phase, redundant nodes are deleted from the CDS constructed in phase II, to obtain the MCDS. The steps are as follows, where i belongs to F − {u}: 3. If step 2 returns true then remove node u and go to step 1. 4. Otherwise do not remove node u and go to step 1. By applying the algorithm proposed in [6], the Minimum Connected Dominating Set is found, using which routing is achieved for the network. Implementation Random graphs were introduced by Erdős and Rényi in the late fifties. The random graph model generates a graph that has a number of nodes which are connected randomly by undirected edges. The algorithm is implemented and tested using Java. The sample outputs of the proposed algorithm are given here. CONCLUSION The Minimum Connected Dominating Set (MCDS) acts as a virtual backbone in Wireless Sensor Networks. In this project work, an algorithm for finding the MCDS using the concept of the convex hull is studied. Another algorithm for finding a power-aware MCDS is also studied and implemented. A new algorithm for finding a routing using the MCDS in a wireless sensor network is proposed. The proposed algorithm is implemented and tested using Java-Eclipse. The sample outputs are also included.
2,186.8
2019-10-30T00:00:00.000
[ "Computer Science", "Engineering" ]
Aberrantly high activation of a FoxM1–STMN1 axis contributes to progression and tumorigenesis in FoxM1-driven cancers Forkhead box protein M1 (FoxM1) is a transcription factor which plays critical roles in cancer development and progression. However, our knowledge of the general regulatory mechanisms of FoxM1 is still limited. STMN1 is a microtubule-binding protein which can inhibit the assembly of tubulin dimers into microtubules or promote the depolymerization of microtubules. It has been reported as a major factor responsible for paclitaxel resistance in the clinical chemotherapy of tumor patients. But the function of the abnormally high level of STMN1 and its regulatory mechanism in cancer cells remain unclear. In this study, we used public databases and tissue microarrays to analyze the expression pattern of FoxM1 and STMN1 and found a strong positive correlation between FoxM1 and STMN1 in multiple types of cancer. Lentivirus-mediated FoxM1/STMN1-knockdown cell lines were established to study the function of FoxM1/STMN1 by performing cell viability assays, plate clone formation assays and soft agar assays in vitro and using a xenograft mouse model in vivo. Our results showed that FoxM1 promotes cell proliferation by upregulating STMN1. A further ChIP assay showed that FoxM1 upregulates STMN1 at the transcriptional level. Prognostic analysis showed that a high level of FoxM1 and STMN1 is related to poor prognosis in solid tumors. Moreover, high co-expression of FoxM1 and STMN1 has a more significant correlation with poor prognosis. Our findings suggest that a general FoxM1–STMN1 axis contributes to cell proliferation and tumorigenesis in hepatocellular carcinoma, gastric cancer and colorectal cancer. The combination of FoxM1 and STMN1 can be a more precise biomarker for prognostic prediction. INTRODUCTION Cancer is a disease of high morbidity and mortality. Although the survival rate of cancer patients has rapidly improved in the past decades with a deepening understanding of the mechanisms of cancer initiation and progression, it definitely remains an unconquered disease. Strikingly, with a growing body of evidence, researchers have summarized the major hallmarks of cancer, including sustaining proliferative signaling, evading growth suppressors, resisting cell death, enabling replicative immortality, inducing angiogenesis, activating invasion and metastasis, genome instability, inflammation, metabolic reprogramming and evading immune destruction. Of these ten hallmarks, excess growth is one of the most important characteristics of transformed cells, determining the fate of tumor cells. 1 Aberrantly high activation of oncogenic transcriptional activators is closely associated with the unlimited proliferation of transformed cells. Forkhead box protein M1 (FoxM1) belongs to the Fox superfamily, characterized by a conserved winged-helix DNA-binding domain, and regulates cell division by transcription of a bundle of cell cycle-associated genes. It has been found that FoxM1 is frequently dysregulated in multiple cancers, and it has emerged as an important molecule implicated in the initiation and progression of cancer. Functionally, it plays a vital role in the regulation of cancer cell proliferation and DNA damage repair. A series of studies showed that aberrantly activated FoxM1 is closely associated with poor survival of clinical patients with breast cancer, ovarian cancer, bladder cancer, hepatocellular carcinoma, colorectal cancer 2 and so on.
And more strikingly, accumulating evidence has identified that a large number of downstream genes of FoxM1, such as CCNB1, CDC20, Cdc25B, Aurora B kinase, p21 cip1 , p27 kip1 , survivin, centromere protein A/B/ F (CENPA, CENPB, CENPF) and so on, function as the oncogenic molecules to promote tumorigenicity. [2][3][4][5][6] With more deep understanding of oncogenic roles and molecular mechanisms of FoxM1 in cancers, besides FoxM1-driven cell proliferation, other malignant behaviors such as angiogenesis, 7 chemo-resistance, 8 metastasis 9 and even genome instability 10 have been identified to be transcriptionally regulated by FoxM1 in multiple cancers. Exploring more molecular events regulated by FoxM1 will help us to further develop the potential cancer treatment strategy. The dysfunctional assembly of microtubules causes mitotic disaster in cell mitosis and subsequently leads to cell death. This phenomenon provides a strategy targeting microtubule assembly to induce decreased cancer cell viability and even to suppress tumor growth in vivo. Stathmin1 (STMN1), also named as Oncoprotein 18, is a microtubule-binding protein, which can bind with α/β-Tubulin heterodimers, resulting in assembly suppression of microtubules, or promoting dissociation of microtubules. 11 Recent studies have demonstrated that the high level of STMN1 can be detected in a variety of malignant tumor cells and is functionally related to promoting tumorigenicity and cancer metastasis. 12,13 It has been reported that in multiple cancer cells, the high level of STMN1 antagonizes paclitaxel-induced mitotic disaster by promoting microtubule dissociation and cell division. 14 Accordingly, STMN1 could be a potential target for the development of cancer therapeutics. And more importantly, though the high level of STMN1 frequently indicates poor prognosis in cancer patients, the underlying mechanism for high level of STMN1 in cancers is still elusive. In the present study, we found that co-expression of FoxM1 and STMN1 is a frequent molecular event in multiple cancers. Furthermore, the co-expression pattern of these two molecules is closely associated with poor prognosis of almost all solid cancer patients derived from TCGA Database. Our mechanistic study showed that FoxM1 transcriptionally activates STMN1 in cancer cells using the promoter reporter assay, ChIP-qPCR and bioinformatics analysis in ENCODE ChIP-seq database. And more importantly, we found that the FoxM1-STMN1 axis is a general regulatory mechanism in multiple cancers. In the functional study, we investigated the biological effect of the FoxM1-STMN1 axis in multiple types of cancer cells using cell counting assay, clone formation assay, soft agar assay and xenograft nude mice model, and we found that the FoxM1-STMN1 axis contributes to cancer cell proliferation and tumor growth in vitro and in vivo. Our discovery here would shed light on the avenue to explore the new therapeutic strategy to target oncogenic pathways in cancers. RESULTS Co-expression of FoxM1 and STMN1 in cancers Since FoxM1 and STMN1 both play important roles in the process of cell cycle regulation, we wondered to know whether there is a functional link in cancers. To investigate the expression pattern of FoxM1 and STMN1 in different types of cancers, we first analyzed the mRNA levels of FoxM1 and STMN1 in the Oncomine database. The result showed that FoxM1 and STMN1 highly express in almost all types of solid tumors in a serial of independent cohorts (Fig. 
1a), including gastric cancer, hepatocellular carcinoma and colorectal cancer (Fig. 1b). To figure out whether there is a general correlation between FoxM1 and STMN1 in cancers, we then analyzed the expression patterns of them. The results showed a significantly positive correlation between FoxM1 and STMN1 in all cancer samples derived from the TCGA database ( Fig. 1c and Supplementary Table S1). To further confirm the relationship between these two molecules at the protein level, we performed immunohistochemistry (IHC) on the collected tissue chips to examine the expression pattern of FoxM1 and STMN1 in liver hepatocellular carcinoma (LIHC), gastric cancer (GC) and colorectal cancer (CRC) samples. The data showed a closely positive correlation between FoxM1 and STMN1 at the protein level ( Fig. 1d and Supplementary Fig. S1). These results indicated that FoxM1 and STMN1 may have a positive regulatory relationship in multiple cancers. STMN1 is transcriptionally activated by FoxM1 in cancer cells To investigate the functional relationship between FoxM1 and STMN1 in cancers, we used Western blot to observe the expression pattern of FoxM1 and STMN1 in 18 cancer cell lines derived from LIHC, GC and CRC. Interestingly, we also noticed an unusual expression pattern of FoxM1 and STMN1 in two cell lines. In SNU-739, a liver cancer cell line, although there is a high level of FoxM1, the expression of STMN1 is very low, whereas in Colo-320 a CRC cell line, a low level of FoxM1 is accompanied with a high level of STMN1 (Fig. 2a). It may due to multiple regulatory mechanisms of STMN1 in these cells, such as epigenetic regulation, protein degradation or others. But there is a significant positive correlation of FoxM1 and STMN1 in most of the tested tumor cell lines (Fig. 2a). And then using two independent short hairpin RNAs (shRNAs), we knocked down endogenous FoxM1 and STMN1, respectively, in three different types of cancer cell lines. Strikingly, in tested five cancer cell lines, the protein levels of STMN1 dramatically reduced upon FoxM1 depletion (Fig. 2b), while we did not find obvious changes in FoxM1 protein level in STMN1-knockdown cells ( Supplementary Fig. S2). We also performed gain-of-function experiments in FoxM1 lower expression cells and found that the protein levels of STMN1 increase with FoxM1 overexpression (Fig. 2b). These results indicated that STMN1 is a downstream gene regulated by FoxM1. FoxM1 has diverse ways to regulate gene expression. Among all mechanisms of FoxM1-mediated gene expression, transcriptional regulation is the most important way. To explore the mechanism by which FoxM1 regulates STMN1 expression, we performed RT-qPCR to test the mRNA levels of STMN1 in FoxM1-knockdown cancer cell lines. The data showed that loss of FoxM1 significantly decreases STMN1 expression in the mRNA level (Fig. 2b). This led us to hypothesize that FoxM1 may regulate STMN1 at the transcriptional level in cancer cells. We then analyzed the public-opened ChIP-seq database and found the obvious binding peaks of FoxM1 on the proximal region of STMN1 locus (Fig. 2c). To further validate the direct binding of FoxM1 on the promoter region of STMN1 in cancer cells, we performed dual luciferase reporter assay (Fig. 2d) and ChIP assay (Fig. 2e) in the above-mentioned LIHC, GC and CRC cancer cells. 
Because of the important role of FoxM1 for survival of SMMC-7721 (an LIHC cell line) and MKN-28 (a GC cell line), we even cannot obtain enough cells to perform the ChIP assay when FoxM1 was knocked down using the lentivirus-mediated shRNA. But in other three cell lines, the DNA fragments of STMN1 were pulled down using FoxM1 antibody. Our ChIP-qPCR data showed the physical binding of FoxM1 on the promoter region of STMN1, which is consistence with the bioinformatics analysis in the public ChIP-seq database. Taken together, we revealed a generally transcriptional regulation of FoxM1 on the expression of STMN1 in cancers. STMN1 promotes survival and proliferation of cancer cells in vitro FoxM1 is a master regulator of cell division. Aberrantly high activation of its transcriptional activity functionally links to sustained survival and unlimited proliferation in transformed cells. Based on our finding that STMN1 is transcriptionally regulated by FoxM1, we wondered to know the biological function of STMN1 in cancer cells. We used two independent shRNAs to stably knock down STMN1 and test the survival and proliferation in cancer cells ( Fig. 3a and Supplementary Fig. S3a). We also overexpressed STMN1 in Huh7 and AGS cells and tested survival and proliferation (Fig. 3a). We found that the cell viability of the cancer cells significantly reduces with depletion of STMN1 ( Fig. 3b and Supplementary Fig. S3b). And the further clonogenic formation assay showed that in both 2D and 3D cell culture system, the number and the size of cell clones obviously decrease in STMN1-knockdown cells when compared with that in control group (Fig. 3c, d, Supplementary Fig. S3c, d). To investigate the influence of STMN1 on cell cycle and mitosis, we performed the immunofluorescent staining of α-Tubulin and the flow cytometry. The results showed that inhibition of STMN1 arrests cell cycle in S phase (Fig. 3e) and increases the percentage of abnormal mitosis cells, as formation of multinuclear cells (Fig. 3f). All data suggested that STMN1 plays a critical role in promoting tumor cell proliferation and sustaining cell cycle/mitosis in multiple types of cancer cells in vitro. Knockdown of STMN1 suppresses tumorigenesis in vivo To examine the oncogenic roles of STMN1 in vivo, we selected three cancer cell lines derived from LIHC, GC and CRC, respectively, to establish the xenograft model in nude mice. The protein levels of FoxM1 and STMN1 were detected using Western blot. The qRT-PCR assay was performed to detect the mRNA expression of STMN1 in FoxM1-knockdown or -overexpressed cells. The data were presented as the mean ± SD of three independent experiments. The significance was analyzed by the Student's t-test. *P < 0.05, **P < 0.01, ***P < 0.001. c Analysis of the ChIP-seq data of FoxM1 in HEK293T and K562 cells from the ENCODE database, respectively. The red box shows the binding peaks of FoxM1 on the STMN1 genomic locus. d Dual luciferase reporter assay of FoxM1 and STMN1 promoter or its mutation. The binding site of FoxM1 is in the transcription start site of −163− 168. The data were presented as the mean ± SD of three independent experiments. The significance was analyzed by the Student's t-test. *P < 0.05. e The ChIP-qPCR was used to determine the direct binding of FoxM1 on the promoter region of STMN1 in SGC-7901, HCT 116 and HT-29 cell lines. Primers located on the upstream of STMN1 transcript start site (−2893~−3000) was designed for a negative control. 
The data were presented as the mean ± SD of three independent experiments. The significance was analyzed by the Student's t-test. *P < 0.05 Fig. 1 Coordinated expression of FoxM1 and STMN1 in cancers. a Analysis of the mRNA levels of FoxM1 and STMN1 (cancer vs. normal) in multiple solid cancers from Oncomine Database. b The expression of FoxM1 and STMN1 in hepatocellular carcinoma cohorts (GSE14520, GSE14323), gastric cancer cohorts (GSE13911, GSE19826, GSE27342) and colon cancer cohorts (GSE8671). The data were presented as the mean ± SD of different samples. The significance was analyzed by the Student's t-test. **P < 0.01, ***P < 0.001, ****P < 0.0001. c The correlation of FoxM1 and STMN1 expression in 31 solid tumors (including ACC, BLCA, BRCA, CESC, CHOL, COAD, ESCA, GBM, HNSC, KICH, KIRC, KIRP, LGG, LIHC, LUAD, LUSC, MESO, OV, PAAD, PCPG, PRAD, READ, SARC, SKCM, STAD, TGCT, THCA, THYM, UCEC, UCS and UVM) from the TCGA database was analyzed using the GEPIA platform, and the correlation of FoxM1 and STMN1 expression in LIHC (n = 373), GC (n = 415) and CRC (n = 328) TCGA data. d Protein levels of FoxM1 and STMN1 in LIHC (n = 79), GC (n = 61) and CRC (n = 68) were analyzed by IHC. The correlation of FoxM1 and STMN1 was analyzed by linear regression analysis Aberrantly high activation of a FoxM1-STMN1 axis. . . Liu et al. Cancer cells infected with STMN1-shRNA or pLKO.1-control lentivirus were subcutaneously injected into the BALB/c nude mice. It was shown that the tumor volume and tumor weight significantly decrease in the STMN1 deficiency groups (Fig. 4a, b), indicating that the STMN1-knockdown cells obviously lost the capacity of tumor growth in vivo. Moreover, pathological analysis of tumor tissue sections showed a much higher frequency of the occurrence of abnormal cell division in STMN1-knockdown groups ( Fig. 4c). It further suggested that STMN1-induced tumorigenesis is related to cell mitosis. Additionally, the expression of Ki-67 reduces in cancer tissues with the depletion of STMN1 (Fig. 4d). All the above results demonstrated that STMN1 plays an oncogenic role in vivo in LIHC, GC and CRC cancers. STMN1 is essential for FoxM1-mediated proliferation of cancer cells To investigate the role of STMN1 in FoxM1-mediated cell proliferation in cancer cells, we tested cell proliferation using five cancer cell lines derived from LIHC, GC and CRC, which were infected with control or shFoxM1 lentivirus along with empty vector (EV) or STMN1-overexpressed (STMN1) lentivirus. Western blot and RT-qPCR were performed to detect the expression of FoxM1 and STMN1 ( Fig. 5a and Supplementary Fig. S4a). CCK-8 and plate colony assay were performed to test cell proliferation. We found that the cell viability decreases with FoxM1 depletion, and then raises up with STMN1-overexpression ( Fig. 5b and Supplementary Fig. S4b). Further clonogenic formation assay also showed that the number of cell clones obviously decreases in FoxM1-knockdown cells and raises up with STMN1 overexpression ( Fig. 5c and Supplementary Fig. S4c). We performed the flow cytometry and the immunofluorescent staining of α-Tubulin to test the contribution of a FoxM1-STMN1 axis on cell cycle/mitosis. It was showed that overexpression of STMN1 in FoxM1knockdown cells partially rescues the cell cycle arrest (Fig. 5d) and mitosis (Fig. 5e), indicating that STMN1 can mediate functions of FoxM1 through impacting on cell cycle/mitosis. 
To further investigate the function of a FoxM1-STMN1 axis in vivo, we performed rescue experiments in nude mice. It was showed that overexpression of STMN1 in FoxM1-knockdown cells partially rescues the tumor growth in nude mice (Fig. 5f). We also established STMN1-overexpressed cell lines and performed in vivo experiments, but there was no obvious change in tumor weight and tumor volume compared with the control group (Fig. 5f) (Fig. 6a). To further confirm this effect in specific types of cancers, we analyzed the survival of LIHC and STAD from TCGA database. The results showed a positive correlation of high levels of FoxM1 or STMN1 with poor prognosis of LIHC (Fig. 6b). To further investigate whether the co-overexpression of FoxM1 and STMN1 has an effect on LIHC progression, we analyzed the expression status of STMN1 and FoxM1. The LIHC patients were divided into four groups ( Fig. 6c): The result showed that patients bearing FoxM1 High /STMN1 High teratocarcinomas have high significantly shorter overall survival (P < 0.01) than patients whose tumors overexpressed either one or neither of the molecules (Fig. 6c). In addition, we also performed survival analysis using Kaplan-Meier Plotter. The results showed that high levels of FoxM1 and STMN1 also exhibit a poor prognosis of GC patients, though it showed no significance (P = 0.081) ( Supplementary Fig. S4). The data indicated that high levels of FoxM1 and STMN1 are closely associated with poor prognosis in cancers, thus shedding light on the prognostic value of combined utilization of FoxM1 and STMN1. DISCUSSION The biological function of FoxM1 in the cancer-promoting signaling networks FoxM1 has been found with aberrantly high expression in almost all kinds of cancers, including lung cancer, glioblastoma, prostate cancer, basal cell carcinoma, hepatocellular carcinoma, breast cancer and primary pancreatic cancer. 2,3 Our analysis in Oncomine and tissue arrays also showed that the FoxM1 expression is abnormal in hepatocellular carcinoma, gastric cancer and colorectal cancer (Fig. 1b, d). FoxM1 is reported as an important effector in response to several oncogenic signal pathways and also facilitates in cell growth, angiogenesis, tumor invasion, DNA damage repair, senescence and cell cycle. FoxM1 is reported frequently overexpressed in many kinds of cancer types and contributing to promote cell cycle and leads to excessive cell growth. The main function of FoxM1 is as a transcriptional factor to promote cell cycle-related gene expression. Moreover, FoxM1 also can cross-talk with other signal pathways and promote progression of cancers. For example, FoxM1 is an effector of Raf/MEK/MAPK in G2/M phase, and on the other hand, activation of Raf/MEK/MAPK pathway can enhance the activation of FoxM1 and cyclinB1. 15 In the vascular endothelial growth factor (VEGF) signaling pathway, it was reported that FoxM1 directly binds with the promoter region of VEGF gene, and consequently promotes angiogenesis, migration and invasion of tumor cells. 16,17 In the matrix metalloproteinase (MMP) signaling pathway, FoxM1 can regulate MMPs and promote tumor cell invasion. 18 Overexpression of FoxM1 can significantly inhibit senescence and the expression of p53 and p21 cip1 . 19 In Wnt/ β-Catenin signal pathway, FoxM1 protein interacts with β-Catenin to establish STMN1-knockdown cells using pLKO.1 gene silence system. 
The hepatocellular carcinoma cell line Huh7, gastric cancer cell lines AGS were used to establish STMN1-overexpression cell lines by lentivirus infection. The protein levels of STMN1 were detected by Western blot, and the mRNA levels were detected by qRT-PCR. The data were presented as the mean ± SD of three independent experiments. The significance was analyzed by Student's t-test. ***P < 0.001. b Cell viability of cells was detected by cell counting-8 kit (CCK-8). The data were presented as the mean ± SD of three independent experiments. The significance was analyzed by Student's t-test. ***P < 0.001. c Plate clone assay was performed and the number of clones was measured by ImageJ software. The data were presented as the mean ± SD of three independent experiments. The significance was analyzed by Student's t-test. **P < 0.01, ***P < 0.001. d Soft agar assay was performed and the tumorigenic sphere was photographed after 2 weeks. The data were presented as the mean ± SD of three independent experiments. The significance was analyzed by Student's t-test. **P < 0.01, ***P < 0.001. e Cell cycle was analyzed by flow cytometry. f Cell mitosis was analyzed by immunofluorescence of α-Tubulin. The nuclei (blue) are stained with DAPI and the α-Tubulin (red) is stained with Alexa Flour 555. The images were captured at 60× magnification, and the scale bar is 10 μm. The yellow arrows point to the normal mitosis cells. The percentage of normal or abnormal cells was calculated by numbers of normal or abnormal nucleus divided by numbers of total nucleus × 100%. The data were presented as the mean ± SD of three different fields of view at low magnification (20×). The significance was analyzed by Student's t-test. *P < 0.05, **P < 0.01 and induces the nuclear translocation of β-Catenin in both CML cells and glioma cells. 20,21 In breast cancer cells, direct interaction between FoxM1 and Smad3 has been confirmed and it induces transcription of TGF-β/Smad3-mediated target genes. 9 FoxM1 acts as a protective partner of Smad3 protecting Smad4 from degradation, inducing TGF-β signal transduction and promoting breast cancer metastasis. 9 FoxM1 can also interact with YAP/TEAD complex and regulate the expression of CIN-associated genes, then results in genomic instability in LIHC. 10 And a recent study showed that alter N6-methyladenosine (m6A) RNA modification of FoxM1 by ALKBH5 and a long non-coding RNA antisense FoxM1 in glioblastoma enhance transcription of FoxM1 and proliferation of patient-derived GSCs. 22 Other studies also showed that FoxM1mediated pathways induce resistance of some anticancer drugs. An Akt/FoxM1/STMN1 pathway was found as a mechanism for tyrosine kinase inhibitors (TKIs) resistance in non-small cell lung year-old nude mice for 4-6 weeks to grow tumors. The left side of the mice was injected with control cells and the right side was injected with shSTMN1-expressing cells. a The mice were killed after 4-6 weeks and tumors were removed to measure the weight. The tumor volume was measured and calculated by v = 0.5 × Length × Width 2 . SMMC-7721, n = 5. SGC-7901, n = 4. HCT 116, n = 6. The data were presented as the mean ± SD. The significance was analyzed by Student's t-test. *P < 0.05, **P < 0.01, ***P < 0.001. c HE staining was performed to observe the morphological changes of the nucleus. The yellow arrows show the typical normal or abnormal morphological changes of the nucleus in cell mitosis process. The scale bar is 10 μm. 
The percentage of normal or abnormal cells was calculated by numbers of normal or abnormal nucleus divided by numbers of total nucleus × 100%. The data were presented as the mean ± SD of 6-11 different fields of view at low magnification (20×). The significance was analyzed by Student's t-test. *P < 0.05, ***P < 0.001. d Ki67 expression in the tumor was detected by IHC and the percentage of positive cells was calculated by ImageJ IHC Profiler. The scale bar is 200 μm. The data were presented as the mean ± SD of three different fields of view at low magnification (50×). The significance was analyzed by Student's t-test. ***P < 0.001 cancer (NSCLC). 15 Since a growing body of evidence has demonstrated the importance of FoxM1 as a hub gene in the oncogenic network, it is emergent to further identify the novel downstream targets regulated by FoxM1. The meaning of our study is to provide more potential targets for the treatment of FoxM1-driven cancers. Aberrantly high activation of a FoxM1-STMN1 axis. . . Liu et al. STMN1 regulatory network plays vital roles in cell cycle and mitosis STMN1 gene encodes a protein involved in the regulation of the microtubule filament system by destabilizing microtubules. 23 It is essential for cell cycle and mitosis. Inhibition of STMN1 results in reduction of cellular proliferation and accumulation of cells in G2/ M. 11 In the human fibroblasts, deletion of STMN1 induces genomic instability during mitosis and consequently leads to senescence. 24 In tumor cells, STMN1 is a tumor promoter contributing to cell proliferation, 25 invasion, migration 26,27 and drug resistance. 14,28 Several studies showed that inhibition of STMN1 can promote the sensitivity of anti-mitotic agents such as taxanes (i.e., docetaxel) and vinca alkaloids (i.e., vinblastine, vincristine). Similarly, STMN1 silencing significantly reduces proliferation and inducedapoptosis in response to ruxolitinib. 29 It was also reported that siRNA-based inhibition of STMN1 increases the sensitivity of colorectal cancer cells to the treatment of 5-FU. 30 Moreover, the inhibition of STMN1 by the anti-cancer potency of three novel indoly-chalcones (CITs) can induce mitotic catastrophe in pancreatic cancer. 31 Abnormally high expression of STMN1 is seen in many kinds of cancers, but the mechanisms of the dysregulated expression are inadequate. Previous studies have shown that the regulation of STMN1 in cell mitosis mainly depends on post-transcriptional regulation. On the one hand, phosphorylation of Ser16, Ser25, Ser38 and Ser63 of STMN1 can be catalyzed by protein kinase like CAM II, 32 CDK1, CDK2, 33 MAPK and Kinase downstream of TNF, which control the formation of the mitotic spindle and progression of mitosis by maintaining the dynamic balance of microtubules. 34 Phosphorylated STMN1 promotes continuous assembly of microtubules and formation of mitotic spindle in G2/M phase. Ultimately it enables the cell cycle process to enter into mitosis successfully. While in the later stage of mitosis, STMN1 is dephosphorylated by protein phosphatase, and the nonphosphorylated STMN1 promotes depolymerization of microtubules and the disassembly of mitotic spindle, which then contributes to complete spindle division and exit from mitosis. 11 Consequently, the phosphorylation status of STMN1 determines not only whether the cells can enter into mitosis, but also whether the cell cycle can timely exit from mitosis and enter into cytoplasmic division at the later stage of mitosis. 
On the other hand, it has been reported that STMN1 is post-transcriptionally regulated by miRNAs, such as miR-101, 35 miR-34a, 36,37 miR-193b 38 miR-493 (ref. 39 ) and miR-223. 40,41 STMN1 is inhibited by miR-34a and induces expression of growth differentiation factor 15 (GDF15) in prostate cancer cells, which finally promotes cell proliferation and invasion. 37 In our previous study, we also demonstrated that STMN1 is a direct target of tumor suppressive miR-101 in LIHC cells. 42 In TGGA data analysis, we found that the expression of STMN1 in tumor tissues is also higher than that in adjacent tissues, which indicated that the oncogenic roles of STMN1 not only depends on post-transcriptional regulation, but also depends on the mRNA level. But the transcriptional regulation of STMN1 is rarely studied. Our study showed that FoxM1 upregulates STMN1 by transcriptional regulation, and the FoxM1-mediated transcriptional activation of STMN1 can promote cell proliferation and survival both in vitro and in vivo in LIHC, GC and CRC cancers. It revealed that a FoxM1-STMN1 axis may be a general regulatory mechanism of STMN1 in solid tumors. A FoxM1-STMN1 axis facilities tumor growth by promoting cell cycle progression in solid tumors In this study, we demonstrated that a general FoxM1-STMN1 regulatory axis contributes to cell proliferation and tumorigenesis in cancers. FoxM1 is aberrantly high-expressed in almost all solid cancers in our data analysis, like bladder cancer, breast cancer, sarcoma, colorectal cancer and lung cancer (Fig. 1a). Furthermore, the high-expressed FoxM1 is associated with poor prognosis, so this molecule is a biomarker of malignancy of tumor. We found that the mRNA levels of FoxM1 and STMN1 in transformed tissues are higher than those in adjacent tissues of multiple cancer (Fig. 1a). Then we found that the expression pattern of the two molecules has a remarkable positive correlation in most malignant tumors (Fig. 1c and Supplementary Table 1). We then tested the expression pattern of FoxM1 and STMN1 in 18 cancer cell lines derived from LIHC, GC and CRC, and found a significantly positive correlation between FoxM1 and STMN1 in tumor cells (Fig. 2a). It revealed that there may exist a general regulatory relationship between FoxM1 and STMN1 in solid tumor cells. To confirm the regulation between FoxM1 and STMN1, we established FoxM1 or STMN1 knockdown cell lines and found that FoxM1 can upregulate STMN1. The cancerous characteristic of FoxM1 mainly owes to its transcriptional activity. As a transcription factor, it can bind with conservative DNA sequences "AT/CAAAT/CA" and then activate gene expression. Our previous study also showed that silencing of FoxM1 in human LIHC cells results in cell cycle arrest and inhibits cell survival depending on downregulated CCNB1 in G2/ M. 4 In our present study, our luciferase report assay, ChIP-qPCR and ChIP-seq data analysis in public database demonstrate the mechanism that STMN1 is a direct target of FoxM1 in LIHC, GC and CRC (Fig. 2c-e). And the FoxM1-STMN1 axis promotes cell proliferation and tumorigenesis by maintaining cell cycle/ Fig. 5 FoxM1-mediated cancer cell proliferation requires STMN1 expression. a The hepatocellular carcinoma cell line SMMC-7721, gastric cancer cell line SGC-7901, colorectal cancer cell lines HCT 116 and HT-29 were used to established FoxM1-silenced and STMN1-overexpressed cell lines by lentivirus-mediated system. 
The protein levels of FoxM1 and STMN1 were detected by Western blot and the mRNA levels were detected by RT-qPCR. The data were presented as the mean ± SD of three independent experiments. The significance was analyzed by Student's t-test. *P < 0.05, **P < 0.01, ***P < 0.001. b Cell viability was detected by the Cell Counting Kit-8 (CCK-8). The data were presented as the mean ± SD of three independent experiments. The significance was analyzed by Student's t-test. *P < 0.05, **P < 0.01, ***P < 0.001. c Plate clone assay was performed and the number of clones was measured by ImageJ software. The data were presented as the mean ± SD of three independent experiments. The significance was analyzed by Student's t-test. *P < 0.05, **P < 0.01, ***P < 0.001. d Cell cycle was analyzed by flow cytometry. e Cell mitosis was analyzed by immunofluorescence of α-Tubulin. The nuclei (blue) are stained with DAPI and the α-Tubulin (red) is stained with Alexa Fluor 555. The images were captured at 60× magnification, and the scale bar is 10 μm. The yellow arrow points to the normal mitosis cells. The percentage of normal or abnormal cells was calculated as the number of normal or abnormal nuclei divided by the total number of nuclei × 100%. The data were presented as the mean ± SD of three different fields of view at low magnification (20×). The significance was analyzed by Student's t-test. *P < 0.05, **P < 0.01. f The colorectal cancer cell line HCT 116 was infected with lentivirus. The mice were separated into four groups: a control group (cells infected with pLKO.1-control lentivirus and pLV-GFP lentivirus), a FoxM1-knockdown group (cells infected with pLKO.1-shFoxM1 lentivirus and pLV-GFP lentivirus), a rescue group (cells infected with pLKO.1-shFoxM1 lentivirus and pLV-STMN1-GFPSpark lentivirus) and an STMN1-overexpressed group (cells infected with pLKO.1-control lentivirus and pLV-STMN1-GFPSpark lentivirus). Cells were injected subcutaneously into the back of 5-week-old nude mice for 4 weeks to grow tumors. The mice were killed and the tumors were removed to measure the weight and volume. Tumor volume was measured and calculated by v = 0.5 × Length × Width 2 ; n = 5. The data were presented as the mean ± SD. The significance was analyzed by Student's t-test. *P < 0.05 mitosis (Fig. 5d, e). To confirm its effect on tumor progression, we performed survival analysis of LIHC patients. Surprisingly, we found that not only do patients bearing FoxM1 high or STMN1 high tumors have a low survival, but patients bearing FoxM1 high / STMN1 high tumors have the highest risk of poor prognosis, as shown in Fig. 6c. This indicates that LIHC overexpressing both FoxM1 and STMN1 is more aggressive, thus shedding light on the prognostic role of the combined use of FoxM1 and STMN1. Though the positive correlation between high STMN1 expression and poor prognosis of GC and CRC patients is not as apparent as that in LIHC, there is a trend that GC or CRC patients bearing tumors with high STMN1 expression have a poor prognosis (Supplementary Fig. S5). This may be because dynamic changes in the phosphorylation level of STMN1, besides mRNA transcription, are the main events of the mitotic process. Another reason might be drug resistance in patients with high STMN1 expression. In conclusion, we demonstrated that a general regulatory FoxM1-STMN1 axis promotes cell proliferation and tumorigenesis in FoxM1-driven cancers in vitro and in vivo (Fig. 6d). And survival analysis showed that the FoxM1-STMN1 axis promotes tumor progression.
Our results revealed that the combination of the two molecules can be a more precise biomarker for prognostic prediction. Supplementary Table S2. The pCDH-STMN1 plasmid was constructed via insertion of a PCR-amplified human STMN1 cDNA into a pCDH vector digested with BamH I and EcoR I. The pLV-STMN1-GFPSpark plasmid was purchased from Sino Biological (Beijing, China; HG15440-ACGLN). Western blot Cells were collected and washed with phosphate buffer saline (PBS) three times, and then harvested using RIPA Lysis Buffer. Proteins in the cell lysate were resolved on 10-15% SDS-polyacrylamide gels and transferred to a nitrocellulose membrane. Before incubation with primary antibodies, the membrane was blocked with 5% non-fat milk. Membranes were incubated with primary antibodies against FoxM1, STMN1 and β-Actin overnight at 4°C. After incubation with peroxidase-conjugated secondary antibodies for an hour at room temperature, the signals were visualized using ECL chemiluminescent reagents and a Tanon 5500 (Tanon Science & Technology; Shanghai, China). Quantitative RT-PCR RNA isolation and quantitative real-time RT-PCR were performed as previously described. Briefly, 5 × 10 6 cells were harvested for purification of total RNA using TRIzol Reagent (Invitrogen), and 1 μg of total RNA of each sample was reverse-transcribed to cDNA using PrimeScript RT Master Mix (TaKaRa, Tokyo, Japan). To detect the mRNA levels of specific genes, the diluted cDNA of each sample was used as a template for quantitative PCR, and the amplifications were done using SYBR-green PCR MasterMix (TaKaRa). PCR assays were performed three times and the fold changes of genes were obtained after normalizing to β-Actin using the comparative Ct method (fold change = 2^-ΔΔCt). Primers used for quantitative RT-PCR are listed in Supplementary Table S3. Dual luciferase reporter assay Cells were co-transfected with pGL3-STMN1-promoter/mutation, pCMV-FoxM1 and pRL-TK using Lipofectamine 2000 (Invitrogen). Cell extracts were prepared and luciferase activity was measured using the Dual Luciferase Reporter Assay System (Promega, Madison, WI, USA). The relative firefly luciferase activity was normalized to its respective Renilla luciferase activity. Chromatin immunoprecipitation assay The chromatin immunoprecipitation analysis was performed as described previously using a SimpleChIP Enzymatic Chromatin IP Kit (Cell Signaling Technology, 9003). The gastric cancer cell line SGC-7901 and the colon cancer cell lines HCT 116 and HT-29 were infected with control or two shFoxM1 lentiviruses. A total of 1.2 × 10 7 cells were crosslinked with 1% formaldehyde solution for 15 min at room temperature. The crosslinking reaction was then stopped by addition of 10% glycine, and cells were lysed in 1 ml lysis buffer on ice. Lysates were harvested and processed into DNA fragments of 150-900 bp using Micrococcal nuclease and a Scientz-1500F ultrasonic disperser (Ningbo, China). Sonicated samples were spun down and subjected to overnight immunoprecipitation with IgG or FoxM1 antibody (Santa Cruz Biotechnology). After the proteins and RNA were removed by Proteinase K and RNase A, the chromatin pulled down by the antibodies was purified. The enrichment of STMN1 was detected by qPCR amplification. Primers for qPCR amplification are listed in Supplementary Table S4. Cell cycle assay Cells were infected with lentivirus and harvested by trypsinization and centrifugation. Cells were then fixed in 75% ethanol overnight at −20°C.
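As a small illustration of the comparative Ct method mentioned above (fold change = 2^-ΔΔCt, normalised to β-Actin), here is a hedged Python sketch; the Ct values are invented placeholders, not measurements from the study.

```python
# Minimal sketch of the 2^-ΔΔCt fold-change calculation; all Ct numbers are made up.
def fold_change(ct_gene_treated, ct_actin_treated, ct_gene_control, ct_actin_control):
    d_ct_treated = ct_gene_treated - ct_actin_treated   # ΔCt, e.g. shFoxM1 cells
    d_ct_control = ct_gene_control - ct_actin_control    # ΔCt, control cells
    dd_ct = d_ct_treated - d_ct_control                  # ΔΔCt
    return 2 ** (-dd_ct)

# Example with invented Ct values: the target gene drops 4-fold relative to control.
print(round(fold_change(26.0, 18.0, 24.0, 18.0), 2))  # 0.25
```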
Then cells were stained with 10 µg/ml PI in PBS plus RNase. Then the cells were analyzed by a flow cytometer. Cell viability assay Cell viability was analyzed using Cell Counting Kit-8 Kits. The cells were pre-seeded in 96-well plates with the number of 1 × 10 3 . The cell culture medium was discarded and replaced with culture medium containing 0.05 µg/µl Cell Counting Kit-8 (0.5 mg/ml) reagent and cells were incubated at 37°C. After 0.5-4 h, the absorbance of the culture medium was detected using a Bio-RAD (Hercules, CA, USA) Microplate Reader with a wavelength of 450 nm. This procedure was repeated every day in the following 4-5 days. Colony formation assay Long-term cell survival was monitored in a colony formation assay. In brief, 1000 cells were seeded into 6-well plates and allowed to grow for 2 weeks. The cells were fixed with 4% paraformaldehyde for 15 min and visualized by 0.5% (w/v) crystal violet (Sigma-Aldrich) staining. Colons in the plate were scanned using Odyssey Scanner (LI-COR, Lincoln, NE, USA) and the number of colons was quantified by Image J software. Soft agar assay The cell survival in 3D culture was monitored by soft agar assay. Cells were plated in six-well plates with the bottom layer containing 0.5% low-melting agarose. Cells (3000-5000 per well) were mixed with low-melting agarose to a final concentration of 0.3% and layered over the bottom agar. The dishes were then cultured at 37°C for 2-3 weeks and 500 μl of the culture medium was added to keep the top layer moist. Spheres were photographed by a digital camera coupled to a microscope. Xenograft experiment To generate mouse subcutaneous tumors, SGC-7901, SMMC-7721 and HCT 116 cells were infected with control lentivirus or shSTMN1 lentivirus. Male 5-to 6-week-old BALB/c nude mice were implanted subcutaneously in the flank of back with 5 × 10 6 SGC-7901 GC cells, SMMC-7721 LIHC cells and HCT 116 CRC cells. The mice were killed after 4-5 weeks and in vivo solid tumors were dissected and weighed. For rescue experiments, HCT 116 cells were infected with control/shFoxM1 lentivirus and pLV-GFP/ pLV-STMN1-GFPSpark lentivirus. Male 5-week-old BALB/c nude mice were implanted subcutaneously in the flank of back with 6 × 10 6 cells. The mice were killed after 4 weeks and in vivo solid tumors were dissected and weighed. The tumor volume was determined using the formula 0.5 × L × W 2 , where L is the longest diameter and W is the shortest diameter. The tumors were removed into 4% polyformaldehyde solution for fixing tissues. Hematoxylin and eosin staining Hematoxylin and eosin (HE) staining was performed according to a conventional method. In short, tissues were fixed in 10% neutral buffered formalin and were embedded in paraffin and processed by standard histological techniques. Sections were placed in CAT hematoxylin for 1 min followed by rinsing in tap water to blue. The slides were stained with eosin solution for 3 min, dehydrated with graded alcohol, and washed with xylene. The slides were then photographed using fluorescence microscope. Quantitation of mitotic cells/total cell number was done on six of 20× fields of view from 2 or 3 mice. Immunohistochemistry Slides of paraffin-embedded tumor tissue were deparaffinized and antigen repaired before blocking endogenous peroxidase with 3% hydrogen peroxide for 20 min. The tissue slides were treated with 0.01 mol/l sodium citrate (pH 6.0) in a microwave oven for 10 min after deparaffination and rehydration. 
The sections were then incubated with anti-Ki67 antibodies above at 4°C overnight, subsequently developed using the Peroxidase/DAB, Rabbit/ Mouse. The slides were then counterstained with hematoxylin. The ImageJ IHC profiler was used to quantify Ki67-positive cells number on three of 40× fields of view from 2 or 3 mice. Tissue microarray assay Tissue microarrays of human LIHC, GC and CRC were obtained from the Department of Pathology, Fourth Military Medical University (China, Xi'an), and was stained with anti-human FoxM1 (Santa Cruz Biotechnology; sc-376471; 1:50) and STMN1 (Cell Signaling Technology, 13655S; 1:100) antibodies. The slides were scanned using Pannoramic (Santa Clara, CA, USA) MIDI and quantified using Quant center. The correlation of protein expression was analyzed using GraphPad Prism (Version 6; La Jolla, CA, USA). Public database The expressions of FoxM1 and STMN1 in different cancer tissues and normal tissues were analyzed using Oncomine database (https:// www.oncomine.org/). The mRNA levels of FoxM1 and STMN1 in hepatocellular carcinoma, gastric cancer and colorectal cancer patients were obtained from Gene Expression Omnibus (GEO, https://www.ncbi.nlm.nih.gov/geo/) and normalized using with GEO2R. Correlation analysis of FoxM1 and STMN1 from 31 TCGA solid tumors was done using Gene Expression Profiling Interactive Analysis (GEPIA, http://gepia.cancer-pku.cn). Oncoprint and correlation data of FoxM1 and STMN1 from TCGA were obtained by using the cBioportal for cancer genomics (http://www.cbioportal.org). Survival analysis of 31 solid tumors and LIHC from TCGA data was performed using Gene Expression Profiling Interactive Analysis (GEPIA, http://gepia.cancer-pku.cn). Kaplan-Meier Plotter (http:// kmplot.com/analysed/) was used to perform survival analysis of GC. Statistics The in vitro experiments were repeated at least three times unless stated otherwise. As indicated in the figure legends, all quantitative data are presented as the mean ± SD of three biologically independent experiments or samples. Statistical analysis was performed using GraphPad Prism 6. Statistical significance was tested using a two-tailed unpaired or paired Student's t-test. The correlation of protein expression of FoxM1 and STMN1 was analyzed by linear regression analysis. The Kaplan-Meier survival was analyzed by Log-rank (Mantel-Cox) test analysis. A P value lower than 0.05 was considered significant. DATA AVAILABILITY All data supported the paper are present in the paper and/or the Supplementary Materials. The original datasets are also available from the corresponding author upon request.
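The statistical tests described above (two-tailed Student's t-test, linear regression for the FoxM1/STMN1 correlation) can be reproduced in outline with SciPy; the sketch below uses invented numbers purely to show the calls, and the Kaplan-Meier/log-rank analysis would need a dedicated survival package, which is not shown.

```python
# Hedged sketch of the statistics described above; the data are invented placeholders.
import numpy as np
from scipy import stats

control = np.array([1.00, 0.95, 1.08])    # e.g. relative STMN1 level, control
knockdown = np.array([0.31, 0.28, 0.40])  # e.g. relative STMN1 level, shSTMN1

# Two-tailed unpaired Student's t-test on three independent experiments
t, p = stats.ttest_ind(control, knockdown)
print(f"t = {t:.2f}, P = {p:.4f}")

# Linear regression for the FoxM1 vs STMN1 IHC correlation
foxm1 = np.array([2.1, 3.5, 1.2, 4.0, 2.8])
stmn1 = np.array([1.9, 3.2, 1.0, 4.3, 2.5])
res = stats.linregress(foxm1, stmn1)
print(f"slope = {res.slope:.2f}, r = {res.rvalue:.2f}, P = {res.pvalue:.4f}")
# Kaplan-Meier / log-rank testing would require a survival-analysis package
# (e.g. lifelines), which is outside this minimal sketch.
```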
9,622.2
2021-02-01T00:00:00.000
[ "Medicine", "Biology" ]
Data describing the regional Industry 4.0 readiness index The data article presents a dataset suitable to measure regional Industry 4.0 (I4.0+) readiness. The I4.0+ dataset includes 101 indicators with 248 958 observations (aggregated to NUTS 2 statistical level), based on open data in the fields of education (ETER, Erasmus), science (USPTO, MA-Graph, GRID), government (Eurostat) and media coverage (GDELT). The indicators cover the I4.0-specific domains of higher education and lifelong learning, innovation, technological investment, labour market and technological readiness. A composite indicator, the I4.0+ index, was constructed with the Promethee method to rank regions by their I4.0 performance. The index is validated against economic (GDP) and innovation indexes (Regional Innovation Index). Specifications: Subject area - Management, Monitoring, Policy and Law. Specific subject area - Dataset for indicator-based monitoring of Industry 4.0 readiness. Type of data - Combined data table of the Industry 4.0 indicators, including special joins (data table); statistics of available I4.0 patents in the regions (data table); statistics of available I4.0 publications in the regions (data table); statistics of the Erasmus+ program (data table); statistics of higher education (data table); statistics of research centers (data table); statistics of Industry 4.0-relevant news from the media (data table); rankings and indexes for validation (data table). Value of the Data • The data is suitable for identifying thematic areas as well as key indicators to measure the potential of the region in human capital, the current development level of high-technology industries and manufacturing, investments and scientific outputs regarding regional Industry 4.0-related activity, aggregated to city and European Nomenclature of Territorial Units for Statistics level 2 (NUTS 2) level. • The developed composite indicator can function as a regional Industry 4.0 performance monitoring tool for decision makers and regional development researchers. • The well-curated, carefully mined and selected data is compiled and ready to analyse in multiple statistical and regional-statistics software packages, to improve thinking about the Industry 4.0 concept as well as to monitor convergence towards it by merging the key aspects, e.g. technology, investment and higher education, into a unified, reasonable and analysable dataset. Data Description The collected dataset aims to identify the regional potential of Industry 4.0, covering five dimensions of Industry 4.0 regional development, namely the labour market, technological readiness, innovation, investment and higher education. The data collection occurred through seven open data portals: • European Tertiary Education Register (ETER) [1] - Higher education graduates in Industry 4.0-relevant fields. • Erasmus+ - Statistics about students participating in mobility programs [2]. • id ( string ) - The identifier of the patent category. This field corresponds to the CPC standard. • name ( string ) - Human-readable name of the topic. • include_subtopics ( bool ) - Shows whether the topic should include all subtopics or not. - I40_indicator_db_column_description.csv: The database contains all 101 indicators. Because of its volume, we share its description separately in the description file, which includes the following: • ColumnName ( string ) - The column name used to refer to the data. • DataType ( string ) - The type of data in the column.
• Description ( string ) - Description of the data. - I40_indenticator_db.csv: Joined table of the previously mentioned data (I40_indicator_db_column_description.csv). - rankings.csv: This table describes the results of the different regional rankings. • GDPrank ( int ) - Rank of the region based on GDP. • PrometheeRank ( int ) - The rank of the region by the Promethee method. • RII ( float ) - Regional Innovation Index from 2019, by the Regional Innovation Scoreboard. Governmental policies operate on a long-term planning horizon, reflecting visions focused on socio-economic as well as environmental development. It is stated that governmental policies focusing on the application of Industry 4.0 simultaneously develop the region itself [8]. Macroeconomic open data has proved suitable for measuring regional innovation dynamics [9], inclusive growth [10] as well as socio-economic performance [11]; however, I4.0-specific assessment has not been studied extensively. The collected, cleaned and analyzed dataset represents the socio-economic and technical standing of current regional Industry 4.0 readiness. The collected data measures the potential and competitiveness of the region by the most crucial vectors, human capital, as well as the current development level of high-technology industries and manufacturing [12]. The dataset reflects key components of the new industrial revolution, such as the occupation possibilities for every level of education, as well as the current industrial and scientific outputs and the related investments. The dataset also takes soft indicators into account, such as the opinions of the media on selected keywords strongly correlated with the new industrial revolution (e.g. "JOBS", "MANUFACTURING"), for which we measured the number of news items that appeared as well as the average sentiment of the texts. An informative ranking system based on the collected indicators is provided that can be effectively interpreted. We notice that the locations of the major universities, as well as the highly advanced industrial regions, e.g. northern Italy, are the key contributors. Fig. 2 shows the patent distribution in the relevant field across Europe. In this case the previous scientific contributions are not so significant any more; rather, the industrial competence of the region is. We see that the main contributors are advanced northern Italy, the Bavaria (Bayern) region known for its car industry, and the southern part of Sweden. We created an Industry 4.0+ readiness rank from the collected indicators and indexes using the Promethee method [13]. Next, we show the correlations of the new rank with existing rankings and indexes. Fig. 3 shows the correlation between the GDP ranking and the Industry 4.0+ readiness rank. The correlation is not very high, as several factors and industries influence GDP. However, this result illustrates that regions paying attention to science and technological employment have a high GDP. Fig. 4 indicates a 0.75 correlation between the I4.0+ index and the Regional Innovation Index. Their similarity is clear; however, the I4.0+ index measures only I4.0-specific areas that can boost regional innovation performance. Experimental Design and Methods The problem of the missing focus on regional Industry 4.0 readiness is studied by examining existing Industry 4.0 readiness models and indexes as well as exploring open data that is available at regional scale.
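As a minimal illustration of how the rankings table could be checked against the validation indexes discussed above, the following Python sketch loads rankings.csv and computes rank correlations; the column names follow the description above, while the file location and the use of Spearman correlation are assumptions.

```python
# Hedged sketch: inspect the rankings table and correlate the I4.0+ rank with the
# validation measures. "rankings.csv" in the working directory is an assumption.
import pandas as pd
from scipy.stats import spearmanr

rankings = pd.read_csv("rankings.csv")

# Rank correlation between the I4.0+ (Promethee) rank and the GDP rank
rho_gdp, p_gdp = spearmanr(rankings["PrometheeRank"], rankings["GDPrank"])
# Correlation with the Regional Innovation Index (RII is a score, so a higher RII
# should pair with a better, i.e. smaller, Promethee rank)
rho_rii, p_rii = spearmanr(rankings["PrometheeRank"], rankings["RII"])

print(f"I4.0+ vs GDP rank: rho = {rho_gdp:.2f} (p = {p_gdp:.3f})")
print(f"I4.0+ vs RII:      rho = {rho_rii:.2f} (p = {p_rii:.3f})")
```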
To sufficiently measure regional Industry 4.0 (I4.0+) readiness, we defined the requirements for the data to be: NUTS 2 classified (greater coverage of data), available (online), Industry 4.0-specific (direct metrics) and up-to-date. Accordingly, the data sources meeting these criteria were identified as follows: European Tertiary Education Register (ETER), Erasmus+, Microsoft Academic Knowledge Graph (MA-Graph), Global Research Identifier Database (GRID), United States Patent and Trademark Office (USPTO), European Statistical Office (Eurostat) and the Global Database of Events, Language and Tone (GDELT). News can serve as an effective tool for online monitoring without significant delay, for which GDELT provides a platform to extract and monitor world news by using natural-language and data-mining algorithms. It consists of the Event Database and the Global Knowledge Graph (GKG). The former captures events, while the latter records and connects the locations, organizations, themes, people, taxonomies, sources, tone and events of news. The indicators of 'I40_indenticator_db.csv' are categorized into five main dimensions, namely: higher education and lifelong learning, labour market, innovation, investment and technology readiness. Fig. 6 presents the methodological workflow of the analysis. Variables are used to form the regional Industry 4.0 (I4.0+) indicator system, which was analysed with both the SRD [14] and the Promethee II [13] methods. The result yielded the ranking, which is interpreted with the PCA method in a two-dimensional visualization. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships which have, or could be perceived to have, influenced the work reported in this article.
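Since the indicator system is ranked with Promethee II, a compact sketch of the net-flow calculation may help; it uses the simple "usual" preference function and equal weights, which are assumptions rather than the paper's exact configuration.

```python
# Hedged sketch of Promethee II net-flow ranking on a regions x indicators matrix.
import numpy as np

def promethee_ii(X, weights=None, maximize=None):
    """Return net outranking flows for each region (higher = better)."""
    X = np.asarray(X, dtype=float)
    n, m = X.shape
    w = np.full(m, 1.0 / m) if weights is None else np.asarray(weights, dtype=float)
    sign = np.ones(m) if maximize is None else np.where(maximize, 1.0, -1.0)
    pref = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = sign * (X[a] - X[b])
            pref[a, b] = np.sum(w * (d > 0))   # "usual" preference function
    phi_plus = pref.sum(axis=1) / (n - 1)      # how strongly a outranks the others
    phi_minus = pref.sum(axis=0) / (n - 1)     # how strongly a is outranked
    return phi_plus - phi_minus

# Toy example: 4 regions, 3 indicators (all to be maximised)
scores = promethee_ii([[3, 10, 0.2], [5, 8, 0.4], [2, 12, 0.1], [4, 9, 0.3]])
print(scores, np.argsort(-scores))  # net flows and the region order, best first
```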
1,821.4
2020-10-27T00:00:00.000
[ "Economics", "Business" ]
Analyzing the Effect of Badminton on Physical Health and Emotion Recognition on the account of Smart Sensors Emotional ability is an important symbol of human intelligence. Human understanding of emotions, from subjective consciousness to continuous or discrete emotional dimensions, and then to physiological separability, has shown a trend of gradually diverging from psychological research to the field of intelligent human-computer interaction. This article is aimed at studying the effects of smart sensor-based emotion recognition technology and badminton on physical health. It proposes a method of using smart sensor technology to recognize badminton movements and the emotions during the movement. The impact of emotion recognition based on smart sensors and of badminton on physical health is then analyzed in this article. Experimental results show that the emotion recognition technology based on smart sensors can recognize changes in people's emotions during badminton well, and the accuracy of emotion recognition is higher than 70%. At the same time, experiments show that badminton can greatly improve people's physical fitness and strengthen people's physique. Introduction In recent decades, the computer field has developed vigorously, related technologies have received increasing attention, and the interaction between humans and computers has also attracted growing interest. People's requirements for computer intelligence are therefore getting higher and higher, and we look forward to a more humane and more natural interaction between humans and computers, in which computers perceive information such as human feelings and execute instructions accordingly to provide better services to humans. The foundation of all of this is the understanding of feelings. Changes in internal emotions are accompanied by external symptoms such as changes in facial expressions and in speech pitch, rhythm, and speed. So far, research on emotion recognition has mainly focused on recognizing emotion through voice and through facial images. A single mode of information can be used to understand the current state of human emotions. Compared with image-based emotion recognition, sensor-based emotion recognition technology has the greatest advantage of using various sensors, such as image sensors and voice sensors, to analyze the human body's facial expressions and voice tone. At the same time, due to the diversity of sensor types, the collected data is also diverse. Emotion recognition technology based on smart sensor technology can effectively identify changes in people's emotions and can greatly simplify operating equipment. It is not only beneficial to people's livelihood fields such as medical treatment, education, and psychological analysis but also has extremely high commercial value. Any field that requires human-computer emotional interaction has a broad application space. With the continuous development of smart sensor technology, there is increasing research on the smart sensor itself and its application fields. In Garcia et al.'s research, a smart sensor was designed to predict the established sensory fish quality index. The sensor dynamically correlates the microbial count and TVB-N with the quality index [1]. To monitor the water environment, Dissanayake et al. designed a sensor to measure fluoride and hardness in water through an automated mechanism. The designed sensor is based on a simple colorimetric method and the color-change procedure of complexometric titration [2].
In the maintenance and management of traffic roads, Graziano et al. provide descriptions and comments on the basic characteristics of wireless sensor networks used for road surface monitoring for damage detection. These include energy supply, detection methods, hardware and network architecture, and performance verification procedures [3]. In the intelligent detection of household appliances, Bono-Nuez et al. proposed a new method for the online detection of pot materials. In this method, the inductor contained in the stove is used as a heating element and a sensor at the same time. First, the harmonic impedance is calculated from the spectrum estimation of the current and voltage waveforms recorded at the inductor. Then, machine learning algorithms are used to process the feature set, including four harmonic impedances and power factors, to identify pot materials [4]. In the recognition of people's emotions, smart sensors have also played an important role. However, in addition to smart sensors, there are many ways to perform emotion recognition. For example, Jenke et al.'s research shows that emotion recognition from EEG signals allows direct evaluation of the user's "inner" state. This is considered an important factor in human-computer interaction. Jenke et al. also reviewed the feature extraction methods for EEG-based emotion recognition reported in 33 studies; using machine learning techniques for feature selection on a self-recorded data set, these features were compared [5]. The results show the performance of different feature selection methods, the use of selected feature types, and the choice of electrode positions. In addition, Scherer and Ceschi conducted some research on existing emotion recognition technology and pointed out that emotion recognition problems in real life lack clear standards for the nature of the underlying emotions. In addition, Scherer and Ceschi stated that, using the Facial Action Coding System (FACS), objectively coded "felt" (but not false) smiles are positively correlated with the humor scale in the standard and judges' ratings [6]. In the research on emotion recognition technology, Xu et al. proposed a technique to transfer knowledge from heterogeneous external sources (including image and text data) to promote three related tasks of understanding video emotion, that is, emotion recognition, emotion attribution and emotion-oriented summary [7]. These researchers have made great efforts in the research of smart sensors and emotion recognition technology and have achieved many results. However, most of them conduct experiments on the basis of existing research, ignoring relevant research on the development of the technology itself. Emotion recognition technology based on smart sensors only needs to detect human language and actions through smart sensors to infer changes in people's emotions smoothly. The innovation of this article lies in the research on the emotion recognition method based on smart sensor technology and the analysis of the effect of badminton on physical health based on smart sensor technology. In this way, smart sensor-based analysis and emotion recognition technology can be applied to sports. It lays a certain theoretical foundation for the promotion of emotion recognition technology based on smart sensors. Emotion Recognition Methods Based on Smart Sensors and the Impact of Badminton Sports on Physical Health 2.1. Emotion Recognition.
The so-called emotional awareness does not mean that the computer can directly measure or recognize the user's emotional state, but rather that it should be interpreted as "inferring the emotional state by observing the preconditions of performance, actions, and feelings." Feelings are usually stimulated by external factors: the subjective experience (such as joy, anger, sadness, and fear) and the physiological responses (heart rate, rhythm, specific activities under the skin, etc.) are accompanied by changes in external performance (such as facial expressions, body actions, and voice intonation). Therefore, some observations about the emotional state can be obtained through electronic devices such as cameras and microphones and related sensors such as accelerometers. Assuming that the observations of these data are valid and reliable, the underlying emotional state can be inferred based on these data. As the standard of emotion classification, the emotion library defines 14 basic emotions (Ejoy, happiness, joy, sadness, fear, panic, jealousy, alienation, neutrality, boredom, passivity, disgust, dominance, anger) [8]. However, many of these feelings are so similar that they are difficult to distinguish. Therefore, this article adopts the classification method widely used by social psychologists. In other words, all types of emotions in the emotion library are grouped into 5 standard emotion categories, specifically happiness, sadness, fear, anger, and neutrality. Figure 1 shows the common emotion classification. The dimensional space theory holds that emotions do not exist independently, but that there is a continuous and gradual relationship between them, and different emotions can transition smoothly and gradually [9]. Here, the emotional space is expressed as a Cartesian space, and each dimension in the space corresponds to a certain attribute of emotion. Therefore, each emotional state can be described as a mapping point in the Cartesian space, and the numerical value corresponding to the coordinate of each dimension reflects the strength of the emotion under this attribute [10]. Since emotions are described by real values in the dimensional space model, it is also called the continuous emotion description model. Among the more classic dimensional space models are the circular emotion model and Plutchik's emotion wheel model. Figure 2 shows the more widely used circular emotion model diagram. The circular emotion model graph is composed of two parts: the pleasure dimension and the active dimension. The pleasure dimension is the horizontal axis, which describes the degree of positive or negative emotion and is specifically used to measure whether a person's emotions are positive or negative. The active dimension is the vertical axis, which describes the intensity of the emotion and specifically expresses whether a person's behavior under a certain emotion is active or passive. Through the understanding and estimation of the emotional state, emotions can be mapped into the two-dimensional space one by one, and it is easy to convert between emotion tags and spatial coordinates. 2.2. Action Recognition Method Based on Smart Sensors. The human body motion data collected by sensors are given in the reference system of each sensor [11]. Nowadays, there are many sensor devices that can recognize human movements. The most widely used are acceleration sensors, motion capture sensors, and inertial sensors.
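A minimal sketch of the tag-to-coordinate conversion described for the circular (pleasure/active) model above: each label gets an assumed (valence, arousal) point, and an estimated point is mapped back to the nearest label. The coordinates are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: convert between emotion tags and valence-arousal coordinates.
import math

# (valence, arousal) in [-1, 1] x [-1, 1]; assumed placements, not from the paper
EMOTION_COORDS = {
    "happiness": (0.8, 0.5),
    "anger": (-0.6, 0.7),
    "fear": (-0.7, 0.6),
    "sadness": (-0.7, -0.4),
    "neutral": (0.0, 0.0),
}

def label_to_point(label):
    return EMOTION_COORDS[label]

def point_to_label(valence, arousal):
    """Map an estimated (valence, arousal) point to the nearest labelled emotion."""
    return min(EMOTION_COORDS,
               key=lambda k: math.dist(EMOTION_COORDS[k], (valence, arousal)))

print(point_to_label(0.6, 0.3))   # -> 'happiness'
print(label_to_point("fear"))     # -> (-0.7, 0.6)
```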
Since human body motion cannot be described directly in the sensor's own reference frame, such source data cannot be used as-is, and the coordinate system needs to be transformed [12]. After the coordinate transformation is completed, the data are processed according to the general pipeline of human motion recognition: first collect the data, then denoise or otherwise preprocess them, then extract feature quantities, then train and classify, and finally recognize the action. The source data used for human motion recognition therefore need feature extraction. Data Coordinate System Transformation. To characterize human movement, the specific form of the movement must be known. First, a coordinate system is established according to the direction of human movement; this coordinate system is represented by a sensor mounted on the human body. According to the working principle of the sensor, the angle data returned by the sensor are Euler angles, which express the difference between the current motion coordinate system and the ground coordinate system [13]. Figure 3 shows a schematic diagram of the coordinate systems: the first represents the sensor motion coordinate system and the second the ground coordinate system. Depending on the standing position and posture, every sensor has an initial angle. Because the analysis principle is the same for each sensor, any one sensor can be used for the analysis [14], and a motion coordinate system is established for it. Since the sensor takes its own ground coordinate system as a reference, $Xoyz$ denotes the motion coordinate system and $x_n o_n y_n z_n$ the ground coordinate system. The difference between $Xoyz$ and $x_n o_n y_n z_n$ is represented by the three Euler angles yaw, pitch, and roll. The pitch angle $\alpha$ is the angle between the x-axis and the horizontal plane of the ground coordinate system $x_n o_n y_n z_n$, with a value range of (-180°, 180°). The yaw angle $\beta$ is the angle between the projection of the x-axis on the horizontal plane of $x_n o_n y_n z_n$ and the $x_n$-axis, with a value range of (-180°, 180°). The roll angle $\rho$ is the angle between the z-axis and the vertical plane of $x_n o_n y_n z_n$, with a value range of (-180°, 180°). Figure 4 shows a schematic diagram of the coordinate system transformation. It can be calculated and proved that a transformation matrix maps a point $A_1(x_1, y_1, z_1)$ expressed in the $Xoyz$ coordinate system to the point $A_n(x_n, y_n, z_n)$ in the coordinate system $x_n o_n y_n z_n$ (a sketch of the standard construction is given below). According to this coordinate transformation matrix, the coordinates can be mapped into a unified ground coordinate system. Because, for practical reasons, the direction of movement will not be the same in every trial, and the tested person cannot keep moving in exactly the same direction every time, a travel coordinate system must also be established to describe the action, as shown in Figure 5 for the human body motion coordinate system. The human body motion coordinate system is determined as follows: first, the standing posture is used as a benchmark, and the tested person stands upright for 2 s.
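The transformation matrix itself is not reproduced in the extracted text, so the following sketch assumes the common Z-Y-X (yaw-pitch-roll) convention for building a rotation matrix from the sensor's Euler angles and using it to re-express a sensor-frame point in the ground frame. The angle convention is an assumption; the original paper may define the axes slightly differently.

```python
import numpy as np

def euler_to_rotation(yaw_deg, pitch_deg, roll_deg):
    """Rotation matrix for the Z-Y-X (yaw, pitch, roll) convention (assumed)."""
    y, p, r = np.radians([yaw_deg, pitch_deg, roll_deg])
    Rz = np.array([[np.cos(y), -np.sin(y), 0.0],
                   [np.sin(y),  np.cos(y), 0.0],
                   [0.0,        0.0,       1.0]])
    Ry = np.array([[ np.cos(p), 0.0, np.sin(p)],
                   [ 0.0,       1.0, 0.0      ],
                   [-np.sin(p), 0.0, np.cos(p)]])
    Rx = np.array([[1.0, 0.0,        0.0       ],
                   [0.0, np.cos(r), -np.sin(r)],
                   [0.0, np.sin(r),  np.cos(r)]])
    return Rz @ Ry @ Rx

def sensor_to_ground(point_sensor, yaw_deg, pitch_deg, roll_deg):
    """Express a point given in the sensor frame in the ground frame."""
    R = euler_to_rotation(yaw_deg, pitch_deg, roll_deg)
    return R @ np.asarray(point_sensor)

if __name__ == "__main__":
    a1 = [0.1, 0.0, 9.8]   # e.g. one acceleration sample in the sensor frame
    print(sensor_to_ground(a1, yaw_deg=30, pitch_deg=5, roll_deg=-2))
```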
At this time, the mean attitude angle can be calculated from the sensor data, together with the mean stationary attitude angle and the mean stationary acceleration. Considering that at rest the body is affected only by gravity, the direction of maximum acceleration is the direction of gravitational acceleration [15]. The representation of the acceleration vector in the coordinate system can then be obtained from the acceleration components along the three axes, and the direction this vector points to is taken as the Z-axis of the human body motion reference frame. Since the direction-vector coordinates obtained from the measurements are all expressed in the individual sensor coordinate systems, each sensor coordinate system can be converted into the human body motion coordinate system in one step. Signal Characteristics. Signal characteristics are quantities that describe the statistical behavior of the data from several different angles and are divided into time-domain and frequency-domain features [16]. Each segment of sampled data describing human motion contains rich time-domain and frequency-domain features. The time-domain features commonly used for human action recognition include the sample mean, sample variance or standard deviation, the correlation coefficient between two axes, and the signal energy [17]. For a window of samples $x_1, \dots, x_N$, the sample mean is $\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$, the sample variance is $\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}(x_i-\bar{x})^2$, and the standard deviation is $\sigma = \sqrt{\sigma^2}$. The covariance of two axes is $\mathrm{Cov}(\delta,\varepsilon) = T(\delta\varepsilon) - T(\delta)T(\varepsilon)$, where $T(\delta)$ and $T(\varepsilon)$ represent the expectations of the variables $\delta$ and $\varepsilon$, respectively, and the correlation coefficient is the covariance divided by the product of the two standard deviations. The frequency-domain features are usually calculated by the fast Fourier transform and are used to reveal frequency and periodicity information in the signal [18]. Kurtosis measures the sharpness of the peak of a sample distribution: the larger the kurtosis value, the sharper the signal shape, and the smaller the value, the flatter the shape [19]. When calculating the kurtosis of a motion signal, 3 is usually subtracted so that the kurtosis of a standard normal waveform is 0, giving $K = \frac{1}{N\sigma^4}\sum_{i=1}^{N}(x_i-\bar{x})^4 - 3$. Skewness measures the asymmetry of a signal, that is, its deviation from the central axis of a normal signal, $S = \frac{1}{N\sigma^3}\sum_{i=1}^{N}(x_i-\bar{x})^3$, where $S < 0$ means the waveform leans to the left and $S > 0$ means it leans to the right. The key for a decision tree to solve the classification problem is to find, from the data set, an optimal splitting feature and the corresponding splitting value, and to decompose the data set into two subsets. Information entropy, information gain, conditional entropy, and the information gain ratio are commonly used in decision trees as splitting criteria [20]. For example, the ID3 algorithm selects the splitting feature by information gain, the C4.5 algorithm uses the information gain ratio, and the CART tree uses the Gini index. Information entropy indicates how homogeneous a sample subset is: the smaller its value, the more uniform the classes in the set and the more ideal the split [21].
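As a sketch of the time- and frequency-domain features listed above, the snippet below computes them for one window of tri-axial accelerometer data with NumPy and SciPy. The feature set (mean, variance, standard deviation, inter-axis correlation, energy, excess kurtosis, skewness, and an FFT-based dominant frequency) follows the definitions just described; the window length and sampling rate are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def window_features(acc, fs=50.0):
    """Time- and frequency-domain features for one window of tri-axial data.

    acc : array of shape (N, 3), accelerometer samples (x, y, z)
    fs  : sampling rate in Hz (assumed value)
    """
    feats = {}
    for i, axis in enumerate("xyz"):
        s = acc[:, i]
        feats[f"mean_{axis}"] = s.mean()
        feats[f"var_{axis}"] = s.var()
        feats[f"std_{axis}"] = s.std()
        feats[f"energy_{axis}"] = np.sum(s ** 2) / len(s)
        # fisher=True already subtracts 3, matching the convention in the text
        feats[f"kurtosis_{axis}"] = stats.kurtosis(s, fisher=True)
        feats[f"skewness_{axis}"] = stats.skew(s)
        # dominant frequency from the FFT magnitude (ignoring the DC bin)
        spec = np.abs(np.fft.rfft(s - s.mean()))
        freqs = np.fft.rfftfreq(len(s), d=1.0 / fs)
        feats[f"domfreq_{axis}"] = freqs[1:][np.argmax(spec[1:])]
    # correlation coefficients between axis pairs
    for (i, a), (j, b) in [((0, "x"), (1, "y")), ((0, "x"), (2, "z")), ((1, "y"), (2, "z"))]:
        feats[f"corr_{a}{b}"] = np.corrcoef(acc[:, i], acc[:, j])[0, 1]
    return feats

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_window = rng.normal(size=(128, 3))   # stand-in for real sensor data
    print(window_features(fake_window)["corr_xy"])
```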
Information entropy is defined as $\mathrm{Ent}(D) = -\sum_{n} p_n \log_2 p_n$, where $p_n$ is the proportion of samples in $D$ belonging to class $n$. Assuming that the discrete attribute $x$ has $K$ possible values $\{x_1, x_2, \dots, x_K\}$, and assigning the weight $|D^k|/|D|$ to each branch node according to the number of samples it receives, the information gain can be calculated as $\mathrm{Gain}(D, x) = \mathrm{Ent}(D) - \sum_{k=1}^{K} \frac{|D^k|}{|D|}\,\mathrm{Ent}(D^k)$. By this definition, attributes with many distinct values have an inherent advantage, which can make the resulting splits worse. The C4.5 decision tree algorithm is different: it uses the gain ratio as the splitting index to reduce this bias, defined as $\mathrm{GainRatio}(D, x) = \mathrm{Gain}(D, x)/\mathrm{IV}(x)$, where $\mathrm{IV}(x) = -\sum_{k=1}^{K} \frac{|D^k|}{|D|}\log_2\frac{|D^k|}{|D|}$ is called the "intrinsic value" of the attribute. The CART decision tree splits by the Gini index, $\mathrm{Gini}(D) = 1 - \sum_{n} p_n^2$, which likewise measures how homogeneous the data set is (a small computational sketch of these splitting criteria is given below). Because of how samples are collected, some samples will have missing values on some attributes; if only the samples with values on these attributes are used for splitting, sample information is wasted. Given a training set $D$ and attribute $x$, let $\tilde{D}$ denote the subset of samples in $D$ that have no missing values on attribute $x$; obviously, only $\tilde{D}$ can be used to judge the quality of the attribute. Suppose $x$ has $K$ possible values $\{x_1, x_2, \dots, x_K\}$, let $\tilde{D}^k$ denote the subset of samples in $\tilde{D}$ whose attribute $x$ takes the value $x_k$, and let $\tilde{D}_n$ denote the subset of samples in $\tilde{D}$ that belong to the $n$th class ($n = 1, 2, \dots$); then $\tilde{D} = \bigcup_{n} \tilde{D}_n$ and $\tilde{D} = \bigcup_{k=1}^{K} \tilde{D}^k$. Suppose each sample $a$ is assigned a weight $w_a$; on this basis, a generalized information gain can be defined in which the weighted fraction of non-missing samples multiplies the gain computed on $\tilde{D}$. Intuitively, this splitting method sends the same sample into different child nodes with different probabilities, and this is the solution adopted by the C4.5 decision tree algorithm. C4.5 is a family of algorithms used in machine learning and data mining for classification problems. The goal of the C4.5 decision tree algorithm is to learn a mapping from attribute values to categories, and this mapping can then be used to classify new instances with unknown categories. Badminton. Badminton is a convenient sport of suitable intensity. It requires the participant to mobilize all parts of the body, and it is characterized by fast movement, physical agility, varied tactics, strong skill, and fun, which makes it very easy for people to develop a strong interest in this sport [22]. Relevant studies have shown that participating in badminton greatly improves people's physical fitness. This can be summarized as follows: (1) it exercises human agility and improves the exchange function of the respiratory system, the pumping ability of the cardiovascular system, and the body's energy conversion ability; (2) it builds self-confidence through sport, enhances social communication skills, and develops willpower and the ability to endure hardship and compete. It can play an important role in improving the physical health of young people. Of course, excessive badminton can also damage human health, in particular the elbows, shoulders, knees, ankles, and Achilles tendon. However, moderate badminton is more beneficial than harmful to health. Figure 6 shows the impact of badminton on people's physical fitness.
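The following sketch computes the splitting criteria defined earlier in this section (entropy, information gain, gain ratio with the intrinsic value, and the Gini index) for a toy labelled data set; the tiny example data are made up purely for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """Information entropy of a list of class labels (log base 2)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """Gini index of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_by(values, labels):
    """Group labels by the attribute value of each sample."""
    groups = {}
    for v, y in zip(values, labels):
        groups.setdefault(v, []).append(y)
    return groups

def information_gain(values, labels):
    groups = split_by(values, labels)
    n = len(labels)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

def gain_ratio(values, labels):
    groups = split_by(values, labels)
    n = len(labels)
    iv = -sum(len(g) / n * math.log2(len(g) / n) for g in groups.values())
    return information_gain(values, labels) / iv if iv > 0 else 0.0

if __name__ == "__main__":
    # Toy data: one attribute value per sample and the action class label.
    attr = ["low", "low", "high", "high", "high", "low"]
    label = ["walk", "walk", "run", "run", "walk", "walk"]
    print(round(entropy(label), 3), round(information_gain(attr, label), 3))
    print(round(gain_ratio(attr, label), 3), round(gini(label), 3))
```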
Impact on Strength. In the sports world, strength is defined as "the ability of the human neuromuscular system to overcome or resist resistance at work." Badminton activities allow targeted strength training, which can strengthen the muscles, ligaments, and joints to a certain extent and effectively reduce the occurrence of sports injuries [23]. Relevant scholars report that athletes with a better command of standardized badminton movements are better able to exert rapid strength and strength endurance. Impact on the Quality of Exercise Speed. Speed quality mainly refers to the athletic ability of players during the activity. In badminton it can be summarized as the speed of judging the ball, the speed of the swing, and the speed of moving forward, backward, left, and right to hit the ball [24]. In badminton, speed includes both the speed with which the muscles and joints of the body coordinate to complete an action and the speed at which the rhythm of hitting the ball changes. Impact on Sports Coordination. Coordination refers to the coordinated work of the various muscles of the human body during activity so that movement remains smooth and fluent. The intensity of badminton matches is relatively high, and energy is supplied by a mixture of aerobic and anaerobic metabolism, with anaerobic metabolism dominant; good coordination can reduce energy consumption and sustain long-term activity [25]. Some researchers suggest that, in coaching, drills with small counts, more sets, fast swings, and short intervals improve the coordination of the exerciser's muscles and reduce energy loss, while multi-shuttle, multi-set drills of a certain duration develop the athlete's movement coordination. Impact on Body Flexibility. Flexibility refers to the range of motion of the joints and the elasticity and extensibility of the ligaments, muscles, tendons, skin, and other tissues that protect the joints. The shuttlecock flies fast: when hitting, the athlete must judge the approximate flight path the moment the opponent strikes and respond quickly with footwork, and at the moment the shuttle falls must kick off, turn, and swing to return it to the opponent's court. Agility determines whether the player can reach a high contact point and place the shuttle accurately. The badminton court is large and the racket is small; when hitting the shuttle on either side of the body or at the net, the legs must lunge and stride to support the arm. All of this requires good flexibility of the legs, and the characteristics of badminton demand quick starts and constant running around the court, with a large number of back arches, kicks, and arm extensions, all of which test the agility and flexibility of athletes. 2.3.5. Impact on Cardiopulmonary Function. Cardiopulmonary function refers to the physiological process in which the oxygen transport system pushes oxygenated hemoglobin to deliver oxygen and nutrients to the body through breathing. To a certain extent, it reflects the gas exchange capacity of the lungs and the pumping capacity of the heart. Long-term participation in sports can increase arterial elastic fibers, strengthen transport functions, facilitate gas exchange, and reduce respiratory diseases.
It can also increase the blood and oxygen supply to the myocardium and help prevent heart disease. These are the issues that the general population worries about most, and moderate-intensity aerobic exercise can effectively address them. Badminton can be carried out flexibly and in diverse ways, indoors or outdoors, adapted to local conditions, with moderate intensity and easily controlled activity time, so it can meet people's needs for sport: as long as there is a piece of open ground, people can enjoy the fun of exercise while strengthening the body and relaxing body and mind. According to statistics, during high-level badminton matches or training the heart rate reaches 160-180 beats per minute; even with moderate-intensity activity, the heart rate can still reach 140-150 beats per minute, and even a beginner who keeps playing for a period of time can reach the moderate-intensity standard. Persisting in badminton can improve cardiovascular flexibility, increase blood pumping ability, and improve cardiorespiratory endurance. In addition, many studies have shown that badminton is a mixed aerobic and anaerobic exercise; it emphasizes long-duration, moderate-intensity, aerobic activity, which is beneficial to the development of adolescents' cardiopulmonary function. Emotion Recognition Based on Smart Sensors and Experiments on the Impact of Badminton on Physical Health. Emotion Recognition Experiment Based on Smart Sensors. In this experiment, face images obtained from the smart sensor were processed. After processing, the image files were stored locally in text form, and each sample was classified into one of seven emotional categories: anger, disgust, fear, happiness, neutral, sadness, and surprise. Table 1 shows the image emotion recognition results from this experiment. Investigation and Experiment on the Participation of Student Groups in Badminton. In this experiment, a total of 750 students in a certain locality were surveyed and 750 questionnaires were distributed. The questionnaire covered the students' age, gender, grade, height, weight, favorite sports, their liking for badminton, how long they play badminton each week, and their reasons for playing. A total of 726 questionnaires were recovered, a recovery rate of 96.8%. When sorting the collected questionnaires, the results on the student groups' liking for badminton, their motivation for participating, and their weekly playing time were summarized. Table 2 shows the survey results on the students' liking for badminton, and Table 3 shows the statistics on students' motivation to participate in badminton obtained from this survey. 3.3. The Impact of Badminton on Physical Health. Appropriate exercise is good for people's physical and mental health. In this experiment, some people who participate in badminton and some who do not were selected. During the experiment, various bodily functions of the participants were measured and recorded through smart sensors and other technologies. Table 4 shows a comparison of the participants' basic physical indicators. The content and distribution of body fluids are closely related to the body's metabolism and functional capacity.
People with different proportions of muscle fiber types have different ratios of intracellular and extracellular fluid: the higher the proportion of fast-twitch fibers, the higher the ratio of intracellular fluid; conversely, the higher the proportion of slow-twitch fibers, the higher the proportion of extracellular fluid. In this experiment, data recording focused on the body fluid state of the participants. Table 5 shows the body fluid comparison obtained in this experiment. Emotion Recognition Based upon Smart Sensors and Experimental Analysis of the Impact of Badminton on Physical Health. 4.1. Experimental Analysis of Emotion Recognition Based on Smart Sensors. In the emotion recognition experiment based on smart sensors, the experimental data were tallied. To ensure the accuracy of the results, many experiments were carried out. Combining the experimental data in Table 1 with the other groups of experiments, a comparison chart of emotion recognition results based on smart sensors can be obtained, as shown in Figure 7. According to Figure 7, the emotion recognition technology based on smart sensing recognizes the emotion of human sadness very well, with the highest recognition rate, reaching 100%. The recognition rates for neutral and surprise are next highest and are also very good. Although recognition of the three emotions happiness, disgust, and anger is slightly lower, the recognition rate for these three emotions is still between 75.00% and 90.00%. Among them, happy emotions are easily misidentified as surprise, and angry emotions are easily misidentified as disgust. Investigation and Experimental Analysis of Student Groups Participating in Badminton. In the survey of student groups participating in badminton, the results were compiled from the questionnaires. Figure 8 shows the results of this survey. According to Figure 8, more than 40% of the surveyed students like to play badminton, and only a small number do not like the sport; among the respondents, girls like badminton more than boys. Experimental Analysis of the Effect of Badminton on Physical Health. In the experiment on the effects of badminton on physical health, the changes in the participants' body fluids were recorded in detail. From the data in Table 5, a comparison of body fluids between people who play badminton and those who do not can be obtained, as shown in Figure 9. According to Figure 9, the extracellular fluid of girls who play badminton is significantly higher than that of girls who do not, with at least a 9.6% difference between the two, which indicates that badminton can promote the growth of slow-twitch muscle fibers in female students. At the same time, the figure shows that badminton has a significant effect on the extracellular fluid, total water content, and edema index of female students. During the experiment, the body fat percentage, waist-to-hip ratio, and BMI were also recorded. Figure 10 shows the comparison between body fat ratio and total protein. According to Figure 10, badminton has a significant impact on the BMI of female college students.
It has a significant impact on boys' fat percentage, waist-to-hip ratio, and BMI index. The percentage of body fat of girls who play badminton is greater than 23%, which is significantly higher than that of girls who do not play badminton. The fat percentage, waist-to-hip ratio, and BMI of boys who played badminton were significantly higher than those of boys who did not play badminton. It is consistent with the research results of basic physical indicators and girth. Conclusion According to the experiments in this article, the following conclusions can be drawn: badminton can significantly increase people's fat percentage, waist-to-hip ratio, and BMI index. At the same time, it can also greatly improve the distribution of body fluids in the human body. In the process of playing badminton, based on smart sensor technology, it can effectively identify people's emotional changes. Whether it is happy, disgusting, or sad emotions, smart sensor technology can respond to changes in people's emotions in a timely manner. Although the recognition accuracy rates of emotion recognition technology based on smart sensors are different in different emotions, the recognition accuracy rates are all higher than 70%. Data Availability Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study. Conflicts of Interest The authors declare that they have no conflicts of interest.
7,210
2022-04-04T00:00:00.000
[ "Computer Science" ]
Determination of Montelukast Sodium and Bambuterol Hydrochloride in Tablets using RP HPLC An accurate, specific and precise assay-level gradient reverse-phase high-performance liquid chromatographic method was developed for the simultaneous determination of montelukast sodium and bambuterol hydrochloride in tablet dosage form. An Inertsil ODS C-18, 5 μm column of 250×4.6 mm I.D. was used in gradient mode, with mobile phase A containing 0.025 M sodium phosphate buffer:methanol (85:15) and mobile phase B containing acetonitrile:methanol (85:15) run at different time intervals. The flow rate was 1.5 ml/min and the effluent was monitored at 218 nm. The retention times of montelukast sodium and bambuterol hydrochloride were 21.2 min and 5.8 min, respectively. The linearity for both drugs was in the range of 0.25-0.75 mg/ml with correlation coefficients of 0.9999 and 0.9996 for montelukast sodium and bambuterol hydrochloride, respectively. Montelukast sodium (MTK), 1-[({(R)-m-[(E)-2-(7-chloro-2-quinolyl)vinyl]-α-[o-(1-hydroxy-1-methylethyl)phenethyl]benzyl}thio)methyl]cyclopropaneacetate sodium, is a leukotriene receptor antagonist used in the treatment of asthma [1][2][3]. It is not official in IP and BP. Various analytical methods have already been reported, such as liquid chromatography with fluorescence detection [4][5][6], stereoselective HPLC for MTK and its S-enantiomer [7], a simultaneous HPLC and derivative spectroscopic method with loratadine [8], and a stability-indicating HPLC method for MTK in tablets and human plasma [9]. Bambuterol hydrochloride (BBL), (RS)-5-(2-tert-butylamino-1-hydroxyethyl)-m-phenylene bis(dimethylcarbamate) hydrochloride, is a direct-acting sympathomimetic with predominantly β-adrenergic activity (β2-agonist) [10]. It is an ester prodrug of the β2-adrenergic agonist terbutaline [11]. Bambuterol hydrochloride is official in BP [12]. Different HPLC methods have been reported for the estimation of BBL in pharmaceutical dosage forms [13][14][15]. The drug has also been estimated by solid-state NMR spectroscopy [16]. Combination dosage forms of MTK and BBL are available on the market for the prophylaxis and treatment of chronic asthma and chronic bronchitis in pediatrics. The present study involves the development and validation of an RP-HPLC method for the estimation of MTK and BBL in a combination dosage form. A combination tablet formulation containing montelukast sodium equivalent to montelukast 10 mg and bambuterol hydrochloride 10 mg (Montair Plus, Okasa Pharma, Satara, India) was procured from a local pharmacy. HPLC-grade acetonitrile, methanol (Rankem, India) and HPLC-grade water (Milli-Q) were used in this method. NaH2PO4 was of analytical grade, obtained from Qualigens (India). Mobile phase A was prepared by mixing 850 ml of 0.025 M NaH2PO4 buffer with 150 ml of methanol, and mobile phase B was prepared by mixing 850 ml of acetonitrile with 150 ml of methanol. The solution was sonicated for 10 min and filtered using Whatman filter paper (No. 41). A Shimadzu HPLC LC-2010 AHK unit and an Agilent 1100 system with a variable-wavelength programmable UV/Vis detector and an Inertsil ODS C-18, 5 µm column of dimensions 250×4.6 mm were used. A Rheodyne injector with a 10 µl loop was used for sample injection.
Standard stock solution was prepared by weighing pure MTK and BBL (25 mg each) and dissolving them in 30 ml of diluent in a 50 ml volumetric flask. The solution was sonicated for 15 min, cooled, and made up to volume with diluent to obtain a final concentration of 500 µg/ml of each; the solution was then filtered. Calibration curves were prepared by taking appropriate aliquots of the standard MTK and BBL stock solution in 10 ml volumetric flasks and diluting to volume with diluent to obtain final concentrations of 250, 300, 400, 500, 600, 700 and 750 µg/ml of each. Standard solutions (n = 6) were injected through the 10 µl loop system, and chromatograms were obtained at a flow rate of 1.5 ml/min. A time programme was set for gradient elution: different compositions of the mobile phases (mobile phase A:mobile phase B, 85:15 at 0 min, 15:85 after 15 min, 15:85 after 22 min, 85:15 after 28 min and 85:15 after 33 min) were run to obtain satisfactory resolution. The effluent was monitored at 218 nm. The calibration curve was constructed by plotting average peak area against concentration, and regression equations were computed. Five intact tablets (0.9380 g) containing 10 mg each of MTK and BBL were weighed accurately and transferred to a 100 ml volumetric flask, sonicated for 15 min, and made up to volume with diluent (water:acetonitrile:methanol, 1:1:1) to obtain a final concentration of 500 µg/ml of each drug. The solution was filtered. Sample solutions were chromatographed (n = 6), and the concentrations of MTK and BBL in the tablet samples were found using the regression equations. The average retention time for MTK and BBL was found to be 21.2 min (% RSD, 0.28) and 5.8 min (% RSD, 0.15), respectively (fig. 1). The linearity of the assay was checked at 50-150% of the assay-level concentration of MTK and BBL. The calibration was linear in the range of 0.25-0.75 mg/ml for both drugs, with regression coefficients of 0.9999 and 0.9996, intercepts of −24564.35 and 69825.13, and slopes of 22166620.23 and 8402793.74 for MTK and BBL, respectively. The low % RSD values of the peak area, 0.32 (MTK) and 0.19 (BBL), indicate that the method is precise and accurate (Table 1). An accurate, specific and precise assay using a gradient reverse-phase high-performance liquid chromatographic procedure for the simultaneous determination of MTK and BBL in tablets was thus developed in the present investigation. Satisfactory separation was obtained with the gradient system, and the results obtained by the proposed method were close to the label claim for both drugs. The low % RSD values and the recovery experiments indicate that the method is accurate. Ropinirole hydrochloride, chemically 4-[2-(dipropylamino)ethyl]-1,3-dihydro-2H-indol-2-one monohydrochloride, is a non-ergot dopamine D2 agonist with actions similar to those of bromocriptine [1,2]. It is used as an antiparkinsonian agent [3,4]. Only HPLC and thermospray liquid chromatography methods have been reported for the estimation of ropinirole hydrochloride in formulations [3,4].
The objective of the study was to develop a simple, rapid, accurate and specific spectrophotometric method for the estimation of ropinirole hydrochloride using UV spectrophotometry. The method was developed using distilled water as the solvent with a minimum of processing steps. The λmax of ropinirole in distilled water was found to be 250 nm, and Beer's law was obeyed in the range of 5-35 µg/ml. The results of the analysis were validated statistically using recovery studies. This method for the estimation of ropinirole was thus found to be simple, precise and accurate. A Shimadzu 1700 UV spectrophotometer with 1 cm matched cuvettes was used for the estimation. A standard solution of the drug (100 µg/ml) was prepared in distilled water. Twenty tablets of ropinirole hydrochloride were weighed and powdered in a glass mortar. Powder equivalent to 10 mg of the drug was transferred to a 100 ml volumetric flask, dissolved in about 50 ml of distilled water and made up to volume with distilled water to obtain a concentration of 100 µg/ml. Aliquots of 0.5 to 3.5 ml of the standard solution were transferred to a series of calibrated 10 ml Corning test tubes, and the volume in each tube was adjusted to 10 ml with distilled water. The absorbance of the solutions was measured at 250 nm against a reagent blank and the calibration curve was constructed. Similarly, the absorbance of the sample solution was measured and the amount of ropinirole hydrochloride was determined by reference to the calibration curve. Recovery studies were carried out by adding a known quantity of the pure drug to the pre-analyzed formulation and following the proposed method; from the amount of drug found, the percentage recovery was calculated. The proposed method for the determination of ropinirole hydrochloride showed a molar absorptivity of 8.703×10^3 l/mol·cm and a Sandell's sensitivity of 0.0341. Key words: Montelukast sodium, bambuterol hydrochloride, HPLC, simultaneous, dosage form. (a) Values of % RSD of six estimations; RSD: relative standard deviation.
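Both methods above rely on the same two pieces of arithmetic: a least-squares calibration line (peak area or absorbance against concentration) used to back-calculate sample concentrations, and a percentage-recovery check after spiking a pre-analyzed sample with pure drug. The sketch below illustrates both steps with NumPy; all numbers are invented placeholders, not values from either paper.

```python
import numpy as np

# Calibration: instrument response (peak area or absorbance) vs concentration (µg/ml).
conc = np.array([250, 300, 400, 500, 600, 700, 750], dtype=float)
resp = np.array([5.5e6, 6.6e6, 8.9e6, 1.11e7, 1.33e7, 1.55e7, 1.66e7])

slope, intercept = np.polyfit(conc, resp, deg=1)
r = np.corrcoef(conc, resp)[0, 1]   # correlation coefficient of the calibration line

def conc_from_response(response):
    """Back-calculate concentration from the regression equation."""
    return (response - intercept) / slope

def percent_recovery(found_total, pre_analyzed, spiked):
    """Recovery (%) of a known spike added to a pre-analyzed sample."""
    return 100.0 * (found_total - pre_analyzed) / spiked

print(f"slope={slope:.1f}, intercept={intercept:.1f}, r={r:.4f}")
print(f"sample conc = {conc_from_response(1.10e7):.1f} µg/ml")
print(f"recovery = {percent_recovery(19.8, 10.0, 10.0):.1f} %")
```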
1,922.2
2009-01-01T00:00:00.000
[ "Chemistry" ]
Production and Characterization of Activated Carbon Fiber from Textile PAN Fiber ABSTRACT: This paper presents the preparation and characterization of carbon fiber felt and activated carbon fiber felt from textile polyacrylonitrile fiber. Carbon fibers are usually related to aircraft manufacturing or high mechanical purposes. Activated carbon fibers are known as excellent adsorbent materials. Despite all advantages, carbon fiber and activated carbon fiber are expensive materials because of their raw material cost. On the other hand, in this study, carbon fiber felt and activated carbon fiber felt were produced from textile polyacrylonitrile fiber, which is cheaper than their precursor, polyacrylonitrile fiber, and can be converted into carbon fiber felt and activated material with high micropore content and surface area. This research describes the transformation of textile polyacrylonitrile fiber in its oxidized form. After that, the oxidized material was transformed in felt and, in the sequence, converted into carbon fiber felt and activated carbon felt. The carbon fiber felt and activated carbon fiber felt were characterized by X-ray photoelectron spectroscopy, Raman spectroscopy and scanning electron microscope. N2 isotherms were performed to qualify the material obtained for further electrochemical applications. The main result was the conversion dynamics of textile polyacrilonitrile fiber into carbon fiber in felt form and activated carbon fiber in felt with high surface area and high micropores content. INTRODUCTION Activated Carbon Fiber Felt (ACFF) has special characteristics when compared with common activated carbons (granular or powder).It can be transformed into fabric, woven or yarn forms which gives them self-sustainable characteristics.In addition, ACFF shows well-defined pore structures on its surface providing a high and fast adsorption capacity for specific components, such as for specific components with small molecular dimensions as ions, metals or organic.One of the most important characteristics, which makes ACFF a very special adsorbent material, is its pore size distribution (Suzuki 1994).In the ACFF case, a large amount of micropores can be found directly connected on their surface leading to a faster and less energetic adsorption mechanism, especially for gases (Cuña et al. 2014).ACFF is widely used in many applications such as air purification and water treatment, chemical (adsorption and desorption for organic compounds and solvents), military area such as protection garment and masks (Marsh and Reinoso 2006).The density of ACFF is considerably lower than the regular carbon fiber, making it ideal for applications requiring low weight.Despite all the advantages of Carbon Fiber Felt (CFF) and ACFF application, their use has been limited due to their relatively high cost. The use of the textile type polyacrilonitrile (PAN) fiber to produce CFF and ACFF is a way to produce a material having an attractive surface area and some studies show this possibility (Carrott et al. 2001;Nabais et al. 2005;Marcuzzo et al. 2012).The use of standard textile fiber as raw material to make CFF and ACFF is a way to make this material economically attractive (Yoon et al. 
2004).Containing more than 95% of carbon, CFF and ACFF made from textile PAN have low electric resistance and they show potential application for catalytic process, especially when combined with metallic particles or in the form of films (Chung 2004;Pierozynski 2012).The CFF and ACFF can also be used as electrodes.The aim of this study is to present the production and characterization of CFF and ACFF made from a standard textile PAN. MATERiAl The commercial 200 ktex tow of 5.0 dtex textile PAN fibers was thermal oxidized in a laboratory scale oven set by, aiming the production of flame resistant fibers.The oxidation process was performed in 2 steps, the first at 200 °C and the second at 300 °C.The total time process was 50 min for each step.After that, the oxidized PAN produced was used as a raw material to produce felt with 200 g/m 2 and about 3 mm thickness.The oxidized PAN fiber felt was carbonized, resulting into CFF.After that, the CFF was submitted to activation process to obtain the ACFF (Marcuzzo et al. 2012). During the carbonization process, the oxidized PAN loses about 50% in mass and shrinks linearly about 10%.The shrinkage is an important parameter and must be controlled because an inadequate shrinkage result in poor mechanical characteristics and the fiber can not be handled.For this purpose, the oxidized PAN fiber felt sample was cut into pieces of about 0.7 × 0.25 m and placed in a special sample holder that can control the sample shrinkage in 2 dimensions. The set was then introduced in an electrical furnace.Both ends of the furnace tube were closed by flanges, which allowed the insertion and the purge of processing gas to provide an atmosphere condition necessary for the carbonization and activation.The carbonization was performed in argon atmosphere at a final temperature of 1,000 °C at a heating rate of 30 °C/min.The process time at maximum temperature was set in 20 min to complete the carbonization process.After finishing the carbonization process, the furnace was turned off and maintained in Ar atmosphere.This condition of inert atmosphere was maintained until the room temperature inside the furnace reactor was reached. The CFFs were carbonized at 900 °C, prior to activation, which was immediately accomplished at 1,000 °C, replacing the Ar gas by CO 2 gas, during 50 min.The activation time was previously defined by tests where the mechanical evaluation was done (Marcuzzo et al. 2012).The activation time of 50 min was fixed to guarantee the minimal mechanical condition to complete the carbonization process. METhODS The characterization was performed by gas adsorption aiming the measurements of surface area and pore size distribution function.The isotherm was performed by Beckman Coulter SA 3100 equipment.The Brunauer, Emmett, Teller (BET) method was applied to determine the total surface area and the pore size distribution was estimated by applying the Non-Local Density Function Theory (NLDFT) method over the adsorption isotherm, and the micropore volume was also estimated by the same method (Marsh and Reinoso 2006).The burn-off was estimated weighing the sample before and after the activation process. 
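The burn-off and linear shrinkage mentioned above are simple before/after ratios; the short sketch below shows the arithmetic with made-up sample masses and lengths (not values from the paper).

```python
def burn_off_percent(mass_before_g, mass_after_g):
    """Mass loss during activation, relative to the mass before activation."""
    return 100.0 * (mass_before_g - mass_after_g) / mass_before_g

def linear_shrinkage_percent(length_before_mm, length_after_mm):
    """Linear shrinkage during carbonization."""
    return 100.0 * (length_before_mm - length_after_mm) / length_before_mm

# Illustrative numbers only
print(burn_off_percent(10.0, 7.2))          # ~28% burn-off
print(linear_shrinkage_percent(700, 630))   # 10% linear shrinkage
```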
The structural characterization of the CFF and ACFF were carried out by micro-Raman scattering spectroscopy (Renishaw Microscopy -2000).Using the 514.5 nm line of an Ar ion laser taking the spectra at the range from ~ 800 to ~ 1,800 cm −1 .The optical microscope was used to excitation laser beam onto the sample and to collect the backscattered light.The diameter of the laser spot on the sample surface was 0.45 µm for the fully focused laser beam at 50X objective magnification.The instrument was calibrated using a pure diamond crystal.From the Raman spectroscopy, it was possible, quantitatively, to evaluate the change in the graphite organization through fitting by a combination of 4 Lorentzian bands (L, D 1 , D 2 , D 4 ) and a Gaussian band (D3) (Sadezky et al. 2005). The morphological aspects of the CFF and ACFF were evaluated by scanning electron microscopy (SEM) using a JEOL JSM microscope.All the X-ray photoelectron spectroscopy (XPS) measurements were carried out with a Kratos Axis Ultra XPS spectrometer using a monochromatic Al-K alpha (1,486.5 eV) X-ray radiation with power of 15 kV at 150 W. The emitted photoelectrons were detected using a hemispherical analyzer at 15 µm spatial resolution.The vacuum system was maintained at approximately 10 -9 Torr during all the experiments.Survey scans were collected from 0 to 1,100 eV with a passing energy equal to 160 eV with step size of 1 eV to identify the elements present on the surface.Using the X-ray diffraction, it was possible to obtain information related to the crystalline structure.To carry out these measures, we used a PANalytical diffractometer serie X'PertPRO. SuRfACE TExTuRE AnAlYSiS The aim of this topic is to show the characterization of CFF and ACFF.The carbonized fiber felt material, in theory, has about no significant surface area.The nitrogen isotherm technique was not able to show information about CFF surface area; consequently, it was considered smooth and without pores.On the other hand, for ACFF, the N 2 adsorption at 77 K isotherm showed significant information about surface texture.The isotherm is shown in Fig. 1 and is typically type I isotherm with no hysteresis and the saturation occurring at 0.2 relative working pressure and atmospheric pressure (P/P 0 ).than 1.0 nm, because of N 2 penetration limit.However, it can be clearly observed in this curve that the distribution in the region for a width less than 1 nm is ascendant in the direction of origin.This fact infers that pore volume and surface area of micropores may be bigger than those calculated by using these isotherms.The main ACFF characteristics are presented in Table 1. These results indicate that this activated material is predominantly populated with micropores.The pore size distribution curve is shown in Fig. 2, and it clearly shows that the maximum pore width presented is around 2 nm and the predominant pores are sized at around 1.2 nm.This technique does not give information about pores sized less x-RAY PhOTOElECTROn SPECTROSCOPY DATA XPS has proved a powerful method for the investigation of carbon surfaces (Kim and Park 2011;Ma et al. 
2013).XPS surface characterization method combines surface sensitivity with the ability to quantitatively obtain both elemental and chemical state information.The surface chemistry of carbons is determined by the distribution and the nature of the surface functional groups.Many different functional groups can be identified on carbon surface.The information of all elements can be obtained from the survey scan spectrum of XPS.Vision software was used to calculate atomic concentrations.From survey spectra, atomic concentrations were calculated using peak areas of elemental lines.and ACFF samples was revealed.The spectra indicate that the major peaks in the spectra were due to the C 1s and O 1s, and a smaller N 1s peak is also discernible. X-Ray Photoelectron Spectroscopy Survey Spectra The contribution of the XPS technique in this work is the precise identification of the elements present on the surface of CFF and ACFF samples.The elemental composition (in %) on the CFF and ACFF samples were summarized in Table 2. XPS survey spectra showed the presence of less than 4.0% of oxygen on the surface of CFF and ACFF.The ACFF displayed O/C ratio of 2.42% while CFF showed higher O/C ratio of 4.05%.The same behavior was observed in the relative percentages of N/C atoms decreasing from 2.79 to 1.75% after the activation process. ACFF samples.The spectra consist of 2 major peaks attributed to D 1 and G.The G peak corresponds to the 1st-order Raman.The band frequency G (~ 1,584 cm -1 ) is the E 2g 1st-order mode at the Brillouim zone center (gamma point).The G peak is due to the C-C bond stretching of all pairs of sp 2 atoms in both rings and chains.In particular, the G peak is the main Raman signature for sp 2 carbon.The spectra also exhibit additional first-order band characterized by a disorder that represents a zone-edge A 1g mode.Since the D 1 band is activated by defects, its intensity can be used to quantify disorders.Besides, it is used to characterize the microcrystallite size (L a ), which can be estimated from the ratio between D 1 and G bands by using the method of Tuinstra and Koenig (1970).Thomsen et al. (2004) were the first to explain the Raman spectra of the D mode using the concept of double resonances for a given laser energy and phonon branch.The origin and dispersion of the D band in carbon materials were also investigated by Matthews et al. (1999).To explain the physical basis, these authors discussed the 2-D electron and phonon dispersion curves for graphite and concluded that the electronic transition only occurs in the The elemental composition and N/C as well as O/C ratio show a chemical surface change after the activation process.Besides, a slight increase in the amount of carbon was observed.The contribution of the XPS technique in this work is the precise identification of the elements present on the surface of CFF and ACFF samples.As expected, carbon, oxygen and nitrogen have been found on the surface of both fibers.In Fig. 3, it was observed that the carbon peak intensity is higher for ACFF.On the other hand, it may be observed a decrease in the oxygen amount after the activation process.In other words, the activation process contributed to the removal of the oxygen layer on the CFF, increasing the amount of carbon on the surface of ACFF. 
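The O/C and N/C figures quoted above are simple ratios of the atomic concentrations obtained from the XPS survey spectra. The sketch below reproduces that arithmetic; the atomic-percentage inputs are illustrative stand-ins, not the exact values of Table 2.

```python
def atomic_ratios(at_pct):
    """O/C and N/C ratios (in %) from XPS atomic concentrations (at.%)."""
    return {
        "O/C (%)": 100.0 * at_pct["O"] / at_pct["C"],
        "N/C (%)": 100.0 * at_pct["N"] / at_pct["C"],
    }

# Stand-in atomic concentrations (at.%)
cff  = {"C": 93.5, "O": 3.8, "N": 2.6}
acff = {"C": 95.9, "O": 2.3, "N": 1.7}

print("CFF :", atomic_ratios(cff))
print("ACFF:", atomic_ratios(acff))
```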
RAMAn AnAlYSiS Raman spectroscopy is a technique widely used for analyzing carbon-based materials due to its sensitivity to different carbon structures, which produce distinctive Raman peaks for various forms of carbon.For example, the exact frequencies of the Raman bands of diamond, graphite, and amorphous forms of carbon depend on the crystallite size and stress present in the different carbon domains.The graphitic materials consist of a large number of peaks.However, the most intense broad bands are the G and D. In this paper, the D band is assigned as D 1 .Raman spectra have been analyzed following Sadezky et al. (2005).Figure 4 shows the Raman spectra of the CFF and vicinity of the K point in the Brillouin zone.In the literature, it has been accepted that the relationship between band intensities is proportional to the degree of organization of carbonaceous materials.However, this relationship has been used in different forms as (I D1 /I G ) A,H or (I D1 /(I G + I D1 )) A,H .The subscripts A and H indicate that the ratio is based on integrated intensities or peak heights, respectively.Beyssac et al. (2003) proposed to characterize the organization using R 2 = (I D1 /(I G + I D1 + I D2 )) A,H . In this study, we tested the 3 different forms used in the literature.Besides, Lobo et al. (2005) considered the intensity values obtained from the integration of the D and G bands instead of using the ratio of the peak heights.In other words, the effect of line broadening is included in their calculation.Special care should be taken to define a baseline to compare one spectrum to another.It is very important to point out that the decomposition is very sensitive to the choice of baseline.A linear baseline was chosen for both spectra to be the most appropriate.Raman spectra of the CFF and ACFF samples were submitted to deconvolution to separate G and D 1 peak in the first order by fitting as a sum of the Lorentzian-shaped G, D 1 , D 2 and D 4 bands, as well as the Gaussian-shaped D 3 band.The reason for the D 3 band being Gaussian-shaped is because it presents better fit with the experimental data (Sadezky et al. 2005) and this study is also in agreement with this result.The fitting accounts the totality of the signal.It is noteworthy that both spectra G and D bands overlap each other to some extent.The overlapping is very important when we consider the integrated intensities.The spectra were recorded at different positions on the surface and looked very similar.The proportions in which the band participates are reported in Fig. 4 and the parameters used to calculus in Table 3.There is an appreciable difference in the absolute value of the 3 different forms to characterize carbonaceous materials.However, when we analyze each one individually, the ratio based on integrated intensities or peak heights did not present significant difference.By comparing the 3 different forms presented (Fig. 5), it is possible to verify that I D1 /(I G + I D1 ) A,H and I D1 /(I G + I D1 + I D2 ) are more dispersive than I D1 /I G .Several authors (Tuinstra and Koenig 1970;Thomsen et al. 2004;Matthews et al. 1999;Beyssac et al. 
2003) discuss how the D and G band intensity ratio (I_D1/I_G) and the full widths at half maximum of the G-band (FWHM-G) and D-band (FWHM-D) decrease with increasing degree of crystallinity. Considering the integrated intensities of D and G, our results reveal that (I_D1/I_G) for CFF and ACFF are very close. However, according to Table 3, FWHM-D1 changes from 210.87 cm-1 for CFF to 167.01 cm-1 for ACFF, a decrease of approximately 21%, and FWHM-G decreased by 17% after activation. This decrease can be interpreted as some kind of transition related to the temperature during the activation process, and is probably related to the decrease in oxygen during activation observed in the XPS measurements. This indicates that defects were not introduced. Hu et al. (2015) reported different fitting procedures and claim that theirs are the most appropriate; there is considerable controversy regarding fitting procedures. In this study, we consider the most appropriate procedure for our results to be deconvolution of the spectra using multiple bands, as reported in Sadezky et al. (2005). After analyzing the 3 different forms, we conclude that the variations of the FWHM are not reflected in the (I_D1/I_G) form. On the other hand, the relations I_D1/(I_G + I_D1) A,H and I_D1/(I_G + I_D1 + I_D2) indicate that the structure of the CFF changed after the activation process, which may be related to the production of surface defects after the removal of surface radicals from the sample. X-RAY DIFFRACTION ANALYSIS From the X-ray diffraction (XRD) patterns of the CFF and ACFF, the lateral size (L_a) and stacking height (L_c) parameters were calculated using the Scherrer equation (Warren 1969), Eqs. 1 and 2, respectively. For the calculation of L_a and L_c we applied the Scherrer constant k_c equal to 0.9 for the (002) diffraction line and k_a equal to 1.77 for the (011) diffraction line, where λ (0.15 nm) is the wavelength of the X-rays used and β_011 and β_002 are the full widths at half maximum of the (002) and (011) diffraction lines. The results are summarized in Table 4. A small change in L_a and L_c can be noticed after the activation process, which can be explained by the formation of pores on the CFF surface during activation. Although the changes in L_a and L_c are not considerable, there is a noticeable change in the crystallite size (Table 4). SCANNING ELECTRON MICROSCOPY ANALYSES The SEM images of the CFF and ACFF samples are presented in Fig. 7. From the SEM images obtained at a magnification of 1,000X, it is possible to observe that the samples consist of fibers of about 10-15 µm in diameter. The SEM images also reveal smooth fibers without damage, grooves or holes. The fiber surface could be visualized in more detail in SEM images obtained at a magnification of 5,000X. Based on these images, a smooth, clean and damage-free surface is seen for the CFF sample; the ACFF sample, on the other hand, presents a surface containing dark zones and grooves. Pores of nanometre dimensions are not identified by SEM in Fig. 7; however, the presence of pores is confirmed by Fig. 2. This morphological difference related to surface damage may be associated with the activation process; in fact, this type of damage was expected.
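Two of the quantitative descriptors used in this part of the paper can be written as one-line formulas: the Raman intensity-ratio forms compared in Fig. 5 and the Scherrer crystallite sizes L_a and L_c computed from the XRD line widths. The sketch below implements both; the FWHM and intensity inputs are illustrative, while the Scherrer constants, wavelength, and peak positions follow the values stated in the text.

```python
import math

def raman_ratios(I_G, I_D1, I_D2):
    """The three intensity-ratio forms compared in the text."""
    return {
        "ID1/IG": I_D1 / I_G,
        "ID1/(IG+ID1)": I_D1 / (I_G + I_D1),
        "R2 = ID1/(IG+ID1+ID2)": I_D1 / (I_G + I_D1 + I_D2),
    }

def scherrer_size(k, wavelength_nm, fwhm_deg, two_theta_deg):
    """Crystallite size (nm) from the Scherrer equation L = k*lambda/(beta*cos(theta))."""
    beta = math.radians(fwhm_deg)            # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# Constants stated in the text: k_c = 0.9 for (002), k_a = 1.77 for (011), lambda = 0.15 nm.
# The FWHM values below are placeholders, not the measured diffractogram widths.
Lc = scherrer_size(k=0.9,  wavelength_nm=0.15, fwhm_deg=6.0, two_theta_deg=25.0)
La = scherrer_size(k=1.77, wavelength_nm=0.15, fwhm_deg=5.0, two_theta_deg=44.0)
print(raman_ratios(I_G=1.0, I_D1=2.3, I_D2=0.4))
print(f"Lc = {Lc:.2f} nm, La = {La:.2f} nm")
```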
CONCLUSION Activated carbon fibers produced from textile PAN fibers were successfully obtained. These studies make an important contribution to understanding the activation process on the CFF samples. The activation process plays an important role in producing fibers with a high surface area and pore distribution, which are decisive for increasing the exposed area; these characteristics were revealed by nitrogen adsorption isotherm analysis. XPS analysis showed a decrease in the quantity of heteroatoms present in the sample, causing defects on the surface; the defects were observed by Raman spectroscopy, and the presence of surface imperfections was revealed in the SEM images. The X-ray diffractograms showed that the CFF sample had larger crystallites than the ACFF one; this decrease in crystallite size may be associated with the surface tension resulting from the activation process. In addition, the release of heteroatoms is responsible for the creation of defects, causing damage to the surface. Figure 3. XPS survey spectra from which the elemental composition of the outermost surface of the CFF and ACFF samples was obtained. Figure 4. Raman spectra of the CFF and ACFF. Figure 5. Calculation of the different ratios involving the I_G, I_D1 and I_D2 bands. Figure 6. XRD spectra of the CFF and ACFF samples; the diffractogram patterns exhibit two distinctive broad reflections at 2θ angles of approximately 25° and 44°, associated with the (002) and (011) diffraction lines, respectively, and the peak broadening is indicative of an amorphous structure. Table 2. Elemental composition and N/C and O/C ratios for the CFF and ACFF samples. Table 3. Fitting parameters of the G, D1, D2, D3 and D4 bands for the CFF and ACFF samples. Table 4. Crystallite size of the CFF and ACFF.
4,912.6
2017-10-19T00:00:00.000
[ "Materials Science" ]
Review on the application of chemometrics for the standardization and authentication of Curcuma xanthorrhiza Temulawak ( Curcuma xanthorrhiza Roxb.) or Javanese turmeric is one of Indonesia's native medicinal plants and is widely distributed throughout Southeast Asia. Temulawak contains some bioactive compounds having biological activities. The secondary metabolites in temulawak vary widely, depending on the environmental conditions where it grows. Temulawak as a raw material for herbal medicine is often faked with other rhizomes so that the analytical method capable of detecting the adulteration practice of temulawak is needed. The standardization of temulawak is a difficult task because the chemical compounds in temulawak are rather complex. In order to overcome the large and complex data, chemometrics is needed. The purpose of this paper was to highlight the application of chemometrics used during the standardization of temulawak through fingerprinting profile studies. During the literature searching, several databases namely Scopus, Web of Science, Pubmed and Google scholar were explored to get the relevant articles using specific keywords related to the topic. Some chemometrics techniques in combination with several instrumental techniques like spectroscopic and chromatographic methods are successfully used for the characterization and fingerprinting profiling of temulawak. Based on the data synthesized, chemometrics is powerful technique for treating the complex data intended for standardization of temulawak. Introduction In order to increase the preventive and promotive efforts in the health sector, herbal medicine is one of the people's choices to protect and maintain their health. Temulawak (Curcuma xanthorrhiza) is a native Indonesian medicinal plant widely spread to Southeast Asia (Nihayati et al., 2013). Temulawak, known as Javanese turmeric, is a rhizome widely applied as raw material for herbal and food ingredients. The demand for the herbal medicine industry is relatively high with an increase of 5.4% per year. The main chemical contents of C. xanthorhiza are xanthorrhizol accounting of 1.48-1.63%, curcuminoids such as curcumin, demethoxycurcumin, bisdemethoxycurcumin accounting of 1-2%, phelandren, camphor, tumerol, cineol, borneol, flavonoids, and sesquiterpenes (Husni, 2016). C. xanthorrhiza also contains some essential oils with chemical compounds of β-elemene, zingiberene, γ-elemene, β-farnesene, α-curcumene, benzofuran, αcedrene, epicurzerenone, ar-curcumene, germacrone, aromadendrene, α longipene, trans-caryophilene and curcuphenol . These compounds are responsible for the yellow to orange color, as well as for the biological activities of temulawak ( Itokawa et al., 1985). Currently, the awareness and public concern in the standardization, authenticity, and quality of herbal medicines has increased significantly, therefore, analytical methods have been developed to perform these tasks (Rohman, Rawar, Sudevi et al., 2020). Herbal medicines have many complex chemical contents characterized by specific markers to differentiate plant species. Their biological activities are the cumulative effects of many chemical compounds (Jia et al., 2017). Some factors contribute to the biological activities such as time harvesting, seasons, plant age, therefore, some eISSN: 2550-2166 © 2022 The Authors. Published by Rynnye Lyan Resources MINI REVIEW efforts are needed to standardize the herbal raw materials in order to ensure the quality of the herbs (Gopi et al., 2019). 
Medicinal plants with complex chemical contents, of course, require a special method for quality controls through physico-chemical and molecular biology analyses (Ni et al., 2009). The process of identification and analysis of chemical constituents in medicinal plants can be performed by three approaches namely single component analysis through analysis of specific markers, fingerprinting analysis and metabolomic (Esteki et al., 2018) using some instrumental techniques including molecular spectroscopic and chromatography methods (Mazina et al., 2015). Temulawak is widely applied in herbal and traditional medicine products. Due to its high demand, temulawak is the potential to be substituted or adulterated with other species having a similar appearance such as Curcuma domestica (Muttaqin, 2018), therefore, it is very important to standardize temulawak to assure its quality (Windarsih et al., 2021). The standardization of herbal medicine using fingerprinting profiling and metabolomics resulted in large numbers of responses which make it difficult to handle them. Fortunately, the special statistical package known as chemometrics could resolve this problem. Almost fingerprinting profiling and metabolomic studies used chemometrics for special purposes including pattern recognition and multivariate calibration (Granato et al., 2018). Chemometrics is a combination of mathematical and statistical techniques to process chemical data (Rohman., 2017). Some reviews on the application of chemometrics in herbal standardization include Traditional Chinese Medicines (Razmovski-Naumovski et al., 2010;Bansal et al., 2014;. Therefore, the purpose of this paper is to highlight the application of chemometrics used during the standardization of temulawak. In authentication of herbal medicine, the identification of the geographical origin of a medicinal plant including C. xanthorrhiza is part of drug analysis and part of quality control in pharmaceutical analysis. Identifying the geographic origin of plant material is a difficult task to do chemically, thus, an application of chemometrics is required to perform these tasks (Xie et al., 2006). Methods During performing this narrative review, we followed some steps as suggested in several papers reporting the writing of the review articles (Green et al., 2006;Gasparyan et al., 2011;Gregory and Dennis, 2018). The databases used during searching works of literature needed for writing review articles were Web of Science, Scopus, PubMed and Google Scholar. The keywords used for information search are Curcuma xanthorrhiza + chemometric OR Curcuma xanthorrhiza + standardization OR Curcuma xanthorrhiza + geographic origin. Chemometrics According to the International Chemometrics Society (ICS), the definition of chemometrics is described as "the science of relating chemical measurements made on a chemical system to the property of interest (such as concentration) through the application of mathematical or statistical methods . Chemometrics is a branch of science, which relates the chemical analysis, mathematical or statistical methods (Gemperline., 2006). In chemometrics, some analytical purposes namely quantitative analysis using multivariate calibration, identification or classification using supervised or unsupervised pattern recognition are commonly applied in chemical sciences including authentication and standardization of herbal medicines (Brereton, 2003;Rohman et al., 2014). 
For example, in pattern recognition using chromatographic methods, the samples are grouped according to their measured responses (chromatograms), which constitute a specific characteristic of the analyzed sample (Beebe et al., 1998). The variables used during this task can be retention time, peak area, and peak height. The data are displayed in the form of a matrix consisting of rows and columns written numerically. Each row relates to one object and each column relates to a certain feature of the object or sample (Massart et al., 1997). Chemometric methods in data analysis are pervasive and important in decision-making and problem-solving processes. Chemical analysis deals with complex mixtures, compounds, and their properties, which are often very complicated to analyze. Therefore, chemometrics is suitable for the analysis of herbal medicines, which are typically complex in nature (Rohman, Rawar, Sudevi et al., 2020). Today, with the sophisticated development of statistical software and computers, chemometrics has become the main tool for processing such data (Bansal et al., 2014). Several chemometric techniques applied in the standardization and authentication of herbal medicine are data preprocessing such as normalization and derivatization, exploratory data analysis using Principal Component Analysis (PCA), unsupervised pattern recognition such as cluster analysis, supervised pattern recognition and class modelling such as discriminant analysis and Soft Independent Modelling of Class Analogy (SIMCA), and multivariate calibrations such as partial least square regression and principal component regression (Bansal et al., 2014; Rohman, Ghazali, Windarsih et al., 2020). In authentication studies, chemometrics assists in determining the origin of herbal medicine, detecting adulteration of high-quality components of herbal medicines with lower-quality ones, or identifying undeclared components in herbal products (Liu et al., 2020). Fingerprint profiles can be obtained from spectroscopic, chromatographic or electrophoretic data. The fingerprint profile must be able to display the similarities and differences in the analyzed samples (Cubero-Leon et al., 2014). The authentication of herbal medicine is a difficult task because many components of medicinal plants are unknown (Bauer, 1998). Fortunately, chromatographic and spectroscopic methods are able to provide very strong authentication through fingerprint profiles. By combining chemometrics with instrumental responses, the standardization of herbal medicine can be achieved with high precision and accuracy (Gad et al., 2013). Application of chemometrics in standardization of Curcuma xanthorrhiza Some chemometric techniques of pattern recognition and multivariate calibration combined with instrumental techniques of spectroscopic and chromatographic methods have been applied for the standardization of temulawak. The variables used for chemometric analysis with spectroscopic methods are absorbance values or ratios at specific wavelengths or wavenumbers, while the variables exploited with chromatographic techniques are retention time, peak area, peak height or their ratios. Table 1 compiles some chemometric methods combined with instrumental techniques for the identification and discrimination of temulawak, intended for standardization.
Principal component analysis Principal component analysis (PCA) is an exploratory data analysis commonly used for the classification of samples (Irnawati et al., 2021), and two outputs in PCA commonly reported during classification or clustering are principal components (PCs) and loading plots. PCs are useful to identify any groupings in the data set. In addition, loading plots are shown from coefficients by which the original variables are multiplied in order to get PCs values. Loading plots describe the variables responsible for the separation and or classification of objects (Kim et al., 2010). The identification and discrimination of similar plants, such as turmeric (C. longa), Javanese turmeric (C. xanthorrhiza) and Bangle (Zingiber cassumunar), need to be done to ensure the quality of raw materials used (Rohaeti et al., 2015). Fourier transform Infrared (FTIR) spectroscopy combined with chemometrics can be the method of choice because of the analysis due to its nature as a fingerprint. Visual discrimination of the three species is indicated by the marker bands of FTIR spectra of each species. PCA followed by Canonic Variate Analysis (CVA) using FTIR spectra could classify these plants (Rohaeti et al., 2015). PCA and discriminant analysis in combination with UV spectroscopy have been applied for the differentiation of four Curcuma species, namely Curcuma xanthorrhiza, C. longa, C. aeruginosa and C. mangga. These four rhizomes are widely used in herbal medicine and dietary supplements. The absorbance values of UV-Vis spectra at the wavelength of 210-500 nm were used as variables during differentiation. UV-Vis spectra were acquired in the interval of 200-800 nm and the standard normal variate was used for preprocessing the spectral data. Using two PCs (PC1 and PC2), PCA could differentiate Curcumas in which PC1 and PC2 were accounting for 79.30% and 12.0%, respectively. In addition, DA using discriminant function 1 (DF1) and discriminant function 2 (DF2) could discriminate four species with an accuracy rate of the correctness of 95.5% . PCA is also applied for the differentiation of C. xanthorrhiza, C. aeruginosa, and C. longa using two-dimensional NMR spectra. PC1 and PC2 were accounting for 63.1% and 28.1%, respectively. Based on the identification of metabolites, curcumin and xanthorrhizol are responsible for this differentiation (Wahyuni et al., 2019). Multivariate analysis of PCA using data set obtained from 1 H-NMR spectra clearly discriminated pure and adulterated C. xanthorrhiza with C. aeruginosa as shown in Figure 1. PCA using two PCs showed a clear separation between pure C. xanthorrhiza, pure C. aeruginosa, and adulterated C. xanthorrhiza using several concentration levels of C. aeruginosa. Several original variables used for making the PCA model were reduced to be principal components, which explains the original variables. PC1 and PC2 described 73% of PC1 and 24% of PC2. Research has been carried out on the existence of adulteration of C. xanthorrhiza with C. domestica, based on the fingerprint profiling by Thin Layer Chromatography (TLC). Fingerprinting profiles of C. xanthorrhiza were obtained from C. xanthorrhiza from Cianjur, Semarang, and East Nusa Tenggara, while the fingerprint profiles of C. domestica were obtained from Cianjur regions. Furthermore, the analysis was carried out by PCA. The results of PCA analysis showed that the (Muttaqin et al., 2018). 
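The PCA workflow described above (spectral absorbances as variables, scores for grouping, loadings for identifying discriminating variables, and the percentage of variance captured by PC1 and PC2) can be sketched in a few lines. The snippet below is an illustrative outline only, assuming scikit-learn and a hypothetical matrix of FTIR/UV-Vis absorbances with one row per rhizome sample; the file names and shapes are not taken from any of the cited studies.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical data: rows = rhizome samples, columns = absorbances at
# successive wavenumbers (e.g. an FTIR fingerprint region).
X = np.loadtxt("curcuma_spectra.csv", delimiter=",")   # assumed file
labels = np.loadtxt("species_labels.csv", dtype=str)   # assumed file

# Autoscale: mean-centre and scale each variable (one common preprocessing choice).
X_scaled = StandardScaler().fit_transform(X)

pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)        # PC1/PC2 scores used for grouping
explained = pca.explained_variance_ratio_   # e.g. ~0.79 and ~0.12 in the UV-Vis study
loadings = pca.components_                  # variables driving the separation

for species in np.unique(labels):
    mask = labels == species
    print(species, scores[mask].mean(axis=0))  # cluster centres in PC space
```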
PCA was also successfully used for the classification of other Curcuma species intended for hindering the adulteration practice (Windarsih et al., 2019). Discriminant analysis Discriminant analysis (DA) is one of the supervised pattern recognitions commonly used for the discrimination or classification of objects/samples into several groups (Rohman and Putri, 2019). DA using algorithm of orthogonal projections to latent structures (OPLS) has been applied for the authentication of C. xanthorrhiza with C. aeruginosa using variables of 1 H-NMR spectra. OPLS-DA was successfully applied for the classification of pure and adulterated C. xanthorrhiza with higher R2X (0.965), R2Y (0.958), and Q2(cum) (0.93) as shown in Figure 1 (Rohman, Wijayanti, Windarsih et al., 2020). Authentication and discrimination studies have also been conducted to differentiate the fingerprints of C. xanthorrhiza and C. longa based on curcuminoid levels using HPLC assisted with chemometrics of DA. This combination can separate C. xanthorrhiza and C. longa species . In addition, the combination of 1H-NMR and chemometric methods are promising for the authentication of medicinal plants (Windarsih et al., 2021). 1 H-NMR spectroscopy and chemometrics have been applied to authenticate C. xanthorrhiza adulterated with Zingiber cassumunar. Partial least squarediscriminant analysis (PLS-DA) using 7 main components (PCs) was successfully classified the original sample and the adulterated C. xanthorrhiza with values of R2X (0.988), R2Y (0.998), and Q2 (0.993). The chemometrics of PCA and PLS-DA allows for the discrimination of pure C. xanthorrhiza and C. xanthorrhiza adulterated with Z. cassumunar (Wijayanti et al., 2019). Multivariate calibrations Multivariate calibration is one of the quantitative tools for prediction of analyte(s) of interest in herbal medicine like curcumin in Temulawak using several variables. Partial least square regression (PLSR) and principle component regression (PCR) are the most applied techniques (Keithley et al., 2009). PLSR using absorbance values of FTIR spectra at 4000-650 cm -1 has been used to predict the levels of curcumin (C), demetoxycurcumin (DM) and total curcuminoid (TC) in C. xanthorrhiza intended for the standardization. The actual contents of curcuminoid in the ethanolic extract of C. xanthorrhiza were previously determined using HPLC with a PDA detector. With PLSR, the R 2 values of the calibration model for CUR, DMCUR and TCUR were > 0.99. The levels of curcuminoid determined using FTIR spectroscopy-PLSR were not statistically significant compared with the HPLC method based on an independent sample t-test (P > 0.05) (Lestari et al., 2017). The combination of FTIR spectra-PLSR was also successfully applied for the prediction of the levels of curcumin in curcuma in C. longa and C. xanthorhiza. The actual levels of curcumin were determined using HPLC. PLSR using absorbance values at wavenumbers of 2000-950 cm -1 was suitable for the prediction of curcumin. The R 2 values for the correlation between actual values and FTIR predicted values of curcumin were 0.96 and 0.99 with RMSEC values of 0.299 and 0.089 in C. longa and C. xanthorriza, respectively. High R 2 values and low RMSEC values indicated high accuracy and precision of the analytical method (Rohman et al., 2015). 
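As a rough illustration of the PLSR-based quantification described above (FTIR absorbances as predictors, HPLC-determined curcuminoid contents as reference values, with R² and RMSEC as figures of merit), a minimal sketch using scikit-learn is given below. The file names, number of latent variables and data shapes are assumptions made for illustration, not details from the cited studies.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical calibration set: FTIR absorbances (predictors) and
# HPLC reference values of curcumin (response), one row per extract.
X = np.loadtxt("ftir_absorbances.csv", delimiter=",")
y = np.loadtxt("curcumin_hplc.csv", delimiter=",")

pls = PLSRegression(n_components=7)   # the number of latent variables is an assumed choice
pls.fit(X, y)

y_pred = pls.predict(X).ravel()
r2 = r2_score(y, y_pred)                         # approaches ~0.99 for a good calibration
rmsec = np.sqrt(mean_squared_error(y, y_pred))   # root mean square error of calibration
print(f"R2 = {r2:.3f}, RMSEC = {rmsec:.3f}")
```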
Conclusion Several chemometrics techniques either pattern recognition (supervised such as discriminant analysis and unsupervised like principal component analysis) or multivariate calibrations like partial least square using variables generated from several instrumental techniques like spectroscopic and chromatographic methods are successfully used for characterization and fingerprinting profiling of herbal medicines including Temulawak intended for the authentication, quality control and standardization of herbal medicine. Based on the data synthesized, chemometrics is a powerful and meaningful technique for treating the complex data intended for the standardization and authentication of herbal medicines such as Javanese Turmeric.
3,551.8
2022-01-31T00:00:00.000
[ "Chemistry", "Medicine" ]
Green-Aware Virtual Machine Migration Strategy in Sustainable Cloud Computing Environments As cloud computing develops rapidly, the energy consumption of large-scale datacenters becomes unneglectable, and thus renewable energy is considered as an extra supply for building sustainable cloud infrastructures. In this chapter, we present a green-aware virtual machine (VM) migration strategy for datacenters powered by sustainable energy sources, considering the power consumption of both IT functional devices and cooling devices. We define an overall optimization problem from an energy-aware point of view and solve it using stochastic search approaches. The purpose is to utilize green energy sufficiently while guaranteeing the performance of the applications hosted by the datacenter. Evaluation experiments are conducted with realistic workload traces and solar energy generation data in order to validate the feasibility. Results show that the green energy utilization increases remarkably, and higher overall revenues can be achieved. Introduction Large-scale datacenters, as the key infrastructure of cloud environments, usually own massive computing and storage resources in order to provide online services for billions of customers simultaneously. This leads to significant energy consumption, and thus a high carbon footprint. Recent reports estimate that the emissions attributable to information and computing technologies grew from 2% in 2010 [1] to 8% in 2016 and will grow to 13% by 2027 [2]. Hence, considering the heavy emissions and their increasing environmental impact, renewable energy sources are increasingly introduced as a supplementary power supply for datacenters. However, the intermittency and the instability of renewable energy sources make it difficult to utilize them efficiently. Fortunately, we know that datacenter workloads are usually variable, which gives us opportunities to find ways to manage the resources and power together inside the datacenter so as to utilize renewable energy sources more efficiently. On the other hand, to provide guaranteed services for third-party applications, the datacenter is responsible for keeping the quality of service (QoS) at a certain level, subject to the service level agreements (SLAs) [3]. In modern datacenters, applications are often deployed in virtual machines (VMs). Thanks to virtualization mechanisms, VMs are flexible and easy to migrate across different servers in the datacenter. In this chapter, we conduct research on energy-aware virtual machine migration methods for power and resource management in hybrid energy-powered datacenters. In particular, we also employ thermal-aware ideas when designing the VM migration approaches. The holistic framework is described, the model is established, and heuristic and stochastic strategies are presented in detail. Experimental results show the effectiveness and feasibility of the proposed strategies. We hope that this chapter will be helpful for researchers studying the features of VM workloads in the datacenter and finding ways to utilize more green energy in place of traditional brown energy. The remainder of this chapter is organized as follows. Section 2 introduces relevant prior work in the field of energy-aware and thermal-aware resource and power management. Section 3 presents the system architecture we discuss in this chapter. Section 4 formulates the optimization problem corresponding to the issue we need to address. Section 5 describes the methods and strategies we designed to solve the problem.
Section 6 illustrates the experimental results by comparing three different strategies, and finally conclusion is given out in Section 7, in which we also discuss about some of the possible future work. Literature review This section reviews the literature in the area of energy-aware resource management, thermal-aware power management, and green energy utilization in datacenters. In the recent decade, many researchers started to focus on power-aware management methods to manage workload fluctuation and search trade-off between performance and power consumption. Sharma et al. [4] have developed adaptive algorithms using a feedback loop that regulates CPU frequency and voltage levels in order to minimize the power consumption. Tanelli et al. [5] controlled CPUs by dynamic voltage scaling techniques in Web servers, aiming at decreasing their power consumption. Berl et al. [6] reviewed the current best practice and progress of the energy efficient technology and summarized the remaining key challenges in the future. Urgaonkar et al. [7] employed queuing theory to make decision aiming at optimizing the application throughput and minimizing the overall energy costs. The above work attempts to reduce the power consumption while guaranteeing the system performance. On the basis of such ideas, we incorporate the usage of renewable energy into the optimization model, which might support performance improvement when the green energy is sufficient enough. Besides, thermal-aware resource management approaches also attracted some interest of researchers recently. For example, Mukherjee et al. [8] developed two kinds of temperatureaware algorithms to minimize the maximum temperature in order to avoid hot spots. Tang et al. [9] proposed XInt which can schedule tasks to minimize the inlet temperatures and also to reduce the cooling energy costs. Pakbaznia et al. [10] combined chassis consolidation and efficient cooling together to save the power consumption while keeping the maximum temperature under a controlled level. Wang et al. [11] designed two kinds of thermal-aware algorithms aiming at lowering the temperatures and minimizing the cooling system power consumption. Islam et al. [12] proposed DREAM which can manage the resources to control allocate capacity to servers and distribute load considering temperature situations. Similarly, we consider the impact of temperature on two kinds of cooling devices in this chapter, which directly decide the cooling power consumption. As renewable energy becomes more widely used in datacenters, corresponding research starts to put insights into green energy-oriented approaches for managing the resources and power. Deng et al. [13] treated carbon-heavy energy as a primary cost and designed some mechanisms to allocate resources on demand. Goiri et al. designed GreenSlot [14] aiming at scheduling batch workloads and GreenHadoop [15] which could deal with MapReduce-based tasks. Both of them tried to efficiently utilize green energy to improve the application performance. Li et al. [16] proposed iSwitch, which can switch the power supply between wind power and utility grid according to the renewable power variation. Arlitt et al. [17] defined the "Net-Zero energy" datacenter, which needs on-site renewable generators to offset the usage of power coming from the electricity grid. Deng et al. 
also conducted research on the Datacenter Power Supply System (DPSS) and proposed an efficient online control algorithm, SmartDPSS [18], which makes online decisions to fully leverage the available renewable energy and the varying electricity prices in the grid markets for minimum operational cost. Zhenhua et al. [19] presented a holistic approach that integrates renewable energy supply, dynamic pricing, cooling supply, and workload planning to improve the overall sustainability of the datacenter. Building upon the basic concepts of these works, we explore the possibility of efficient VM migration management toward sufficiently utilizing the renewable energy supply, incorporating the flexibility of transactional workloads, the cooling power consumption, and the amount of available green energy. Datacenter architecture This section describes the datacenter architecture, including the hybrid power supply and virtualization infrastructure. Figure 1 shows the system architecture of the sustainable datacenter powered by both renewable energy and traditional energy supplies. The grid utility and renewable energy are combined together by the automatic transfer switch (ATS) in order to provide the power supply for the datacenter. Both functional devices and cooling devices consume power, as shown in the bottom part of the figure. Figure 2 illustrates the infrastructure of the virtualized cloud datacenter. As shown, the underlying infrastructure of the datacenter is comprised of many physical machines (PMs), which are placed onto groups of racks. The utility grid bus and the renewable energy bus are connected together to supply power for the datacenter devices. Renewable sources will be used first, and grid power will be leveraged as the supplementary energy supply. As mentioned before, virtual machines (VMs) run on the underlying infrastructure and are used to host multiple applications, as shown in the virtualization layer in Figure 2. Different VMs on the same PM might serve different applications. In this chapter, we mainly discuss transactional applications, which demand mostly CPU resources rather than other types of resources. Problem definition This section defines the necessary variables and the problem we need to solve throughout this chapter. Model of computing and service units In the target problem, there are N heterogeneous physical machines in the virtualized cloud environment, and the available CPU resource capacity of PM i is denoted as Φ i . The entire environment hosts M different applications, deployed on M different VMs. Denote the jth VM as VM j . Then, denote x j as the index of the PM which is hosting VM j . Denote φ j as the CPU capacity allocated to VM j and d j as the demanded CPU capacity of application j at the current time slot. Power consumption model Following the mechanisms of dynamic voltage and frequency scaling (DVFS) techniques, we use a simple power model which assumes that the power consumption of the other components in a PM correlates well with the CPU [20]. Denote p i as the power consumption of PM i in each time slot and p i MAX as the maximum power consumption of PM i (100% occupied by workloads). Then the PM power consumption can be computed by the linear model p i = p i MAX [c + (1 − c) θ i ], where c is a constant representing the ratio of the idle-state power consumption of a PM to the fully-utilized-state power consumption [21] and θ i is the current CPU utilization of PM i.
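The linear utilization-based power model above can be expressed directly in code. The sketch below is only illustrative: it uses the chapter's reported settings (p_max = 259 W and c = 0.66, given later in the parameter settings) as defaults and a hypothetical list of PM utilizations; the function and variable names are ours.

```python
def pm_power(utilization: float, p_max: float = 259.0, c: float = 0.66) -> float:
    """Power (W) of one physical machine: idle fraction c plus a
    utilization-proportional part, p = p_max * (c + (1 - c) * u)."""
    return p_max * (c + (1.0 - c) * utilization)

def it_power(utilizations: list[float]) -> float:
    """Total power of the active PMs (sleeping PMs contribute ~0 here)."""
    return sum(pm_power(u) for u in utilizations)

# Example: 3 active PMs at 20%, 55% and 90% CPU utilization.
print(it_power([0.20, 0.55, 0.90]))
```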
Besides, we also consider the power cost spent on cooling devices when establishing the power model, which is usually much related to temperature. The cooling system we discuss here consists of both the traditional computer room air conditioning (CRAC) unit and the air economizer. According to relevant studies [10], the coefficient of performance (CoP) is often used to indicate the efficiency of a cooling system, which can be computed by where k is a factor reflecting the difference between outside air and target temperature, T sup is the target supply temperature, and T out is the outside temperature. As it can be observed, Eq. (2) contains two parts, corresponding to the situation whether the CRAC or the air economizer will be used for cooling, respectively. Hence, the total power consumed by both functional devices and cooling devices can be calculated by Furthermore, considering the impact of environmental temperature inside the datacenter, we also tried to exploit thermal-aware VM migration strategies. The power consumption of the servers will make the surrounding environmental temperature increase, due to the dissipated heat. Prior studies [11] provided ways to model the vector of inlet temperatures T in as where D is the heat transferring matrix, p is the power consumption vector, and T s is the supplied air temperature vector. The thermal-aware strategy tries to reduce the cooling power by balancing the temperature over the servers. Accordingly, the workload on different PMs should also be maintained balanced. Denote T safe as the safe outlet temperature and T server as the outlet temperature of the hottest server. In order to lower the server temperature to the safe level, the output temperature of cooling devices should be adjusted by T adj = T safe − T server . Then, the output temperature after adjusted will be T new = T sup + T adj . Hereafter, the CoP value can be determined by T new and T out [22]. Modeling overhead and delay To reduce the power consumption of the PM, it can be switched to sleeping state which can help save energy as much as possible. In addition, the operational costs also include the VM migration costs, since migrating VMs dynamically will definitely lead to some overhead. Denote a i as the flag recording whether PM i is active or sleeping. Denote c A as the cost for activating a PM from sleeping state and c MIG as the cost for migrating a VM from one PM to another. Besides, the time delay is also considered and integrated into the experiments in Section 6 for waking up a PM and migrating a VM. Optimization problem formulation From the resource providers' point of view, the objective should be maximizing the total revenues by meeting the requirements of the hosted applications while minimizing the consumed power and other costs. Usually, the revenues from hosting the applications are related to service quality and the predefined level in the SLA. Assume here that the service quality is reflected by the CPU capacity scheduled to the target application. Denote d j as the demanded CPU capacity of APP j and φ j as the CPU capacity amount scheduled to APP j. Denote Ω j (•) as the profit model for APP j, which gives the actual revenue by serving APP j at a certain quality level. Since the dynamic action decisions are made during constant time periods, denote τ as the length of one time slot. Denote t as the current time slot, and then in time slot t+1, the goal is to maximize the net revenue subject to various constraints. 
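To make the cooling part of the model concrete, the sketch below treats the coefficient of performance (CoP) as an input and computes the cooling power as the IT power divided by CoP, together with the supply-temperature adjustment T_adj = T_safe − T_server described above. The chapter's exact CoP expression is not reproduced here, so passing CoP as a parameter (or plugging in any chosen CoP model) and the P_IT/CoP split are assumptions made for illustration.

```python
def cooling_power(it_power_w: float, cop: float) -> float:
    """Cooling power needed to remove the heat produced by the IT load.
    A common approximation: P_cool = P_IT / CoP, so the total draw is
    P_IT * (1 + 1/CoP)."""
    return it_power_w / cop

def adjusted_supply_temperature(t_sup: float, t_safe: float, t_server_max: float) -> float:
    """Shift the cooling set point so the hottest server outlet stays safe:
    T_adj = T_safe - T_server, T_new = T_sup + T_adj."""
    return t_sup + (t_safe - t_server_max)

# Example: 6 kW of IT load, an assumed CoP of 3.5, and a 2 degC safety shortfall.
p_it = 6000.0
print("cooling power:", cooling_power(p_it, cop=3.5))
print("total power:", p_it + cooling_power(p_it, cop=3.5))
print("new supply temperature:", adjusted_supply_temperature(18.0, 30.0, 32.0))
```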
Denote x j as the index of PM currently hosting VM j, and then the VM placement vector X can be denoted as Hence, the optimizing objective of the defined problem can be expressed as where the first term is the total revenue summarized over all of the hosted applications, the second term represents the power consumption costs of the entire datacenter, the third term is the PM wake-up cost, and the last term represents the VM migration cost. With respect to the objective defined above, the constraints could be expressed as 0 ≤ ϕ j ≤ d j , j = 1, 2, … , M where Eq. (7) means that the allocated capacity cannot exceed the PM CPU capacity, Eq. (8) means that the CPU scheduled to a VM should be less than its demanded value, and Eq. (9) gives the validated ranges of the defined variables. Methods and strategies In this section, we design some heuristic methods and also the joint hybrid strategy, and describe the ideas in detail. Dynamic load balancing (DLB) The idea of the DLB strategy is to make the workload on different PMs balanced by dynamically placing VMs. To achieve the balancing effect, if one PM is detected to be more utilized than the specified upper threshold, some VMs on this PM will be chosen to migrate otherwhere. As a result, the PM utilization ratio will be controlled in a certain range, and there will be as few overloaded PMs as possible. Dynamic VM consolidation (DVMC) According to the features of virtualization techniques, VMs could be consolidated together onto a few PMs to make other PMs zero loaded. Hence, the main idea of the DVMC strategy is to consolidate VMs as much as possible aiming at saving more power. Both the upper threshold and the lower threshold of the PM utilization level are defined. If one PM is light loaded enough that its utilization is less than the lower threshold, the VM consolidation process will be triggered. After this process, VMs upon underutilized PM will be migrated onto other PMs. Finally, zero-loaded PMs could be turned into inactivate state in order to save more power. Joint optimal planning (JOP) The JOP strategy aims to optimize the VM placement scheme with the objective of sufficiently utilizing the renewable energy and reducing the total costs. Renewable energy forecasting Since renewable energy is used as one source of power supply, we have to forecast the input power value in the next time slot. Here the k-nearest neighbor (k-NN) algorithm is adopted. A distance weight function is designed to calculate the distance each solar radiation values, as follows: where d i is the distance between the ith neighbor and the current point. Figure 3 shows the forecasting effect on one day in October 2013. The data were measured and collected in Qinghai University, Xining, Qinghai Province of China. By analyzing the data points, the allowed absolute percentage errors (AAPE) of 97.01% data are less than 30%. The accuracy of the prediction method depends on the similar weather conditions in the recent past and may be affected by weather forecast data. Stochastic search In order to look for the best scheme of VM placement, we use stochastic search to do the optimization. Specifically, the genetic algorithm (GA) is modified and employed as follows: For a typical genetic algorithm, there are two basic items as follows: A genetic representation of solution space Here, for this problem, the decision variable is the vector of VM placement, which can be denoted as X = (x 1 , x 2 …, x M ). 
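Although the objective in Eq. (6) is not reproduced above, its four described terms (application revenues, power consumption cost, PM wake-up cost and VM migration cost) can be mirrored in a small evaluation routine, which is also what the genetic algorithm below would use as its fitness function. This is a schematic sketch only; the revenue functions Ω_j, the electricity price and the cost constants are placeholders for the chapter's actual settings.

```python
from typing import Callable, Sequence

def net_revenue(placement: Sequence[int], prev_placement: Sequence[int],
                allocated: Sequence[float],
                revenue_fns: Sequence[Callable[[float], float]],
                total_power_w: float, tau_h: float, price_per_kwh: float,
                c_activate: float, c_migrate: float, newly_active_pms: int) -> float:
    """Net revenue for one time slot, mirroring the four terms described in
    the objective: revenues - energy cost - wake-up cost - migration cost."""
    revenue = sum(fn(phi) for fn, phi in zip(revenue_fns, allocated))
    energy_cost = price_per_kwh * (total_power_w / 1000.0) * tau_h
    wakeup_cost = c_activate * newly_active_pms
    migration_cost = c_migrate * sum(1 for a, b in zip(placement, prev_placement) if a != b)
    return revenue - energy_cost - wakeup_cost - migration_cost
```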
A fitness function to compute the value of each solution As described above, the objective function defined by Eq. (6) can be used as the fitness function. It measures the quality of a given solution. Hereafter, the fitness function will be denoted as F(X). The procedure of the genetic algorithm can be divided into the following steps: i. Initialization First, we add the configuration vector of the last time epoch into the initial generation. Besides, a fixed number (denoted as n g ) of individual solutions will be randomly generated. Specifically, a part of the elements of each solution will be generated randomly, in the range of 0~N−1. ii. Selection After initialization, the generations will be produced successively. For each generation, the n b best-ranking individuals from the current and past population will be selected to breed a new generation. Then, in order to keep the population size constant, the remaining individuals will either be removed or replicated based on their quality level. The selection procedure is conducted based on fitness, which means that solutions with higher fitness values are more likely to be selected. Following this concept, the probability of selecting an individual X i can be calculated as P(X i ) = F(X i ) / Σ j=1..n g F(X j ) (11). In this way, less fit solutions are less likely to be selected, which helps to keep the diversity of the population and to avoid premature convergence on poor solutions. Reproduction After selection, a second generation of the population is generated from the selected solutions through two kinds of genetic operators: crossover and mutation. The crossover operator first selects two different individuals, denoted as X 1 and X 2 . Then, a cutoff point k is set from the range 1~M. Both X 1 and X 2 are divided into two halves at this point, and their second halves are swapped. As a result, two new individuals are produced, which may or may not already exist in the current population. After crossover, the mutation operator mutates each individual with a certain probability. The mutation process starts by randomly choosing an element in the vector and then changing its value, thereby converting one individual into another. Termination This production process repeats until the number of generations reaches a predefined level. Evaluation results This section presents our experiments comparing the different strategies, and then the results and some details are discussed. Parameter settings For the following experiments, we used C#.NET to develop the simulation environment and set up the prototype test bed. Specifically, a virtualized datacenter is established, comprised of 40 PMs with a CPU capacity of 1500 MIPS each. For the power model, p i MAX is set to 259 W according to Ref. [23] and c is set to 66% according to Ref. [21]. Then, 100 VMs hosting different applications were simulated and placed on the PMs. The workload on each VM fluctuates with time, with the values randomly generated under a uniform distribution. Table 1 shows all of the parameter settings in detail, and Figure 4 shows the variation of the total CPU demand summed over all of the workloads, from which it can be seen that there are two peaks in the 24-h period. We defined a nonlinear revenue function for each application, as mentioned in Section 4. Figure 5 shows three typical examples. It can be seen that the revenue of every application changes elastically in a certain range.
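A compact version of the modified genetic algorithm described above (fitness-proportionate selection over placement vectors, single-point crossover at a random cutoff, and element-wise mutation) might look as follows. This is a generic sketch assuming a fitness function such as the net-revenue routine sketched earlier; the population size, generation count and mutation rate are illustrative defaults, not the chapter's tuned values.

```python
import random

def evolve(fitness, num_pms: int, num_vms: int, current: list[int],
           pop_size: int = 50, generations: int = 100, p_mut: float = 0.05) -> list[int]:
    """Search for a VM placement vector X = (x_1, ..., x_M) maximizing fitness(X)."""
    # Initialization: keep the current placement plus random candidates.
    pop = [current[:]] + [[random.randrange(num_pms) for _ in range(num_vms)]
                          for _ in range(pop_size - 1)]
    for _ in range(generations):
        # Clamp to a small positive value so fitness-proportionate selection works
        # even when the net revenue of a poor placement is negative.
        scores = [max(fitness(ind), 1e-9) for ind in pop]
        total = sum(scores)

        def pick() -> list[int]:  # fitness-proportionate (roulette-wheel) selection
            r, acc = random.uniform(0, total), 0.0
            for ind, s in zip(pop, scores):
                acc += s
                if acc >= r:
                    return ind
            return pop[-1]

        children = []
        while len(children) < pop_size:
            x1, x2 = pick()[:], pick()[:]
            k = random.randrange(1, num_vms)       # cutoff point in 1..M-1
            x1[k:], x2[k:] = x2[k:], x1[k:]        # swap the second halves
            for child in (x1, x2):
                if random.random() < p_mut:        # mutate one randomly chosen element
                    child[random.randrange(num_vms)] = random.randrange(num_pms)
                children.append(child)
        pop = children[:pop_size]
    return max(pop, key=fitness)
```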
The control interval for reconfiguration actions in the experiment is set to 60 minutes. According to Refs. [24][25][26], we set c P to $0.08, set c A to $0.00024, and set c MIG to $0.00012. The VM migration delay is set to 5s, and the PM wakeup delay is set to 15s. The total experiment time is set to 1440 minutes. The temperature data used in the experiments come from the realistic data measured on 4 October 2013, recorded in the campus of the Qinghai University, Xining, Qinghai Province, China, as shown in Figure 6. Results In order to investigate the effectiveness of the proposed strategy, we will compare the performance among three different strategies -DLB, DVMC, and JOP, as stated in Section 5. Revenues As described in Section 4, the net revenue is a main optimizing objective in our problem. Figure 7 shows the total accumulated net revenues throughout the 1440-min experiment time. It can be observed that the JOP strategy can keep the net revenue relatively higher than other ones. Moreover, the DVMC approach behaves relatively better than DLB since it can save more power by VM consolidation. By examining the detailed data, we found that JOP could make the gains 38.2 and 24.2% higher than DLB and DVMC, respectively, with respect to the accumulated revenue. Power consumption Now we intend to investigate the power consumption in detail when using JOP, as Figure 8 illustrates. It can be observed from the figure that JOP is able to follow the solar energy variation quite well. When the solar power drops to insufficient level, JOP is prone to degrade the application performance to save more power. On the contrary, when the solar power arises, JOP allows both functional and cooling devices to consume more power, under the constraints of the input power. Interestingly, we can see that the temperature varies more or less in coincidence with solar energy generation, which implies that thermal-aware coscheduling of energy supply and consumption might be promising, since the temperature also affects energy consumption to some extent. Figure 9 shows the number of active servers when using the three different strategies. We can see that JOP can increase or decrease the number of active servers according to the variation of the solar power generation amount. Under the DLB strategy, all PMs are kept active so that the system-wide workload could be balanced. Comparatively, DVMC uses much fewer active PMs than DLB due to VM consolidation. However, it still uses more PMs at night time because it cannot effectively deal with the relationship of revenues and costs. Overall, JOP tries to manage PMs dynamically toward the optimization objective and thus can keep the number of active PMs as needed. Energy for cooling The cooling energy consumption is also investigated when using the three different strategies, as shown in Figure 10. As illustrated, JOP allows cooling devices to consume more power Conclusion and future work As the energy consumption of large-scale datacenters becomes significant and attracts more attentions, renewable energy is being exploited by more enterprises and cloud providers to be used as a supplement of traditional brown energy. In this chapter, we introduced the target system environment using hybrid energy supply mixed with both grid energy and renewables. From the datacenter's own point of view, the optimization problem was defined aiming at maximizing net revenues. 
Accordingly, three different strategies were designed to migrate VMs across different PMs dynamically, among which the JOP strategy could leverage stochastic search to help the optimization process. Results illustrate the feasibility and effectiveness of the proposed strategy and further investigation about the accumulated revenues, PM states, and cooling power consumption helps us to see more details of the working mechanisms of the proposed strategy. As datacenters become larger and larger and thus enormous amount of energy is still needed to power these datacenters, it can be expected that green sources of energy will attract more insights to provide power supplies instead of traditional brown energy. Our work tries to explore some strategies to migrate VMs inside a datacenter in a green-aware way. Nevertheless, there are still a lot of challenges in the field of leveraging sustainable energy to power the datacenters. On one hand, more kinds of clean energy sources besides wind and solar could be exploited, such as hydrogen and fuel cell, and their features should be studied and developed. On the other hand, how to synthetically utilize the battery, utility grid, and datacenter loads to solve the intermittency and fluctuation problems of the energy sources remains a difficult problem for system designers. In addition, it is also necessary and interesting to conduct some research on the air flow characteristics among racks and server nodes inside the datacenter room and develop some thermal-aware scheduling approaches correspondingly.
5,776.4
2017-06-14T00:00:00.000
[ "Environmental Science", "Computer Science", "Engineering" ]
An electromagnetic extension of the Schwarzschild interior solution and the corresponding Buchdahl limit We wish to construct a model for charged star as a generalization of the uniform density Schwarzschild interior solution. We employ the Vaidya and Tikekar ansatz [{\it Astrophys. Astron.} {\bf 3} (1982) 325] for one of the metric potentials and electric field is chosen in such a way that when it is switched off the metric reduces to the Schwarzschild. This relates charge distribution to the Vaidya-Tikekar parameter, $k$, indicating deviation form sphericity of three dimensional space when embedded into four dimensional Euclidean space. The model is examined against all the physical conditions required for a relativistic charged fluid sphere as an interior to a charged star. We also obtain and discuss charged analogue of the Buchdahl compactness bound. I. INTRODUCTION In 1916, almost immediately after the derivation of the unique exact solution describing the exterior gravitational field of a static spherically symmetric isolated object, Schwarzschild [1,2] obtained an interior solution for a uniform density fluid sphere of finite radius whose exterior was described by the former metric. Since then, a large number of exact solutions have been obtained for a more realistic description of relativistic compact stars, out of which only a few are shown to be physically viable, regular and well-behaved [3]. A natural extension of such models has been the inclusion of the electromagnetic field. The corresponding Einstein-Maxwell system, being highly non-linear, is extremely difficult to solve, and hence different simplifying techniques are often invoked to solve such a system. The choice of the Vaidya and Tikekar (VT) [4] metric ansatz, is one such approach which, ever since its inception, has found huge success for modelling of realistic astrophysical objects. The VT model was subsequently generalized by many investigators [5][6][7][8][9]. This ansatz is motivated by a geometric property that t = const. hypersurface of the associated spacetime, when embedded in a 4-Euclidean space is not spherical but spheroidal. The parameter k, which appears in the ansatz, indicates the departure from the sphericity of associated 3-space. The VT ansatz has been utilized to model a wide variety of compact stars (see e.g., Ref. [10][11][12][13][14][15][16][17]) and radiating stars (see e.g., Ref. [18][19][20][21]). The model has found its applications in higher dimension spacetimes as well (see, e.g., Ref. [22]). Recently an anisotropic stellar model has also been developed in the Buchdahl-Vaidya-Tikekar metric ansatz [23]. In this paper, we revisit the Vaidya-Tikekar metric ansatz to model a static spherically symmetric compact star with a charged fluid interior. Incorporation of the electromagnetic field in the modelling of relativistic astrophysical objects has a long history and is relevant for a wide variety of astrophysical systems at different stages of its evolution. Some of the pioneering works in this field include the investigations of Majumdar [24], De and Raychaudhari [25], Papapetrou [26], Cooperstock and Cruz [27] and Bekenstein [28], to name a few. Among many other factors, the observed high value of the electric field at the surface of ultra-compact stars [29] provides a sufficient ground to study the Einstein-Maxwell system. A large class of analytic solutions to the Einstein-Maxwell system corresponding to the exterior Reissner-Nordström metric has been compiled by Ivanov [30]. 
There has been considerable work in the literature on charged fluid model with the VT ansatz and with different prescriptions for the electric field, and models have been examined for their physical viability and acceptability [14,[31][32][33][34]. To get a solution of the Einstein-Maxwell field equations, one has always to prescribe some fall off behaviour for electric field and/or choose a specific value of the spheroidal parameter k. All these papers and the references given therein offer a good spectrum of different choices. In this paper, our main motivation is to superimpose electric field distribution on uniform density fluid distribution. We demand that when the electric field is switched off, the solution should reduce to the uniform density Schwarzschild solution. This limit would require the parameter k → 0, which means electric charge should be proportional to geometric parameter k. This, of course, amounts to prescribing a particular charge and electric field distribution. The novel and interesting feature of this model is clubbing of k with charge distribution. It is matched to R-N metric at the boundary defined by p = 0, and the solution is physically fully viable and satisfactory for modelling an astrophysical charged stellar object. We obtain and discuss charged analogue of the Buchdahl limit [35] for the model. In relativistic astrophysics, measurement of the maximum mass to radius ratio of a stellar configuration has been a matter of prime importance ever since Buchdahl [35] derived the relation M/R ≤ 4/9 for an isotropic fluid sphere. In a very simple manner the Buchdahl bound could be found by demanding pressure at the centre being finite. This is what has motivated us to seek charged analogue of the uniform density Schwarzschild fluid sphere so that we could obtain the charged analogue of the Buchdahl limit. Theoretical developments in the analysis of Buchdahl limit for Schwarzschild as well as Reissner-Nordström background spacetimes is available in Ref. [36] and references therein. In this paper, by restricting the model parameters in such a manner that central pressure does not diverge, we obtain an upper bound on compactness of charged star which is a charged generalization of the Buchdahl limit. The paper is organized as follows. In section II, for a static and spherically charged fluid distribution, we lay down the independent set of equations for the Einstein-Maxwell system in terms of a single generating function f (r). The idea of this generating function is such that by setting f (r) = 0, one gets back the Schwarzschild solution for a homogeneous fluid sphere. For f (r) = 0, we specify the charge distribution q(r) so as to have fallen back on the Schwarzschild solution. The most interesting choice we would explore is f (r) = kr 2 /C 2 , which is, in fact, the VT-metric ansatz, and we generate a new class of solutions where the electromagnetic field gets determined in terms of the parameter k. In section III, we match the interior solution to the exterior Reissner-Nordström metric at the boundary given by p = 0. Making use of the junction conditions, we determine the constants of the model in terms of total mass M , radius R and charge Q. In section IV, we discuss the Buchdahl limit, some physical properties and overall viability of the model. We end with a discussion in section V. II. 
EINSTEIN-MAXWELL SYSTEM We write the line-element describing the interior of a static spherically symmetric charged fluid sphere in standard coordinates x i = (t, r, θ, φ), where ν(r) and µ(r) are the undetermined functions. The unknown functions can be determined by solving the Einstein-Maxwell field equations, where the energy-momentum tensors correspond to matter and the electromagnetic field, respectively. In Eqs. (4) and (5), ρ, p, σ and F ij denote the energy-density, pressure, charge-density and electromagnetic field tensor, respectively, and u i is the 4-velocity of the fluid. Due to spherical symmetry, the only surviving independent component of the electromagnetic field tensor is F tr , which defines the electric field E. The Maxwell equations (3) then give the field in terms of the total charge q(r) contained within the sphere of radius r. The Einstein-Maxwell field equations (in a system of units having G = c = 1) follow, where a prime (′) denotes differentiation with respect to the radial parameter r. Combining equations (10) and (11), we obtain the expression that defines the electric field provided µ(r) and ν(r) are known. To solve the system, we assume the metric potential µ(r) in the Buchdahl-VT [4,35] form, where C is an arbitrary constant. It is not yet the Schwarzschild solution because ν is still undetermined. Note that when f (r) = 0, one can generate the Schwarzschild interior solution for an incompressible fluid sphere. A coordinate transformation x² = 1 − r²/C² allows us to rewrite Eq. (13) in a form where f x represents the derivative with respect to x. To solve equation (15), we introduce a new variable and rewrite equation (15) accordingly. Now let us try to superimpose a charge distribution on the Schwarzschild solution. This suggests that ψ should be a linear function of x when the charge distribution is switched off, i.e. f = 0. Thus, we demand d²ψ/dx² = 0, giving ψ linear in x. Note that the above solution is obtained for the particular choice (18) of the charge distribution, which can also be written in terms of the radial parameter r. The choice of the charge distribution q(r) is, therefore, motivated by the fact that it provides a simple solution which may be treated as a generalization of the Schwarzschild interior solution for an incompressible fluid, as discussed below. Moreover, the choice ensures that q(r) is well behaved at r = 0 as well as at all interior points of the star. Combining equations (16) and (19), the unknown metric function e ν is obtained, where a, b, C, and k are constants to be determined by matching the solution to the exterior R-N metric at the boundary. Thus, the space-time metric of a static and spherically symmetric object in the presence of an electric field is obtained. Note that it reduces to the Schwarzschild solution when f (r) = 0, and it is f (r) that determines the charge q(r)² in equation (20). The model is fully determined once f (r) is prescribed. In the following we consider particular choices of this function for building a model of a charged stellar object. A. Case f (r) = 0 The electric field in this case vanishes and we regain the Schwarzschild interior solution for an (uncharged) incompressible fluid sphere, where R is the radius of the object and the density and pressure are as given below. That the pressure should vanish at the boundary determines the constant R = C√(1 − a²/(9b²)), which implies that we must have a < 3b. The central pressure remains finite and positive only if a > b. Combining the two conditions we get the bound b < a < 3b.
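For reference, the textbook form of the uniform-density Schwarzschild interior solution, written in terms of M and R rather than the constants a, b, C used above, makes the origin of the bound b < a < 3b transparent: the central pressure diverges exactly at the Buchdahl compactness. This is the standard result, given here only to parallel the discussion above and not to reproduce the paper's own expressions.

```latex
% Uniform-density interior solution (G = c = 1), standard form:
p(r) = \rho_0\,
  \frac{\sqrt{1 - \tfrac{2Mr^2}{R^3}} - \sqrt{1 - \tfrac{2M}{R}}}
       {3\sqrt{1 - \tfrac{2M}{R}} - \sqrt{1 - \tfrac{2Mr^2}{R^3}}},
\qquad \rho_0 = \frac{3M}{4\pi R^3}.

% The central pressure p_c = p(0) stays finite only if the denominator is
% positive at r = 0:
3\sqrt{1 - \tfrac{2M}{R}} > 1
\;\Longleftrightarrow\;
\frac{M}{R} < \frac{4}{9},
% which is the Buchdahl limit recovered above from the condition a > b.
```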
We must choose the function f (r) such that it provides a regular, well-behaved and physically meaningful stellar model. Accordingly, we write f (r) = kr²/C², which is the Vaidya-Tikekar (VT) [4] ansatz for the modelling of a relativistic compact star. The 3-hypersurface is spheroidal; it becomes flat and spherical for k = −1 and k = 0, respectively. The spacetime is well behaved for r < C and k > −1. The energy-density, pressure and charge-density then follow, and at the centre they take simple closed-form values. Obviously, the central density remains positive for k > −1 and the pressure for a > b > a/3. For a = b, the central pressure diverges. The total mass within a sphere of radius r, defined in the usual way, integrates to a closed-form expression; it is noteworthy that m(r = 0) = 0. Using Eq. (20), the charge q(r) is obtained, which also vanishes at the centre. III. MATCHING CONDITIONS The exterior spacetime of the static charged object is described by the Reissner-Nordström metric, where M and Q represent the total mass and charge, respectively. The matching conditions are the continuity of e ν , e µ , and p = 0 at the boundary r = R. This means m(R) = M and q(R) = Q. The conditions explicitly take the following form at r = R, where n = R²/C², u = M/R and α² = Q²/M². Solving Eq. (40), we get a relation in which y = 1 − 2u + α²u². Solving Eqs. (39) and (41), we obtain a = y[24 + 54kn + 4k³n² + k²n(9 + 25n)]. Thus, all the constants are expressed in terms of k, M, R and Q. We rewrite Eq. (38) and substitute the value of n to obtain an expression in which g(u, α) = 16 − 96α² + 288uα² + 96uα⁴ − 456u²α⁴ − 24u²α⁶ + 232u³α⁶ − 39u⁴α⁸. Note that for α = 0 we have k = 0, while the converse always holds from Eq. (39). A. Application to astrophysical objects A physically acceptable stellar interior solution should have the following features: (i) the density and pressure should be positive throughout the interior of the star, i.e., ρ, p > 0; (ii) the pressure p should vanish at some finite radial distance, i.e., p(r = R) = 0; and (iii) the causality condition should be satisfied throughout the star, which implies that 0 ≤ dp/dρ ≤ 1. To verify whether the above conditions are fulfilled in this model, we take the mass and radius of the pulsar 4U 1820−30 as input parameters. The estimated mass and radius of the star are M = 1.58 M⊙ and R = 9.1 km, respectively [37]. With these values, the model constants for different choices of the parameter k are given in Table I. It is noted that the parameter C, which is inversely related to the energy density, increases with increasing k; i.e. the density decreases. Since k is directly related to the charge, an increase in k means an increase in charge; the resulting increase in the repulsive component due to charge leads to a decrease in fluid density. For this set of values, we show the behaviour of the physically interesting quantities in Figs. (1)-(7). The plots indicate that the model is regular and well-behaved at all interior points of the star. Figs. (1) and (2) respectively show that density and pressure monotonically decrease with increasing radius. Note that the central density is larger for larger k; in contrast, the central pressure is larger for lower k. That is, the central density and pressure are respectively smallest and largest for the homogeneous fluid distribution. The rate of fall is stronger for larger k, again reflecting the repulsive effect of charge. It is interesting to note that at r ∼ 6 km, all density curves cross the uniform density straight line downwards.
This clearly indicates that the homogeneous distribution has the largest density at the boundary. On the other hand, mass and charge, as expected, monotonically increase with radius, as shown in Figs. (3)-(4). Like the density curves, the mass curves also cross over at some r, and the rate of increase for larger k falls below that of the uniform density curve. On the other hand, the charge always has a stronger rate of increase for larger k. Fig. (5) shows that the electric field is zero at the centre and monotonically increases towards the boundary, with a growth rate that is stronger for larger k. The radial variation of the charge density is shown in Fig. (6). Note that the matter density decreases with r, while the charge density increases, attains a maximum and then slowly decreases. It is interesting to note that the charge density becomes maximum at the radial distance where the inhomogeneous density meets the uniform density profile. The sound speed for different k values is shown in Fig. (7), which indicates that even though the causality condition is satisfied for relatively higher values of k, as we approach the constant density case (k ∼ 0.1) the sound speed becomes, as expected, greater than unity. IV. BUCHDAHL LIMIT Astrophysically it is of prime importance to find how compact a star could be, i.e. what is the upper bound on the mass to radius ratio, the compactness ratio M/R? From an intuitive perspective, the stiffest equation of state is that of a uniform density incompressible fluid, which is uniquely described by the Schwarzschild interior solution. The compactness limit is indicated by the upper bound on the pressure at the centre. As we have seen above, this corresponds to the condition a ≥ b in Eq. (31), giving the compactness limit M/R ≤ 4/9. Buchdahl was the first to obtain this limit under very general conditions: density and pressure being positive, the former decreasing outward with radius, and matching at the boundary to the Schwarzschild exterior solution [35]. The same limit is also obtained for an anisotropic fluid by invoking the strong energy condition, ρ ≥ p r + 2p t [38,39]. For a charged object there exists more than one limit [40][41][42][43], obtained for different interior distributions and equations of state. The one closest in spirit to the uniform density case is that of Ref. [41], in which a uniform density distribution is envisaged to be enveloped by a thin charged shell, and the Buchdahl analogue limit reads M/R ≤ 8/[9(1 + √(1 − 8α²/9))]. It reduces to the Buchdahl limit, M/R ≤ 4/9, when the charge is switched off (Q = 0), and gives M/R ≤ 2/3 and 8/9 < 1 for α² = 1 and 9/8, respectively. Interestingly, it prescribes an upper bound on the charge a star could have, α² ≤ 9/8, which is > 1. That is, a non-black-hole charged object could indeed be overcharged relative to a charged black hole. Very recently an insightful and novel prescription [36] has been proposed for the compactness limit, which is given by the gravitational field energy being less than or equal to half of the non-gravitational matter-energy of the object. The remarkable feature of this definition is that it is entirely determined by the unique exterior R-N metric without any reference to the interior distribution, whatever that may be. The limit that follows is the one given above. In the following, we would like to obtain the compactness ratio M/R for our model. For a homogeneous sphere, the mass within a sphere of radius r is given by the usual expression, where ρ 0 = 3/(8πC²) is the homogeneous density.
Defining the compactness factor as we write the metric potentials of the inhomogeneous sphere in the form We also express the physical quantities as Equation (56) shows that charge in our model is proportional to the constant density mass m 0 . The dimensionless parameter k, representing deviation from sphericity, also gets tagged into the expression of charge. Since the departure from sphericity is expected to be small, expanding tan −1 √ kr in equation (34), we obtain the mass function as m(r) = m 0 32(1 + k) + (41 + 53k)kφ 0 − 6(11 + 7k)k 2 φ 2 which shows that for k = 0 we regain m(r) = m 0 . At the centre r = 0, we have e 2ν = (a − b) 2 , e 2µ = 1, q 2 = 0, The regularity of σ c demands that we must have k ≤ 2. We also note that p c → ∞ for a = b. In other words, for a stellar configuration, we must have a > b and consequently, the upper bound on the compactness vis-a-vis Buchdahl type limit can be obtained by setting a = b. Now, imposing the condition a ≥ b in Eqs. (44) and (45), we obtain In the absence of any simple solution of Eq. (58), let us first consider the case k = 0. Substituting k = 0, in Eq. (38), we obtain Q = 0 which implies that we must set α = 0 in this case. Note that in the uncharged case (k = 0), the exterior Reissner-Nordström metric should be replaced by the Schwarzschild solution by setting Q = 0. Therefore, substituting k = 0 = α in Eq. (58), we obtain which readily provides the Buchdahl limit u(= M/R) ≤ 4/9. For k = 0, substituting the value of k from Eq.(47) into Eq. (58), and solving it numerically, one can find the upper bound on u = M R for a given charge to mass ratio, α 2 = Q 2 M 2 . The results are shown in Table II and Fig. (8). Alongside Eq. (58) we also plot Eq. (48) clearly indicating how beautifully our model is coasting along with the exact Buchdahl bound for charged object. A. Approximation method To reduce the complexity of the equations, we set k = k , where 0 < << 1. This is a reasonable assumption as the departure from sphericity denoted by k is expected to be small. In this case, retaining terms up to O( ), we use (46) to obtain Eq. (60) ensures that we have α = 0 for k = 0. Inserting the value of α 2 in Eq. (58) and neglecting terms O( 2 ), we obtain Even though this is an approximate equation, a neat solution of the above is not available. Nevertheless, numerical solution of the above equation provides a generalization of the Buchdahl limit in the case of a charged sphere. It is not difficult to show that for k = 0, the equation yields the Buchdahl limit u ≤ 4/9. To find an analytic solution of (58) in the extreme case a = b (when the central pressure diverges), we make use of equation (47) and obtain a truncated equation up to the order O( ) as which readily yields Rationalizing the above, we finally obtain a charged analogue of the Buchdahl limit given by . The above result provides an upper bound on α 2 ≤ 9/8 and u ≤ 8/9 < 1. Non black hole object would always have radius larger than the black hole. For α 2 = 0, we regain u ≤ 4/9. It is remarkable to note that by making use of a different technique and a specific model, we have been able to obtain the desired upper bound on the compactness of a charged object -a charged Buchdahl limit. By employing the Vaidya-Tikekar metric ansatz we have constructed a generalization of the homogeneous density distribution in which the parameter k gets coupled to charge distribution. When charge is set to zero, the solution goes over to the Schwarzschild uniform density fluid sphere. 
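For the thin-shell/field-energy bound quoted in Eq. (48), a small numerical check (ours, for illustration) confirms the limiting values stated in the text: u_max = 4/9 for α = 0, 2/3 for α² = 1, and 8/9 at the extreme allowed charge α² = 9/8. It also shows that the example star used earlier, 4U 1820−30 with M = 1.58 M⊙ and R = 9.1 km, has u ≈ 0.26, well inside all of these bounds. The closed form used below is our reconstruction of the bound from those limiting values and should be checked against the original reference.

```python
from math import sqrt

G_MSUN_KM = 1.4766  # GM_sun / c^2 in kilometres

def u_max(alpha2: float) -> float:
    """Charged Buchdahl-type bound of Eq. (48): maximum M/R for alpha2 = Q^2/M^2.
    Valid for 0 <= alpha2 <= 9/8, where the square root stays real."""
    return 8.0 / (9.0 * (1.0 + sqrt(1.0 - 8.0 * alpha2 / 9.0)))

for a2 in (0.0, 1.0, 9.0 / 8.0):
    print(f"alpha^2 = {a2:.3f} -> u_max = {u_max(a2):.4f}")
# Expected: 0.4444 (= 4/9), 0.6667 (= 2/3), 0.8889 (= 8/9)

# Compactness of 4U 1820-30 from the quoted mass and radius:
u_star = 1.58 * G_MSUN_KM / 9.1
print(f"4U 1820-30: u = {u_star:.3f}")   # ~0.26, below the Buchdahl value 4/9
```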
In effect, this amounts to charging the Schwarzschild uniform density solution. This has facilitated computation of the charged analogue of the Buchdahl compactness limit in Eq. (58). In Fig. 8, the compactness ratio M/R is plotted for Eqs. (48) and (58); the former is the Buchdahl bound as found in Refs. [41] and [36], while the latter is computed for the present model. It is remarkable to see how closely the bound due to Eq. (58) tracks that due to Eq. (48). Another noteworthy feature is that the charge density (Fig. 6) attains its maximum value at the radius where the energy density crosses the uniform density line (Fig. 1). This indicates that the charge density increases while the energy density decreases with radius until the latter crosses the uniform density line; the former then attains its maximum value and begins decreasing. This means that for radii greater than the one where the energy density becomes less than the uniform density value, the charge density also begins decreasing. It is interesting to see how the behaviour of the charge density is linked to whether the energy density is greater or less than the uniform density value. It is remarkable that a charged object could have α² = Q²/M² as large as 9/8 > 1; i.e. it could be overcharged relative to a charged black hole. The question then arises: is it possible to construct models with 1 ≤ α² ≤ 9/8, an explicit example of an overcharged object? It would be interesting to construct such a model, and that is what we would like to take up next in a separate investigation.
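As a quick numerical sanity check of the charged Buchdahl-type bound quoted above, the short Python sketch below (illustrative only; it assumes the closed form M/R ≤ (8/9)/(1 + √(1 − 8α²/9)), reconstructed so as to reproduce the quoted special values 4/9, 2/3 and 8/9 at α² = 0, 1 and 9/8) evaluates the limiting compactness for a few charge-to-mass ratios.

```python
import math

def u_max(alpha2):
    """Charged Buchdahl-type bound on u = M/R versus alpha^2 = Q^2/M^2.

    Assumed closed form (reconstructed from the quoted special cases):
        u <= (8/9) / (1 + sqrt(1 - 8*alpha2/9)),  valid for alpha2 <= 9/8.
    """
    if alpha2 > 9.0 / 8.0:
        raise ValueError("alpha^2 exceeds the allowed maximum 9/8")
    return (8.0 / 9.0) / (1.0 + math.sqrt(1.0 - 8.0 * alpha2 / 9.0))

# Special cases quoted in the text: 4/9, 2/3 and 8/9.
for a2 in (0.0, 1.0, 9.0 / 8.0):
    print(f"alpha^2 = {a2:5.3f}  ->  u_max = {u_max(a2):.6f}")
```

Under this assumed form the bound grows monotonically with α², reaching u = 8/9 at the maximal charge α² = 9/8, consistent with the statement that a non-black-hole charged object can be overcharged while remaining larger than the corresponding black hole.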
5,544
2020-06-16T00:00:00.000
[ "Physics" ]
Fate of graft cells: what should be clarified for development of mesenchymal stem cell therapy for ischemic stroke? Mesenchymal stem cells (MSCs) are believed to be promising for cell administration therapy after ischemic stroke. Because of their advantageous characteristics, such as the ability to differentiate into neurovascular lineages, avoidance of immunological problems, and abundance of graft cells in mesodermal tissues, studies regarding MSC therapy have increased recently. However, several controversies are yet to be resolved before a worldwide consensus regarding a standard protocol is obtained. In particular, the neuroprotective effects, the rate of cell migration to the lesion, and the direction of differentiation differ among preclinical observations. Analyses of these differences and application of recent developments in stem cell biology or engineering of imaging modalities may contribute to the identification of criteria for optimal stem cell therapy in which reliable protocols, which control cell quality and include safe administration procedures, are defined for each recovery phase after cerebral ischemia. In this mini review, we examine controversies regarding the fate of grafts and the prospects for advanced therapy that could be obtained through recent developments in stem cell research, such as direct conversion to neural cells. Mesenchymal stem cells (MSCs) are believed to be promising for cell administration therapy after ischemic stroke. Because of their advantageous characteristics, such as the ability to differentiate into neurovascular lineages, avoidance of immunological problems, and abundance of graft cells in mesodermal tissues, studies regarding MSC therapy have increased recently. However, several controversies are yet to be resolved before a worldwide consensus regarding a standard protocol is obtained. In particular, the neuroprotective effects, the rate of cell migration to the lesion, and the direction of differentiation differ among preclinical observations. Analyses of these differences and application of recent developments in stem cell biology or engineering of imaging modalities may contribute to the identification of criteria for optimal stem cell therapy in which reliable protocols, which control cell quality and include safe administration procedures, are defined for each recovery phase after cerebral ischemia. In this mini review, we examine controversies regarding the fate of grafts and the prospects for advanced therapy that could be obtained through recent developments in stem cell research, such as direct conversion to neural cells. Keywords: mesenchymal stem cell, ischemic stroke, stem cell therapy, translational research, neurovascular unit DEVELOPMENT OF MESENCHYMAL STEM CELL THERAPY STUDY FOR ISCHEMIC STROKE Ischemic stroke is a common central nervous system (CNS) disease. Despite continuous development in treatments, stroke is still a major cause of death or disability, and therefore, more effective therapies are required. In the 1990s, clinical trials of neuroprotective agents targeting a single mechanism, i.e., glutamate-induced neurotoxicity, turned out to be failures (Hoyte et al., 2004). In the lesion insulted by brain ischemia, multiple pathogenic mechanisms are activated. As the failures in early neuroprotective drug development showed (Degraba and Pettigrew, 2000), a genuinely effective therapy would be required to address the pleiotropic pathology (Teng et al., 2008;Guo and Lo, 2009).
Another concept for treating function lost to ischemia is to supply cells or tissue to replace the damaged brain tissue. In the early days of stem cell research, stem cells were expected to serve as a source of tissue regeneration. Since the publication of the earliest reports of attempted administration of embryonic or neonatal neural stem cells for regeneration of the CNS in the early 1990s (Renfranz et al., 1991;Snyder et al., 1992), diverse cell types have been investigated to identify an ideal cell line to generate tissue grafts for the CNS. Candidate cells can be categorized as embryonic, fetal, neonatal, or adult according to the maturity of the tissue of origin. When categorized by stage of differentiation, the examined cells can be sourced from pluripotent cells (embryonic stem cells or induced pluripotent cells), the ectodermal lineage (neural stem cells, olfactory neuroepithelial stem cells, or the NT2 cell line derived from neuroteratocarcinoma), or the mesodermal lineage [mesenchymal stem cells (MSCs), CD34+ cells, endothelial progenitor cells, hematopoietic stem cells, or bone marrow mononuclear/stromal cells]. As discussed in published reviews on stem cell therapies (Locatelli et al., 2009;Bhasin et al., 2013;Kalladka and Muir, 2014), neural stem cells and the mesodermal-lineage cells listed above have already been applied to ischemic stroke in clinical settings, from the subacute phase to the chronic phase. In this mini review, the advantages of MSCs as a source for stem cell therapy are summarized. Furthermore, controversial points in preclinical experimental studies and the developing field of MSC therapy resulting from the recent evolution in stem cell biology are discussed, focusing on the biological features of mesenchymal stem cells (MSCs). Among stem cell therapies, the greatest number of clinical trials has been conducted for MSCs (Rosado-De-Castro et al., 2013a); thus, MSC therapy may be the most practical cell-based stroke treatment. More than 30 years after Friedenstein et al. (1966) isolated an osteogenic cell population from bone marrow, MSCs have been identified in bone marrow (Pittenger et al., 1999), adipose tissue (Zuk et al., 2002), umbilical cord (Erices et al., 2000), peripheral blood (Ukai et al., 2007), dental pulp (Gronthos et al., 2000), and a wide range of mesodermal tissues, including perivascular sites in the brain (Kang et al., 2010;Paul et al., 2012). The criteria for identifying MSCs as proposed by the Mesenchymal and Tissue Stem Cell Committee of the International Society for Cellular Therapy are (1) plastic adherence of isolated cells in culture; (2) in cell surface marker analysis, >95% of the culture positively expressing the cell surface markers CD105, CD73, and CD90, while being negative for CD34, CD45, CD14 or CD11b, CD79a or CD19, and human leukocyte antigen-DR; and (3) in vitro differentiation into three mesodermal cell types, namely osteoblasts, adipocytes, and chondroblasts (Dominici et al., 2006). Moreover, the characteristics of MSCs present several advantages. MSCs have been shown to possess multipotency that enables differentiation into multiple lineages to repair the neurovascular unit or neural network; they can exert multiphasic actions that modify endogenous repair processes, including reprogramming, harmful immune responses, or chemical reactions, via their secretory abilities; and they are easier to prepare for grafting owing to their accessible cell sources and proliferative potential for rapid cell expansion (Doeppner and Hermann, 2010;Grande et al., 2013;Wan et al., 2013).
The first series of successful experiments on MSCs for the treatment of ischemic stroke was reported by Chopp's group (Li et al., 2000;Zhang et al., 2000). They examined multiple protocols for bone marrow stromal-derived stem cells (BMSCs), such as administration route (intracerebral, transventricular, intra-arterial, transvenous), timing, or dose, and analyzed mechanisms of functional recovery, focusing on restoration or remodeling of functional connectivity in neural circuits and tracts. Subsequently, details required for the establishment of safe and effective therapy protocols (Borlongan, 2009;The STEPS Participants, 2009;Savitz et al., 2011) have been analyzed by a number of investigators. Most results in the preclinical studies have indicated that MSC administration is beneficial. In this context, clinical trials employing systemic administration via peripheral veins were initiated more recently (Lee et al., 2010;Honmou et al., 2011). So far, these trials have not demonstrated severe adverse results (Lalu et al., 2012), even during observation periods lasting longer than a few years, despite the prediction of risks, such as embolization (Ge et al., 2014;Yavagal et al., 2014), infection, and tumorigenesis (Coussens et al., 2000;Li et al., 2007), in experimental studies. CONTROVERSIES IN PRECLINICAL STAGE Overall, accumulated findings have indicated that MSC therapy is reliable for stroke treatment. However, several points must be clarified before a consensus on a reliable protocol can be achieved. As shown in Table 1, the conditions of some preclinical studies resulted in differing outcomes with respect to graft cell detection in the lesion, infarct volume reduction, functional recovery, marker expression (neuronal, glial, or vascular: the direction of differentiation), and the type of MSCs considered to have greater therapeutic effects, particularly BMSCs versus adipose tissue-derived stem cells (ASCs). MIGRATION TO THE LESION A major discrepancy in the results of preclinical studies is whether graft cells have the ability to migrate to a cerebral lesion, although mechanisms of MSC transmigration across the blood-brain barrier (BBB) have been analyzed (Liu et al., 2013). The accumulation of graft cells in the lesion is expected to directly enhance neuroprotection and cell replacement in infarcted tissue. A comparison of different administration routes revealed that transarterial delivery was more successful than transvenous delivery in detecting graft cells in the brain, although several studies reported a decrease in the number of detected cells in the later phase (Ishizaka et al., 2013;Mitkari et al., 2013). The transvenous route induced fewer side effects than intra-arterial infusion; however, physiologically, graft cells must pass through several traps, such as the lung and the BBB. Although the BBB can be disrupted by ischemic insult around the damaged areas, MSCs may have the basic ability to transmigrate across the BBB, as immune cells do, in response to homing signals toward the lesion (Liu et al., 2013). Nonetheless, there are certainly successful examples demonstrating the integration of graft cells in the peri-infarct area even after transvenous infusion from a peripheral vessel (Table 1).
Classically, immunohistological analysis is the standard method to detect MSC migration, but recent imaging techniques, such as magnetic resonance imaging (MRI) with magnetic cell labeling (Detante et al., 2012;Canazza et al., 2013) and nuclear imaging using 99mTc-labeled grafts (Detante et al., 2009;Vasconcelos-Dos-Santos et al., 2012), have been proposed to reveal the distribution of MSCs. Subsequently, a phase I clinical trial employing 99mTc single-photon emission computed tomography (SPECT) for assessment of the biodistribution of labeled grafts in subacute patients has been safely conducted (Rosado-De-Castro et al., 2013b). The findings of these recent analytical methods may resolve the question of the accurate distribution of graft cells. FUNCTIONAL RECOVERY Many preclinical studies have also reported differences in infarct volume reduction and functional recovery (Hao et al., 2014). Assessment methods of functional recovery vary, although there certainly are popular tests in animal studies, such as the treadmill test or Roger's test. Therefore, differences in functional assessment may simply be based on differences in the employed assessment methods. On the other hand, it is more difficult to elucidate discrepancies in infarct volume reduction. In vivo studies with rodents have been conducted to investigate changes in infarct volume by direct measurement of the brain tissue after decapitation. Regarding clinical applications, non-invasive methods, such as MRI, may be beneficial to translate the findings of in vivo studies to clinical settings. Although the availability of mechanical devices varies among laboratories, the development of alternative clinically translatable methods is recommended for future in vivo experiments. Another problem is whether MSCs isolated from different tissues also differ. MSCs are obtained from diverse mesodermal tissues, i.e., bone marrow, adipose tissue, dental pulp, or cord blood. MSCs from different sources show different characteristics in vitro (Kern et al., 2006;Hsiao et al., 2012). Therefore, comparative studies of different cell sources, such as those conducted by Gutierrez-Fernandez's group, are important; however, the therapeutic effects in similar experimental ischemic stroke models also differ between transvenous administration studies (Ikegame et al., 2011;Steiner et al., 2012;Gutierrez-Fernandez et al., 2013) and intra-arterial administration studies that have shown graft cells in the lesion (Table 1). On the other hand, nuclear imaging is another available method to assess therapeutic effectiveness. Diffusion- and perfusion-weighted imaging provide information on the blood supply in the brain (Canazza et al., 2013). Furthermore, functional MRI is employed in experimental studies in rodents, which enables assessment of functional recovery and even of neural networks by analyses of resting-state functional MRI (Canazza et al., 2013). Neural integrity has been investigated by 123I-iomazenil SPECT (Saito et al., 2013). An 18F-FDG positron emission tomography study has measured glucose metabolism after MSC therapy in rats with cerebral ischemia (Miyamoto et al., 2013). For assessment of functional recovery, these methods, which address more biofunctional aspects, would be practical in addition to observations of behavioral change. DIRECTION OF DIFFERENTIATION The direction of differentiation also remains controversial in in vivo experimental studies.
Although MSCs are derived from mesenchymal tissue, they exhibit multipotency and transdifferentiation into ectodermal lineages, including neural cells, both in vitro and in vivo (Zuk, 2013). Previous in vitro immunochemistry studies have demonstrated the ability of MSCs to differentiate into cell types that comprise the neurovascular unit, including neurons, astrocytes (Wislet-Gendebien et al., 2004), and endothelial cells (Hess et al., 2002;Planat-Benard et al., 2004). Moreover, possible differentiation abilities toward the oligodendrocyte lineage (NG2-positive cells; Shen et al., 2006), specific types of neurons, such as glutamatergic neurons (Yu et al., 2014), and smooth muscle cells of vessels (Kubis et al., 2007) have been demonstrated. In vivo studies have reported that graft cells detected in the lesion show neuronal or glial differentiation (Guzman et al., 2008). However, one study demonstrated a vascular fate rather than differentiation to neural lineages (Kubis et al., 2007). To confirm genuine differentiation, cells should be further examined beyond these morphological, immunohistochemical, or genetic assessments. With respect to neural differentiation, neurotransmitter responsiveness or electrophysiological recording is required to examine function as a neuron (Yang et al., 2011). Moreover, when MSCs are employed, the possibility of cell fusion should also be excluded. Although the rate of spontaneous cell fusion of MSCs is only 2-11 clones per million cells (Terada et al., 2002), and this mechanism may also participate in tissue repair, it should nonetheless be biologically distinguished from differentiation. In earlier studies, BMSCs and ASCs were observed to undergo neural differentiation yielding cells with demonstrable neural function. First, Ashjian et al. (2003) recorded K+ currents in neuronal cells induced from ASCs. Cho et al. (2005) reported synaptic transmission, and Wislet-Gendebien et al. (2005) showed action potentials in neuron-like cells differentiated from BMSCs. AUTOLOGOUS OR ALLOGENIC? With the exception of the acute phase after ischemic insult, both allogenic and autologous grafts of MSCs can be prepared. Although the efficacy of the technologies has improved, and despite the advantage of MSCs in immunomodulation, allogenic grafts theoretically cannot remove all concerns regarding transinfection or immunological side effects. Autologous grafts can overcome the problems related to allogenic cells. Nonetheless, at the present stage, obtaining the major MSC types, namely BMSCs and ASCs, requires invasive procedures. Bone marrow aspiration and harvesting of adipose tissue are considered safe and established techniques; however, because ischemic stroke patients usually take antiplatelet or anticoagulant agents, and in some cases the patient may be intolerant owing to other conditions, further, less invasive methods, such as the use of peripheral blood, present alternative sources of cells. As mentioned in the previous section, each type of MSC from a different cell source tends to exhibit distinct traits or abilities, although all meet the criteria for MSCs. Knowledge regarding defined factors and conditions for MSC fate regulation could enable the preparation of homogeneous MSCs, even from peripheral blood (Meng et al., 2013). Autologous grafts may have an additional advantage over allogenic grafts.
In preclinical observations, MSCs reportedly developed function following contact with conditioned medium (Egashira et al., 2013), serum (Honmou et al., 2011), or cerebrospinal fluid from patients (Orito et al., 2010), which reflects their biological responses to invasive stimulation. It is possible that MSCs may achieve proper function in reaction to insults (Kurozumi et al., 2005;Xin et al., 2013). Therefore, graft cells harvested from ischemic stroke patients may gain more favorable function than allogenic grafts from donors who are not affected by ischemic insults. Strikingly, the first nonrandomized clinical trial of a protocol with autologous BMSCs and serum has been shown to be safe and effective (Bang et al., 2005;Lee et al., 2010;Honmou et al., 2011). A 5-year randomized trial also began in 2012, which will provide further information regarding autologous stem cell therapy (Kim et al., 2013). POSSIBILITY OF ADVANCED MSC THERAPIES AS A SOLUTION OF QUESTIONS MSC MODIFICATION AND IDENTIFICATION BY DEFINED FACTORS RELATED TO CELL FATE REGULATION From a pharmacological viewpoint, the actions of agents should be confirmed after administration. If MSCs are regarded as a type of biological drug, then differences in differentiation ability should be better clarified. Emerging induced pluripotent stem cell (iPSC) studies have shown promising benefits in the field of regenerative medicine that could have at least two major impacts on MSC studies. These findings may be useful to settle the controversies listed above, particularly those regarding the direction of differentiation of graft cells in the host and the differences in the characteristics of MSCs originating from different cell sources. First, the appearance of iPSCs indicates the potential for multipotency in somatic cells (Takahashi and Yamanaka, 2006), which is supported by observations of differentiation into either neural or endothelial cells in MSCs. Although many reports have demonstrated the ability of MSCs of mesodermal origin to differentiate into cell types of other germ layers, namely ectodermal lineages (neural cells) and endodermal lineages (insulin-producing cells), which could indicate multipotency, the defined conditions for MSCs to differentiate into neural cells remain uncertain (see Table 2 for defined transcriptional factors (Yang et al., 2011;Abdullah et al., 2012;Kim et al., 2012;Lujan and Wernig, 2012;Shi and Jiao, 2012) and microRNAs (Feng and Feng, 2011;Pham and Gallicano, 2012;Bian et al., 2013)). In the infancy of stem cell research, cell fusion and contamination by neural crest cells were suggested as the mechanisms by which a graft cell expresses neural markers in the host tissue after cell administration (Wrage et al., 2008;Maltman et al., 2011). If these postulated processes turn out to be the main mechanism, neural marker expression cannot be called neural differentiation, which would prevent MSCs from being called "stem cells." Therefore, until recently, use of the term "MSC," which contains "stem cell," had its pros and cons, and MSCs were often called stromal cells instead. However, successful reprogramming of skin fibroblasts to the multipotent state has provided more information to support the multipotency of MSCs. Second, induction techniques may contribute to further elucidation of quality control mechanisms for the use of MSCs. Protocols for chemical induction into neurons or glia have been developed recently (Safford and Rice, 2005;Franco Lambert et al., 2009;Yu et al., 2011).
Following the publication of methods to harness and propagate iPSCs, other methods for direct conversion from fibroblasts to neuronal cells by defined transcription factors have been reported (Vierbuchen et al., 2010;Yang et al., 2013). The induced neural lineage comprises induced neuronal (iN) cells, induced neural progenitor cells (iNPCs), and induced NSCs (iNSCs; Yang et al., 2011;Abdullah et al., 2012;Corti et al., 2012;Shi and Jiao, 2012). Moreover, iPSC-derived MSCs (iPSC-MSCs) have been identified (Jung et al., 2012). There are multiple pathways for neural induction. As listed in Table 2, in addition to defined transcriptional factors for direct conversion, microRNAs (Feng and Feng, 2011;Pham and Gallicano, 2012;Bian et al., 2013) or other epigenetic factors (Namihira and Nakashima, 2011) can contribute to differentiation. Definitive conditions to propagate and identify iN cells, iNSCs, iNPCs, or iPSC-MSCs may be useful for proposing a standard protocol for the required type of MSCs. Lancaster et al. (2013) developed three-dimensional brain tissue from iPSCs by a floating culture method. To obtain functional recovery in vivo, several groups have shown that tissue regeneration or replacement of damaged tissue with ex vivo materials is not always necessary (Table 1). Particularly in brain tissue, repair of the neural circuitry is required to improve function. Nonetheless, tissue engineering using scaffolds (Mahmood et al., 2013) or novel organogenesis methods presents possible transplantation treatments to recover neurological deficits. CONCLUSION Since the first report of MSCs (Pittenger et al., 1999), investigators have revealed favorable cell characteristics for cell therapies and have shown evidence for feasible stem cell therapy using MSCs in order to achieve safe applications in clinical settings. However, there are limited methods to ensure reliable treatment. Nevertheless, further studies combined with developments in other biological and/or engineering fields may solve these present problems and establish an ideal stem cell therapy beyond the categorization of MSCs. ACKNOWLEDGMENT This work was supported in part by the Japan Society for the Promotion of Science (JSPS KAKENHI Grant Number 24700824) to Yuka Ikegame.
4,628.4
2014-10-21T00:00:00.000
[ "Biology", "Medicine" ]
Abundant Symmetry-Breaking Solutions of the Nonlocal Alice–Bob Benjamin–Ono System The Benjamin–Ono equation is a useful model to describe the long internal gravity waves in deep stratified fluids. In this paper, the nonlocal Alice–Bob Benjamin–Ono system is induced via the parity and time-reversal symmetry reduction. By introducing an extended Bäcklund transformation, the symmetry-breaking soliton, breather, and lump solutions for this system are obtained through the derived Hirota bilinear form. By taking suitable constants in the involved ansatz functions, abundant fascinating symmetry-breaking structures of the related explicit solutions are shown. Introduction In recent years, the study of local excitations in nonlinear evolution equations (NEEs) has gained great significance, since the complex nonlinear phenomena related to NEEs arise in fluid dynamics, plasma physics, superconducting physics, condensed matter physics, and optical problems [1][2][3][4][5][6]. In fact, researchers have discovered many powerful methods for studying these aspects, such as the Hirota bilinear method [7][8][9], the inverse scattering method [10,11], the Painlevé analysis approach [12][13][14], the Bäcklund transformation [15], and the Darboux transformation [16][17][18]. Furthermore, the investigation of solitary waves and solitons using one or more of the above approaches for NEEs has become more and more important and attractive. Meanwhile, one of the proposed two-place nonlocal models, the nonlinear Schrödinger (NLS) equation (1) (where P and C are the usual parity and charge conjugation operators), has been investigated [19]. Recently, Lou proposed the Alice-Bob (AB) systems to describe two-place physical problems [20,21]. The parity, time reversal, charge conjugation, and their suitable combinations are conserved for most of the above problems [20][21][22][23][24][25][26][27][28][29][30]. However, these AB symmetries exist in various physical models, although they are not directly used to solve the nonlinear physical systems, especially the P-C-T symmetries [22]. Using the Bäcklund transformation, some types of PT symmetry-breaking solutions, including soliton and rogue wave solutions, have been explicitly obtained. In addition to the nonlocal nonlinear Schrödinger equation (1), there are many other types of two-place nonlocal models, such as the nonlocal modified KdV systems [20] and the nonlocal Boussinesq-KdV systems [23]. In this work, we consider the (1 + 1)-dimensional Benjamin-Ono (BO) equation, where β is the nonlinear term coefficient and c is a dispersion coefficient. The BO equation is one of the most important nonlinear equations describing one-dimensional internal waves in deep water [31]. Ono developed the Benjamin theory to obtain this class of NEEs [32]. The two-/four-place nonlocal Benjamin-Ono equation was explicitly solved for special types of multiple soliton solutions via the P-C-T symmetric-antisymmetric separation approach [22]. Many authors have also studied different properties of equation (2), such as the Bäcklund transformation, the existence of quasiperiodic solutions and nontrivial time-periodic solutions, infinitely many conservation laws, and other integrable properties [33][34][35][36][37][38]. The outline of this paper is as follows: in Section 2, the AB-BO system and its Lax pair are introduced, and its bilinear form is written through an extended Bäcklund transformation.
In Section 3, the symmetry-breaking soliton, breather, and lump solutions are presented through the derived Hirota bilinear form. According to the constants taken in the involved ansatz functions, some sets of fascinating symmetry-breaking structures of the related explicit solutions are shown correspondingly. Summary and conclusions are given in the last section. The AB-BO System and Its Lax Pair, Bäcklund Transformation, and Bilinear Form Based on the principle of the AB system [20,21], after substituting u = (A + B)/2 into equation (2), the nonlocal AB-BO system (3) is derived. Equation (3) can be split into the coupled equations (4a) and (4b), which are reduced to the AB-BO system (7a) and (7b). Obviously, systems (7a) and (7b) are integrable, and their Lax pair can be written down, with λ₁ and λ₂ being arbitrary constants. Now, we introduce an extended Bäcklund transformation (10), with b₁, b₂, and α being arbitrary constants and F ≡ F(x, t) an undetermined real function of the variables x and t satisfying a bilinear equation in which D²ₜ, D²ₓ, and D⁴ₓ are the bilinear derivative operators defined in [8,9]. According to the properties of the bilinear operator D, equation (12) is equivalent to a bilinear form of equation (2). Symmetry-Breaking Soliton, Breather, and Lump Solutions to the AB-BO System In this section, we turn our attention to the Hirota bilinear form (12) of the AB-BO systems (7a) and (7b) to derive the symmetry-breaking soliton, symmetry-breaking breather, and symmetry-breaking lump solutions. Symmetry-Breaking Soliton and Breather Solutions to the AB-BO System. Based on the bilinear form (12), we can first determine the symmetry-breaking soliton and breather solutions through the Bäcklund transformation (10) of the AB-BO systems (7a) and (7b), with the function F written as a summation of some special functions [20,21,23]. We notice that when η_{i0} = 0, the invariant condition is satisfied. For N = 1, equation (15) takes a simple form and we have the single soliton solution (18). Figure 1 shows the profile of the single soliton solution to the AB-BO system. The velocity of this solitary wave is equal to −3 for our choice of the free parameters. At the same time, we also find that the amplitude of the solitary wave increases with increasing parameters b₁ and b₂. For N = 2, equation (15) becomes equation (19), where the quantities are given by equation (20), and the two-soliton solution (21) is obtained by substituting equations (19) and (20) into equation (10). The two-soliton solution generated by equation (21) is plotted in Figure 2. Figure 2(a) shows that the wave shape, wave velocity, and amplitude are unchanged after the head-on collision of the two solitons. In earlier work [33], by imposing certain constraints on the parameters of the two solitons, a family of analytical breather solutions was obtained. Inspired by this technique, we give the breather solution to equation (19) by setting suitable parameters. Then F can be written accordingly, and the corresponding t-breather solution is obtained, which is shown in Figure 3. Similar to the two-soliton solution, we also give the second-order breather solution. In this case, we set the parameters in equation (29) as follows. Then F can be written accordingly, and the corresponding second-order t-breather structures are depicted in Figure 7. Setting the parameters in equation (29) in another way, F can be written in terms of cos(x), and the corresponding second-order x-breather solution is obtained, which is shown in Figure 8.
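For reference, the bilinear derivative operators Dₓ and Dₜ used in the construction above follow Hirota's standard definition (quoted here as the conventional definition from the bilinear method, not as a formula specific to this paper's notation):

```latex
D_x^{m} D_t^{n}\,(f\cdot g)
  = \left(\partial_x-\partial_{x'}\right)^{m}
    \left(\partial_t-\partial_{t'}\right)^{n}
    f(x,t)\,g(x',t')\Big|_{x'=x,\;t'=t},
\qquad\text{e.g.}\qquad
D_x^{2}(f\cdot f) = 2\left(f f_{xx}-f_x^{2}\right).
```

In particular, all three operators D²ₜ, D²ₓ, and D⁴ₓ appearing in the bilinear form act on the pair F·F in this sense.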
Summary and Conclusion It is believed that two-place correlated physical events exist widely in natural science, and discussing AB physics has a profound influence on other scientific fields. In this article, we studied the nonlocal BO equation in the form of a coupled AB system. First, a special AB-BO system was established via parity with a shift of the space variable and time reversal with a delay. At the same time, with the derived extended Bäcklund transformation and the corresponding Hirota bilinear form, the symmetry-breaking soliton, symmetry-breaking breather, and symmetry-breaking lump solutions were presented. Finally, by choosing special parameters, these solutions of the AB-BO system were discussed in detail. The coefficients with shifted parity and delayed time reversal in the nonlocal AB-BO system were discussed, and the abundant symmetry-breaking solutions were illustrated by changing the parameters of events A and B. Data Availability The data used to support the findings of this study are included within the article. For more details, the data are available from the corresponding author upon request. Conflicts of Interest The authors declare that they have no conflicts of interest.
1,740.2
2020-05-31T00:00:00.000
[ "Physics" ]
A Low Complexity Near-Optimal Iterative Linear Detector for Massive MIMO in Realistic Radio Channels of 5G Communication Systems Massive multiple-input multiple-output (M-MIMO) is a substantial pillar in fifth generation (5G) mobile communication systems. Although the maximum likelihood (ML) detector attains the optimum performance, it has an exponential complexity. Linear detectors are one alternative, and they are comparatively simple to implement. Unfortunately, they sustain a considerable performance loss in highly loaded systems. They also include a matrix inversion, which is not hardware-friendly. In addition, if the channel matrix is singular or nearly singular, the system will be classified as ill-conditioned and, hence, the signal cannot be equalized. To defeat the inherent noise enhancement, iterative matrix inversion methods are used in the detectors' design, where approximate matrix inversion replaces the exact computation. In this paper, we study a linear detector based on iterative matrix inversion methods in realistic radio channels generated by the QUAsi Deterministic RadIo channel GenerAtor (QuaDRiGa) package. Numerical results illustrate that the conjugate-gradient (CG) method is numerically robust and obtains the best performance with the lowest number of multiplications. In the QuaDRiGa environment, iterative methods require a large number of iterations n to obtain a satisfactory performance. This paper also shows that when the ratio between the user antennas and base station (BS) antennas (β) is close to 1, iterative matrix inversion methods do not attain a good detector performance. Introduction The number of mobile devices is growing remarkably year over year. For instance, the number of mobile devices reached 8.6 billion at the end of 2017, up from 7.3 billion at the end of 2014, and it is expected to exceed 12.3 billion at the end of 2022. Furthermore, the global mobile data traffic was almost 15 exabytes per month at the end of 2018, up from 3.7 exabytes per month at the end of 2015, and it is projected to be 77.5 exabytes at the end of 2022. It is also foreseeable that over 400 million devices are going to be fifth generation (5G) capable and that about 12% of global mobile data will be carried over 5G cellular connectivity by 2022 [1][2][3]. 5G networks will attain 1.5 billion subscriptions in 2024 [4]. Massive multiple-input multiple-output (M-MIMO), together with other technologies, is an auspicious technology to meet the high data rate, ultra-low latency, and broad coverage requirements of 5G systems [5]. M-MIMO can also reinforce the spectrum and power efficiencies [6][7][8]. It also increases the throughput of wireless networks. For instance, when the number of user terminals is large, the channel time spent on channel state information (CSI) feedback can overwhelm the channel time spent on transmission of data. Therefore, a scalable user selection mechanism has been proposed for dense user populations to enhance the overall throughput [9]. In addition, distributed MIMO eliminates the interference and improves the throughput in wireless communication networks [10]. Capacity enhancement is also possible by furnishing the base station (BS) with extra antennas [11]. In [12], the influence of different numbers of antennas on the performance is comprehensively illustrated.
However, along with the attractive advantages of the M-MIMO system, optimal detection methods such as maximum likelihood (ML) encounter a high complexity in the case of higher-order constellations (i.e., 64QAM) and larger numbers of antennas (more than 16), which prohibits the realization of ML. The literature is rich with M-MIMO detection schemes that balance the performance and the computational complexity. For example, a survey dated 2015 [13] offered a comprehensive illustration of MIMO detection basics and concepts, and illustrated the half-a-century history of detection schemes for MIMO technology. Another comprehensive paper can be found in [14], wherein an intensive comparison between linear and nonlinear M-MIMO detection methods is provided. For instance, detectors based on sphere decoding (SD) can be found in [15][16][17][18]. The fixed-complexity SD would also suffer from high complexity with a large number of antenna elements [19]. In [20], dominance conditions are taken into consideration to propose an efficient king SD algorithm where the computational complexity is significantly reduced. In [21], the branch-and-bound algorithm is also developed with the dominance conditions, where the channel matrix properties are exploited to reduce the complexity. In [22], an SD and single tree-search scheme is proposed where extrinsic log-likelihood ratios (LLRs) are used to achieve a convincing balance between the performance and the complexity. Approximate expectation propagation (EP) is proposed in [23]. In [24], a detector based on likelihood ascent search (LAS) was proposed for M-MIMO. Although these detectors achieve a good performance, the complexity is still high. A class of linear detection methods has attracted the researchers' attention because of its low complexity. However, linear detectors have a considerable performance loss and high complexity in ill-conditioned environments. They also require the inverse of a matrix, which is not hardware-friendly. In the literature, approximate matrix inversion methods are employed to avoid the burden of exact computation of the matrix inversion [25][26][27][28][29]. In [30,31], a discrete sorting optimization scheme with QR decomposition is proposed for the UL M-MIMO system, where all simulations were conducted in QuaDRiGa. In this paper, several iterative matrix inversion methods are exploited to detect the signal and are illustrated in realistic scenarios. The QUAsi Deterministic RadIo channel GenerAtor (QuaDRiGa) package [32] is used in the simulation to compare the iterative matrix inversion methods. In a realistic scenario, we provide a comparison between the Neumann series (NS), the Gauss-Seidel (GS) method, the successive overrelaxation (SOR) method, the Jacobi (JA) method, the Richardson (RI) method, the optimized coordinate descent (OCD) method, and the conjugate-gradient (CG) method. In QuaDRiGa, a large n is required to obtain an acceptable performance. This paper also shows that when β ≈ 1, iterative matrix inversion methods do not attain a good performance, where β is the ratio between the user antennas and BS antennas. This paper is arranged as follows: Section 2 illustrates the M-MIMO model, definitions, and fundamentals of linear detectors. Section 3 exhibits the approximate matrix inversion methods. Section 4 shows the complexity analysis of iterative matrix inversion methods. In Section 5, numerical results are presented. Section 6 presents the future trends and research challenges, and concludes the paper.
Table 1 lists the notations and their corresponding meanings. Overview The fundamental communication-theoretic concepts of MIMO detection date back to the 1960s, although the term was not used at that time. During the last half a century, significant research efforts have been made on MIMO detection by wireless communication researchers [33]. A detailed discussion on the history of MIMO detection is presented in [13]. A plethora of MIMO detector implementations can be found in the literature. The first MIMO detector implementation is presented in [34]. Wong et al. exploited a breadth-first K-best tree-search MIMO detector for a 4 × 4 MIMO configuration. Garett et al. presented a soft-output optimal detector using a parallel architecture [35]. Garett et al. also proposed a depth-first sphere decoding (SD) algorithm for 4 × 4 MIMO systems and 16-QAM [36]. The first minimum mean square error (MMSE) implementation can be credited to Burg et al., who proposed an architecture for the 4 × 4 MMSE detector in [37]. Burg et al. also proposed the first architecture for the lattice reduction algorithm [38]. Long-term evolution (LTE)-specific implementations can be found, e.g., in [39,40]. M-MIMO, or large-scale MIMO, is an expansion of the ordinary small-scale MIMO systems [41,42], where a large number of antennas at the BS concurrently serves numerous users with the flexibility to select which users to schedule for reception at any moment. The popular M-MIMO connotation postulates that the user terminals have a single antenna (the term M-MIMO is then, strictly speaking, somewhat inaccurate because, with single-antenna terminals, the system is a multiple-input single-output (MISO) downlink or a multiuser single-input multiple-output (SIMO) uplink (UL); as is customary in the literature, the term M-MIMO will be used in this paper to refer to both single- and multiple-antenna terminals) and that the number of antennas at the user terminals is remarkably smaller than the number of antennas at the BS. M-MIMO technology is one of the key technologies in 5G and beyond-5G communication systems. The M-MIMO system is substantial in the implementation of many 5G applications such as massive machine-type communications (mMTC), where a large number of mobile apparatuses is sporadically active [43,44]. In M-MIMO systems, there is an interest in linear detectors because of their relative simplicity and low complexity. In this section, the linear detection mechanism is illustrated. It is assumed that the N massive MIMO BS antennas are serving K single-antenna user terminals, where N ≫ K. The channel entries between the N BS antennas and the K users form a channel matrix H, where h_ij represents the channel coefficient (gain/loss) between the ith receive antenna and the jth transmit antenna. Each user transmits its symbols individually. The symbol vector x = [x₁, x₂, ..., x_K]^T represents the transmitted symbols. The corrupted vector y = [y₁, y₂, ..., y_N]^T is received by the BS. This system can be modelled as y = Hx + n, where n is the additive noise. The column vectors of H are assumed to be asymptotically orthogonal. Equation (2) is mostly used in detection approaches, where the channel state information (CSI) is supposed to be perfect at the BS with good synchronization. It is noteworthy that if the instantaneous values of the H elements are known from the channel estimation, the detection of x belongs to the family of coherent detection.
On the other hand, if instantaneous channel state estimation is avoided, the detection of x is said to be a noncoherent scheme. It should be noted that noncoherent detectors have high computational complexity and an enormous performance loss compared to coherent detectors because of a degradation in the power efficiency. In an M-MIMO detector, the transmitted vector x is retrieved from the received vector y. The ML sequence detection (MLSD) obtains the optimum solution, but it exhaustively searches all possible signal vectors, i.e., x̂ = arg min_x ‖y − Hx‖², where the minimization runs over all candidate transmit vectors. The ML scheme therefore has a computational complexity that is exponential in the number of decision variables, O(M^K) for an M-ary constellation, and it is prohibitively complex in massive MIMO. For example, if a transmitter has four antennas and uses the 64-QAM scheme, 64⁴ ≈ 16.7 × 10⁶ comparisons are needed if ML detection is used. Linear detectors can solve the problem in (3) with convex optimization methods to obtain a quasi-optimal solution. They are relatively simple to implement, but they suffer a considerable performance loss in highly loaded systems. Furthermore, if the system size is large, the required matrix inversion becomes complex and approximations may be needed. In linear detectors, the received signal y is multiplied by the equalization matrix A^H, i.e., x̂ = S(A^H y), where the slicer S(·) quantizes every element to the closest neighbour in the constellation [45]. In this section, we present the most popular linear detectors, i.e., the matched filter (MF), the zero-forcing (ZF) detector, and the MMSE detector. MF-Based Detector In the MF, the estimated signal is given as x̂ = S(H^H y). The MF works well when N is much larger than K, but it obtains the worst performance compared to the other linear detectors. The MF-based detector maximizes the received signal-to-noise ratio (SNR) of each stream by ignoring the impact of interference. In the case of ill-conditioned channels, the performance deteriorates badly for a square M-MIMO system [46]. ZF-Based Detector In the ZF-based detector, the aim is to make the received signal-to-interference ratio (SINR) as large as possible. The channel matrix H is effectively inverted, hence removing the impact of the channel [47]. The equalization matrix is based on the Moore-Penrose pseudo-inverse H⁺ = (H^H H)⁻¹H^H. However, to avoid the square-channel-matrix scenario, β has to be small. In the ZF-based detector, the signal can be estimated as x̂ = S(H⁺y). The ZF detector discards the noise effects; it works fairly well in interference-limited scenarios, but with high computational complexity. For channels with small-valued coefficients, the ZF- and MF-based detectors may suffer from noise enhancement. Therefore, the minimum mean square error (MMSE)-based detector is proposed to include the noise effect in the equalization process. MMSE-Based Detector In an MMSE detector, the mean square error (MSE) between x and the equalized signal is minimized. The MMSE detector takes into consideration the impact of noise through a regularized equalization matrix of the form A^H = (H^H H + σ²I)⁻¹H^H (for unit-energy symbols), where I is the identity matrix and σ² is the noise variance. In the MMSE detector, the signal is estimated as x̂ = S(A^H y). The MMSE in (8) relies on a reduction of the noise enhancement and needs an awareness of the SNR [48]. Thus, the MMSE outperforms the ZF- and MF-based detectors. As mentioned earlier, the column vectors of H are asymptotically orthogonal; thus, the MMSE detector achieves near-optimal performance. Matrix Inversion Methods A matrix inversion of the Gram matrix G is mandatory to estimate the signal. However, the computational complexity of linear detectors grows as the size of the M-MIMO system rises.
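To make the linear detection pipeline above concrete, the following minimal NumPy sketch (illustrative only: the normalization, the noise-variance regularizer, and the hard slicer are common textbook choices and are assumptions rather than the paper's exact equations) forms the matched-filter output and applies ZF or MMSE equalization followed by nearest-point slicing.

```python
import numpy as np

def linear_detect(H, y, noise_var, constellation, method="mmse"):
    """Sketch of ZF/MMSE linear detection for y = Hx + n (unit-energy symbols assumed)."""
    G = H.conj().T @ H                       # Gram matrix G = H^H H
    y_mf = H.conj().T @ y                    # matched-filter output H^H y
    if method == "zf":
        A = np.linalg.inv(G)                 # exact inverse, O(K^3)
    else:                                    # MMSE: noise-aware regularized inverse
        A = np.linalg.inv(G + noise_var * np.eye(G.shape[0]))
    x_eq = A @ y_mf                          # equalized soft symbols
    # slicer S(.): map each equalized symbol to the nearest constellation point
    idx = np.argmin(np.abs(x_eq[:, None] - constellation[None, :]), axis=1)
    return constellation[idx]

# Toy example: N = 128 BS antennas, K = 16 single-antenna users, QPSK symbols.
rng = np.random.default_rng(0)
N, K, noise_var = 128, 16, 0.05
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2 * N)
x = qpsk[rng.integers(0, 4, K)]
y = H @ x + np.sqrt(noise_var / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
print(np.mean(linear_detect(H, y, noise_var, qpsk) == x))   # fraction of correct symbols
```

The explicit np.linalg.inv call is exactly the costly step that the iterative methods discussed next are designed to avoid.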
In 2013, a new class of detection techniques for M-MIMO was introduced by Wu et al., and an approximate-matrix-inversion-based UL detector is illustrated in [49]. This detection class has been among the most popular detector families since its introduction in 2013. In M-MIMO, channel hardening is exploited to suppress the characteristics of small-scale fading, and it becomes dominant when the number of receive antennas (N) is much higher than the number of served users (K). For instance, the diagonal entries of H^H H grow gradually stronger compared to the non-diagonal entries as the size of the M-MIMO system gets larger [23]. The Gram matrix G = H^H H thus becomes increasingly diagonal, with the non-diagonal entries tending to zero and the diagonal components being close to N [50,51]. Computing G⁻¹ exactly requires a complexity of O(K³), which is not hardware-friendly for M-MIMO. This section presents the concepts of several iterative matrix inversion methods that can be used in low-complexity detectors. It also discusses the pros and cons of each method. Neumann Series The Neumann series (NS) is a leading approach to approximating the matrix inversion in M-MIMO detectors. It takes advantage of an iterative structure to progressively enhance the computing precision of the matrix inversion [52]. The Gram matrix G = H^H H is decomposed into G = D + E, where D holds the main diagonal entries and E the non-diagonal elements [53,54]. The Gram matrix inverse can then be approximated by the truncated series G⁻¹ ≈ Σ_{i=0}^{n−1} (−D⁻¹E)^i D⁻¹, which converges to G⁻¹ if lim_{i→∞} (−D⁻¹E)^i = 0 is fulfilled. In real-time applications, a sum of a finite number of terms is exploited in (10); hence, a fixed number of iterations n is required, where n is critical for obtaining good accuracy of the matrix inverse and directly affects the complexity. In [55], channel-aware decision fusion over MIMO channels is illustrated, and a low-complexity sub-optimal solution is proposed based on the NS. However, it should be noted that the NS method has a substantial performance loss when β ≈ 1. In addition, the convergence of the NS method is slow for a large number of user terminals. In other words, a higher complexity is required; otherwise the matrix inversion is inaccurate and, hence, the detector experiences a significant loss. Gauss-Seidel The GS method, or method of successive displacement, is an iterative method that avoids the matrix inversion [56]. In each iteration, it uses the most up-to-date estimates from the previous iteration. In the GS detector, the Hermitian positive definite matrix A is decomposed into A = D + L + U, where D, L, and U are the diagonal elements, the strictly lower triangular entries, and the strictly upper triangular entries, respectively. The GS iterative method estimates the signal as x̂^(t) = (D + L)⁻¹(x̂_MF − U x̂^(t−1)), where x̂_MF is the output of the MF method. To obtain a fast convergence rate and hence reduce the complexity, a good initialization is important in the GS detector. If the initial values x̂^(0) are not well known, they can be set to zero [57]. The GS detector is not well suited to parallel implementation because of its internally sequential iteration structure [58]. The GS detector achieves better performance than the NS detector with lower complexity. Successive Overrelaxation The GS detector still has a performance loss. Therefore, the SOR method is used because of its flexibility to achieve a good performance [59]. The SOR method is also an iterative method that avoids the matrix inversion; the signal is estimated by an iterative update in which ω is the relaxation parameter, and ω has a considerable impact on obtaining a high performance within a small n.
A suitable value of ω is required for convergence. If ω = 1, the SOR method becomes equivalent to the GS method. In general, the SOR method is convergent when 0 < ω < 2 [60]. In addition, the GS and SOR methods are not readily implemented on parallel computing platforms because triangular systems have to be solved at every iteration. Jacobi Method In the JA method, the signal estimate for a diagonally dominant system is obtained by the standard Jacobi update x̂^(t+1) = D⁻¹(x̂_MF − (A − D)x̂^(t)), which converges if lim_{t→∞} (I − D⁻¹A)^t = 0 holds. The initial estimate x̂^(0) is commonly taken from the scaled MF output. The JA detector obtains a good performance when β is small. In general, it can easily be implemented for parallel computation. In numerical methods, it is well known that the convergence speed of the JA method is slower than that of the GS and SOR methods. In [61], the convergence speed of the conventional JA method has been improved by a decision-aided JA method. Conjugate-Gradient Method The CG method is another way to avoid the matrix inversion and is an example par excellence of a Krylov subspace method. The CG detector estimates the transmitted signal through updates of the form x̂^(n+1) = x̂^(n) + α^(n)p^(n), where p^(n) is the search direction, chosen to be conjugate with respect to A, and α^(n) is a scalar step-size parameter. The CG method is numerically robust and can operate much better under close-to-ill-conditioned channels than the other algorithms [62]. It usually performs better than the NS and JA methods. It has also been implemented on a Xilinx Virtex-7 FPGA for a 128 × 8 system in [63]. However, it suffers from low parallelism and considerable correlation issues [64]. Richardson Method The RI method assumes that the matrix involved is symmetric and positive definite. Similar to the SOR method, it is quite sensitive to a relaxation parameter ω; to achieve faster convergence, 0 < ω ≤ 2/λ is required, where λ is the largest eigenvalue of the matrix [64]. The signal is estimated by the iteration x̂^(t+1) = x̂^(t) + ω(x̂_MF − A x̂^(t)). In a Richardson-based detector, the initial solution x̂^(0) can be set as a zero vector without loss of generality, as no prior knowledge of the final solution is available [65]. A constant-valued relaxation parameter ω has a high impact on achieving a satisfactory performance [66,67]. The value of ω can be determined from the eigenvalues. A detector based on the RI method is hardware-friendly and decreases the complexity from O(K³) to O(K²) [68]. However, a satisfactory performance is achieved only when n is large, which increases the complexity as well. Optimized Coordinate Descent Method Coordinate descent (CD) attains an approximate solution to a large number of convex optimization problems using a series of coordinate-wise updates. It employs single-variable updates to refine the estimated signal sequentially. Pre-processing and refinements are introduced to reduce the operations within each iteration, and the result is called optimized CD (OCD). A low-complexity detector based on the OCD method has been implemented in a high-throughput FPGA design for M-MIMO systems with high utilization efficiency [69,70]. Complexity Analysis In the complexity analysis, the dominant mathematical operations are the numbers of divisions and multiplications. To compute D⁻¹, K real divisions are required. The computational complexity of the NS method is O(K³), while the RI, SOR, GS, JA, and CG methods require O(K²). The OCD-based detector has the lowest complexity of O(K). Table 2 compares the complexity of detectors based on the several approximate matrix inversion methods.
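As a concrete illustration of how these schemes sidestep the explicit inverse, the sketch below (a simplified illustration in NumPy; the exact update equations, regularization, and initializations used in the cited works may differ) implements a truncated Neumann-series approximation and a conjugate-gradient solver, both driven by the matched-filter output H^H y. Slicing to the constellation would follow exactly as in the earlier linear-detector sketch.

```python
import numpy as np

def neumann_detect(H, y, noise_var, n_terms=3):
    """Truncated Neumann series applied directly to the MF output (vector form)."""
    A = H.conj().T @ H + noise_var * np.eye(H.shape[1])
    b = H.conj().T @ y                       # matched-filter output
    d_inv = 1.0 / np.diag(A)                 # inverse of the diagonal part D
    E = A - np.diag(np.diag(A))              # off-diagonal part E
    term = d_inv * b                         # i = 0 term: D^-1 b
    x = term.copy()
    for _ in range(1, n_terms):              # accumulate (-D^-1 E)^i D^-1 b terms
        term = -d_inv * (E @ term)
        x += term
    return x

def cg_detect(H, y, noise_var, n_iter=8):
    """Conjugate-gradient iterations solving (G + sigma^2 I) x = H^H y, no inverse."""
    A = H.conj().T @ H + noise_var * np.eye(H.shape[1])
    b = H.conj().T @ y
    x = np.zeros(H.shape[1], dtype=complex)  # zero initialization
    r = b - A @ x                            # residual
    p = r.copy()                             # initial search direction
    rs_old = np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs_old / np.vdot(p, Ap).real # step size along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < 1e-12:                   # converged
            break
        p = r + (rs_new / rs_old) * p        # next A-conjugate direction
        rs_old = rs_new
    return x
```

With only a handful of terms or iterations, both routines avoid the explicit matrix inversion; each additional term or iteration costs essentially one K × K matrix-vector product, which is the complexity trade-off summarized in Table 2.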
Results and Discussion The performance and the complexity of the NS, GS, SOR, JA, RI, OCD, and CG detectors are described in this section. A comparison among the iterative-matrix-inversion-based M-MIMO detectors is provided in terms of bit-error-rate (BER) performance versus SNR and the number of multiplications. In all simulations, we consider urban macro-cell line-of-sight (LOS) channels generated by QuaDRiGa, which produces realistic radio channel impulse responses for system-level simulations of mobile radio networks. Depending on the angular spread and the amount of diffuse scattering, the typical number of clusters is around 10 for the line-of-sight (LOS) propagation environment and 20 for non-LOS. The angular spread is around 20-90 degrees and the carrier frequency is 2 GHz. The considered M-MIMO configurations of user antennas and BS antennas are 16 × 128, 32 × 128, and 64 × 128, and the modulation scheme is 64QAM. Figure 1 shows the performance of detectors based on several approximate matrix inversion methods for 16 × 128 MIMO, where β = 16/128 = 0.125 ≪ 1. In such a scenario, a detector based on approximate matrix inversion methods needs a high n (i.e., n > 10) to achieve a satisfactory performance. At n = 15, the detector based on the CG method achieved a good performance, while the other methods required a higher n to achieve an acceptable performance. The CG method is numerically robust even when the channel is ill-conditioned. The SOR and OCD methods obtained BER = 10⁻² at SNR = 15 dB with n = 20 and n = 25, respectively. It is noteworthy that the NS, RI, and JA methods do not attain a satisfactory performance in realistic radio channels. For the intermediate 32 × 128 configuration, an unsatisfactory performance is obtained overall; however, the CG method achieves the MMSE performance at high n (i.e., n = 60), whereas detectors based on the other methods do not obtain a satisfactory performance even for high n. Figure 3 shows the performance of the several approximate matrix inversion methods when β = 64/128 = 0.5. It is clear that the performance of the detector is not satisfactory and that the approximate matrix inversion methods are not numerically robust when the number of users is relatively high compared to the number of antennas at the BS; moreover, a high n is required even to achieve BER = 10⁻¹. Figure 4 presents the number of multiplications of the approximate matrix inversion methods. It is clear that the CG method has the lowest number of multiplications and the best performance, as mentioned for Figure 1. On the other hand, the NS method and the OCD method had the highest numbers of multiplications, and they need more iterations to converge. Conclusions and Future Directions This research could be extended by investigating different M-MIMO setups. For instance, different numbers of LOS and non-LOS clusters, different angular spread values, and different carrier frequencies could be considered. To improve the performance-complexity trade-off, deep learning (DL)-based sphere decoding (SD) for M-MIMO UL data detection should be studied, where the radius of the hypersphere could be intelligently learnt by a deep neural network (DNN). In addition, the use of approximate matrix inversion methods, such as the NS and Newton iteration (NI) methods, should be investigated to reduce the computational complexity by reducing the searched space, which would make the SD algorithm more efficient.
We expect that combining the DNN with approximate matrix inversion methods will achieve quasi-optimal performance with low computational complexity. Machine learning could also be used to select the best algorithm to apply rather than to find the best signal estimate. In addition, the performance of sparsity-based M-MIMO detection should be investigated together with the iterative matrix inversion methods. This paper studied detectors based on several iterative matrix inversion methods in a realistic radio channel generated by QuaDRiGa. It was shown that such methods require a high n to achieve a satisfactory performance even when β ≪ 1. The CG method achieves better performance than the other methods and is robust in realistic radio channels, while the NS, RI, and JA methods did not achieve a satisfactory performance. Moreover, none of the methods is numerically robust or achieves a satisfactory performance when the number of users is relatively high compared to the number of antennas at the base station. In the complexity analysis, the CG method requires the lowest number of multiplications and has the best performance, while the NS and OCD methods require the highest number of multiplications. Conflicts of Interest: The authors declare no conflict of interest.
5,703.6
2020-03-28T00:00:00.000
[ "Engineering", "Computer Science" ]
An Automated Segmentation Method for Lung Parenchyma Image Sequences Based on Fractal Geometry and Convex Hull Algorithm Statistically solitary pulmonary nodules are about 6% to 17% of juxtapleural nodules. The accurate segmentation of lung parenchyma sequences of juxtapleural nodules is the basis of subsequent pulmonary nodule segmentation and detection. In order to solve the problem of incomplete segmentation of the juxtapleural nodules and segmentation inefficiency, this paper proposes an automated framework to combine the threshold iteration method to segment the lung parenchyma images and the fractal geometry method to detect the depression boundary. The framework includes an improved convex hull repair to complete the accurate segmentation of the lung parenchyma. The evaluation results confirm that the proposed method can segment juxtapleural lung parenchymal images accurately and efficiently. Introduction Lung cancer is a type of tumor with a very high morbidity and mortality in the clinical practice.Lung cancer at an early stage has a higher curable possibility than late-stage tumors, thus early diagnosis and treatment are significantly important to improve the patients' clinical situation [1].Statistically, early lung cancers usually present as a solitary pulmonary nodule (SPN).Clinically, it is crucial to segment the sequence of lung parenchymal images rapidly without compromising accuracy in order to subsequently achieve pulmonary nodule segmentation and diagnose benign and malignant features [2].Lung parenchyma segmentation refers to the division of the lung parenchyma in a number of areas of interest with specific properties.In terms of image processing, the use of computer-aided diagnosis of pulmonary diseases allows the identification of suspected pulmonary nodules.Pulmonary nodules appear as spheroidal abnormal tissue located in a complex juxtapleural structure in lungs' Computed Tomography (CT) images.The surrounding tissues, such as chest wall and blood vessels, may be attached to the pulmonary nodules, thus preventing their detection or segmentation. 
Statistically solitary pulmonary nodules are about 6% to 17% of juxtapleural nodules [3].Because these nodules are attached to the pleura and their grayness and density are similar to those of the pleura, most of the existing segmentation methods are based on gray-level thresholding.When segmenting, juxtapleural nodules often appear associated with a depression area in the lung parenchyma, termed juxtapleural nodular depression.This type of pulmonary nodule is the most difficult to segment, but the segmentation results have a significant impact on the accuracy of image analysis, auxiliary processing, and other post-processing steps.In addition, most of the existing methods of lung parenchyma segmentation are subject to either under-segmentation or edge leakage when applied to complex structures of lung parenchyma.Consequently, we cannot obtain the required area of interest, while retaining a good edge and the details of the outline of the lungs.In order to be able to remove a complete lung parenchyma containing the juxtapleural nodules and to improve the effectiveness of ancillary diagnosis, the depressed area must be repaired.The aim of this paper is the automated sequencing of CT images of juxtapleural nodules.To achieve this goal, this paper proposes an automated segmentation sequencing algorithm of CT images.The proposed algorithm is based on the fractal geometry and the improved convex hull algorithm.On completion of juxtapleural nodule segmentation, the proposed algorithm uses the fractal geometry method to detect the concave boundary for the concave area under study and, thereafter, it uses the improved convex hull algorithm to repair it, aiming at producing an accurate segmentation of the lung parenchyma.In addition, the proposed algorithm makes full use of the correlation between the sequence of CT images and the automated threshold segmentation sequence of CT images.To verify the effectiveness of the algorithm, 97 cases were chosen from 5800 sequential CT images for the segmentation test.The evaluation results confirmed that the proposed segmentation method significantly improved the segmentation speed reaching a satisfactory 92.45% Pixel Accuracy (PA) and 95.9% Intersection over Union (IoU) of lung parenchyma segmentation of juxtapleural nodules. Related Work There are a number of existing models and algorithms in the area of lung parenchyma segmentation and related to the work of this paper.The existing work can be mainly classified according to the methods used, which are the threshold [4][5][6][7][8], clustering [9], region-growing [10], graph theory-based [11], and active contour model [12,13] methods.Sudha and Jayasheree [14] proposed a method of partial lung cutting based on the combination of the threshold method and the opening operation in morphology in order to extract pulmonary nodules.However, when there are pulmonary nodules presenting lung adhesion, these nodular areas are excluded by the method of threshold segmentation and the region growing.The reason for this is the similar gray level of the pulmonary nodules and the surrounding lung parenchyma.In addition, the lung boundary appears concave, making difficult the extraction and identification of the tumor, blood vessels, and trachea.Therefore, it is necessary to analyze and repair the concave and depressed lung parenchyma boundary. Retico et al. 
[15] demonstrated that the rolling sphere CT image processing method of pulmonary nodules can effectively reduce false positives when analyzing juxtapleural stretch-type pulmonary nodules.Bian and Yan [16] combined the methods of threshold and regional growth to segment the lung parenchyma.They used the ball method to repair the extracted lung boundary.Messay et al. [17] used morphological methods to detect and segment the pulmonary nodules.Their computer-aided diagnostic system was sensitive in 82.66% of cases.In Zhou et al. [18] method of image segmentation for lung CT images with juxtapleural nodules, the iterative adaptive averaging algorithm and adaptive curvature threshold method was used to reinsert the missing juxtapleural nodules.Gong et al. [19] proposed a lung parenchyma segmentation algorithm, which was based on gray integral projection and fuzzy C means clustering combined with the rolling sphere method to repair the boundary area.However, the selection of the sphere radius was a critical problem.If the radius was too large, the lung parenchyma would be over-segmented.If the radius was too small, it would be under-segmentated, and the repaired lung boundary would be incomplete.Wei et al. [20] proposed a method to repair lung parenchyma boundary by combining an improved chain code algorithm and the Bresenham algorithm.Li et al. [21] presented a lung parenchyma segmentation algorithm combining the regional growth and the morphology methods.They also proposed a two-dimensional convex hull algorithm to repair the profile of the lung parenchyma.Compared to the previous work, this algorithm had a higher accuracy, and the depression of the juxtapleural nodules could be accurately repaired.However, similar to the previous convex hull algorithm, the proposed one was still unable to repair vascular depression. On the basis of the above review, the rolling-ball method and morphology-based method are the most commonly used methods of boundary restoration, characterized by simple and fast implementation.However, because of the randomness of the shape and size of pulmonary nodules, it is difficult to find morphological templates of appropriate size.Small-size templates cannot include all lung nodules, whereas oversizing can include a too large non-lung area and other border areas.The curvature-based method [22][23][24] is also a commonly used method, which considers the sudden change of the boundary curvature corresponding to the defective boundary region.However, this assumption is not suitable in case of sudden changes in noise and local curvature.The boundary repair of by this method is not satisfactory when these sudden changes are present in the pulmonary boundary. An observation regarding the existing lung parenchymal segmentation algorithms is that the majority of the images are segmented as single-slice CT images, ignoring the correlation between adjacent images in a sequence.In recent years, a large number of scholars have studied the segmentation methods of sequences of lung parenchymal images. Geng et al. [25] used the grayscale threshold iteration method to rapidly and automatically select seed points for the region growth and to extract each lung real image in the sequence of CT images.However, the algorithm is sensitive to the background noise.Liming et al. 
[26] proposed a new framework of lung parenchyma segmentation. An optimized thresholding method and a boundary tracking algorithm were used to segment the lung parenchyma. The method could effectively eliminate the influence of background noise, but at the same time it could also eliminate some of the lung parenchyma during processing. Yan-hua et al. [27] proposed a method of 3D connectivity growth. The method used adaptive thresholds to select seed points, 3D connectivity markers to identify the lung parenchyma, and a morphological method to remove tracheal noise. In the final output, the lung parenchyma mask was generated and the lung parenchymal images were obtained. The produced segmentation results were satisfactory, but the processing of the juxtapleural traction characteristics of the lung parenchyma was not effective. Luo et al. [28] used an improved active contour model algorithm. They needed to manually circle the initial contour and semi-automatically segment the edges of the lung images in the serial CT images. Their segmentation results were satisfactory, but the method was not time-efficient.

The Method
This section proposes an algorithm to perform an accurate segmentation of lung parenchyma presenting juxtapleural traction nodules. On the basis of fractal geometry and the improved convex hull repair of pleura-type depressions, this algorithm includes an initial segmentation of CT images with automated threshold iteration, removal of bronchial tissue, and detection of the pulmonary border. The flow of the lung parenchyma segmentation algorithm is shown in Figure 1.

The Construction of the Initial Contour
Among the existing algorithms of lung parenchyma segmentation, the threshold segmentation algorithm has the advantage of high calculation speed and accuracy. Considering the high correlation between adjacent slices of sequential CT images, we chose the threshold segmentation method to pre-process the original images and obtain the initial contour of the sequential pleural-traction images. The initial outline contains some noise derived from the trachea, bronchus, and other tissues. In order not to miss any important information for the lung parenchyma segmentation, such noise was tolerated. In this section, the open and close operations were used to smooth and fill the edge and interior of the lung parenchyma.
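As a rough illustration of the smoothing step just described, the sketch below applies morphological opening and closing (plus hole filling) to a binary lung mask with scipy.ndimage. The disk radius and the toy mask are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def smooth_and_fill(mask, radius=3):
    """Open then close the lung-parenchyma mask to smooth its edge and fill small
    gaps; the disk radius of the structuring element is illustrative."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    selem = x**2 + y**2 <= radius**2                           # disk-shaped structuring element
    opened = ndimage.binary_opening(mask, structure=selem)     # remove small protrusions and noise
    closed = ndimage.binary_closing(opened, structure=selem)   # close small gaps along the boundary
    return ndimage.binary_fill_holes(closed)                   # fill interior holes

# Example on a toy binary mask with a small interior hole
mask = np.zeros((128, 128), dtype=bool)
mask[30:100, 20:60] = True
mask[55:60, 40:45] = False
smoothed = smooth_and_fill(mask)
```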
The Automated Threshold Iterative Segmentation of Sequential CT Images
It is difficult to choose a suitable global threshold to obtain the ideal initial contour because of the differences in the gray levels of lung nodules. Taking into account the high correlation between adjacent slices of sequential CT images, the automated threshold iteration method was chosen to construct the initial contour of the lung parenchyma of sequential CT images; the algorithm is shown in Algorithm 1. Figure 2 (a-d) displays a sequence of images: the first row shows the original sequential CT images and the second row the results of the automated threshold iterative segmentation of the lung parenchyma. Because of the high correlation between sequential CT images, the optimal threshold of image i is used as the initial threshold of image i + 1. This decreases the number of iterations and significantly improves the speed of segmentation.

The Removal of Bronchus and Trachea Noise
On completion of the initial segmentation, a lung parenchyma image is obtained by the automated threshold iteration method, and the corresponding globally optimal threshold T is obtained simultaneously. The image I(x, y) can be converted to a binary image I_bin(x, y) through formula (3). On completion of the above series of operations, it is possible to obtain the contours of some interfering objects and identify the noise. In our previous work [29], we improved the region growth method and used the open and close operations to obtain the final lung parenchyma images by smoothing and filling the edges and the interior of the pulmonary nodules. The process of trachea and bronchus removal and lung contour refining is shown in Figure 3.

Fractal Geometry-Based Pulmonary Boundary Detection
Because of the similarity of the gray values between the juxtapleural nodules and the chest, some nodules overlap with the juxtapleural blood vessels. Therefore, the pulmonary nodule region attached to the lung wall is often excluded from the extracted lung region. However, if this part of the region is excluded, the accuracy of the computer-aided diagnosis system is greatly affected. Thus, it is essential to repair the area attached to the lung wall.
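Algorithm 1 and formula (3) are not reproduced in the text above, so the following sketch follows a standard iterative (isodata-style) threshold selection consistent with the description at the start of this section, including the carry-over of slice i's optimal threshold as the initial threshold for slice i + 1. The binarization polarity, tolerance, and toy data are assumptions.

```python
import numpy as np

def iterative_threshold(img, t0=None, tol=0.5):
    """Iterative threshold selection: split the histogram at T, reset T to the mean
    of the two class means, and repeat until T stabilizes."""
    t = float(img.mean()) if t0 is None else float(t0)
    while True:
        low, high = img[img <= t], img[img > t]
        if low.size == 0 or high.size == 0:          # degenerate split; keep current T
            return t
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

def segment_sequence(slices):
    """Binarize a CT sequence; the optimal threshold of slice i seeds slice i + 1,
    which cuts the number of iterations needed per slice."""
    t, masks = None, []
    for img in slices:
        t = iterative_threshold(img, t0=t)
        masks.append(img > t)    # polarity of I_bin(x, y) depends on formula (3), assumed here
    return masks

slices = [np.random.randint(0, 255, (64, 64)).astype(float) for _ in range(4)]
masks = segment_sequence(slices)
```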
Adaptive Meshing
Although the existing patching methods are effective, they all require human intervention and do not adapt well to different samples. These repair methods only take the local properties into account, without considering the other boundary properties. In order to improve the efficiency of the segmentation, we meshed only the smallest circumscribed rectangle (a × b) containing the lung parenchyma. Since the two-dimensional contour of the lung varies greatly from top to bottom over the entire CT sequence, this section presents an adaptive mesh for the CT images, as shown in formula (4). In formula (4), N_1 is the number of connected regions in the image, A_1 is the area of all lung regions, and A_r is the area of the smallest circumscribed rectangle.

Fractal Geometry-Based Pulmonary Boundary Detection
The lung border produced by the segmentation is a curve varying according to a certain rule. Fractal theory can investigate, describe, and analyze complex, random, disordered, or otherwise difficult-to-quantify objects at a deeper level. In particular, the fractal dimension is an effective way to investigate and describe the degree to which a fractal fills space [30]. To do so, the entire image is abstracted as a set F in two-dimensional space, and analyzing the fractal features of its boundary is equivalent to calculating the fractal dimension of the two-dimensional figure. The box dimension is a simple and automated method, which can be applied with or without self-similarity [31]. This method is used to calculate the fractal dimension and detect the points to be repaired; it is fast and suitable for the juxtapleural traction nodules studied in this paper. According to the Graham method, the CT images are searched to obtain the boundary points, and the grid cells containing boundary points are stored as a grid set Grid = {Grid_i | i = 1, 2, 3, . . ., n}. N(s) denotes the number of grid squares of side length s that intersect the image [32]. The slope of the line defined by ln(N(s)) versus ln(1/s) is the fractal dimension, and the linear regression equation used to estimate the fractal dimension is given in formula (5). The statistical histogram of the fractal dimension over the area blocks is shown in Figure 4. From Figure 4 it is evident that the fractal dimensions show a trend of polarization: the fractal dimension is small where the pulmonary border is smooth, whereas where the lung boundary is highly random and complex, owing to the irregularities caused by pulmonary nodules, the fractal value is larger.
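Formula (5) itself is not shown above, but the box-counting estimate it describes, a linear regression of ln N(s) against ln(1/s), can be sketched as follows; the box sizes and the toy boundary are illustrative.

```python
import numpy as np

def box_counting_dimension(boundary, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a binary boundary image as the slope
    of ln N(s) versus ln(1/s), fitted by linear regression."""
    h, w = boundary.shape
    counts = []
    for s in sizes:
        n = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if boundary[i:i + s, j:j + s].any():   # this box intersects the boundary
                    n += 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes, dtype=float)), np.log(counts), 1)
    return slope

# Toy example: a straight edge inside one grid block has dimension close to 1
block = np.zeros((64, 64), dtype=bool)
block[32, :] = True
print(box_counting_dimension(block))
```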
Adaptive Threshold-Based Defect Boundary Selection
Because the proportion of defective boundaries caused by adhesive pulmonary nodules is small, the number of small regions containing this part of the border is small as well. In order to select the defective boundary accurately, an adaptive threshold is proposed in formula (6) for the sequential CT images. In formula (6), the fractal dimension of the i-th grid block is compared against the average fractal dimension over the grid and its variance. The mean square error (MSE) is calculated using formula (7) [33]. On the basis of the results of a number of experiments, the mean square error is small when n is 1. However, fewer nodules could be identified by setting n = 1, because the extent of the pulmonary border affected by pulmonary nodules is relatively small. Therefore, the final adaptive threshold is determined by formula (8). For a grid block, if the fractal dimension of the internal boundary is larger than the threshold, the lung boundaries in the block need to be repaired and are stored in the grid; if the fractal dimension of the internal boundary is less than the threshold T_f, the lung boundaries within the block need not be patched. Through the fractal geometry method, the defective boundary is obtained, and the repair rate and accuracy are improved.

The Improved Convex Hull Repair Algorithm
On the basis of the review of existing work, this paper proposes a lenticular edge repair algorithm. This algorithm can repair not only the juxtapleural nodular depression, but also the depression between the two lungs, adjacent to the heart and mediastinum. Based on the convex hull principle, the algorithm avoids the over-repair and under-repair that occur with the rolling-ball or morphological methods. The detailed steps of the proposed algorithm are shown in Algorithm 2: for p_0 to p_n, select p_0, p_1, and p_2 from T, and repair the segment between two points removed from P_i. The final result is the set of edge points of the lungs in Grid_r, which is combined with the unpatched grid blocks to give the repaired lung parenchyma images.
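Only fragments of Algorithm 2 appear in the text, so the following is a simplified sketch of the convex-hull-based repair idea: within a grid block flagged by the fractal-dimension threshold, the depressed boundary segment is replaced by the corresponding part of the convex hull of the block's boundary points. The point sets, the defect mask, and the notch-shaped toy boundary are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import ConvexHull

def repair_block(boundary_pts, defect_mask):
    """Replace boundary points flagged as defective (fractal dimension above T_f)
    by vertices of the convex hull of the block boundary (the paper builds the hull
    with the Graham method; ConvexHull is used here only as a stand-in)."""
    hull = ConvexHull(boundary_pts)
    hull_pts = boundary_pts[hull.vertices]
    kept = boundary_pts[~defect_mask]              # boundary points outside the depression
    return np.vstack([kept, hull_pts])             # repaired edge-point set for this grid block

# Toy block: a roughly circular boundary with an inward, nodule-like notch
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
r = 1.0 - 0.3 * np.exp(-((theta - np.pi) ** 2) / 0.05)   # depression near theta = pi
pts = np.c_[r * np.cos(theta), r * np.sin(theta)]
defect = np.abs(theta - np.pi) < 0.3                      # points on the depressed segment
repaired = repair_block(pts, defect)
```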
The Dataset In order to verify the effectiveness and the real-time characteristics of the proposed feature extraction method, we have applied it to the Lung Image Database Consortium image collection (LIDC-IDRI) dataset.The LIDC-IDRI database is the largest open lung nodule database in the world, which contains 1080 cases.For each of the images in the samples, four experienced chest radiologists performed a two-stage diagnosis.In the first stage, each radiologist independently diagnosed and marked the location of the patient.In the subsequent second stage, a radiologist independently reviewed the other three radiologists' marks and gave his/her own final diagnosis.Such a two-stage process can verify all results as completely as possible without the interference of other radiologists.Therefore, the diagnosis results can be used as a gold standard reference.According to the diagnostic results of corresponding lesions, XML files were marked, and the CT images of 50 cases of sequential juxtapleural nodules were selected.At the same time, some of the image datasets used in the experiment were collected from the positron emission tomography/computed tomography (PET/CT) detection center of a collaborating hospital.In the experiment, the imaging data of 47 patients in the hospital were selected.For each patient there were 299 serial lung CT images, and the sequential images of juxtapleural-drawn pulmonary nodules were selected for the experiment. Analysis of the Experimental Results In the experiment, 50 CT images were selected for training and 47 images were used to test the accuracy and time complexity of the algorithm.In order to verify the accuracy of the segmentation algorithm, we used the proposed method, region-growing, watershed, and rolling-ball methods to segment the CT images of juxtapleural nodules.Considering the interference of multiple factors, the accuracy of segmentation was not judged in the experimental process.Therefore, the results of various segmentation algorithms were compared with the comprehensive lung parenchyma area manually segmented by four experienced radiologists.In the course of the experiment, the manually segmented images in the medical records were the ultimate gold standard. 
Qualitative Assessment
The CT sequence contained a large number of images. In this section, the same sequence of juxtapleural nodules was segmented by the four kinds of algorithms, as shown in Figure 5. For the CT images with intact lung parenchyma, shown in the first and third rows of Figure 5, all the algorithms could ensure the integrity of the segmentation. However, on the basis of the automated threshold segmentation of the CT sequence images, seed points could be selected manually and the threshold could be passed on, greatly improving the efficiency of segmentation. For the sequence images of lung parenchyma with juxtapleural nodules, shown in the second row of Figure 5, the RG and Watershed algorithms lost the juxtapleural nodules, and the repair results of the rolling-ball method lost some parts of the juxtapleural nodules. Promisingly, our method could ensure the integrity of the lung parenchyma segmentation (f). For the non-connected areas of the lung parenchyma, the regional growth algorithm lost some parts of the lung parenchyma, while the other algorithms' segmentation results were more complete. A sample could contain multiple nodules in the same CT slice at the same time. We chose the same methods as in Figure 5 for an experimental comparison, and the results are shown in Figure 6. The results showed that the regional growth algorithm still lost some parts of the lung parenchyma in the non-connected areas. The comparison between column (e) and column (f) showed that the rolling-ball method provided better repair results for a single large juxtapleural nodule, whereas the method described in this paper could repair multiple juxtapleural nodules at the same time and provided better results. In order to verify the repair ability of our method for juxtapleural concave areas, we compared the proposed algorithm and the "ball-and-roll" method on the same sequence of slices, as shown in Figure 7. In the rolling-ball method, the repair radius has an important effect on the result. For the concave area caused by the juxtapleural nodules, if the radius of the rolling-ball method is too small, under-segmentation occurs. Because of the uncertainty in the size of a juxtapleural nodule, setting the threshold manually was not effective and the segmentation results were unsatisfactory. In contrast, Figure 7 shows the effectiveness of the algorithm proposed in this paper.

Quantitative Analysis
In this paper, the pixel accuracy, the cross ratio (IoU), and the time complexity were used as the quantitative metrics for the evaluation of the segmentation results. In the following analysis, we assume that the actual segmented image is P and the reference segmented image is G.

Pixel accuracy
Pixel Accuracy (PA) is expressed as the ratio of correctly labeled pixels to total pixels. Assume that P_i = {p_i1, p_i2, . . ., p_im} is the set of correctly labeled pixels and G_j = {g_j1, g_j2, . . ., g_jm} is the corresponding set of gold-standard pixels.
The higher the pixel accuracy, the better the image segmentation; the corresponding expression is given in formula (9). According to formula (9), the higher the value of PA, the greater the overlap between the segmentation result and the gold standard. The PA curves for lung parenchymal image segmentation with juxtapleural nodules are shown in Figure 8. It is clearly seen that both RG and Watershed have lower curves than the other methods: both methods lose some juxtapleural nodules, resulting in a P_i with fewer pixels. The overall trend of the PA of the rolling-ball method is toward higher values than those of the other two algorithms; in a number of previous experiments, r = 65 was usually used, but compared with the algorithm described in this paper, the repair of the juxtapleural indentation is still deficient. The PA curve values obtained with the method described in this paper represent the most accurate segmentation results.
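Formula (9) is not reproduced above, so the sketch below computes PA from its verbal definition (ratio of correctly labeled pixels to total pixels), together with the IoU introduced in the next subsection; the toy masks are illustrative.

```python
import numpy as np

def pixel_accuracy(pred, gold):
    """PA: fraction of pixels whose label matches the gold-standard segmentation."""
    return np.mean(pred == gold)

def intersection_over_union(pred, gold):
    """IoU: |P intersect G| / |P union G| for the segmented (foreground) region."""
    inter = np.logical_and(pred, gold).sum()
    union = np.logical_or(pred, gold).sum()
    return inter / union

pred = np.zeros((64, 64), dtype=bool); pred[10:50, 10:50] = True
gold = np.zeros((64, 64), dtype=bool); gold[12:52, 12:52] = True
print(pixel_accuracy(pred, gold), intersection_over_union(pred, gold))
```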
Intersection over Union
Intersection over Union (IoU) is a probability value used to compare the similarity and dispersion between sample sets and reflects the degree of coincidence between two segmented images. The higher the cross ratio, the better the image segmentation result. Assuming that the gold standard of the nodules in the image is G and the segmentation result of an automated method is P, the IoU is calculated by formula (10):
IoU = |G ∩ P| / |G ∪ P| (10)
From Figure 9, it is evident that the IoU curves of the proposed algorithm and of the rolling-ball method are significantly higher than those of the RG and Watershed methods. Furthermore, the IoU curve of the algorithm described in this paper is slightly higher than that of the rolling-ball method. At the same time, the segmentation results of the algorithm proposed in this paper are the most similar to the gold standard.

Time complexity
The algorithm was also evaluated for time complexity. Taking the correlation between sequential images into account, we compared the shortest, longest, and average times consumed for segmenting the sequence of CT images. The results are shown in Figure 10. As shown in Figure 10, the proposed method had the shortest processing time, the watershed algorithm had the longest processing time, and the region-growing algorithm had a processing time between the two. The average time for processing a slice was 0.75 s in Figure 10a and 0.63 s in Figure 10b. These average times are significantly shorter than those of the other two methods. Moreover, the average processing time decreased as the number of sequence images increased. This demonstrates that the automated threshold transfer takes the correlation between adjacent images into consideration and is able to find a threshold adapted to all the images, speeding up the sequence image segmentation.

In summary, this article compares the evaluation results of the four segmentation methods, as shown in Table 1. The experimental results demonstrate that, for the segmentation of juxtapleural nodules, both the method of this paper and the rolling-ball method are more efficient than the regional growth and watershed methods. In addition, the segmentation time of the method proposed in this paper is much lower than that of the other three methods.

Conclusions
Juxtapleural nodules have a very high probability of being malignant. Sequence image segmentation of the lung parenchyma with juxtapleural nodules is the basis for the subsequent pulmonary nodule segmentation and detection [36][37][38]. On the basis of fractal geometry and the improved convex hull algorithm, an automated segmentation algorithm of sequential CT images is proposed in this paper.

Figure 1. Flow diagram of the lung parenchyma segmentation algorithm.
Figure 2. Results of the automated threshold iterative segmentation of lung parenchyma.
Figure 3. Process of trachea and bronchus removal and lung contour refining. (a) Binarization of the coarse lung image; (b) Extraction of minimum bounding rectangle; (c) Selection of seed points with the left and right scanning algorithm; (d) Final lung mask.
Figure 4. Fractal dimension of the statistical histogram.
Algorithm 2. The improved convex hull repair algorithm. Input: Grid block to be patched. Output: Patched parenchyma edge set. 1: The boundaries within the grid block to be modified are stored in the point set Q = {}. 2: Update P_i with convex hull theory.
Figure 5. Segmentation of the same sequence of juxtapleural nodules by four kinds of algorithms. The images were classified in the following groups: (a) original CT image; (b) manual segmentation results obtained by a doctor; (c) improved segmentation algorithm; (d) segmentation results of the improved region growth (RG) algorithm; (e) regional growth and rolling-ball method repair algorithm segmentation results; (f) algorithm segmentation results.
Figure 6. Segmentation of images containing two juxtapleural nodules using the four algorithms.
Figure 7. Comparison between the proposed algorithm and the rolling-ball method to repair the same sequence: (a) Six consecutive CT images; (b) The gold standard; (c) r = 35 patch results; (d) r = 65 patch results; (e) patch results of our method.
Figure 8. Comparison of the curves of pixel accuracy for the four methods.
Figure 9. Comparison of the curves of Intersection over Union (IoU) for the four methods.
Figure 10. Processing time of three methods. (a) Average processing time for each set of 40 images; (b) Average processing time for each set of 60 images.
Table 1. The segmentation results of the four methods.
7,754.6
2018-05-21T00:00:00.000
[ "Medicine", "Computer Science" ]
Identifying quantum effects in seeded QED cascades via laser-driven residual gas in vacuum The discrete and stochastic nature of the processes in the strong-field quantum electrodynamics (SF-QED) regime distinguishes them from classical ones. An important approach to identifying the SF-QED features is through the interaction of extremely intense lasers with plasma. Here, we investigate the seeded QED cascades driven by two counter-propagating laser pulses in the background of residual gases in a vacuum chamber via numerical simulations. We focus on the statistical distributions of positron yields from repeated simulations under various conditions. By increasing the gas density, the positron yields become more deterministic. Although the distribution stems from both the quantum stochastic effects and the fluctuations of the environment, the quantum stochastic effects can be identified via the width of the distribution and the exceptional yields, both of which are higher than the quantum-averaged results. The proposed method provides a statistical approach to identifying the quantum stochastic signatures in SFQED processes using high-power lasers and residual gases in the vacuum chamber. Introduction High-intensity lasers can provide extreme conditions as a powerful tool for plasma-based accelerators and novel radiation sources [1][2][3][4].It is expected that the focal intensities of 10-100 PW class lasers could approach beyond 10 23 W cm −2 , where strong-field quantum electrodynamics Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence.Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI. Spontaneous electron-positron pair creation out of the vacuum can take place when the field strength is higher than the Sauter-Schwinger critical field E S ≈ 1.3 × 10 16 V cm −1 [39].In strong laser fields, pair creation can be triggered at a much lower field strength [5,40].When the field strength is sufficiently strong, electron-positron pairs which will emit photons capable of decaying into new pairs, leading to QED cascades.It has been shown that the onset of seeded cascade (with one electron at the beginning) can be facilitated at an intensity around 10 24 W cm −2 [18][19][20]41].Electronpositron plasma can be exponentially generated via selfsustained gamma photon radiation and pair-production with even one electron in the propagating or standing wave formed by ultra-intense laser pulses [17,18,21].As the plasma density grows, the laser can eventually be absorbed [42], which determines the upper limit of the strong field in a non-ideal vacuum [17,43].Since seeded QED cascades couple both the stochasticity of photon radiation and pair-production, a strong quantum nature emerges.For instance, the positron yield is stochastic in a non-ideal vacuum setup [20,21]. 
In this article, we show that the positron yields in seeded QED cascades provide a convenient signal to identify the stochasticity of QED cascades. The statistical distribution of the yields among multiple simulations conforms to a specific distribution, the width of which is larger than that of photon radiation alone (without coupling) and much larger than that of the QED-averaged/semi-classical results. This signature indicates the stochastic nature of QED cascades and the coupling effect of photon radiation and pair-production. The cascade can be triggered by the residual gas in vacuum chambers without the need to fix or inject electrons at the laser foci, which can potentially test strong-field theory on 100 PW-class laser systems [1,3,4]. The quantum stochastic effects of QED cascades can thus be identified by evaluating, with a statistical method, the positron yield distribution of multiple laser shots hitting the residual gas.

QED Monte-Carlo cascades
The two basic QED processes, nonlinear Compton scattering (NCS) and the nonlinear BW process, play the most important roles in QED cascades. Under the strong-field assumption, the instantaneous photon emission rate is given by equation (1) [5,44], where α_f ≈ 1/137 is the fine structure constant, γ_e the electron relativistic factor, τ_c = ℏ/(m_e c^2), ℏ the reduced Planck constant, m_e the electron mass, c the speed of light, χ_e = |F_μν p^ν|/(E_S m_e c) and χ_γ = |F_μν ℏk^ν|/(E_S m_e c) the Lorentz-invariant quantum parameters of the electron and the photon, F_μν the electromagnetic field tensor, p^ν the electron four-momentum (ℏk^ν the photon four-momentum), and E_S ≈ 1.32 × 10^18 V m^-1 the Sauter-Schwinger critical field [39]. K_1/3(s) and K_2/3(y) are the modified Bessel functions, with y = χ_γ/[3χ_e(χ_e - χ_γ)]. Similarly, the rate of photons decaying into e+e- pairs is given by equation (2) [5,44], where γ_e is the relativistic factor of the newborn electron and γ_γ the photon relativistic factor.

In the codes commonly used to simulate interactions with QED processes, the point-like QED events take place on the classical trajectory and are implemented by the Monte-Carlo method [44][45][46] at each time step. Each electron is randomly assigned an optical depth which decreases according to the radiation probability rate, and a radiation event is triggered when the optical depth drops below zero. The photon energy is then determined via inverse sampling of the photon spectrum, equation (1). BW pair production follows a similar process, where photons decay into electron-positron pairs.
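A minimal sketch of the optical-depth Monte-Carlo procedure described above is given below. The emission rate is left as a placeholder array (in a real code it would be evaluated from equation (1) using χ_e and γ_e at each particle position), and the time step and particle number are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def step_emission(tau, rate, dt):
    """Advance the optical depth of each electron by one time step.
    `rate` stands in for the instantaneous photon-emission probability rate of
    equation (1); an emission event fires when the optical depth crosses zero."""
    tau = tau - rate * dt
    emitted = tau <= 0.0
    # Here the photon energy would be drawn by inverse-sampling the emission spectrum,
    # and the emitted photon handed to the BW pair-production module; afterwards the
    # optical depth is re-initialized for the next event.
    tau[emitted] = -np.log(1.0 - rng.random(emitted.sum()))
    return tau, emitted

n_electrons, dt = 1000, 1e-2
tau = -np.log(1.0 - rng.random(n_electrons))        # initial optical depths
rate = np.full(n_electrons, 5.0)                    # placeholder rate, not from equation (1)
for _ in range(100):
    tau, emitted = step_emission(tau, rate, dt)
```

BW pair production would be tracked with an analogous optical depth per photon, using the decay rate of equation (2).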
QED-averaged cascades
For the purpose of clarifying the quantum stochastic effects, a QED-averaged estimate of the positron yield during cascades [17,21,40,42] is carried out. By using the analytical growth rate of the seeded cascade, the differences between QED-MC and QED-averaged cascades can be attributed to the stochastic effects of QED, and the extra randomness of the seeded electrons' initial distribution can be decoupled. The growth rate Γ of pairs in a cascade induced by two linearly polarized laser pulses has the empirical fit of [42], where α ≈ e^2/(ℏc) with e the electron charge, and χ_e ≅ 1.24 μ^(3/2) with μ = a/(α a_S). Here a = eE/(m_e ω_0 c) is the normalized laser amplitude, with E the peak electric field, ω_0 the laser angular frequency, and a_S = eE_S/(m_e ω_0 c). The number of pairs produced by a single electron can then be estimated as N_± ∼ e^(Γt) - 1. In the standing wave formed by two colliding laser pulses, the peak field strength experienced by electrons near the collision plane can be approximated by f_a(r_i) = a_0 e^(-r_i^2/w_0^2), with r_i the initial radial position. For N seed electrons distributed at different r_i, the final electron-positron yield is therefore modified by the r_i, as given by equation (4). The parameter t_eff is inferred from the QED-MC results for consistency, by equating N_± with the mean yield of 1000 QED-MC runs.

Test-particle simulation
Obtaining the distribution of the positron yields requires thousands of repeated simulations of QED cascades, and it is computationally expensive to carry out particle-in-cell (PIC) simulations. Therefore, a test-particle simulation is adopted. The test-particle algorithm solves the Lorentz equation with the Boris pusher and simulates photon emission and pair-production with the Monte-Carlo method, as adopted by most QED-PIC codes. Unlike PIC, which solves the Maxwell equations on a grid and interpolates fields onto shaped macro-particles, the test-particle algorithm evaluates the electromagnetic fields at the instantaneous particle positions according to predefined laser profiles. We adopt focused Gaussian pulses [47] in our calculations, linearly polarized (LP) along the y-direction and propagating along the x-direction with a wavelength of λ = 800 nm. Since the test-particle algorithm ignores the interaction between particles, simulation parameters such as pulse length and field strength are restricted to the onset of the QED cascade region, where the number of generated electron-positron pairs is not too high and a large number of simulations is possible. In such situations, collective plasma effects can be ignored owing to the relatively low plasma density. In the following simulations, we choose a field strength of a_0 = 700, a spot size of w_0 = 4 µm, and a pulse length of τ_L = 4T_0, where w_0 and τ_L correspond to the 1/e widths of the Gaussian profile. The simulation time step is dt = T_0/100, which has been shown to be sufficient to model the NCS and BW processes in this parameter region [45]. We chose a relatively small spot size and a short pulse length to suppress further exponential growth of the produced pairs [19,22] for computational reasons. According to the findings presented below, longer or larger pulses both result in stronger stochastic signatures.
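The QED-averaged estimate can be sketched as follows. Since the empirical fit for Γ is not reproduced above, the growth rate is passed in as a user-supplied function, and the toy rate, the seed radii, and t_eff below are placeholders rather than values from the paper.

```python
import numpy as np

def qed_averaged_yield(r_seed, a0, w0, t_eff, growth_rate):
    """QED-averaged (semi-classical) pair yield: each seed at radius r_i experiences a
    peak amplitude a0*exp(-r_i^2/w0^2) and contributes exp(Gamma*t_eff) - 1 pairs on
    average; the total is summed over all seeds, as in equation (4)."""
    a_local = a0 * np.exp(-r_seed**2 / w0**2)
    gamma = growth_rate(a_local)            # cascade growth rate Gamma(a); empirical fit not reproduced here
    return np.sum(np.expm1(gamma * t_eff))

# Illustrative use: t_eff would be calibrated so that the mean matches the QED-MC runs.
rng = np.random.default_rng(1)
r_seed = np.abs(rng.normal(0.0, 2.0, size=100))       # seed radii in microns (illustrative)
toy_rate = lambda a: 0.5 * np.log(1.0 + a / 700.0)    # placeholder, NOT the paper's fitted Gamma
print(qed_averaged_yield(r_seed, a0=700, w0=4.0, t_eff=10.0, growth_rate=toy_rate))
```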
Fixed seeded electrons We start from the simplest situation where two colliding LP laser pulses with electrons are fixed at the origin to demonstrate the quantum stochastic effects and statistic scaling law in QED cascades.More realistic considerations will be discussed later.The evolution of the positron yields for a different number of seeded electrons N seed =1, 10, 100, 1000 is shown in figure 1.One can see that although each set of simulations has the same initial condition, the growth and final yields may significantly differ from each other when the number of seeds is low, indicating the quantum stochastic feature of the QED cascades [21].It should be noted that for N seed = 1 cascades are not triggered in most simulations for the considered parameters, and the positron yields distribution gathers near zero.By increasing the number of seeded electrons, the positron yield coverages near the mean value N marked by the dashedlines, indicating a more deterministic behavior of the QED cascades.One can find that N increases almost linearly with N seed in statistics, which could be modeled by the analytical calculation [21,42]. Another noticeable feature is the exceptional positron yields that significantly exceed the mean yield and other results, as indicated in figure 1.For N seed = 1 some simulations generate more than 50 positrons and one simulation generates about ∼100 positrons, much larger than the mean value at about N ≈ 7.As N seed increases, such a deviation is significantly depressed, and the statistics become more deterministic, as shown by figures 1(g) and (h).It can be predicted that the QED cascades triggered by thin foil [42,48,49] or gases of moderate density [19,22,50] are much more predictable and the exceptional yields will be absent. The positron yield distribution and the exceptional shots reflect the quantum/stochastic nature of photon radiation, radiation-reaction, pair-production, and their coupling.It requires a certain probability for electrons/positrons to emit high-energy photons and for those photons to decay into pairs at specific phases, resulting in abundant positron yields that stem from a sequence of incessant improbable QED events.It should be emphasized that individual photon radiation or pair-production does not lead to similar effects, which will be discussed later.To quantitatively describe the quantum effects, the normalized positron yield distributions are illustrated in figure 2(a).The relative width of the distribution can be modeled by the standard deviation σ which is shown in figure 2(b) along with the mean yield N. 
For N seed = 1, the distribution displays the strongest quantum stochasticity, wherein cascading is not triggered in most simulations, but it produces a maximum yield that is ten times greater than the mean yield.As N seed increases, both the relative widths and exceptional yields shrink and the statistic distributions become more concentrated and deterministic, as shown by the black-dotted line in figure 2(b).This can be interpreted as the transition of the statistics from quantum to classical, where each particle exhibits quantum behaviour but the statistics of the particle system behaves more classically as the particle number increases.It should be noted that photon radiation and pair-production are quantum processes with no analogy in classical physics, but the statistics can be deterministic and classical.At the same time, N increases linearly to N seed as expected, following equation ( 4), which models the quantum-averaged positron yields. The above results reflect the quantum stochastic nature of the coupling between the photon radiation and pair-production as mentioned before, and individual photon radiation or pairproduction processes will not induce stronger stochasticity.The normalized photon yields of seed electrons without BW pair production are shown in figure 3 that presents the individual stochasticity of photon radiation.The photon yield distributions are more concentrated than the results in figure 2, indicating lower uncertainty of the individual processes.In cascades, the number and energy of photons radiated by electrons are distributed in a wide range, which will induce a wider distribution of the pairs produced by these photons.The uncertainty of photon radiation and pair-production are then coupled and exhibit higher stochasticity. On the other hand, the quantum stochastic effects can be further coupled with the classical stochastic accelerations [51], in which the particle's stochastic acceleration is triggered by the random-walk-like motion (wandering path) in the standing wave.These classical effects are naturally included in the calculation by solving the Lorentz equation, which are then imprinted to the radiated photons.As the number of seeding particles increases, the positron yield converges, denoting the limit of classical stochasticity.It is also interesting that the stochastic photon emission could reduce the classical stochastic heating, which could contribute to the existence of attractors in the phase space and even show a transition from chaotic to regular dynamics as the field strength increases [8,50,52,53]. 
Experimental considerations Increasing the number of fixed seeded electrons in the focus is only an analog of the transition from quantum statistics to classical statistics, which is not experimentally possible.Previous studies have focused on injecting high-energy seed particles into the strong laser fields to initiate the cascades [41,54,55], but this is also challenging to achieve in practice.Here we propose a different approach, where we use the low-density residual gas in the vacuum chamber as the seed particles.In our modeling, the gas molecules (100% N 2 for simplicity) are pre-ionized by the strong laser fields and randomly distributed around the laser focal area.The electrons of N 2 are randomly distributed in the simulation box of 48 µm × 60 µm × 60 µm and the number of electrons is determined by the gas number density n gas (or vacuity) and the volume of the interaction region.The counter-propagating laser pulses enter the interaction region from opposite sides of the simulation box along +x and −x direction and trigger cascading. We simulate seeded cascades with four different gas densities n gas = 10 15 , 10 16 , 10 17 , 10 18 m −3 , in which n gas = 10 15 m −3 is close to the lowest attainable vacuity of the vacuum chamber for PW laser facilities.Figure 4(a) shows the normalized positron yield distribution for 1000 simulations.For low gas densities (⩽10 17 m −3 ) the exponential distribution shows a strong stochastic signature since cascades in most simulations are not initiated due to the low densities.The highest yield is as high as Nmean > 100 at n gas = 10 15 m −3 , much higher than the fixed seeded electrons, since the mean yield of low-density gas is lower than that of the fixed electrons.At higher density (∼ 10 18 m −3 ) the statistic distribution shows a classically convergent trend, similar to the results in figure 2(a). However, the high yield at low densities originates from the additional stochasticity induced by the randomness of the initial distribution of the electrons.For the sole estimation of this aspect, the semi-classical (QED-averaged cascade) results, as introduced in the methods section, are shown in figure 4(b) where the stochastic effects from QED are averaged and the yield distribution can be solely attributed to the randomness of the initial electron locations.By comparing the widths of the distributions in figures 4(a) and (b), we find that the location-induced randomness is significantly lower than the QED-induced randomness.The widths σ and mean yields N are compared in figure 4(c), where the mean yields of the QED and QED-averaged results coincide but the widths of the yield distribution of the latter are one-order-of-magnitude lower than the former, indicating that the strength of QED stochasticity exceeds the randomness of initial distribution of the locations and that the gap inbetween represents the stochasticity of QED. 
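For one laser shot, the seed electrons can be drawn from the residual-gas density as sketched below. The assumption of full ionization of N2 (14 electrons per molecule), the Poisson shot-to-shot fluctuation of the seed count, and the toy density are modeling choices not spelled out above; the uniform distribution over the interaction box and the box dimensions follow the description in the text.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_seed_electrons(n_gas, box_um=(48.0, 60.0, 60.0), electrons_per_molecule=14):
    """Draw the pre-ionized seed electrons for one shot: the expected number is
    n_gas * V * Z_e (full ionization of N2 assumed) and the positions are uniform
    over the interaction box."""
    volume_m3 = np.prod(box_um) * 1e-18                       # um^3 -> m^3
    n_expected = n_gas * volume_m3 * electrons_per_molecule
    n = rng.poisson(n_expected)                               # shot-to-shot fluctuation of the seed count
    pos = rng.uniform(low=0.0, high=box_um, size=(n, 3))      # positions in um
    return pos

# At n_gas = 1e15 m^-3 the 48 x 60 x 60 um^3 box holds only ~170 molecules on average,
# so the seeding itself fluctuates strongly from shot to shot.
seeds = sample_seed_electrons(n_gas=1e15)
print(len(seeds))
```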
Therefore, two experimental signatures can provide proof of quantum stochastic effects: (1) the statistical distribution and the transition of statistics from quantum stochastic to classical, derived from the interaction between the coupling stochastic effects of quantum nature and the statistically convergent effect induced by multiple seeds.The QED-induced distribution is distinctly different and broader than the semiclassical/QED-averaged calculation; (2) the exceptional prolific shots with much higher yield than the mean expectation for thin gas, where few pairs should be produced in semiclassical/QED-averaged calculations.The yield may be two orders higher than the mean value, thus it can be easily identified by statistical methods. Considering that 1000 shots is a heavy task for high-power (>100 PW) laser systems due to their low repetition rates, we estimate the minimum required shots to reproduce the distribution of the deviation σ.As shown in figure 5, the black lines represent the widths of yields of QED (solid line) and QED-averaged (dashed line) results for N shots = 1000, and the same in figure 4(c), and the shaded areas represent the 5% and 95% percentiles of possible σ that can be observed for shot numbers of N shots = 10 (gray), 50 (red) and 100 (green).For very few shots like N shots = 10, it is more probable to produce zero-results at low gas densities and the possible σ can be lower than expected.It should take more than 100 shots to resolve the statistical deviation at low density.As the gas density increases, the percentile area shrinks rapidly for both the QED and QED-averaged results, i.e. the transition to classical.The QED results can be well distinguished from the QEDaveraged results for N shots ⩾ 50.Therefore, more than 50 shots are required to capture the accurate deviation of positron yields and the transition from quantum to classical as the gas density increases. The optimal laser parameter Here, we estimate the stability of the statistical law for three densities n gas = 10 16 , 10 17 , 10 18 m −3 with different laser parameters, in order to find the optimal experimental conditions for detection.We consider realistic parameters for near-future experiments: (a) a 0 varies from 650 to 800 and τ 0 = 4T 0 , w 0 = 4 µm, (b) τ 0 varies from 2T 0 to 5T 0 and a 0 = 700, w 0 = 4 µm, (c) a 0 = 700, τ 0 = 4T 0 with w 0 varies from 2 µm to 5 µm.The results are shown in figure 6 for three gas densities. 
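The shot-budget analysis above can be mimicked by resampling a large set of simulated yields, as in the sketch below; the synthetic exponential yield distribution is only a stand-in for the actual QED-MC results.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigma_percentiles(yields, n_shots, n_resample=2000):
    """Resample n_shots results from the full set of simulated yields and report the
    5% and 95% percentiles of the normalized width sigma = std(N) / mean(N)."""
    sigmas = []
    for _ in range(n_resample):
        sample = rng.choice(yields, size=n_shots, replace=True)
        if sample.mean() > 0:
            sigmas.append(sample.std() / sample.mean())
    return np.percentile(sigmas, [5, 95])

# Stand-in for the 1000 QED-MC yields: a heavy-tailed distribution mimicking a
# low-density case where most shots produce few pairs and a few are exceptionally prolific.
yields = rng.exponential(scale=7.0, size=1000)
for n_shots in (10, 50, 100):
    print(n_shots, sigma_percentiles(yields, n_shots))
```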
In general, increasing a 0 , τ 0 or w 0 all lead to higher mean yield of cascades (red lines), which means more QED events, and more significant stochastic features can be observed.This can be verified by the gray lines.It is worth noting that increasing w 0 increases the number of electrons in the colliding volume, which is equivalent to increasing the gas density.On the other hand, it increases the interaction time of cascades in laser fields, which is similar to increasing the pulse length.As already shown in figures 5 and 6(b), the dependence of σ on gas density and pulse length are opposite.As a result, σ shows relatively weak dependence on w 0 as shown by the gray lines in figure 6(c).Therefore, tightly focused lasers are preferable for the observation of the stochastic effects of QED cascades.This is different from [19,22] that for optimal cascade development a larger laser spot size is needed.The pulse lengths and field strengths are constrained in the selected region for computational consideration.Predictions can be extrapolated from the results for longer pulse lengths and lower field strengths accessible in future 100 PW-class laser systems [1][2][3][4]. The effect of laser pointing instability Studies have shown that the time delay between the pulses has a minor impact on the cascades, but the transverse mismatch between pulses can prevent cascading [22].Taking into consideration the pointing instability, we assume that each pulse has an independent offset ∆ in each transverse direction (y and z) that follows ∆ = r • δw, where r is a random number that follows a normal distribution.We simulate cascades for δw = 0, 1, 2, 3, 4 µm and w 0 = 4 µm.The results in figure 7 show that the average yields are suppressed when the misalignment increases since the number of electrons in the interaction region decrease when the two colliding pulses are misaligned.Although misalignment of laser spots adds extra randomness to the process, the trend of increased σ for lower gas densities remains.Moreover, as there is a trend of insufficient shots for statistics in figure 5 at lower density, the pointing instability requires more results to guarantee more reliable statistics.However, to better observe the effects, larger yields are preferred for a higher signal-noise ratio, and the laser misalignment should be well controlled. The pre-pulse influences For a PW laser system one must consider the influence of prepulses.For the investigated field strengths, the pre-pulses can fully ionize low-Z atoms like hydrogen, but relatively high-Z atoms (like oxygen and nitrogen) may only get partially ionized before the main pulse arrives.This effect has been discussed in several works such as [19,22], in which the hydrogen/oxygen gas target is used for QED cascade.Since the pre-pulses are unable to fully ionize the inner-shell electrons, it is hard for the pre-pulses to sweep out the electrons in the interaction region.In a recent study on vacuum cleaning experiments [56] using pre-pulse-like lasers to ionize and eliminate the residual gas near the laser focus, it is found that while these ionized particles are pushed outward, a significant amount of residual particles also drift into the laser propagation path, indicating negligible effects of pre-pulses.The prepulses overall act as a modification of the gas densities near the laser focus, and increasing the gas density could compensate for the decrease in electrons. 
Conclusion In summary, we studied the seeded QED cascades driven by two counter-propagating laser pulses and found that the positron yields follow specific distributions across multiple simulations, which reflects the stochastic nature of QED cascades. The quantum stochastic effect is greatly enhanced by the coupling between photon radiation and pair production in the seeded cascade. The enhanced effect can be observed in the collision between ultra-intense laser pulses and thin gases, which are usually residual molecules in the vacuum chamber. The quantum stochastic effect can be quantified by the width of the yield distribution and by exceptional events with unusually high positron yields. Quantum effects are significant when the gas density is moderately low, and the results become more deterministic at higher densities. For the experimental observation of quantum stochastic effects, a tightly focused laser is preferred, and the pointing stability of the laser pulses should be well controlled for optimal yields. The proposed scheme can be validated with 100 PW-class laser systems. Figure 2. (a) The positrons' yield distribution for 4 sets of simulations (N seed = 1, 10, 100, 1000), each of which is normalized by the mean yield of 1000 independent simulations. (b) The normalized distributions' standard deviation σ (black dots) and mean yield N (red dots) versus the number of seeds N seed. Figure 3. The normalized photons' yield distribution without BW pair production. Figure 4. (a) The positrons' yield distributions for 4 sets of simulations (n gas = 10^15, 10^16, 10^17, 10^18 m^-3), each of which is normalized by the mean yield N. (b) The positrons' yield distributions of the semi-classical calculations that only consider the randomness of the initial electron locations. (c) The normalized standard deviation σ (black) and the mean yield N (red) versus the gas density n gas, where the solid-dotted lines are from simulations and the dashed-triangle lines are from semi-classical (QED-averaged) calculations. Figure 5. The standard deviation σ of normalized positron yields at different densities n gas for a limited number of laser shots, N shots = 10 (gray), 50 (red) and 100 (green). The lines are the results for N shots = 1000, the same as in figure 4(c). The shaded areas represent the 5% and 95% percentiles of the possible σ for N shots shots.
5,597.2
2024-03-15T00:00:00.000
[ "Physics" ]
A Crossing-Sensitive Third-Order Factorization for Dependency Parsing Parsers that parametrize over wider scopes are generally more accurate than edge-factored models. For graph-based non-projective parsers, wider factorizations have so far implied large increases in the computational complexity of the parsing problem. This paper introduces a "crossing-sensitive" generalization of a third-order factorization that trades off complexity in the model structure (i.e., scoring with features over multiple edges) with complexity in the output structure (i.e., producing crossing edges). Under this model, the optimal 1-Endpoint-Crossing tree can be found in O(n^4) time, matching the asymptotic run-time of both the third-order projective parser and the edge-factored 1-Endpoint-Crossing parser. The crossing-sensitive third-order parser is significantly more accurate than the third-order projective parser under many experimental settings and significantly less accurate on none. Introduction Conditioning on wider syntactic contexts than simply individual head-modifier relationships improves parsing accuracy in a wide variety of parsers and frameworks (Charniak and Johnson, 2005; McDonald and Pereira, 2006; Hall, 2007; Carreras, 2007; Martins et al., 2009; Zhang and Nivre, 2011; Bohnet and Kuhn, 2012; Martins et al., 2013). This paper proposes a new graph-based dependency parser that efficiently produces the globally optimal dependency tree according to a third-order model (that includes features over grandparents and siblings in the tree) in the class of 1-Endpoint-Crossing trees (that includes all projective trees and the vast majority of non-projective structures seen in dependency treebanks). (* The majority of this work was done while at the University of Pennsylvania.) Within graph-based projective parsing, the third-order parser of Koo and Collins (2010) has a runtime of O(n^4), just one factor of n more expensive than the edge-factored model of Eisner (2000). Incorporating richer features and producing trees with crossing edges has traditionally been a challenge, however, for graph-based dependency parsers. If parsing is posed as the problem of finding the optimal scoring directed spanning tree, then the problem becomes NP-hard when trees are scored with a grandparent and/or sibling factorization (McDonald and Pereira, 2006; McDonald and Satta, 2007). For various definitions of mildly non-projective trees, even edge-factored versions are expensive, with edge-factored running times between O(n^5) and O(n^7) (Gómez-Rodríguez et al., 2011; Pitler et al., 2012; Pitler et al., 2013; Satta and Kuhlmann, 2013). The third-order projective parser of Koo and Collins (2010) and the edge-factored 1-Endpoint-Crossing parser described in Pitler et al. (2013) have some similarities: both use O(n^4) time and O(n^3) space, using sub-problems over intervals with one exterior vertex, which are constructed using one free split point. The two parsers differ in how the exterior vertex is used: Koo and Collins (2010) use the exterior vertex to store a grandparent index, while Pitler et al. (2013) use the exterior vertex to introduce crossed edges between the point and the interval. Table 1: Parsing time for various output spaces and model factorizations. CS-GSib refers to the (crossing-sensitive) grand-sibling factorization described in this paper.
This paper proposes merging the two parsers to achieve the best of both worlds: producing the best tree in the wider range of 1-Endpoint-Crossing trees while incorporating the identity of the grandparent and/or sibling of the child in the score of an edge whenever the local neighborhood of the edge does not contain crossing edges. The crossing-sensitive grandparent-sibling 1-Endpoint-Crossing parser proposed here takes O(n^4) time, matching the runtime of both the third-order projective parser and of the edge-factored 1-Endpoint-Crossing parser (see Table 1). The parsing algorithms of Koo and Collins (2010) and Pitler et al. (2013) are reviewed in Section 2. The proposed crossing-sensitive factorization is defined in Section 3. The parsing algorithm that finds the optimal 1-Endpoint-Crossing tree according to this factorization is described in Section 4. The implemented parser is significantly more accurate than the third-order projective parser in a variety of languages and treebank representations (Section 5). Section 6 discusses the proposed approach in the context of prior work on non-projective parsing. Preliminaries In a projective dependency tree, each subtree forms one consecutive interval in the sequence of input words; equivalently (assuming an artificial root node placed as either the first or last token), when all edges are drawn in the half-plane above the sentence, no two edges cross (Kübler et al., 2009). Two vertex-disjoint edges cross if their endpoints interleave. A 1-Endpoint-Crossing tree is a dependency tree such that for each edge, all edges that cross it share a common vertex (Pitler et al., 2013). Note that the class of projective trees is properly included within the class of 1-Endpoint-Crossing trees. To avoid confusion between intervals and edges, e ij denotes the directed edge from i to j (i.e., i is the parent of j). Interval notation ((i, j), [i, j], (i, j], or [i, j)) is used to denote sets of vertices between i and j, with square brackets indicating closed intervals and round brackets indicating open intervals. Grand-Sibling Projective Parsing A grand-sibling factorization allows features over 4-tuples (g, h, m, s), where h is the parent of m, g is m's grandparent, and s is m's adjacent inner sibling. Features over these grand-sibling 4-tuples are referred to as "third-order" because they scope over three edges simultaneously (e gh, e hs, and e hm). The parser of Koo and Collins (2010) produces the highest-scoring projective tree according to this grand-sibling model by adding an external grandparent index to each of the sub-problems used in the sibling factorization (McDonald and Pereira, 2006). Figure 6 in Koo and Collins (2010) provided a pictorial view of the algorithm; for convenience, it is replicated in Figure 1. An edge e hm is added to the tree in the "trapezoid" step (Figure 1b); this allows the edge to be scored conditioned on m's grandparent (g) and its adjacent inner sibling (s), as all four relevant indices are accessible. Edge-factored 1-Endpoint-Crossing Parsing The edge-factored 1-Endpoint-Crossing parser of Pitler et al. (2013) produces the highest-scoring 1-Endpoint-Crossing tree, with each edge e hm scored according to Score(Edge(h, m)). Figure 2: A 1-Endpoint-Crossing non-projective English sentence ("* Which cars do Americans favor most these days ?", with words indexed 0-9) from the WSJ Penn Treebank (Marcus et al., 1993), converted to dependencies with PennConverter (Johansson and Nugues, 2007).
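The two definitions just given (edges cross when their endpoints interleave; a tree is 1-Endpoint-Crossing when every edge's crossers share a vertex) are easy to check directly. The sketch below does so on the tree of Figure 2, with word indices 0-9 taken from the figure and the edge list reconstructed from the parts in Table 3 later in the paper; it is an illustration of the definitions, not part of any parser.

```python
def crosses(e1, e2):
    """Two vertex-disjoint edges cross iff their endpoints interleave."""
    if len({*e1, *e2}) < 4:          # edges that share a vertex never cross
        return False
    (a, b), (c, d) = sorted(e1), sorted(e2)
    return a < c < b < d or c < a < d < b

def is_one_endpoint_crossing(edges):
    """For every edge, all edges crossing it must share a common vertex."""
    for e in edges:
        crossers = [f for f in edges if crosses(e, f)]
        if crossers and not set.intersection(*map(set, crossers)):
            return False
    return True

# Figure 2: "* Which cars do Americans favor most these days ?" (indices 0-9).
# Edges are (parent, child) pairs reconstructed from the parts in Table 3.
edges = [(0, 3), (2, 1), (5, 2), (3, 4), (3, 5), (5, 6), (8, 7), (5, 8), (3, 9)]

print(is_one_endpoint_crossing(edges))                          # True
print([e for e in edges if any(crosses(e, f) for f in edges)])  # (0,3), (5,2), (3,9)
```

The three crossed edges found this way are exactly the CrossedEdge parts listed in Table 3.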
The 1-Endpoint-Crossing property allows the tree to be built up in edge-disjoint pieces, each consisting of an interval with one exterior point that has edges into the interval. For example, the tree in Figure 2 would be built up with the sub-problems shown in Figure 3. To ensure that crossings within a sub-problem are consistent with the crossings that happen as a result of combination steps, the algorithm uses four different "types" of sub-problems, indicating whether the edges incident to the exterior point may be internally crossed by edges incident to the left boundary point (L), the right (R), either (LR), or neither (N). In Figure 3, the sub-problem over [*, do] ∪ {favor} would be of type R, and [favor, ?] ∪ {do} of type L. Naïve Approach to Including Grandparent Features The example in Figure 3 illustrates the difficulty of incorporating grandparents into the scoring of all edges in 1-Endpoint-Crossing parsing. The vertex favor has a parent or child in all three of the sub-problems. In order to use grandparent scoring for the edges from favor to favor's children in the other two sub-problems, we would need to augment those problems with the grandparent index do. We also must add the parent index do to the middle sub-problem to ensure consistency (i.e., that do is in fact the parent assigned). Thus, a first attempt to score all edges with grandparent features within 1-Endpoint-Crossing trees raises the runtime from O(n^4) to O(n^7) (all four indices need a "predicted parent" index; at least one edge is always implied, so one of these additional indices can be dropped). Crossing-Sensitive Factorization Factorizations for projective dependency parsing have often been designed to allow efficient parsing. For example, the algorithms in Eisner (2000) and McDonald and Pereira (2006) achieve their efficiency by assuming that children to the left of the parent and to the right of the parent are independent of each other. The algorithms of Carreras (2007) and Model 2 in Koo and Collins (2010) include grandparents for only the outermost grand-children of each parent for efficiency reasons. In a similar spirit, this paper introduces a variant of the Grand-Sib factorization that scores crossed edges independently (as a CrossedEdge part) and uncrossed edges under either a grandparent-sibling, grandparent, sibling, or edge-factored model, depending on whether relevant edges in its local neighborhood are crossed. A few auxiliary definitions are required. For any parent h and grandparent g, h's children are partitioned into interior children (those between g and h) and exterior children (the complementary set of children). Interior children are numbered from closest to h through furthest from h; exterior children are first numbered on the side closer to h from closest to h through furthest, then the enumeration wraps around to include the vertices on the side closer to g. Figure 4 shows a parent h, its grandparent g, and a possible sequence of three interior and four exterior children. Note that for a projective tree, there would not be any children on the far side of g. Figure 4: The exterior children are numbered first beginning on the side closest to the parent, then the side closest to the grandparent. There must be a path from the root to g, so the edges from h to its exterior children on the far side of g are guaranteed to be crossed. Table 2: part used to score an uncrossed edge e hm. If e hm is ¬GProj, the part is Edge(h, m) when the inner-sibling edge e hs is crossed and Sib(h, m, s) when e hs is not crossed; if e hm is GProj, the part is Grand(g, h, m) when e hs is crossed and GrandSib(g, h, m, s) when e hs is not crossed. Uncrossed GProj edges include the grandparent in the part.
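As a compact restatement of Table 2, the sketch below selects the scoring part for a single edge from three booleans: whether e hm is crossed, whether it is GProj, and whether the inner-sibling edge e hs is crossed. The function and its arguments are illustrative names, not code from the paper's implementation.

```python
def part_for_edge(g, h, m, s, crossed_hm, gproj_hm, crossed_hs):
    """Pick the scoring part for edge e_hm under the crossing-sensitive
    factorization: crossed edges get their own part; uncrossed edges include
    the grandparent g iff the edge is GProj and include the inner sibling s
    iff the edge e_hs to that sibling is uncrossed (Table 2)."""
    if crossed_hm:
        return ("CrossedEdge", h, m)
    if gproj_hm and not crossed_hs:
        return ("GrandSib", g, h, m, s)
    if gproj_hm:
        return ("Grand", g, h, m)
    if not crossed_hs:
        return ("Sib", h, m, s)
    return ("Edge", h, m)

# Edge do -> Americans in Figure 2: uncrossed, but not GProj, and its inner
# sibling is null, so it is scored as a Sib part, matching the
# Sib(do, Americans, -) entry in Table 3 below.
print(part_for_edge("*", "do", "Americans", "-",
                    crossed_hm=False, gproj_hm=False, crossed_hs=False))
```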
The part includes the sibling if the edge e hs from the parent to the sibling is not crossed. Table 2 gives the factorization for uncrossed edges. The parser in this paper finds the optimal 1-Endpoint-Crossing tree according to this factorized form. A fully projective tree would decompose into exclusively GrandSib parts (as all edges would be uncrossed and GProj). As all projective trees are within the 1-Endpoint-Crossing search space, the optimization problem that the parser solves includes all projective trees scored with grand-sibling features everywhere. Projective parsing with grand-sibling scores can be seen as a special case, as the crossing-sensitive 1-Endpoint-Crossing parser can simulate a grand-sibling projective parser by setting all Crossed(h, m) scores to −∞. In Figure 2, the edge from do to Americans is not GProj because Condition (1) is violated, while the edge from favor to most is not GProj because Condition (2) is violated. Under this definition, the vertices do and favor (which have children in multiple sub-problems) do not need external grandparent indices in any of their sub-problems. Table 3 lists the parts in the tree in Figure 2 according to this crossing-sensitive third-order factorization (null inner siblings are indicated with -): CrossedEdge(*, do), Sib(cars, Which, -), CrossedEdge(favor, cars), Sib(do, Americans, -), Sib(do, favor, Americans), CrossedEdge(do, ?), Sib(favor, most, -), Sib(favor, days, most), GSib(favor, days, these, -). Parsing Algorithm The parser finds the maximum-scoring 1-Endpoint-Crossing tree according to the factorization in Section 3 with a dynamic programming procedure reminiscent of Koo and Collins (2010) (for scoring uncrossed edges with grandparent and/or sibling features) and of Pitler et al. (2013) (for including crossed edges). The parser also uses novel sub-problems for transitioning between portions of the tree with and without crossed edges. This formulation of the parsing problem presents two difficulties: 1. The parser must know whether an edge is crossed when it is added. 2. For uncrossed edges, the parser must use the appropriate part for scoring according to whether other edges are crossed (Table 2). Difficulty 1 is solved by adding crossed and uncrossed edges to the tree in distinct sub-problems (Section 4.1). Difficulty 2 is solved by producing different versions of subtrees over the same sets of vertices, both with and without a grandparent index, which differ in their assumptions about the tree outside of that set (Section 4.2). The list of all sub-problems with their invariants and the full dynamic program are provided in the supplementary material. Enforcing Crossing Edges The parser adds crossed and uncrossed edges in distinct portions of the dynamic program. Uncrossed edges are added only through trapezoid sub-problems (that may or may not have a grandparent index), while crossed edges are added in non-trapezoid sub-problems. To add all uncrossed edges in trapezoid sub-problems, the parser (a) enforces that any edge added anywhere else must be crossed, and (b) includes transitional sub-problems to build trapezoids when the edge e hm is not crossed but the edge to its inner sibling e hs is (and so the construction step shown in Figure 1b cannot be used). Pitler et al. (2013) included crossing edges by using "crossing region" sub-problems over intervals with an external vertex that optionally contained edges between the interval and the external vertex.
An uncrossed edge could then be included either by a derivation that prohibited it from being crossed or a derivation which allowed (but did not force) it to be crossed. This ambiguity is removed by enforcing that (1) each crossing region contains at least one edge incident to the exterior vertex, and (2) all such edges are crossed by edges in another sub-problem. For example, by requiring at least one edge between do and (favor, ?] and also between favor and (*, do), the edges in the two sets are guaranteed to cross. Trapezoids with Edge to Inner Sibling Crossed To add all uncrossed edges in trapezoid-style subproblems, we must be able to construct a trapezoid over vertices [h, m] whenever the edge e hm is not crossed. The construction used in , repeated graphically in Figure 5a, cannot be used if the edge e hs is crossed, as there would then exist edges between (h, s) and (s, m), making s an invalid split point. The parser therefore includes some "transitional glue" to allow alternative ways to construct the trapezoid over [h, m] when e hm is not crossed but the edge e hs to m's inner sibling is. The two additional ways of building trapezoids are shown graphically in Figures 5b and 5c. Consider the "chain of crossing edges" that includes the edge e hs . If none of these edges are in the subtree rooted at m, then we can build the tree involving m and its inner descendants separately (Figure 5b) from the rest of the tree rooted at h. Within the interval [h, e − 1] the furthest edge incident to h ( e hs ) must be crossed: these intervals are parsed choosing s and the crossing point of e hs simultaneously (as in Figure 4 in Pitler et al. (2013)). Otherwise, the sub-tree rooted at m is involved in Chains of crossing edges are constructed by repeatedly applying two specialized types of L items that alternate between adding an edge from the interval to the exterior point (right-to-left) or from the exterior point to the interval (left-to-right) (Figure 6). The boundary edges of the chain can be crossed more times without violating the 1-Endpoint-Crossing property, and so the beginning and end of the chain can be unrestricted crossing regions. These specialized chain sub-problems are also used to construct boxes (Figure 1c) over [s, m] with shared parent h when neither edge e hs nor e hm is crossed, but the subtrees rooted at m and at s cross each other (Figure 7). Lemma 1. The GrandSib-Crossing parser adds all uncrossed edges and only uncrossed edges in a tree in a "trapezoid" sub-problem. Proof. The only part is easy: when a trapezoid is built over an interval [h, m], all edges are internal to the interval, so no earlier edges could cross e hm . Af- ter the trapezoid is built, only the interval endpoints h and m are accessible for the rest of the dynamic program, and so an edge between a vertex in (h, m) and a vertex / ∈ [h, m] can never be added. The Crossing Conditions ensure that every edge added in a non-trapezoid sub-problem is crossed. Proof. All trees that could have been built in Pitler et al. (2013) are still possible. It can be verified that the additional sub-problems added all obey the 1-Endpoint-Crossing property. Reduced Context in Presence of Crossings A crossed edge (added in a non-trapezoid subproblem) is scored as a CrossedEdge part. An uncrossed edge added in a trapezoid sub-problem, however, may need to be scored according to a GrandSib, Grand, Sib, or Edge part, depending on whether the relevant other edges are crossed. 
In this section we show that sibling and grandparent features are included in the GrandSib-Crossing parser as specified by Table 2. Proof. Whether the edge to an uncrossed edge's inner sibling is crossed is known bottom-up through how the trapezoid is constructed, since the inner sibling is internal to the sub-problem. When e hs is not crossed, the trapezoid is constructed as in Figure 5a, using the inner sibling as the split point. When the edge e hs is crossed, the trapezoid is constructed as in Figure 5b or 5c; note that both ways force the edge to the inner sibling to be crossed. Grandparent Features for GProj Edges Koo and Collins (2010) include an external grandparent index for each of the sub-problems that the edges within use for scoring. We want to avoid adding such an external grandparent index to any of the crossing region sub-problems (to stay within the desired time and space constraints) or to interval sub-problems when the external context would make all internal edges ¬GProj . For each interval sub-problem, the parser constructs versions both with and without a grandparent index (Figure 8). Which version is used depends on the external context. In a bad context, all edges to children within an interval are guaranteed to be ¬GProj . This section shows that all boundary points in crossing regions are placed in bad contexts, and then that edges are scored with grandparent features if and only if they are GProj . Bad Contexts for Interval Boundary Points For exterior vertex boundary points, all edges from it to its children will be crossed (Section 4.1.1), so it does not need a grandparent index. Lemma 4. If a boundary point i's parent (call it g) is within a sub-problem over vertices [i, j] or [i, j] ∪ {x}, then for all uncrossed edges e im with m in the sub-problem, the tree outside of the sub-problem is irrelevant to whether e im is GProj . Proof. The sub-problem contains the edge e gi , so Condition (1) is checked internally. m cannot be x, since e im is uncrossed. If g is x, then e im is ¬GProj regardless of the outer context. If both g and m ∈ (i, j], then Outer (m) ⊆ (i, j]: If m is an interior child of i (m ∈ (i, g)) then Outer (m) ⊆ (m, g) ⊆ (i, j]. Otherwise, if m is an exterior child (m ∈ (g, j]), by the "wrapping around" definition of Outer , Outer (m) ⊆ (g, m) ⊆ (i, j]. Thus Condition (2) is also checked internally. We can therefore focus on interval boundary points with their parent outside of the sub-problem. BadContext R (i, j) is defined symmetrically regarding j and j's parent and children. Corollary 1. If BadContext L (i, j), then for all e im with m ∈ (i, j], e im is ¬GProj . Similarly, if BadContext R (i, j), for all e jm with m ∈ [i, j), e jm is ¬GProj . No Grandparent Indices for Crossing Regions We would exceed the desired O(n 4 ) run-time if any crossing region sub-problems needed any grandparent indices. In Pitler et al. (2013), LR subproblems with edges from the exterior point crossed by both the left and the right boundary points were constructed by concatenating an L and an R subproblem. Since the split point was not necessarily incident to a crossed edge, the split point might have GProj edges to children on the side other than where it gets its parent; accommodating this would add another factor of n to the running time and space x k j x i j = + k i x Figure 9: For all split points k, the edge from k's parent to k is crossed, so all edges from k to children on either side were ¬GProj . 
The case when the split point's parent is from the right is symmetric. Figure 10: The edge e kx is guaranteed to be crossed, so k is in a BadContext for whichever side it does not get its parent from. to store the split point's parent. To avoid this increase in running time, they are instead built up as in Figure 9, which chooses the split point so that the edge from the parent of the split point to it is crossed. Lemma 5. For all crossing region sub-problems Proof. Crossing region sub-problems either combine to form intervals or larger crossing regions. When they combine to form intervals as in Figure 3, it can be verified that all boundary points are in a bad context. LR sub-problems were discussed above. Split points for the L/R/N sub-problems by construction are incident to a crossed edge to a further vertex. If that edge is from the split point's parent to the split point, then the grand-edge is crossed and so both sides are in a bad context. If the crossed edge is from the split point to a child, then that child is Outer to all other children on the side in which it does not get its parent (see Figure 10). Corollary 2. No grandparent indices are needed for any crossing region sub-problem. Triangles and Trapezoids with and without Grandparent Indices The presentation that follows assumes left-headed versions. Uncrossed edges are added in two distinct types of trapezoids: (1) TrapG[h, m, g, L] with an external grandparent index g, scores the edge e hm with grandpar-ent features, and (2) Trap[h, m, L] without a grandparent index, scores the edge e hm without grandparent features. Triangles also have versions with (TriG[h, e, g, L] and without (Tri[h, e, L]) a grandparent index. What follows shows that all GProj edges are added in TrapG sub-problems, and all ¬GProj uncrossed edges are added in Trap subproblems. Proof. BadContext L (i, j) implies either the edge from i's parent to i is crossed and/or an edge from i to a child of i outer to j is crossed. If the edge from i's parent to i is crossed, then BadContext L (i, k). If a child of i is outer to j, then since k ∈ (i, j), such a child is also outer to k. Lemma 7. All left-rooted triangle sub-problems Proof. All triangles without grandparent indices are either placed immediately into a bad context (by adding a crossed edge to the triangle's root from its parent, or a crossed edge from the root to an outer child) or are combined with other sub-trees to form larger crossing regions (and therefore the triangle is in a bad context, using Lemmas 5 and 6). If the triangle contains exterior children of h (e and g are on opposite sides of h), then it can either combine with a trapezoid to form another larger triangle (as in Figure 1a) or it can combine with another sub-problem to form a box with a grandparent index (Figure 1c or 7). Boxes with a grandparent index can only combine with another trapezoid to form a larger trapezoid (Figure 1b). Both cases force e gh to not be crossed and prevent h from having any outer crossed children, as h becomes an internal node within the larger sub-problem. If the triangle contains interior children of h (e lies between g and h), then it can either form a trapezoid from g to h by combining with a triangle (Figure 5b) or a chain of crossing edges (Figure 5c), or it can be used to build a box with a grandparent index (Figures 1c and 7), which then can only be used to form a trapezoid from g to h. In either case, a trapezoid is constructed from g to h, enforcing that e gh cannot be crossed. 
These steps prevent h from having any additional children between g and e (since h does not appear in the adjacent sub-problems at all whenever h = e), so again the children of h in (e, h) have no outer siblings. Lemma 9. In a TriG[h, e, g, L] sub-problem, if an edge e hm is not crossed and no edges from i to siblings of m in (m, e] are crossed, then e hm is GProj . Proof. This follows from (1) the edge e hm is not crossed, (2) the edge e gh is not crossed by Lemma 8, and (3) no outer siblings are crossed (outer siblings in (m, e] are not crossed by assumption and siblings outer to e are not crossed by Lemma 8). Lemma 10. An edge e hm scored with a GrandSib or Grand part (added through a TrapG [h, m, g, L] or TrapG[m, h, g, R] sub-problem) is GProj . Proof. A TrapG can either (1) combine with descendants of m to form a triangle with a grandparent index rooted at h (indicating that m is the outermost inner child of h) or (2) combine with descendants of m and of m's adjacent outer sibling (call it o), forming a trapezoid from h to o (indicating that e ho is not crossed). Such a trapezoid could again only combine with further uncrossed outer siblings until eventually the final triangle rooted at h with grandparent index g is built. As e hm was not crossed, no edges from h to outer siblings within the triangle are crossed, and e hm is within a TriG sub-problem, e hm is GProj by Lemma 9. Lemma 11. An uncrossed edge e hm scored with a Sib or Edge part (added through a Trap [h, m, L] or Trap[m, h, R] sub-problem) is ¬GProj . Proof. A Trap can only (1) form a triangle without a grandparent index, or (2) form a trapezoid to an outer sibling of m, until eventually a final triangle rooted at h without a grandparent index is built. This triangle without a grandparent index is then placed in a bad context (Lemma 7) and so e hm is ¬GProj (Corollary 1). Main Results Lemma 12. The crossing-sensitive third-order parser runs in O(n 4 ) time and O(n 3 ) space when the input is an unpruned graph. When the input to the parser is a pruned graph with at most k incoming edges per node, the crossing-sensitive thirdorder parser runs in O(kn 3 ) time and O(n 3 ) space. Proof. All sub-problems are either over intervals (two indices), intervals with a grandparent index (three indices), or crossing regions (three indices). No crossing regions require any grandparent indices (Corollary 2). The only sub-problems that require a maximization over two internal split points are over intervals and need no grandparent indices (as the furthest edges from each root are guaranteed to be crossed within the sub-problem). All steps either contain an edge in their construction step or in the invariant of the sub-problem, so with a pruned graph as input, the running time is the number of edges (O(kn)) times the number of possibilities for the other two free indices (O(n 2 )). The space is not reduced as there is not necessarily an edge relationship between the three stored vertices. Theorem 1. The GrandSib-Crossing parser correctly finds the maximum scoring 1-Endpoint-Crossing tree according to the crossing-sensitive third-order factorization (Section 3) in O(n 4 ) time and O(n 3 ) space. When the input to the parser is a pruned graph with at most k incoming edges per node, the GrandSib-Crossing parser correctly finds the maximum scoring 1-Endpoint-Crossing tree that uses only unpruned edges in O(kn 3 ) time and O(n 3 ) space. Proof. The correctness of scoring follows from Lemmas 3, 10, and 11. 
The search space of 1-Endpoint-Crossing trees was in Lemma 2 and the time and space complexity in Lemma 12. The parser produces the optimal tree in a welldefined output space. Pruning edges restricts the output space the same way that constraints enforcing projectivity or the 1-Endpoint-Crossing property also restrict the output space. Note that if the optimal unconstrained 1-Endpoint-Crossing tree does not include any pruned edges, then whether the parser uses pruning or not is irrelevant; both the pruned and unpruned parsers will produce the exact same tree. Experiments The crossing-sensitive third-order parser was implemented as an alternative parsing algorithm within dpo3 . 2 To ensure a fair comparison, all code relating to input/output, features, learning, etc. was re-used from the original projective implementation, and so the only substantive differences between the projective and 1-Endpoint-Crossing parsers are the dynamic programming charts, the parsing algorithms, and the routines that extract the maximum scoring tree from the completed chart. The treebanks used to prepare the CoNLL shared task data (Buchholz and Marsi, 2006;Nivre et al., 2007) vary widely in their conventions for representing conjunctions, modal verbs, determiners, and other decisions (Zeman et al., 2012). The experiments use the newly released HamleDT software (Zeman et al., 2012) that normalizes these treebanks into one standard format and also provides built-in transformations to other conjunction styles. The unnormalized treebanks input to HamleDT were from the CoNLL 2006 Shared Task (Buchholz and Marsi, 2006) for Danish, Dutch, Portuguese, and Swedish and from the CoNLL 2007 Shared Task (Nivre et al., 2007) for Czech. The experiments include the default Prague style (Böhmová et al., 2001), Mel'čukian style (Mel'čuk, 1988), and Stanford style (De Marneffe and Manning, 2008) for conjunctions. Under the grandparent-sibling factorization, the two words being conjoined would never appear in the same scope for the Prague style (as they are siblings on different sides of the conjunct head). In the Mel'čukian style, the two conjuncts are in a grandparent relationship and in the Stanford style the two conjuncts are in a sibling relationship, and so we would expect to see larger gains for including grandparents and siblings under the latter two representations. The experiments also include a nearly projective dataset, the English Penn Treebank (Marcus et al., 1993), converted to dependencies with PennConverter (Johansson and Nugues, 2007). The experiments use marginal-based pruning based on an edge-factored directed spanning tree model (McDonald et al., 2005). Each word's set of potential parents is limited to those with a marginal probability of at least .1 times the probability of the most probable parent, and cut off this list at a maximum of 20 potential parents per word. To ensure that there is always at least one projective and/or 1-Endpoint-Crossing tree achievable, the artificial root is always included as an option. The pruning parameters were chosen to keep 99.9% of the true edges on the English development set. Following Carreras (2007) and , before training the training set trees are transformed to be the best achievable within the model class (i.e., the closest projective tree or 1-Endpoint-Crossing tree). All models are trained for five iterations of averaged structured perceptron training. 
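The marginal-based pruning described above is simple to express. The sketch below assumes an n×n matrix of edge marginals from an edge-factored spanning-tree model (how those marginals are computed is outside this snippet) and applies the 0.1-times-best threshold, the 20-parent cap, and the always-keep-the-artificial-root rule; all names are illustrative.

```python
import numpy as np

def prune_parents(marginals, threshold_ratio=0.1, max_parents=20, root=0):
    """For each word m, keep candidate heads whose marginal probability is at
    least `threshold_ratio` times the best head's marginal, cap the list at
    `max_parents`, and always keep the artificial root so that a projective
    or 1-Endpoint-Crossing tree remains reachable."""
    n = marginals.shape[0]
    allowed = {}
    for m in range(1, n):
        scores = marginals[:, m].copy()
        scores[m] = -np.inf                      # a word cannot head itself
        order = np.argsort(scores)[::-1]
        best = scores[order[0]]
        keep = [int(h) for h in order[:max_parents]
                if scores[h] >= threshold_ratio * best]
        if root not in keep:
            keep.append(root)
        allowed[m] = keep
    return allowed

rng = np.random.default_rng(0)
toy_marginals = rng.random((8, 8))     # placeholder marginals for 7 words + root
print(prune_parents(toy_marginals)[3])  # candidate heads kept for word 3
```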
For English, the model after the iteration that performs best on the development set is used; for all other languages, the model after the fifth iteration is used. Results Results for edge-factored and (crossing-sensitive) grandparent-sibling factored models for both projective and 1-Endpoint-Crossing parsing are in Tables 4 and 5. In 14 out of the 16 experimental set-ups, the third-order 1-Endpoint-Crossing parser is more accurate than the third-order projective parser. It is significantly better than the projective parser in 9 of the set-ups and significantly worse in none. the parser is able to score with a sibling context more often than it is able to score with a grandparent, perhaps explaining why the datasets using the Stanford conjunction representation saw the largest gains from including the higher order factors into the 1-Endpoint-Crossing parser. Across languages, the third-order 1-Endpoint-Crossing parser runs 2.1-2.7 times slower than the third-order projective parser (71-104 words per second, compared with 183-268 words per second). Parsing speed is correlated with the amount of pruning. The level of pruning mentioned earlier is relatively permissive, retaining 39.0-60.7% of the edges in the complete graph; a higher level of pruning could likely achieve much faster parsing times with the same underlying parsing algorithms. Discussion There have been many other notable approaches to non-projective parsing with larger scopes than single edges, including transition-based parsers, directed spanning tree graph-based parsers, and mildly nonprojective graph-based parsers. Transition-based parsers score actions that the parser may take to transition between different configurations. These parsers typically use either greedy or beam search, and can condition on any tree context that is in the history of the parser's actions so far. Zhang and Nivre (2011) significantly improved the accuracy of an arc-eager transition system (Nivre, 2003) by adding several additional classes of features, including some thirdorder features. Basic arc-eager and arc-standard (Nivre, 2004) models that parse left-to-right using a stack produce projective trees, but transition-based parsers can be modified to produce crossing edges. Such modifications include pseudo-projective parsing in which the dependency labels encode transformations to be applied to the tree (Nivre and Nilsson, 2005), adding actions that add edges to words in the stack that are not the topmost item (Attardi, 2006), adding actions that swap the positions of words (Nivre, 2009), and adding a second stack (Gómez-Rodríguez and Nivre, 2010). Graph-based approaches to non-projective parsing either consider all directed spanning trees or restricted classes of mildly non-projective trees. Directed spanning tree approaches with higher order features either use approximate learning techniques, such as loopy belief propagation (Smith and Eisner, 2008), or use dual decomposition to solve relaxations of the problem Martins et al., 2013). While not guaranteed to produce optimal trees within a fixed number of iterations, these dual decomposition techniques do give certificates of optimality on the instances in which the relaxation is tight and the algorithm converges quickly. This paper described a mildly non-projective graph-based parser. 
Other parsers in this class find the optimal tree in the class of well-nested, block degree two trees (Gómez-Rodríguez et al., 2011), or in a class of trees further restricted based on gap inheritance (Pitler et al., 2012) or the head-split property (Satta and Kuhlmann, 2013), with edgefactored running times of O(n 5 ) − O(n 7 ). The factorization used in this paper is not immediately compatible with these parsers: the complex cases in these parsers are due to gaps, not crossings. However, there may be analogous "gap-sensitive" factorizations that could allow these parsers to be extended without large increases in running times. Conclusion This paper proposed an exact, graph-based algorithm for non-projective parsing with higher order features. The resulting parser has the same asymptotic run time as a third-order projective parser, and is significantly more accurate for many experimental settings. An exploration of other factorizations that facilitate non-projective parsing (for example, an analogous "gap-sensitive" variant) may be an interesting avenue for future work. Recent work has investigated faster variants for third-order graph-based projective parsing (Rush and Petrov, 2012;Zhang and McDonald, 2012) using structured prediction cascades (Weiss and Taskar, 2010) and cube pruning (Chiang, 2007). It would be interesting to extend these lines of work to the crossing-sensitive thirdorder parser as well.
8,299.2
2014-02-28T00:00:00.000
[ "Computer Science" ]
Antimicrobial resistance and pathogen distribution in hospitalized burn patients Abstract Burn infections pose a serious obstacle to recovery. To investigate and analyze the antimicrobial resistance and distribution of pathogenic bacteria among hospitalized burn patients. A 3-year retrospective study was conducted in the southeast of China. The electronic medical records system was used to collect all clinical data on 1449 hospitalized patients from Fujian Medical University Union Hospital, the 180th Hospital of Chinese People's Liberation Army (PLA), the 92nd Hospital of PLA, and the First Hospital of Longyan City. A total of 1891 strains of pathogenic bacteria were detected from 3835 clinical specimens, and the total detection rate was 49.3% (1891/3835). The main pathogens were gram-negative bacteria (1089 strains; 57.6%), followed by gram-positive bacteria (689 strains; 36.4%), and fungi (113 strains; 6.0%). The predominant five bacteria were Staphylococcus aureus (19.0%), Acinetobacter baumannii (17.6%), Pseudomonas aeruginosa (16.7%), Klebsiella pneumoniae (7.4%), and Enterococcus faecalis (4.5%). Methicillin-resistant Staphylococcus aureus (MRSA) accounted for 74.1% (265/359) of S aureus isolates. Staphylococcus epidermidis accounted for 40.6% (69/170) of coagulase-negative staphylococcal isolates, 72.5% (50/69) of which were methicillin-resistant Staphylococcus epidermidis (MRSE). Both MRSA and MRSE were 100% resistant to penicillin and ampicillin. A baumannii was the most commonly isolated strain of gram-negative bacteria with 100% resistance to ampicillin, amoxicillin, amoxicillin/clavulanic acid, and aztreonam. More than 80% of K pneumoniae isolates were resistant to ampicillin, amoxicillin and cefazolin. More than 80% of Escherichia coli isolates were resistant to ampicillin, piperacillin, cefazolin, amoxicillin, tetracycline, and sulfamethoxazole trimethoprim. The detection rates of extended-spectrum β-lactamases (ESBL) among K pneumoniae and E coli isolates were 44.6% (62/139) and 67.2% (41/61), respectively. Low-resistance antibiotics included teicoplanin, tigecycline, vancomycin, and linezolid. The pathogens presented high resistance to antimicrobial agents, especially MRSA and A baumannii. Monitoring of bacterial population dynamics should be established to inhibit the progression of bacterial resistance. Introduction Burn infections are a serious hindrance to patient recovery. Infections have been estimated to account for 75% of burn patient deaths. [1,2] The damage of protective skin barrier and the damage of humoral and cellular immunity accelerate the colonization of skin microorganism. [3] In addition, the gastrointestinal tract bacterial translocation and invasive diagnosis and treatment procedures, such as tracheal intubation, invasive central veins or ductus arteriosus, and catheterization, also contribute to the incidence of infection. [4] Moreover, a serious problem in China is antibiotic overuse, which promotes the emergence of antimicrobial-resistant pathogens. By reviewing the variable history of the burn wound bacterial ecology, we have observed changes with time [5] and climate. [6] Within the same hospital moreover, bacterial drug resistance varies in response to local or systemic medications. [7] Therefore, the timely and pre-emptive understanding of the bacterial epidemiologic distribution and antimicrobial-resistance patterns among burn patients is of critical importance. 
In this study, a retrospective analysis was conducted concerning the pathogen distribution and antimicrobial resistance of a total of 1891 isolates from 1449 patients with nosocomial infections in 4 burn wards in Fujian province (located in the southeast of China) from January 2013 to December 2015. This study could serve as a reference for the prevention or treatment of burn infections and the rational use of antimicrobials. Patients Patients treated in the burn wards of Fujian Medical University Union Hospital, the 180th Hospital of PLA, the 92nd Hospital of PLA, and the First Hospital of Longyan City between January 2013 and December 2015 were included in this retrospective analysis. Patients with incomplete data will be excluded. Fujian Medical University Union Hospital is a Class A tertiary provincial public hospital. Its burn department, established in 1976, is the burn center for Fujian province. The center has 3 subdivisions, including burn treatment, plastic surgery, and rehabilitation. The center consists of an independent outpatient department for wound treatment, independent operating rooms, a burn intensive care unit (BICU), and 82 ward beds, which include 8 sickbeds in the BICU and 6 for emergencies. Between 2013 and 2015, a total of 2552 new burn patients were admitted to this center. The 180th Hospital of Chinese People's Liberation Army (PLA), a Class A tertiary military hospital with 79 beds in its burn department including 7 sickbeds in the BICU, treated 4591 burn patients between 2013 and 2015. The 92nd Hospital of PLA is also a Class A tertiary military hospital with a total of 44 beds in its burn department, which treated 956 burn patients during the same period. The First Hospital of Longyan City is a Class A tertiary municipal public hospital and the only center of its type in the western Fujian province. Its burn department is equipped with 20 sickbeds and treated 727 new burn patients in 2013 to 2015. The condition and treatment of burn patients at these 4 burn wards are satisfactorily representative of the entire Fujian province. The inclusion criteria were as follows: burn data from the first admission were extracted using the International Classification of Diseases, Tenth Revision (ICD-10) codes in the X00 to X19 range; patients admitted to these 4 hospitals for >24 hours or who died after arrival at the hospital were studied. The following categories of patients were excluded: readmissions for scar contracture; outpatients; inpatients not diagnosed with a burn as the primary cause of admission; and incomplete clinical data. A total of 1449 patients were enrolled in our study (843 cases, 160 cases, 160 cases, and 286 cases, respectively, from the aforementioned 4 burn departments). The patients' ages ranged from 8 days to 94 years (median age, 32 years; interquartile range, 29 months to 48 years). The subjects included 996 men and 453 women. The total burn surface area (TBSA) ranged from 1% to 99% (median TBSA, 11%; interquartile range, 4%-24%). The burn depths were between II and III with 446 cases of flame burns, 732 cases of scalding, 108 cases of contact burns, 105 cases of electrical burns, 34 cases of chemical burns, and 24 cases of other burn etiologies. Mild burns accounted for 313 cases, whereas 709 cases were moderate burns, 216 cases were severe burns, and 211 cases were extremely severe burns. 
In total, 1891 strains of pathogenic bacteria were cultured from 3835 clinical specimens obtained from and distributed as follows: wound secretions, 1992; blood samples, 979; respiratory secretions, 595; central venous catheter specimens, 133; and other sources (such as urine, stool, and tissue fluid), 136. The ethics committee of Fujian Medical University and each collaborating institution reviewed and approved the study protocol. All collaborating institution provided written consent for their information to be collected and used for research, and they had the right to withdraw from the study at any time without prejudice. Sample collection Wound secretions were collected at patient admission and during hospitalization at least once; the required volume for each sample was >1 mL. Sputum secretions were collected from patients receiving preventive tracheotomy or ventilator support; the required sample volume for each was >2 mL. Central venous catheter samples were obtained at catheter replacement; the minimal required sample volume was 5 cm. Blood samples were obtained when the patient's temperature rose >38.5°C or was <36°C (sourced from 2 peripheral blood collection sites or 1 peripheral blood and 1 intraductal blood site using 2 sets of 4 bottles with aerobic and anaerobic cultures obtained for each sample; the required volume for each sample was 5-10 mL for adults and 2-5 mL for children). Urine samples were obtained when patients had irritative urinary tract symptoms (using 1 set of 2 bottles with aerobic and anaerobic cultures obtained for each sample; the required volume for each sample was 5-10 mL). Stool samples were obtained when patients had diarrhea (using 1 set of 2 bottles with aerobic and anaerobic cultures obtained for each sample; the required volume for each sample was at least 1 mL). For patients with suspected sepsis, all aforementioned sample cultures were obtained for 3 consecutive days. Patients were treated based on the revised guideline, Diagnostic Criteria and Treatment Guideline for Infection of Burns and Guideline for Diagnosis, Prevention and Treatment of Invasive Fungal Infection after Burn Injury. [8] Species identification and antibiotic sensitivity Species identification and antimicrobial sensitivities were assessed by the laboratory staff of the 4 hospitals using the Kirby-Bauer (K-B) disk diffusion method employing drugcontaining test disks, culture medium and quality control strains (Staphylococcus aureus ATCC 25923, Pseudomonas aeruginosa Annual distribution of pathogens in clinical samples A total of 1891 strains of pathogens were detected from 3835 clinical specimens collected from 1449 burn patients over a 3year period. The demographics of the 1449 patients were shown in Table 1. The total detection rate was 49.3% (1891/3835). During these 3 years, the wound specimen strain detection rate showed a downward trend from 65.2% to 57.3%. The blood specimen strain detection rate was low with a range between 7.4% and 31.4% (Tables 2 and 3). Resistance rates of gram-positive bacteria to antimicrobials The resistance rates of the isolated bacteria to commonly used antimicrobials were investigated ( Table 8). The resistance rates of MRSA to penicillin and ampicillin was 100%, whereas the resistance rates to erythromycin, clindamycin, gentamicin, tetracycline, and ciprofloxacin were >75%. 
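Resistance rates such as those in Tables 8 and 9, and summary figures such as the MRSA proportion, are simple proportions over isolate-level susceptibility results. The sketch below shows the shape of that computation on a toy table; the column names and the few example rows are hypothetical, since the study's own data come from the K-B disk diffusion readings at the four hospital laboratories.

```python
import pandas as pd

# Hypothetical line list of isolate-level susceptibility results.
isolates = pd.DataFrame({
    "organism":   ["S aureus", "S aureus", "A baumannii", "K pneumoniae"],
    "antibiotic": ["oxacillin", "vancomycin", "imipenem", "ampicillin"],
    "result":     ["R", "S", "R", "R"],    # R = resistant, S = susceptible
})

# Resistance rate per organism-antibiotic pair (cf. Tables 8 and 9).
rates = (isolates.assign(resistant=isolates["result"].eq("R"))
                 .groupby(["organism", "antibiotic"])["resistant"]
                 .agg(n_tested="count", resistance_rate="mean"))
print(rates)

# MRSA proportion: share of S aureus isolates resistant to oxacillin/methicillin.
sa = isolates.query("organism == 'S aureus' and antibiotic == 'oxacillin'")
print("MRSA proportion:", sa["result"].eq("R").mean())
```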
In general, the high-resistance antibiotics were penicillin, ampicillin, and erythromycin, and the low-resistance antibiotics were teicoplanin, tigecycline, vancomycin, and linezolid. E faecalis was 100% resistant to quinupristin/dalfopristin (Synercid), whereas Enterococcus faecium was 100% sensitive to quinupristin/dalfopristin. More than 50% of E faecalis isolates were resistant to tetracycline, erythromycin, and rifampicin. Both enterococcal species were completely sensitive to vancomycin, tigecycline, and linezolid. Additionally, the frequency of E faecalis isolates was higher than that of E faecium. However, the drug resistance of E faecalis to penicillin, ciprofloxacin, levofloxacin, and ampicillin was lower than that of E faecium. Resistance rates of gram-negative bacteria to antimicrobials The detection rates of extended-spectrum β-lactamases (ESBL) among K pneumoniae and E coli isolates were 44.6% (62/139) and 67.2% (41/61), respectively. For A baumannii, the most commonly isolated gram-negative bacterial strain, the resistance rate was 100% for ampicillin, amoxicillin, amoxicillin/clavulanate, and aztreonam, and >80% for the third-generation cephalosporins (cefotaxime, ceftazidime, and ceftriaxone) and the fourth-generation cephalosporin cefepime. The A baumannii resistance rates for carbapenems such as imipenem and meropenem were 58.5% and 83.3%, respectively. The resistance rates of K pneumoniae to ampicillin, amoxicillin, and cefazolin exceeded 80%, whereas they were 21% for meropenem and 15.4% for imipenem. More than 80% of E coli isolates were resistant to ampicillin, piperacillin, cefazolin, amoxicillin, tetracycline, and sulfamethoxazole-trimethoprim (Table 9). Resistance rates of fungi to antimicrobial agents The resistance rates of C albicans to amphotericin B, 5-fluorocytosine, and nystatin were 0%, and the resistance rate of Candida tropicalis to itraconazole was 50% (Table 10). Discussion Nosocomial infections (NIs) are more likely to occur among burn patients because of the immunocompromising effects of burns, the nature of burns themselves, intensive diagnostic and therapeutic procedures, and prolonged hospital stays. [9] Infection rates are also associated with the burn wound degree, the need for surgery, and age. [10] The application of antibiotics remains an effective approach to control burn infections. The widespread use of broad-spectrum antibiotics, particularly cephalosporins and carbapenems, has been associated with the emergence of multiple-drug-resistant bacteria. Therefore, the monitoring of potential microbial infections among inpatients is critically important. Table 4. Annual pathogen distributions and percentages. Pathogen detection rates and distribution characteristics In total, 1891 strains of pathogenic bacteria or fungi were cultured from wound secretions, blood, central venous catheters, and other clinical specimens. These detection rates differed from those of an earlier report [11] (64.7%-80.2% for wound secretions and 11.1%-51.5% for blood), which tested 571 pathogens from 1485 specimens. The differences in the detection rates likely resulted from the variations in the technology used for specimen collection and laboratory testing. Among all the microbes detected, 36.4% (689/1891) were gram-positive bacteria, 57.6% (1089/1891) were gram-negative bacteria, and 6.0% (113/1891) were fungi. These results were similar to those of a study from Turkey, [12] which was conducted retrospectively on a total of 250 microorganisms isolated from the burn-wound secretions of 179 patients between January 2009 and December 2011.
However, gram-negative bacteria accounted for 64.4% (161/250) of the bacteria in that report, which was slightly higher than our findings. Furthermore, our results showed that gram-positive bacteria increased and gram-negative bacteria decreased over these 3 years. This is likely because of the clinical application of broad-spectrum antibiotics and the increased use of indwelling devices and other invasive procedures. In our study, clinical samples such as wound secretions, blood, sputum, and central venous catheters all exhibited a predominance of gram-negative bacteria. This trend differed from an earlier report by Cen et al, [13] who reported a predominance of gram-negative bacteria in wound secretions, whereas gram-positive bacteria predominated in blood cultures. Moreover, fungi were most prevalent in our sputum cultures, a finding that is also inconsistent with the results reported by Cen et al, [13] which indicated that fungi were most prevalent in urine cultures. Analysis of bacterial resistance rates to antimicrobials S aureus was the most prevalent bacterium observed. This result was similar to a previous study of 3615 microbial isolates from 114 patients with severe burns at the burn center of Shanghai Hospital (Shanghai, P.R. China) between 1998 and 2009, which found that S aureus accounted for 38.2% of all cases. In our study, MRSA represented 74.1% of S aureus isolates, approximating the result (73.0%) of another study in China by Wei et al [14] (Gansu Provincial Hospital, 2008-2010). However, the majority of S aureus cases in our study were methicillin-resistant, a preponderance much higher than reported elsewhere, for example, by Dokter et al [15] in New Zealand (0.4%), Fransén [16] in Sweden (1.7%), Guggenheim et al [5] in Switzerland (3%-16%), and Bayram et al [12] in Turkey (19%). Generally, a relatively high percentage of S aureus isolates were MRSA in China. Table 8. Resistance rates of gram-positive bacteria to antimicrobials. The resistance rates of MRSA to penicillin and ampicillin were 100%, and 78.1% to 84% for other antibiotics such as erythromycin, tetracycline, and clindamycin. The resistance rates of MSSA and MSSE to the aforementioned antibiotics were relatively low, except for penicillin and ampicillin. Vancomycin, teicoplanin, and tigecycline remained the most effective antibiotics against MRSA and MRSE, with no strains resistant to these antibiotics. The policy named "search-and-destroy" has ensured a low prevalence of MRSA in health facilities and the population of the Netherlands, [17] thereby establishing a valuable reference for China. The coagulase-negative staphylococci detected included S epidermidis and Staphylococcus saprophyticus, both of which typically constitute part of the normal flora of the skin and mucous membranes. [17] These microorganisms are generally associated with mucosal carriage as well as asymptomatic skin but are paradoxically recognized as some of the most frequent causative agents of device-associated infection (DAI) and hospital-associated infection (HAI). [18][19][20] In our study, MRSE accounted for 70.0% (48/69) of S epidermidis isolates, representing a relatively high detection rate. This finding should alert clinicians to control skin and mucosa disinfection strictly to avoid nosocomial infection. The Enterococcus genus is among the normal flora of humans and animals, and ectopic microflora will lead to infection.
Recently, Kozuszko et al [21] suggested the presence of Enterococcus resistant to high concentrations of aminoglycosides; moreover, Faron et al [22] reported vancomycin-resistant enterococci (VRE). In our study, the prevalence of VRE was 0%, which was much lower than the prevalence reported in Germany (11.2%), the United Kingdom (8.5%-12.5%), and Italy (9%). [23] P aeruginosa was reported as a major pathogenic cause of infection after burn injuries in the United States. [2] Previous studies have reported P aeruginosa as the most common isolate, with high prevalence in regions such as the southwest of China [10] (23.1%), Iran [24] (26.7%), Iraqi Kurdistan [25] (27%), Gaza [26] (50%), and India [27] (55%). The study by Ullah et al [28] indicated that the incidence of P aeruginosa in burn wards was higher than in other wards. However, this finding was inconsistent with our results. Table 9. Resistance rates of gram-negative bacteria to antimicrobials (number of isolates tested for each antimicrobial). Recently, A baumannii has become one of the crucial causes of nosocomial infections. [30] The occurrence of carbapenem-nonsusceptible A baumannii (CNSAb) infections is becoming a growing problem in hospitalized burn patients. [31] A baumannii was the most common bacterium detected in several other studies, for example, by Bayram et al [12] in Turkey (23.6%), in the Bahemia study [32] of 341 severe burn patients admitted to an adult BICU in South Africa from January 1, 2008 to December 31, 2012, and in the study by Keen et al [33] of 3507 bacterial isolates from 460 BICU patients in a USA military burn center from January 2003 to December 2008. McDonald indicated that Acinetobacter spp. might be more prevalent in warm climates. [34] This finding was consistent with our study because the climate of Fujian province is quite warm: the province is located at a north latitude between 23°33′ and 28°20′ and an east longitude between 115°50′ and 120°40′, and its temperatures over the 4 seasons range from 11° to 27° in spring, 22° to 37° in summer, 19° to 37° in autumn, and 8° to 18° in winter. Our observations revealed that A baumannii was the prominent pathogen among gram-negative bacteria and presented high resistance to many antibiotics. The resistance rate to imipenem, the most effective broad-spectrum agent against gram-negative bacilli, was 83.3%. Moreover, disk diffusion testing with cefoperazone/sulbactam, a combination containing a β-lactamase inhibitor, also showed a relatively high resistance rate of 55.3%. The resistance rate of A baumannii to ampicillin, piperacillin, aztreonam, amoxicillin, and amoxicillin/clavulanic acid was 100%, and it was >80% for cephalosporins. Fungal infections have been found to occur after the widespread use of antibiotic therapies, which have the effect of killing the beneficial bacteria that normally suppress fungi. More recently, Candida species have emerged as an important cause of invasive infections among patients in intensive care units. [36] The most common fungus was C albicans, accounting for 38.1% (43/113) of fungal isolates. The antibiotic resistance of C tropicalis was higher than that of C albicans. A useful reference for China might be "Three steps to prevent invasive fungal diseases," as concluded by Pemán and Salavert. [37] Limitations This was a multicentre study to investigate the epidemiology of bacteria among hospitalized burn patients. However, because of limited resources, we only investigated 1 province in the southeast of China.
Because the study was conducted retrospectively, we were unable to test the homology of the MRSA isolates, analyze the impact of prehospital wound treatment on bacterial detection, analyze the effect of antimicrobial administration on bacterial detection, or analyze the effect of long-term catheterization on these bacteriological profiles. Future multicenter, prospective studies are required to develop dynamic monitoring of bacterial resistance.
Conclusions
Gram-negative bacteria represented the majority of pathogens detected. S aureus, A baumannii, P aeruginosa, K pneumoniae, and E faecalis were the 5 most common bacteria detected in the 4 study burn units. The resistance of MRSA and A baumannii to antibiotics was relatively high in Fujian province. Antimicrobials with high resistance rates included penicillin, ampicillin, amoxicillin, cefazolin, and cefotaxime; antimicrobials with low resistance rates included teicoplanin, tigecycline, vancomycin, and linezolid. Dynamic bacterial monitoring should be established to restrain the development of bacterial resistance.
4,425.4
2018-08-01T00:00:00.000
[ "Medicine", "Biology" ]
Heat Transfer and Flow Characteristics of Pseudoplastic Nanomaterial Liquid Flowing over the Slender Cylinder with Variable Characteristics: The present article investigates heat transfer and pseudoplastic nanomaterial liquid flow over a vertical thin cylinder. The Buongiorno model is used for this analysis. The problem gains more significance when temperature-dependent variable viscosity is taken into account. Using suitable similarity variables, nonlinear flow equations are first converted into ordinary differential equations. The resulting system is solved by the MATLAB BVP4C algorithm. Newly developed physical parameters are the focus of the analysis. It is observed that the heat transfer rate and the skin friction coefficient increase remarkably when nanoparticles are mixed into the base fluid, considering γ_b = 1, 2, 3, 4 and λ = 1, 1.5, 2, 2.5, 3. It is found that the temperature field increases with increasing values of the thermophoresis and Brownian motion parameters. It is also found that the velocity field decreases with increasing values of the curvature parameter, the Weissenberg number and the buoyancy ratio parameter.
Introduction
A fluid that does not obey Newton's law of viscosity is known as a non-Newtonian fluid. Many commonly observed materials, such as honey, starch, toothpaste and many salt solutions, are non-Newtonian fluids. Non-Newtonian fluid flow has attracted considerable attention in fluid mechanics because it is common in the biological sciences and industry. Non-Newtonian fluids include polymer solutions, blood, heavy lubrication oils and greases. The study of mass and heat transfer has important applications in various fields of engineering and technology such as milk production, engineering devices, blood oxygenators, dissolution processes, mixing mechanisms and many more. A nanomaterial liquid is a liquid that contains particles of nanometre size known as nanoparticles. The main reason for adding nanoparticles to the base fluid is the remarkable enhancement of the base fluid's thermal properties. The nanoparticles that are usually used in nanofluids are carbides, metals, oxides and carbon nanotubes. Water and oil are common base fluids. The Buongiorno model is utilized to investigate the impact of Brownian motion and thermophoresis on mass, flow, and heat transport from the considered surface. The concept of nanofluids, i.e., base fluids seeded with nanoparticles, was introduced by Choi [1]. The fact that nanofluids have higher thermal conductivity than ordinary fluids due to their nanostructure has fascinated many theoretical and engineering scientists. Kuznetsov et al. [2] studied the influence of a nanomaterial liquid on natural convection flow past a flat surface. They showed that the Nusselt number is a decreasing function of both the Brownian motion parameter and the buoyancy ratio parameter. In addition, Prasher et al. [3] showed that convection induced by the Brownian motion of the nanoparticles is a cause of the increased thermal conductivity of nanomaterial liquids. Wang et al. [4] indicated that the dependence of the thermal conductivity enhancement on Brownian motion could be very weak. Lee et al. [5] later found that the thermal conductivity of a nanomaterial liquid increases linearly with the particle volume fraction.
The slender cylinder is a special type of cylinder upon which, owing to its slimness, the liquid's boundary layer flow can be studied conveniently. Nadeem et al. [6] studied the heat transport and flow of a viscous nanofluid over a cylinder. In [7-13], researchers investigated mass and heat transport for different geometries, such as a vertical cone, a stretching sheet, a stretching cylinder and a circular cylinder, under thermal radiation and magnetohydrodynamic effects. Analyses of nanofluid flow problems under boundary layer effects are presented in [14-29]. From the above analysis and discussion, we conclude that this is an important area of fluid mechanics; therefore, we decided to study the Carreau-Yasuda nanofluid flow over a vertical slender cylinder. Using the boundary layer concept and similarity transformations, the model equations are simplified. The MATLAB BVP4C algorithm is used to find the solution. The Buongiorno model [30] is applied for this investigation. The graphical behaviour and expressions for temperature, velocity and concentration are calculated. We obtained results for the effect of various parameters on the flow, i.e., the curvature parameter, Prandtl number, thermophoresis, buoyancy ratio, Weissenberg number, Brownian motion and Lewis number.
Fluid Model
The Cauchy stress tensor of a Carreau-Yasuda fluid is built from the first Rivlin-Ericksen tensor A_1 and an apparent viscosity of the standard Carreau-Yasuda form μ(γ̇) = μ_∞ + (μ_o − μ_∞)[1 + (Γγ̇)^d]^((n−1)/d), where μ_∞ is the infinite-shear-rate viscosity, μ_o is the zero-shear-rate viscosity, d, n and Γ are Carreau-Yasuda fluid parameters, and the shear rate γ̇ is defined from A_1 through γ̇ = ((1/2)tr(A_1^2))^(1/2). Here, we assume μ_∞ = 0.
Statement
We consider the incompressible flow of a nanofluid along a vertical slender cylinder of radius r_o. Coordinates (x, r) are used along the cylinder surface. The equations of mass conservation, momentum, energy transfer and nanoparticle concentration are given as Equations (6)-(9), with the boundary conditions of [31], where V is the constant velocity of injection (V > 0) or suction (V < 0). The similarity transformation is defined in terms of the mainstream velocity U(x) = (x/l)U_∞ and the kinematic viscosity ν = μ/ρ, where ρ denotes the fluid density. The temperature of the slender cylinder surface is T_w(x), with T_w − T_∞ = ΔT(x/l), and the concentration at the slender cylinder surface is φ_w(x), with φ_w − φ_∞ = Δφ(x/l), where l is a characteristic length, U_∞ is the characteristic velocity, ΔT is the characteristic temperature and Δφ is the characteristic nanoparticle concentration. Using the above transformations, Equation (6) is satisfied automatically and Equations (7)-(9) reduce to the dimensionless system (12)-(14), which involves the thermophoresis parameter N_t, the Brownian motion parameter N_b, the buoyancy ratio N_r and the Weissenberg number We_d, together with the non-dimensional boundary conditions, where c_o is any constant. Expressions for the skin friction coefficient and the Nusselt number are defined accordingly.
Numerical Solution
Using BVP4C, the nonlinear differential Equations (12)-(14) are solved numerically by rewriting them as an equivalent first-order system with the corresponding boundary conditions (an illustrative sketch of this type of boundary-value solve is given below, after the discussion of the results).
Graphical Results and Discussion
The nonlinear partial differential equations of nanofluid heat transfer and boundary layer flow over a vertical cylinder are treated as described above. Figure 1 represents the geometry of the fluid flow problem. The governing equations are obtained by applying similarity transformations. Figures 2a-4a provide the behaviour of the velocity profile for the specific parameters concerned.
Figure 2a shows the behaviour of the curvature parameter γ_b on the velocity field. It is shown that the velocity field decreases as the curvature parameter increases. Figure 2b describes the behaviour of N_r on the velocity field: the velocity profile declines as the buoyancy ratio increases. Figure 3a shows the influence of W_e on the velocity field. The Weissenberg number compares elastic forces with viscous forces and relates the fluid relaxation time to a characteristic process time; therefore, enlarging the Weissenberg number decreases the specific process time, and the velocity distribution also decreases. Figure 3b exhibits the impact of the Lewis number on the velocity distribution: the velocity profile increases as the Lewis number increases. Figure 4a indicates the influence of N_t on the temperature field. The temperature distribution rises with growing N_t. Figure 4b shows the corresponding increase of the temperature profile with N_b. Figure 5a exhibits the behaviour of the Prandtl number on the temperature distribution. An increase in the Prandtl number slows the rate of thermal diffusion; therefore, the temperature field declines as Pr increases. Figure 5b describes the impact of the Lewis number on the temperature field: the temperature profile first decreases and later increases as the Lewis number increases. Figure 6a expresses the behaviour of the Lewis number on the nanoparticle concentration profile. The Lewis number is the ratio between thermal and mass diffusivity, and the results show that the concentration profile first declines and then increases as the Lewis number increases. Figure 6b shows the impact of γ_b on (1/2)C_f Re^(1/2) against the buoyancy parameter λ; the skin friction coefficient therefore increases with these parameters. Figure 7 shows the impact of γ_b on Nu Re^(1/2) against the values of λ; the Nusselt number increases in magnitude as the buoyancy parameter increases. Table 1 gives the values of the Nusselt number for distinct values of γ_b, Le, Pr, N_b and N_t: the Nusselt number increases with γ_b but decreases with Le, N_b and N_t. Table 2 gives the values of the skin friction coefficient for distinct values of γ_b, W_e, n, λ, φ_∞ and N_r: the skin friction coefficient increases with γ_b and λ, but declines with W_e, n, φ_∞ and N_r. Table 3 displays the numerical values of the skin friction coefficient for different values of λ vs. γ_b; with an increase in these parameters, the skin friction coefficient increases (see also Table 4).
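To make the numerical step described in the Numerical Solution section concrete, the following is a minimal Python sketch of the same kind of boundary-value solve, using SciPy's solve_bvp as a stand-in for MATLAB's BVP4C. The system solved here (a Blasius-type momentum equation coupled to a convected energy equation on a flat plate), the Prandtl number and the boundary conditions are illustrative assumptions for the sketch, not the paper's Equations (12)-(14).

import numpy as np
from scipy.integrate import solve_bvp

Pr = 2.0           # illustrative Prandtl number (assumption, not a value from the paper)
eta_max = 10.0     # truncation of the semi-infinite similarity domain

def odes(eta, s):
    # s = [f, f', f'', theta, theta']: Blasius momentum equation f''' = -f f''
    # coupled to a convected energy equation theta'' = -Pr f theta'
    f, fp, fpp, th, thp = s
    return np.vstack([fp, fpp, -f * fpp, thp, -Pr * f * thp])

def bc(s0, s_inf):
    # f(0) = 0, f'(0) = 0, f'(inf) = 1, theta(0) = 1, theta(inf) = 0
    return np.array([s0[0], s0[1], s_inf[1] - 1.0, s0[3] - 1.0, s_inf[3]])

eta = np.linspace(0.0, eta_max, 200)
guess = np.zeros((5, eta.size))
guess[1] = eta / eta_max          # crude initial guess for f'
guess[3] = 1.0 - eta / eta_max    # crude initial guess for theta
sol = solve_bvp(odes, bc, eta, guess)
print("wall shear f''(0) =", sol.sol(0.0)[2])
print("wall temperature gradient -theta'(0) =", -sol.sol(0.0)[4])

The two printed wall derivatives play the same role as the skin friction coefficient and the Nusselt number discussed above: both are read off from the converged solution at the surface.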
Conclusions
The impact of significant parameters on mass and heat transport characteristics is examined. The analysis in this article shows that the mass and heat transport rates are improved in the flow of a pseudoplastic non-Newtonian nanomaterial liquid. Pseudoplastic nanomaterial liquids are applicable in electronic devices for increasing their cooling or heating rates. Furthermore, pseudoplastic nanomaterial liquids are also applicable in reducing the skin friction coefficient. The fundamental conclusions drawn from the above evaluation are listed below.
1. The temperature distribution decreases as Pr increases.
2. The temperature field increases with increasing values of N_t and N_b.
3. The velocity field decreases with increasing values of N_r, W_e and γ_b.
4. The temperature profile first decreases and then increases as the Lewis number increases.
5. The velocity profile increases with increasing Lewis number.
6. The nanoparticle concentration distribution first decreases and then increases with increasing Le.
7. The skin friction coefficient increases with increasing γ_b and λ.
8. The heat transfer rate increases with increasing γ_b and λ.
2,849
2021-12-24T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
I, Robot: How Human Appearance and Mind Attribution Relate to the Perceived Danger of Robots Social robots become increasingly human-like in appearance and behaviour. However, a large body of research shows that these robots tend to elicit negative feelings of eeriness, danger, and threat. In the present study, we explored whether and how human-like appearance and mind-attribution contribute to these negative feelings and clarified possible underlying mechanisms. Participants were presented with pictures of mechanical, humanoid, and android robots, and physical anthropomorphism (Studies 1–3), attribution of mind perception of agency and experience (Studies 2 and 3), threat to human–machine distinctiveness, and damage to humans and their identity were assessed for all three robot types. Replicating earlier research, human–machine distinctiveness mediated the influence of anthropomorphic appearance on the perceived damage for humans and their identity, and this mediation was due to anthropomorphic appearance of the robot. Perceived agency and experience did not show similar mediating effects on human–machine distinctiveness, but a positive relation with perceived damage for humans and their identity. Possible explanations are discussed. I, Robot: How Human Appearance and Mind Attribution Relate to the Perceived Danger of Robots Watching the movie 'Ex Machina', you quickly perceive Ava, the android main character of the movie, as a real human with emotions and feelings.Anthropomorphising Ava in this way, that is to ascribe human-like characteristics and/or intentions to non-human agents, is a fundamental human process that spontaneously happens and increases our social connection with non-human agents [1].Although highly evolved robots seem a vision of the future, we already interact with artificial intelligent agents on a regular basis (e.g., Siri, Apple's speaking assistant, Amazon's Alexa, or CaMeLi, an avatar designed to help elderly in daily life).Developments in robot technology are proceeding rapidly: 'Social robots', i.e., robots that are designed to interact and communicate with people [2], feature increasingly more human-like appearances and behaviour.While, these technical developments are especially interesting when it comes to maintaining and improving our quality of life, for example in health care or education, a large body of research also shows that social robots tend to elicit negative feelings of eeriness, danger, and threat [3][4][5][6][7].In the present study, we investigated the factors that elicit these negative feelings and clarified possible underlying mechanisms, including the extent to which robots look human-like and the extent to which they are attributed with a mind. Anthropomorphism Anthropomorphism involves "going beyond behavioral descriptions of imagined or observable actions (e.g., the dog is affectionate) to represent an agent's mental or physical characteristics using humanlike descriptors (e.g., the dog loves me)" [1, page 865].Anthropomorphism for non-human agents can be elicited in two ways: First, by increasing the physical appearance with humans [8][9][10]. 
Initially introduced to describe the appearance of religious agents and gods [Hume, 1757, in 1], the term is now used to describe human characteristics towards animals or plants [12], objects and technical devices [13], and even geometric shapes [14].In fact, neuroscientific research has demonstrated that similar brain regions are activated when participants attribute mental states to non-human agents as when attributing mental states to other humans [15][16][17].In a predictive coding framework [18,19], which suggests that the brain continuously produce hypotheses that predict sensory input, anthropomorphism makes sense: when something looks like a human or moves or acts in a human-like way, it is more likely that your interaction with this agent will be efficient and smooth if you treat it as another human-being. Research has shown that when we anthropomorphise non-biological agents, this has profound effects on how we interact with these agents: it leads to more interpersonal closeness [20], increased moral care [21], and smoother interactions [11,22].In addition, people rate robots acting playfully as being more extroverted and outgoing than robots acting seriously [23,24].Moreover, avatars are judged to be more trustworthy, competent, sensitive, and warm when they are anthropomorphised [25], and similar stereotypes are applied to robots than to humans [26]. Negative Effects of Anthropomorphism However, perceptual similarity with humans can also elicit negative feelings towards robots: Robots whose physical appearance closely (but not perfectly) resembles human beings often evoke negative feelings, a phenomenon referred to as the uncanny valley [4].The uncanny valley hypothesis states that more realistic artificial agents elicit more positive reactions until they are very close (but not close enough) to the human ideal.This dimension of "realism", however, is not limited to realistic looks/appearances, but also realistic motor behaviours, such as imitation [4].Research showed that human-like appearance or motion can be responsible for the emergence of the uncanny valley [27,28].Furthermore, large inter-individual differences in the emergence of the uncanny valley have been found, suggesting that people can be more or be less sensitive towards negative feelings in response to a robot [5].Often, the concept of prediction error has been used to explain the uncanny valley [29]: when something looks like a human or moves or acts in a humanlike way, it is more likely that your interaction with this agent will be efficient and smooth if you treat it as another human-being.However, when a robot that looks extremely human-like moves in a mechanical way, our predictions are violated since their strong anthropomorphic appearance led us to expect them to follow biological movement patterns.Consequently, this high prediction error leads to a negative feeling of unease or fear. 
Interestingly, an additional explanation of the uncanny valley has been proposed recently, with empirical studies suggesting that this negativity for human-like robots can be explained by a violation of the need for distinctiveness [30,31].As humans, we feel unique and distinct by understanding how our own group differs from another group [32,33].However, when this feeling of uniqueness, and with it our intergroup boundaries, disappears, we feel threatened.Applied to robots, correlational research showed that higher feelings of eeriness and a decrease in felt warmth towards robots was positively related to whether participants perceived robots and humans as similar categories [31].In addition, it was causally demonstrated that too much perceived similarity between robots and humans undermines people's ideas about human uniqueness, and this subsequently leads people to perceive robots as more threatening and potentially damaging entities [30]. In line with this assumption, looking at manipulation of anthropomorphism, a recent study on human/avatar interactions found evidence for a so-called "uncanny valley of the mind" [6].In their study, participants watched interactions involving emotional responses between two digital characters, which were presented as either human-controlled versus computer-controlled, and scripted versus autonomous.Their results showed that levels of eeriness rose especially when participants thought the digital agents were autonomous artifical intelligences.These findings support the notion that attributions of a mind might lead to a decrease in human/ machine distinctiveness, subsequently leading to a negative feelings and perceived damage to someone's identity. A key feature by which we distinguish between humans and non-humans relates to mind attribution.Specifically, we attribute the minds of potential other agents along the lines of two constructs: experience and agency [34].Experience involves capacities to feel emotions, such as the ability to experience hunger or pleasure, whereas agency relates to capacities of being an autonomous agent, such as self-control and thought.Especially experience is seen as a unique human trait, as people ascribe a medium amount of agency but no experience towards robots [34].The question however is whether mind attribution influences the need for distinctiveness. In three studies, we aimed to replicate the link between robot-human similarity and threat-perceptions using a variety of stimuli (Studies 1-3).Furthermore, we aimed to extend the literature in this domain by investigating whether distinctiveness-negativity is evoked by perceived physical similarity, by similarity in mind attribution, or both (Study 2 and Study 3; see Fig. 1 for an overview of the mediation model).Similar to earlier research [30], participants were presented with pictures of mechanical, humanoid, and android robots, and physical anthropomorphism, mind attribution of agency and experience (Study 2 and Study 3), threat to the human-machine distinctiveness, and damage to humans and their identity were assessed for all three robot types.We expected that human-machine distinctiveness mediated the influence of robot type on the perceived damage for humans and their identity, and that this mediation would be due to (a) the anthropomorphic appearance of the robot, and (b) perceived experience and perceived agency from the robot. 
Participants and Design
Fifty-five participants (44 females, 11 males, M age = 19.50 years, SD age = 1.40, age range 17-23 years) completed the experiment in exchange for course credits. The experiment had a 3 (Robot type: mechanical robot vs. humanoid robot vs. android robot) within-subjects design, with damage to humans and their identity as the dependent variable, and physical anthropomorphism and threat to human-machine distinctiveness as mediators. Data were acquired online using Inquisit 4 [35]. Via an online participant pool from Radboud University, participants could sign up for the study and received a link to complete the study online. Before participants could start the experiment, they were asked to ensure they would not be interrupted for the duration of the experiment (approximately 30 min).
Procedure and Materials
Participants were instructed that they had to evaluate different types of robots: they received a self-paced evaluation task in which they indicated their attitudes towards three categories of robots varying in human-like appearance; participants saw mechanical, humanoid, and android robots. While mechanical robots were clearly machines (no legs, no facial features), humanoid robots were more humanlike by having legs, arms, a torso, and a head with a face. Still, they also possess clear similarities with a machine (e.g., no hair or skin). Android robots were very high in humanlike-ness and difficult to distinguish from real humans. All stimuli and measures used were derived from earlier research [30]. Four different robots were used for each robot category, resulting in 12 robot evaluations. Participants had to rate each of the 12 robots on (1) physical anthropomorphism; (2) threat to human-machine distinctiveness; and (3) damage to humans and human identity. Firstly, physical anthropomorphism was assessed using a three-item scale (e.g., "I could easily mistake the robot for a real person"; Cronbach's α mechanical robot = .935; Cronbach's α humanoid robot = .887; Cronbach's α android robot = .800). Secondly, threat to human-machine distinctiveness was measured using a three-item scale (e.g., "Looking at this kind of robot I ask myself what the differences are between robots and humans"; Cronbach's α mechanical robot = .948; Cronbach's α humanoid robot = .957; Cronbach's α android robot = .963). Thirdly, damage to humans and human identity was assessed using a four-item scale (e.g., "I get the feeling that the robot could damage relations between people"; Cronbach's α mechanical robot = .880; Cronbach's α humanoid robot = .895; Cronbach's α android robot = .897). For all three questionnaires, participants answered on a 7-point Likert scale (1 = 'totally not agree' to 7 = 'totally agree'). Items were presented along with each robot photo, and robot photos were presented in a random order. After completing the questionnaires, participants reported their age and gender and were thanked for their participation.
Results and Discussion
Within-participant mediation analyses were conducted using the MEMORE package in SPSS with 1000 bootstrap samples [36]. In this analysis, conclusions on mediation are based on the correlation between the score difference between conditions on the dependent variable and the score difference between conditions on the proposed mediator (a simplified illustration of this difference-score bootstrap procedure is sketched below, after the Study 3 results). Since each participant responded to all three conditions, and we did not have specific hypotheses on particular comparisons, we
reported all possible comparisons of the three robot types (i.e., mechanical vs. humanoid, mechanical vs. android, and humanoid vs. android) while using Bonferroni correction to control Type I error (here we used a 99% confidence interval). Fig. 1 gives an overview of the tested mediation model. All mediation statistics (B and 99% CI) are depicted in Table 1. An overview of descriptive statistics can be found in Table 2.
Mediation Effect of Distinctiveness on the Relation Between Robot Types and Damage to Humans and Their Identity
We conducted a mediation analysis on the relation between robot types as IV and damage to humans and their identity as DV, with the human-machine distinctiveness of the different robot types as a proposed mediator. The model indicated significant indirect effects and non-significant direct effects for all three comparisons (mechanical with android, mechanical with humanoid, and humanoid with android robot). The relation between robot types and damage to humans and their identity was therefore fully mediated by the human-machine distinctiveness of the different robot types.
Mediation Effect of Anthropomorphic Appearance on the Relation Between Robot Type and Distinctiveness
We conducted a mediation analysis on the relation between robot types as IV and human-machine distinctiveness as DV, with the anthropomorphic appearance of the different robot types as a proposed mediator. The model indicated significant indirect effects and non-significant direct effects for all three comparisons. The relation between robot types and human-machine distinctiveness was therefore fully mediated by the anthropomorphic appearance of the different robot types. Our findings of Study 1 successfully replicate earlier research on the relationship between anthropomorphic appearance, human-machine distinctiveness, and perceived threat to humans and their identity [30]: human-machine distinctiveness mediated the influence of robot type on the perceived damage for humans and their identity, and this mediation was due to the anthropomorphic appearance of the robot. Thus, these results support the notion that too much physical resemblance to humans leads to negative perceptions of robots due to a decrease in distinctiveness [30]. In the following studies, we add the concept of mind perception to our model [34,37], a construct central to how we understand ourselves as living beings. The mind perception literature distinguishes between two essential constructs: agency and experience [34]. Both agency and experience are critical components of being human and of how we are distinct from robots, avatars, and other non-living objects. Our aim in Study 2 was to replicate the findings from Study 1 and, additionally, to investigate whether agency and experience mediate the relation between robot types and human-machine distinctiveness.
Results and Discussion
All mediation statistics (B and 99% CI) are depicted in Table 3. An overview of descriptive statistics can be found in Table 4.
Mediation Effect of Distinctiveness on the Relation Between Robot Types and Damage to Humans and Their Identity
We conducted the same mediation analysis as in Study 1: we treated robot types as IV and damage to humans and their identity as DV, and human-machine distinctiveness of the different robot types as a proposed mediator. The results again showed that the relation between robot types and damage to humans and their identity was fully mediated by the human-machine distinctiveness of the different robot types.
Mediation Effect of Anthropomorphic Appearance on the Relation Between Robot Type and Distinctiveness
We conducted the same mediation analysis as in Study 1 on the relation between robot types as IV and human-machine distinctiveness as DV, with the anthropomorphic appearance of the different robot types as a proposed mediator. We replicated our findings from Study 1: the relation between robot types and human-machine distinctiveness was fully mediated by the anthropomorphic appearance of the different robot types.
Mediation Effect of Mind Attribution on the Relation Between Robot Type and Distinctiveness
We conducted a mediation analysis on the relation between robot types as IV and human-machine distinctiveness as DV, with the mind attribution of the different robot types as a proposed mediator. The model indicated non-significant indirect effects of the experience component of mind attribution; thus, no mediation effect was found. The mediation analysis that involved the agency component of mind attribution as a proposed mediator showed different results depending on which robot types were compared. In the comparisons between the mechanical versus android robot types, and between the mechanical versus humanoid robot types, the model indicated non-significant indirect effects and only significant direct effects. Furthermore, the model indicated significant indirect effects and significant direct effects in the comparison between the humanoid versus the android robot types. The relation between robot types and human-machine distinctiveness might therefore be partially mediated by agency attribution when comparing humanoid and android robots.
Correlation Between Mind Attribution and Damage to Humans and Their Identity
To explore whether the correlation between mind attribution and damage to humans and their identity varies between robot types, correlation analyses were performed for each robot type. The results showed that the correlations between agency and damage to humans and their identity for mechanical and humanoid robots were significant (r mechanical = .369, p = .004; r humanoid = .313, p = .015), but not for android robots (r android = .193, p = .140). The correlation between experience and damage to humans and their identity was significant for mechanical robots (r mechanical = .485, p < .001), but not for humanoid and android robots (r humanoid = .205, p = .116; r android = .171, p = .190). In line with earlier research [30], we could again support the notion that anthropomorphic appearance mediates human/machine distinctiveness and thus gives room to feelings of threat and damage to humans. Additionally, we did not find a mediating effect of experience on participants' perception of human/machine distinctiveness, while agency might partially mediate the influence on human/machine distinctiveness only for the comparison between the humanoid and android robots. Interestingly, however, the results showed a trend towards a positive correlation of both experience and agency with the damage to humans and their identity.
As the correlations were not all significant, an interpretation is difficult. One could argue that especially the similarity between android robots and humans weakens the relationship between agency/experience perception and damage to humans and their identity: it is easier to identify with these robots and perceive them as closer to humans. Another explanation could be that agency and experience would be much more logical for android robots, while mechanical or humanoid robots do not need these capacities and pose a bigger threat when possessing them. To further strengthen and be able to interpret our findings of Study 2, we conducted a third study with the goal of replicating this pattern, and, to be sure that the pattern is not due to the stimuli used, we employed new stimuli.
Procedure and Materials
The same procedure as in Study 2 was used. However, in Study 3, new stimuli for the three robot categories (mechanical, humanoid, or android robots) were used. The mechanical robots were clearly machines (e.g., no legs), but in comparison to Study 1 and Study 2 they had arms and a head. Humanoid robots were more humanlike by having legs, arms, a torso, and a head with a face. Still, they also possess clear similarities with a machine (e.g., no hair or skin). Lastly, android robots were very high in humanlike-ness and perceptually difficult to distinguish from real humans.
Results and Discussion
All mediation statistics (B and 99% CI) are depicted in Table 5. An overview of descriptive statistics can be found in Table 6.
Mediation Effect of Distinctiveness on the Relation Between Robot Types and Damage to Humans and Their Identity
We conducted the same mediation analysis as in Study 1 and Study 2: we treated robot types as IV and damage to humans and their identity as DV, and human-machine distinctiveness of the different robot types as a proposed mediator. We replicated our findings of Study 1 and Study 2: the relation between robot types and damage to humans and their identity was fully mediated by the human-machine distinctiveness of the different robot types.
Mediation Effect of Anthropomorphic Appearance on the Relation Between Robot Type and Distinctiveness
We conducted a mediation analysis on the relation between robot types as IV and human-machine distinctiveness as DV, with the anthropomorphic appearance of the different robot types as a proposed mediator. In contrast to Study 1 and Study 2, for the comparison between the mechanical and the humanoid robot the model indicated a non-significant indirect effect. All other comparisons indicated significant indirect effects.
Mediation Effect of Mind Attribution on the Relation Between Robot Type and Distinctiveness
We conducted a mediation analysis on the relation between robot types as IV and human-machine distinctiveness as DV, with the mind attribution of the different robot types as a proposed mediator. The mediation analysis that involved the experience component of mind attribution as a proposed mediator showed different results depending on which robot types were compared. Specifically, the model indicated significant indirect effects and significant direct effects in the comparison between the mechanical versus the android robot types. The relation between robot types and human-machine distinctiveness was therefore partially mediated by experience attribution for the comparison between mechanical and android robots. In the comparisons between the mechanical versus humanoid robot types, and the humanoid versus android robot types, the model indicated non-significant indirect effects and only significant direct effects. The mediation analysis that involved the agency component of mind attribution as a proposed mediator also showed different results depending on which robot types were compared. Specifically, the model indicated significant indirect effects and significant direct effects in the comparisons between the mechanical versus the android robot types, and the humanoid versus the android robot types. The relation between robot types and human-machine distinctiveness was therefore partially mediated by the agency attribution of the different robot types. In the comparison between the mechanical versus humanoid robot types the model indicated non-significant indirect effects and only significant direct effects.
Correlation Between Mind Attribution and Damage to Humans and Their Identity
As in Study 2, to explore whether the correlation between mind attribution and damage to humans and their identity varies between robot types, correlation analyses were performed for each robot type. The results showed that the correlation between agency and damage to humans and their identity was not significant for mechanical robots (r mechanical = .128, p = .301), but the correlations for the humanoid and android robots were significant (r humanoid = .259, p = .034; r android = .304, p = .012). The correlation between experience and damage to humans and their identity was significant for mechanical robots (r mechanical = .252, p = .04), showed a non-significant trend for humanoid robots (r humanoid = .229, p = .063), and was significant for android robots (r android = .309, p = .011). While we replicated earlier research on anthropomorphic appearance [30], our results on mind perception were less straightforward. Possible explanations of these findings and of the inconsistencies between Study 2 and Study 3 concerning mind perception are given in the following discussion section.
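Before turning to the general discussion, the within-participant mediation and bootstrap procedure used throughout Studies 1-3 can be illustrated with the simplified Python sketch below. It implements a two-condition difference-score mediation with a percentile bootstrap and is a stand-in for the SPSS MEMORE macro, not a reimplementation of it (MEMORE additionally includes the grand-mean-centered average of the mediator as a covariate); the function name, data and parameter values are hypothetical.

import numpy as np

def within_mediation_bootstrap(m1, m2, y1, y2, n_boot=1000, seed=0):
    # m1, m2: mediator ratings per participant in conditions 1 and 2
    # y1, y2: outcome ratings per participant in the same conditions
    rng = np.random.default_rng(seed)
    m_diff = np.asarray(m2) - np.asarray(m1)
    y_diff = np.asarray(y2) - np.asarray(y1)
    n = len(m_diff)

    def indirect(idx):
        md, yd = m_diff[idx], y_diff[idx]
        a = md.mean()                                 # path a: condition -> mediator difference
        X = np.column_stack([np.ones(len(md)), md])   # intercept + mediator difference
        b = np.linalg.lstsq(X, yd, rcond=None)[0][1]  # path b: mediator difference -> outcome difference
        return a * b

    estimate = indirect(np.arange(n))
    boots = np.array([indirect(rng.integers(0, n, n)) for _ in range(n_boot)])
    ci_low, ci_high = np.percentile(boots, [0.5, 99.5])   # 99% percentile CI, matching the Bonferroni-corrected analyses
    return estimate, (ci_low, ci_high)

# hypothetical example: 55 participants rated a mediator and an outcome for two robot types
rng = np.random.default_rng(1)
m1, m2 = rng.normal(3, 1, 55), rng.normal(5, 1, 55)
y1, y2 = rng.normal(3, 1, 55), rng.normal(4.5, 1, 55)
print(within_mediation_bootstrap(m1, m2, y1, y2))

An indirect effect is judged significant when the bootstrap confidence interval excludes zero, which mirrors how the significant and non-significant indirect effects are reported in Tables 1, 3 and 5.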
General Discussion In the present study we investigated the factors that elicit negative feelings towards robots and clarified possible underlying mechanisms, including the extent to which robots look human-like and the extent to which they are attributed with a mind.More specifically, we expected that a felt decrease in distinctiveness between humans and machines would be the reason that participants feel a potential damage to their identity, and that this influence can be explained by the human-like appearance of a robot as well as the attribution of agency and experience to a robot.That is, when people attribute high levels of agency or experience to a robot, or when a robot looks very human-like, this makes the boundaries between humans and machines blurrier, leading to higher levels of perceived damage to ones' identity as human. Importantly, we could replicate earlier research [30] which demonstrated that anthropomorphic appearance indeed mediated the relationship between type of robot, human-machine distinctiveness, and perceived damage for humans and their identity.In three studies we were able to replicate earlier findings showing that when people feel that the distinction between humans and robots are blurred intergroup distinctiveness is threatened [30], both with similar and new stimuli of mechanical, humanoid, and android robots.This further strengthens the notion that although robot familiarity can be used to reach the goal of increasing robot acceptance, "this goal should however not conflict with the need for distinctiveness that typically characterizes intergroup comparisons" [30, page 299].Thus, when robots are designed to interact with people in their daily life, and when high acceptance is important for successful use (e.g., care taking robots for elderly), it is important to consider that human-like appearance can lead to resistance. 
The findings regarding the attribution of agency and experience were less consistent than the results on anthropomorphic appearance.Based on our findings, we cannot conclude that mind attribution has comparable effects on human/machine distinctiveness as anthropomorphic appearance and mediates the relationship between type of robot, human-machine distinctiveness, and damage to humans and their identity.In Study 2, one out of six comparisons showed a mediation effect for agency when comparing humanoid and android robots, which could be the result of chance.In Study 3 more significant results were found, with agency attributions mediating the influence on human/machine distinctiveness for the comparison between the mechanical and android robots, and the humanoid and android robots.Additionally, only in study 3, experience attribution mediated the effect on participants perception of human/machine distinctiveness, but for mechanical versus android robots only.Three explanations for why we did not find a stable pattern in results for mind attribution should be mentioned: Firstly, different stimuli were used between Study 2 and Study 3, with pictures in Study 3 being more controlled (e.g., same background) and only reflecting full-body postures.Research has demonstrated that robot perception is highly dependent on the sort of stimuli, and therefore, pictures in Study 3 lead to different effects.Whether these effects are more reliable needs to be replicated in future research.Secondly, because pictures were presented, it could be that participants in our studies could not imagine that the depicted robots really possess agency or experience.Thirdly, it might be that effects of agency and experience attribution are weaker than effects of anthropomorphic appearance, and therefore, the current research had not enough power to detect this influence.Therefore, future research should use a larger sample when investigating these effects. Interestingly, in both studies, experience and agency seemed to have some positive relation with the damage to humans and their identity, meaning that robots who were ascribed more agency and/or experience were considered more damaging to human identities.Although here an inconsistent pattern was found, the correlations fit well with research introducing an "uncanny valley of the mind" [6]: Digital agents that were perceived as autonomous artificial agents also elicited an increased feeling of eeriness in the observer.Our results suggest that it might be possible to distinguish the impact of 'mind attribution' on perceived human identity threat into two separate factors: agency and experience.Thus, attributions of agency and experience can lead to feelings of threat under certain circumstances.It can be speculated whether the ability of a robot to experience human-like emotions, such as pain or happiness, would indeed make the line between robots and humans blurrier; yet, it would also make a robot more relatable and familiar.In contrast, a robot that is able to carry out the most complex human cognitive functions without experiencing or feeling anything is not relatable to us at all.Our findings provide a first hint that these factors have a differential impact on feelings of threat to our human identity.However, definitely, further research is needed to corroborate this. 
The relationship between agency, experience, and damage to humans and their identity could point to other possible mediators instead of human-machine distinctiveness.For example, the extent to which humans can relate to a robot, or feel close to a robot could alleviate the potential damage to someone's identity experienced as a result of low distinctiveness.Whether mind perception leads to a decrease in human-machine distinctiveness in moving digital avatars, or which other factors might lead to the negative feelings felt is up for future investigations. One major limitation of the current research is that only pictures were presented, while films or interactions with robots (either real-life interactions or interactions using VR technologies, see [6]) would allow for more complexity in the design.It is therefore unclear whether seeing a robot in a picture leads to similar feelings and cognitions as seeing and interacting with an actual robot.It is likely that through real interaction, one's preliminary cognitions and feelings are updated based on experience, thus leading to more reliable and clear perceptions.Therefore, real-life interacting with robots is a promising venue for future research to explore.By using pictures, we were able to test our assumptions in a highly controlled environment, and allowed for comparisons with earlier research [30].Nevertheless, future studies should try to implement moving materials or real interactions to further validate what factors lead to a decrease in human-machine distinctiveness and negative feelings and perceived damage to ones' identity. Another limitation of the current study is that the design was only tested in a convenience sample of highly educated, young participants.In future research, it would be interesting to see whether the current results can also be found in a more diverse sample.As research has shown that elderly respond differently towards artificial agents [38], it might be and interesting target group: they might have less experience with seeing robots, and could therefore probably show stronger feelings of threat towards robots.In addition, it would be interesting to test how this relationship can be minimised.For example, research has shown that highlighting a shared goal can lead to less intergroup conflict [39].Thus, making it clear that an interaction with a robot is necessary to reach a desired state might help to increase acceptance. Our findings have important practical implications: Although a more human-like appearance and mind attribution can be beneficial for human-robot interactions [21], robots are better accepted when perceptual differences in appearance remain preserved.Thus, when designing robots for daily use, it is important users can clearly perceive nonhuman-robotic features in appearance and behaviour.This could for example be achieved by using a body that does not closely resemble human anatomy (such as for Cozmo or Roomba).Similarities in mind perception and experience are less threatening compared to similarities in appearance. 
The former does not affect the human-robot distinction as strongly as the latter. Nevertheless, robots that are perceived to have high agency and experience might evoke eeriness and unease. Therefore, while it is good that robots are able to demonstrate that they can make their own decisions or have a basic experience of emotions and feelings (e.g., by verbally expressing understanding or explaining why they behave in a certain way), moderation is key if one wishes to avoid negative evaluations.
Table 1: B's and 99% confidence intervals (between brackets) of the mediation analyses of Study 1, as a function of comparison (mechanical vs. humanoid; mechanical vs. android; humanoid vs. android).
Table 2: Descriptive statistics (Means, SD) of Study 2 for all variables (N = 55).
3.1.1 Participants. Sixty participants (52 females, 5 males, 3 unknown, M age = 19.30 years, SD age = 1.90, age range 17-24 years) completed the experiment for course credits. The experiment had a 3 (Robot type: mechanical robot vs. humanoid robot vs. android robot) within-subjects design, with damage to humans and their identity as the dependent variable, and physical anthropomorphism, mind attribution, and threat to human-machine distinctiveness as mediators. Data were acquired similarly to Study 1.
Table 3: B's and 99% confidence intervals (between brackets) of the mediation analyses of Study 2, as a function of comparison (mechanical vs. humanoid; mechanical vs. android; humanoid vs. android).
Table 5: B's and 99% confidence intervals (between brackets) of the mediation analyses of Study 3, as a function of comparison (mechanical vs. humanoid; mechanical vs. android; humanoid vs. android).
7,130.2
2020-06-18T00:00:00.000
[ "Psychology", "Computer Science" ]
Correlation lengths for random polymer models and for some renewal sequences We consider models of directed polymers interacting with a one-dimensional defect line on which random charges are placed. More abstractly, one starts from a renewal sequence on $\Z$ and gives a random (site-dependent) reward or penalty to the occurrence of a renewal at any given point of $\mathbb Z$. These models are known to undergo a delocalization-localization transition, and the free energy $\tf$ vanishes when the critical point is approached from the localized region. We prove that the quenched correlation length $\xi$, defined as the inverse of the rate of exponential decay of the two-point function, does not diverge faster than $1/\tf$. We prove also an exponentially decaying upper bound for the disorder-averaged two-point function, with a good control of the sub-exponential prefactor. We discuss how, in the particular case where disorder is absent, this result can be seen as a refinement of the classical renewal theorem, for a specific class of renewal sequences.
Introduction and motivations
The present work is motivated by the following two problems:
• Critical behavior of the correlation lengths for directed polymers with (de-)pinning interactions. Take a homogeneous Markov chain {S_n}_{n≥0} on some discrete state space Σ, with S_0 = 0 and law P. A trajectory of S is interpreted as the configuration of a directed polymer in the space Σ × N. In typical examples, S is a simple random walk on Σ = Z^d or a simple random walk conditioned to be non-negative on Σ = Z_+. Of particular interest is the case where the distribution of the first return time of S to zero, K(n) := P(min{k > 0 : S_k = 0} = n), decays like a power of n for n large. This holds in particular in the case of the simple random walks mentioned above. We want to model the situation where the polymer gets a reward (or penalty) ω_n each time it touches the line S ≡ 0 (which is called the defect line). In other words, we introduce a polymer-line interaction energy of the form Σ_{1≤n≤N} ω_n 1_{S_n=0} (entering the Boltzmann weight with a positive sign, so that ω_n > 0 is a reward), where N will tend to infinity in the thermodynamic limit. The defect line is attractive at points n where ω_n > 0 and repulsive when ω_n < 0. In particular, one is interested in the situation where the ω_n are IID quenched random variables. There is a large physics literature (cf. [1] and references therein) on these models, due to their connection with, e.g., problems of (1 + 1)-dimensional wetting of a disordered wall or with the DNA denaturation transition. In the localized phase, where the free energy (defined in the next section) is positive and the number of contacts between the polymer and the defect line, |{1 ≤ n ≤ N : S_n = 0}|, grows proportionally to N, one knows [10] that the two-point correlation function
|P_{∞,ω}(S_{n+k} = 0 | S_n = 0) − P_{∞,ω}(S_{n+k} = 0)|   (1.1)
decays exponentially in k, for almost every disorder realization. Here, P_{∞,ω}(.) is the Gibbs measure for a given randomness realization and the index ∞ refers to the fact that the thermodynamic limit has been taken. The exponential decay of correlation functions has been applied, for instance, to prove sharp results on the maximal excursion length in the localized phase [10, Theorem 2.5] and bounds on the finite-size correction to the thermodynamic limit of the free energy [10, Theorem 2.8]. The inverse of the rate of decay is identified as a correlation length ξ.
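One convenient way to write down this correlation length, consistent with its description as the inverse of the rate of exponential decay of the two-point function (1.1) (the limsup convention below is an assumption of this note rather than necessarily the convention adopted later in the paper), is

\[
\xi^{-1} \;=\; -\limsup_{k\to\infty}\,\frac{1}{k}\,
\log\bigl|\,P_{\infty,\omega}(S_{n+k}=0\mid S_n=0)-P_{\infty,\omega}(S_{n+k}=0)\,\bigr| .
\]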
A natural question is the relation between ξ and the free energy f, in particular in proximity of the delocalization-localization critical point, where the free energy tends to zero (see the next section) and the correlation length is expected to tend to infinity. The disorder average of the two-point function (1.1) is also known [10] to decay exponentially with k, possibly with a different rate [18]. The important role played by the correlation length, and by its relation with the free energy, in understanding the critical properties of disordered pinning models was emphasized in a recent work by K. Alexander [2].
• Speed of convergence for renewal sequences. Consider a renewal sequence τ = {τ_i}_{i∈N∪{0}} with τ_0 = 0 and IID inter-arrival times τ_i − τ_{i−1} with law p(.) on N, and let u_n := P(n ∈ τ). The classical renewal theorem states that u_n converges, for n → ∞, to
u_∞ := 1 / Σ_{n∈N} n p(n),   (1.2)
with the convention that 1/∞ = 0. It is natural (and quite useful in practice, especially in queuing theory applications) to study the speed of convergence in (1.2). In this respect, it is known (cf. for instance [4, Chapter VII.2], [17]) that, if
z_max := sup{z > 0 : Σ_{n∈N} e^{zn} p(n) < ∞} > 0,   (1.3)
then there exist r > 0 and C < ∞ such that
|u_n − u_∞| ≤ C e^{−rn}.   (1.4)
However, the relation between z_max and the largest possible r in Eq. (1.4), call it r_max, is not known in general. A lot of effort has been put into investigating this point, and in various special cases, where p(.) satisfies some structural ordering properties, it has been proven that r_max ≥ z_max (see for instance [5], where power series methods are employed and explicit upper bounds on the prefactor C are given). In even more special cases, for instance when the τ_i are the return times of a Markov chain with some stochastic ordering properties, the optimal result r_max = z_max is proved (for details, see [15,18], which are based on coupling techniques). However, the equality r_max = z_max cannot be expected in general. In particular, if p(.) is a geometric distribution, p(n) = e^{−nc}(e^c − 1) with c > 0, then one sees that u_n = u_∞ for every n ∈ N, so that r_max = ∞, while z_max = c. On the other hand, if for instance p(1) = p(2) = 1/2 and p(n) = 0 for n ≥ 3, then z_max = ∞ while r_max is finite. These and other nice counter-examples are discussed in [5].
The two problems are known to be strictly related: indeed, in the homogeneous situation (ω_n ≡ const) the law of the collection {n : S_n = 0} of points of polymer-defect contact is given, in the thermodynamic limit, by a renewal process of the type described above, with p(n) proportional to K(n)e^{−nf} (cf., for instance, [9, Chapter 2]). In this case, therefore, the free energy f plays the role of z_max above. With respect to the first problem listed above, the main result of this paper is that, in the limit where f tends to zero (i.e., when the parameters of the model are varied in such a way that the critical point is approached from the localized phase), the correlation length ξ is at most of order 1/f, for almost every disorder realization. An exponentially decaying upper bound, with a good control of the sub-exponential prefactor, is derived also for the disorder average of the two-point function (1.1), cf. Equation (2.17) of Theorem 2.1 and the discussion in Remark 2.2. As a corollary we obtain the following result for the second problem above: if the jump law p(.) of the renewal sequence is of the form p(n) = a_{z_max} L(n) n^{−α} e^{−z_max n}, with 1 ≤ α < ∞ and L(.) a slowly varying function (not depending on z_max), then for z_max small one has that r_max is at least of order z_max and C is at most of order z_max^{−c} for some positive constant c (see Theorem 2.1 and Remarks 2.2, 4.1 below for the precise statements).
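Before stating the consequences of this corollary more precisely, here is a short Python illustration of the renewal quantities involved. It computes u_n from the standard renewal recursion u_0 = 1, u_n = sum_{k=1}^{n} p(k) u_{n-k}, and checks the geometric example mentioned above, for which u_n coincides with its limit u_∞ for every n ≥ 1. The truncation of the inter-arrival law at N terms is an assumption of the sketch, not part of the model.

import numpy as np

def renewal_mass_function(p, N):
    # u[n] = P(n is a renewal point), via u_n = sum_{k=1}^{n} p(k) u_{n-k}, u_0 = 1
    u = np.zeros(N + 1)
    u[0] = 1.0
    for n in range(1, N + 1):
        u[n] = np.dot(p[1:n + 1], u[n - 1::-1])
    return u

c, N = 0.3, 50
n = np.arange(N + 1)
p = np.zeros(N + 1)
p[1:] = np.exp(-c * n[1:]) * (np.exp(c) - 1.0)   # geometric law p(n) = e^{-nc}(e^c - 1), truncated at N
u = renewal_mass_function(p, N)
u_inf = 1.0 / np.dot(n[1:], p[1:])               # renewal-theorem limit 1 / sum_n n p(n), truncated
print(u[1:6])     # constant, equal to 1 - e^{-c}
print(u_inf)      # the same value, so |u_n - u_inf| = 0 and r_max is infinite in this case

Replacing the geometric law by, say, p(1) = p(2) = 1/2 in the same sketch shows the opposite situation described above: z_max = ∞ while |u_n − u_∞| decays at a finite exponential rate.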
In particular, the corollary just stated means that |u_n − u_∞| starts decaying exponentially (with rate at least of order z_max) as soon as n ≫ 1/z_max.
Notations and main result
We will define our "directed polymer" model in an abstract way where the Markov chain S mentioned in the introduction does not appear explicitly. In this way the intuitive picture of the Markov chain trajectory as representing a directed polymer configuration is somewhat hidden, but the advantage is that the connection with renewal theory becomes immediate. The link with the polymer model discussed in the introduction is made by identifying the renewal sequence τ below with the set of the return times of the Markov chain S to the site 0. Let K(.) be a probability distribution on N := {1, 2, . . .}, i.e., K(n) ≥ 0 for n ∈ N and
Σ_{n∈N} K(n) = 1.   (2.1)
We assume that K(n) = L(n) n^{−α},   (2.2)
for some 1 ≤ α < ∞. Here, L(.) is a slowly varying function, i.e., a positive function L : R_+ ∋ x → L(x) ∈ (0, ∞) such that lim_{x→∞} L(xr)/L(x) = 1 for every r > 0. Given x ∈ Z, we construct a renewal process τ := {τ_i}_{i∈N∪{0}} with law P_x as follows: τ_0 = x, and the τ_i − τ_{i−1} are IID integer-valued random variables with law K(.). P_x can be naturally seen as a law on the set Ω_x of subsets of {x, x + 1, . . .} which contain x. Note that, thanks to (2.1), τ is a recurrent renewal process (possibly, null-recurrent). Now we modify the law of the renewal by switching on a random interaction as follows. We let {ω_n}_{n∈Z} be a sequence of IID centered random variables with law P and E ω_0^2 = 1. For simplicity, we require also the ω_n to be bounded. Then, given h ∈ R, β ≥ 0, x, y ∈ Z with x < y and a realization of ω, we let
dP_{x,y,ω}/dP_x (τ) = (1/Z_{x,y,ω}) exp( Σ_{x<n≤y} (βω_n + h) 1_{n∈τ} ) 1_{y∈τ},   (2.3)
where, of course, Z_{x,y,ω} is the normalization constant (the partition function), and P_{x,y,ω} is still a law on Ω_x. Note that the normalization condition (2.1) is by no means a restriction: if we had Σ := Σ_{n∈N} K(n) < 1, we could perform the replacements K(n) → K(n)/Σ and h → h + log Σ, and the measure P_{x,y,ω} would be unchanged. One defines the free energy as
f(β, h) := lim_{N→∞} (1/N) log Z_{0,N,ω}.
The convergence holds almost surely and in L^1(P), and f(β, h) is P(dω)-a.s. constant (see [9, Chap. 4] and [3]). It is known that f(β, h) ≥ 0: to realize this, it is sufficient to observe that (1/N) log Z_{0,N,ω} ≥ (1/N) log( K(N) e^{βω_N + h} ), which tends to zero for N → ∞. One then decomposes the phase diagram into localized and delocalized regions defined as L := {(β, h) : f(β, h) > 0} and D := {(β, h) : f(β, h) = 0}. In L the density of polymer-line contacts stays positive in the thermodynamic limit; in D, on the other hand, the density tends to zero with N. Another quantity which will play an important role in the following is µ(β, h). As is known (cf. [10, Theorem 2.5 and Appendix B]), for (β, h) ∈ L one has µ(β, h) > 0. On the other hand, it is unknown whether the ratio f(β, h)/µ(β, h) remains bounded for h → h_c(β). µ(β, h) is related to the maximal excursion length in the localized phase, in the sense that essentially Δ_N ≃ log N / µ(β, h), see [10, Theorem 2.5] (cf. also [1] for a proof of the same fact in a related model, the heteropolymer at a selective interface). As was proven in [10] (but see also [6] for the proof of the almost sure existence of the infinite-volume Gibbs measure for the heteropolymer model in the localized phase), the limit
E_{∞,ω}(f) := lim_{x→−∞, y→∞} E_{x,y,ω}(f)   (2.15)
exists, P(dω)-a.s., for every (β, h) ∈ L and for every bounded local observable f, and is independent of the way the limits x → −∞, y → ∞ are performed; this defines the infinite-volume measure P_{∞,ω}. A bounded local observable is a bounded function f : {τ : τ ⊂ Z} → R for which there exists I, a finite subset of Z, such that f(τ^1) = f(τ^2) whenever τ^1 ∩ I = τ^2 ∩ I. The smallest possible I is called the support of f. An example of a local observable is |τ ∩ I|, the number of points of τ which belong to I. On the other hand, τ_1 is not a local observable.
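As a small numerical aside on the free energy defined above, the sketch below estimates (1/N) log Z_{0,N,ω} from the standard recursion for the pinned partition function, Z_0 = 1 and Z_j = e^{βω_j + h} Σ_{k=1}^{j} K(k) Z_{j−k}, evaluated in log-space. The inter-arrival law used here (a pure power-law tail normalized on {1, ..., N}), the ±1 disorder and the parameter values are illustrative assumptions of the sketch, not choices made in the paper.

import numpy as np

def free_energy_estimate(N, beta, h, alpha=1.5, seed=0):
    rng = np.random.default_rng(seed)
    n = np.arange(1, N + 1)
    K = n ** (-alpha)
    K /= K.sum()                               # toy K(.): power-law tail, normalized on {1,...,N}
    omega = rng.choice([-1.0, 1.0], size=N)    # IID centered, bounded disorder with unit variance
    logZ = np.zeros(N + 1)                     # logZ[j] = log of the partition function pinned at 0 and j
    for j in range(1, N + 1):
        terms = logZ[j - n[:j]] + np.log(K[:j])        # log of K(k) * Z_{j-k}, k = 1..j
        m = terms.max()
        logZ[j] = beta * omega[j - 1] + h + m + np.log(np.exp(terms - m).sum())
    return logZ[N] / N

for h in (-0.5, 0.0, 0.5, 1.0):
    print(h, free_energy_estimate(5000, beta=0.5, h=h))

For h well inside the localized region the estimate is visibly positive, while deep in the delocalized region it drifts towards zero (finite-size effects may leave it slightly negative), in line with the decomposition of the phase diagram described above.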
A useful identity is the following: let a ∈ Z and f, g be two local observables, whose supports are contained in {. . . , a − 2, a − 1} and {a + 1, a + 2, . . .}, respectively. Then, if x < a < y, (2. 16) In other words, conditioning on the event that a belongs to τ makes the process to the left and to the right of a independent. This is easily checked from the definition (2.3) of the Boltzmann-Gibbs measure and from the IID character of Our first result is an exponentially decaying upper bound on the disorder-averaged two-point correlation function, in the localized phase: The constant C 1 (ǫ, β, h) does not vanish at the critical line: for every bounded subset Remark 2.2. Note that Theorem 2.1 is more than just a bound on the rate of exponential decay of the disorder-averaged two-point correlation. Indeed, thanks to the explicit bound on the prefactor in front of the exponential, Eq. (2.17) says that the exponential decay, with rate at least of order µ 1+ǫ , commences as soon as k ≫ µ −1−ǫ | log µ|. This observation reinforces the meaning of Eq. (2.17) as an upper bound on the correlation length of disorder-averaged correlations functions. It would be possible, via the Borel-Cantelli Lemma, to extract from Eq. (2.17) the almost-sure exponential decay of the disorder-dependent two-point function. However, from [18] one expects the almost-sure exponential decay to be related to f(β, h) rather than to µ(β, h). Indeed, we have the following: Recalling that f > µ, it is clear that Theorem 2.3 cannot be deduced from Theorem 2.1. Remark 2.4. It is quite tempting to expect that, in analogy with Theorem 2.1, the (random) prefactor C 2 (ω) is bounded above by for some random variable C 5 such that, say, EC 5 (ω, ǫ, β, h) ≤ c(B, ǫ) < ∞ for (β, h) belonging to a bounded set B ⊂ L. This would mean that the almost sure exponential decay with decay rate at least of order f 1+ǫ commences as soon as k ≫ n(ω)f −1−ǫ | log f|, with n(ω) random but typically of order one even close to the critical point. However, this kind of result seems to be out of reach with the present techniques. Once the exponential decay of the two-point function is proven, it is not difficult to obtain similar results for the correlation between any two given local observables (cf. Remark 5.1 below for some more details): Theorem 2.6. Let A and B be two bounded local observables, with supports S A and S B , and where C 1 and C 2 are as in Theorems 2.1 and 2.3. Sketch of the idea: auxiliary Markov process and coupling In this section, we give an informal sketch of the basic ideas underlying the proof of the upper bounds for the two-point function. The actual proof is somewhat involved and takes Sections 4 to 7. The basic trick is to associate to the renewal probability K(.) a Markov process {S t } t≥x such that, very roughly speaking, its trajectories are continuous "most of the time" and the random set of times {t ∈ Z ∩ [x, ∞) : S t = 0} has the same distribution as the discrete renewal process {τ i } i∈N∪{0} associated to K(.), with law P x . This construction is done in Section 4, where we see that S . is strictly related to the Bessel process [16] of dimension 2(α + 1). Once we have S . , we switch on the interaction and in the thermodynamic limit x → −∞, y → ∞ we obtain a new measureP ∞,ω on the paths {S t } t∈R . An important point will be that the process S . , underP ∞,ω , is still Markovian, and that the marginal distribution of τ := {t ∈ Z : S t = 0} is just the measure P ∞,ω defined in Eq. (2.15). 
At that point, we take two copies (S 1 . , S 2 . ) of the process, distributed according to the product measureP ⊗2 ∞,ω , and we define the coupling time T (S 1 , S 2 ) = inf{t ≥ 0 : Indeed, if the two paths meet before time k, we can let them proceed together from then on and they will either both touch zero at t = k, or both will not touch it. Note that at the left-hand side of (3. [18] a sharp result was proven in a specific case: if P is the law of the zeros of the one-dimensional simple random walk conditioned to be non-negative (but that proof works also for the unconditioned simple random walk), then the limit in (2.18) exists for (β, h) ∈ L and equal exactly f(β, h). Similarly, for the disorder-averaged two-point function the analogous limit exists and equals µ(β, h). The simplification that occurs in the situation considered in [18] is that two trajectories of the Markov chain which is naturally associated to K(.), i.e., of the simple random walk, must necessary meet whenever they cross each other. This avoids the construction of the auxiliary Markov chain and makes the coupling argument much more efficient. Let us emphasize that, in general, it is not even proven that the rate of exponential decay of the (averaged or not) two-point correlation function tends to zero when the critical point is approached (although this is very intuitive, and known for instance in the case considered in [18], as already mentioned). The Markov process t } t≥s be the Bessel process of dimension δ and denote its law by P (s) ρ . The Bessel process is actually well defined also for δ ≤ 2, but we will not need that here. For the application we have in mind, we choose the initial condition ρ . , which gives the probability of being in y at time t having started at x at time 0, is known explicitly [16]: its density in y with respect to the Lebesgue measure is given, for t, x > 0, by conditioned on T (s) < ∞. Finally, for n ∈ N we set K (δ) (n) :=P One can prove (cf. Appendix A; the proof is an immediate consequence of results in [13] and [12]) that the existence of the limit being part of the statement. Note thatρ (s) . We choose the parameter of the Bessel process as δ = 2(1 + α + ǫ), with ǫ > 0 (this is the same ǫ which appears in the statement of Theorem 2.1). Then, from Eqs. (4.3), (4.4) and (2.2) it is immediate to realize that there exists p = p(ǫ) with 0 < p < 1 such that, for every n ∈ N, whereK(n) ≥ 0 and, of course, n∈NK (n) = 1. The important point here is the nonnegativity ofK(n), which implies that both K(.) andK(.) are probabilities on N, to which renewal processes are naturally associated. Note for later convenience that, as a consequence of (B.2), Remark 4.1. Note that, if the slowly varying function L(n) in (2.2) tends to a positive constant for n → ∞, one can choose ǫ = 0 and in that case (4.7) can be improved into x = (0, 0). The process will satisfy the following two properties: u } u≤t ) a random variable Ψ which takes value 0 with probability (1 − p), and 1 with probability p (p being defined in Eq. (4.6)). At that point (see Figure 1): • If Ψ = 0, then we extract a random variable m ∈ N with probability lawK(.) and we let φ In the same time interval, we let ψ (x) u = Ψ = 0. At time t + m, we are back to condition (4.9) and we start again the procedure with an independent extraction of Ψ. • If Ψ = 1, then we let φ (x) u evolve like the processρ (t) u for u ∈ (t, t + T (t) ) where, we recall, T (t) is the (random, but almost surely finite) first time after t whenρ (t) equals 1/2. 
In particular, φ . At time T (t) we are back to condition (4.9) and we start again with an independent extraction of Ψ. The process S (x) . so constructed (whose law will be denoted byP x ), satisfies the following properties which are easily checked: u } u∈I , I bounded subset of R.) This is a consequence of the fact that in the localized region τ has a nonzero density in Z and that the limit exists for functions depending only on τ , as discussed in Section 2. We will call simply S . = (φ . , ψ . ) the limit process obtained as x → −∞, y → ∞, and τ = {t ∈ Z : φ t = 0}. D The process S . is Markovian. More precisely: if A is a local event supported on [u, ∞) thenP ∞,ω (A|{S t } t≤u ) =P ∞,ω (A|S u ). (This property is easily checked for x, y finite, and then passes to the thermodynamic limit). E Let again τ = {t ∈ Z : φ t = 0} and A a,b the event {a ∈ τ, b ∈ τ, {a+1, . . . , b−1}∩τ = ∅}, for a, b ∈ Z with x < a < b < y. Under the lawP x,y,ω , conditionally on A a,b , the variable ψ a+ (= ψ u for every u ∈ (a, b], from our construction of S . ) is independent of {S t } t∈(−∞,a)∪(b,∞) and is a Bernoulli variable which equals 0 with probability and 1 with probability where the lower bound follows from (4.7). As for {φ u } u∈(a,b] , conditionally on A a,b it is also independent of {S t } t∈(−∞,a)∪(b,∞) . If in addition we condition on ψ a+ = 0, then φ u = b − u, while if we condition on ψ a+ = 1 then {φ u } u∈(a,b] has the same law as a trajectory of ρ The coupling inequality Consider two independent copies S 1 . , S 2 . of the process S . , distributed according to the product measureP ⊗2 ∞,ω (.). As a consequence of property C of Section 4, we can rewrite Given two trajectories of S . , define their first coupling time after time zero as It is important to remark that we are not requiring T (S 1 , S 2 ) to be an integer. Then, from the Markov property of S it is clear that the r.h.s. of (5.1) equalŝ Therefore, we conclude that To proceed with the proof of Theorems 2.1 and 2.3 we are left with the task of giving upper bounds for the probability that the coupling time is large. This will be done in Section 7, but first we need results on the geometry of the set {t ∈ Z : φ t = 0} ∩ {1, . . . , k}, for k large and close to the critical line. The upper bounds of Section 7 on the probability of large coupling times imply therefore Theorem 2.6 (indeed, the proof of Eqs. (7.1) and (7.6) can be easily repeated in absence of the conditioning on the event φ 1 0 = 0.) Estimates on the distribution of returns in a long time interval Ideas similar to those employed in this section have been already used in Ref. [10] and, more recently, in [2]. To simplify notations, we will from now on set v := (β, h), µ := µ(v) and f := f(v). Also, in the following whenever a constant c(v) is such that for every bounded B ⊂ L one has 0 < c − (B) ≤ inf v∈B c(v) ≤ sup v∈B c(v) ≤ c + (B) < ∞, we will say with some abuse of language that it is independent of v. In particular, this means that c(v) cannot vanish or diverge when the critical line is approached. In this section we prove, roughly speaking, that if the interval {1, . . . , k} is large there are sufficiently many points of τ in it, and that these points are rather uniformly distributed. We will need also an analogous P( dω)-almost sure result. However, in this case the strategy has to be modified and {1, . . . 
, k} has to be divided into blocks whose lengths depend on ω: namely, let i 0 (ω) = 0, where A I is the event We can rewrite (in a unique way) B I := ∪ ℓ∈I B ℓ as a disjoint union of intervals, (If m(I) = 1, the formula is slightly modified in that the sum is only on x 1 ≤ i 1 and y 1 ≥ j 1 ; the estimates which follow hold also in this case). Here we are using the fact that the disorder variables are bounded, say, |ω n | ≤ ω max . To obtain (6.9) observe that, if i − r := max{τ i : τ i ≤ i r } and j + r := min{τ i : τ i ≥ j r }, P ∞,ω (A I ; i − r = x r , j + r = y r ∀r = 1, . . . , m(I)) (6.10) ≤ P ∞,ω (A I |i − r = x r , j + r = y r ∀r = 1, . . . , m(I)) ≤ m(I) r=1 K(y r − x r )e βωy r −h Z xr,yr,ω (6.11) where we used (2.16) in the last step. It is clear that, on the event A I , i − r ≥ i r − R if r > 1 (otherwise the block {i r − R, . . . , i r − 1} would be contained in B I , which is not possible due to i r ≥ j r +R) and similarly j + r ≤ j r +R if r < m(I). Then, (6.9) immediately follows. Note that by the first inequality in (B.3) one can bound Z xr,yr,ω ≥ Z xr,ir,ω Z ir,jr,ω Z jr,yr,ω . Therefore, using Eqs. (B.1), (B.2) and (B.4), we get that for some positive c 7 , c 8 . The factor µ −c 7 comes, through (B.4), from the sum A ω I := {B ω ℓ ∩ τ = ∅ for every ℓ ∈ I} ∩ {B ω ℓ ∩ τ = ∅ for every ℓ / ∈ I} (6.23) and rewrite B I := ∪ ℓ∈I B ω ℓ as {i xr (ω) + 1, . . . , i yr (ω)} where the indices x r , y r are chosen so that i xr (ω) ≥ i y r−1 (ω)+2. Then, with a conditioning argument similar to the one which led to Eq. (6.12), one finds for c sufficiently large In the third inequality we used, once more, Jensen's inequality for the logarithm function and in the fourth one the monotonicity of x → x log(1/x) for x > 0 small, plus Eq. (6.4) and the assumption that |I| ≥ ηM (ω). Considering all possible sets I of cardinality not smaller than ηM (ω), we see that the l.h.s. of (6.5) is bounded above by Finally, we can go back to the problem of estimating from above theP ⊗2 ∞,ω -probability that the coupling time is larger than k, cf. Section 5. This will conclude the proof of Theorems 2.1, 2.3 and 2.6. 7.1. The average case. We wish first of all to prove that To this purpose observe that, if τ a = {t ∈ Z : φ a t = 0}, a = 1, 2, This would be an immediate consequence of Proposition 6.1 if the conditioning on 0 ∈ τ 1 were absent. However, the proof of Proposition 6.1 can be repeated exactly in presence of conditioning, i.e., when the measure P ∞,ω (.) is replaced by P 0,∞,ω (.) := lim y→∞ P 0,y,ω (.) in Eq. (6.3). Therefore, +EP ⊗2 ∞,ω T (S 1 , S 2 ) > k + 1 U c , φ 1 0 = 0 , where U c is the complementary of the event U . On the other hand, provided that η is chosen sufficiently small (but independent of v) it is obvious that if the event U c occurs there exist at least, say, M/10 integers 1 < ℓ i < M such that ℓ i > ℓ i−1 + 2 and B r ∩ τ a = ∅, for every a ∈ {1, 2} and r ∈ {ℓ i −1, ℓ i , ℓ i +1}. The condition ℓ i > ℓ i−1 +2 simply guarantees that any two triplets of blocks of the kind {B ℓ i −1 , B ℓ i , B ℓ i +1 } are disjoint for different i, a condition we will need later in this section. We need to introduce the following definition: PSfrag replacements Using also Lemma 7.2 one has then that, conditionally on U c , theP ⊗2 ∞,ω -probability that T (S 1 , S 2 ) > k does not exceed Recalling the definitions (6.1) and (6.2) of R and M , one can bound this probability from above with exp −d 5 (ǫ)kµ 1+5ǫ /| log µ| 2 . A.1. Proof of Lemma 7.2. 
Let x, y be any pair of sites which satisfies the conditions required by Definition 7.1. Assume for definiteness that x ∈ τ 1 , y ∈ τ 2 . We assume also that x < y, otherwise the lemma is trivial. For technical reasons, it is also convenient to treat apart the case x = y − 1. In this case, the lemma follows immediately from (B.3). Indeed, from this is easily deduced in particular that, conditionally on y ∈ τ 2 , the probability that also y − 1 ∈ τ 2 is greater than some positive constant, independent of ω. As for the more difficult case where x < y − 1, it is clear that there exists x ≤ t ≤ y such that φ 1 t = φ 2 t whenever φ 2 x ≥ 1 (we assume that x = τ 2 , otherwise the existence of t such that φ 1 t = φ 2 t is trivial). This follows (see also Figure 2) from the observation that φ 1 x + = 1, φ 1 y ≥ 1/2 and that there exists y − 1 < s ≤ y with φ 2 s = 1/2, together with the fact that the trajectories of the Bessel process are continuous almost surely. Therefore, the Lemma follows if we can prove that the probability that φ 2 x ≥ 1 is bounded below by a positive constant. This is the content of (A.4) below. In order to state (A.4), we need to introduce the Bessel Bridge process of dimension δ [16, Chapter XI.3]. Given u ≥ 0 and a, v > 0, the Bessel Bridge is a continuous process {X t } t∈[0,a] (whose law is denoted by P a,δ u,v ) which starts from u at time 0, ends at v at time a and such that, given 0 < s 1 < . . . < s k < a, the law of (X s 1 , . . . , X s k ) has density Then, what we need is inf u,v≥1/2 Let p(x 1 , . . . , x 2n−1 ) be the probability density of (X 1/n , . . . , X (2n−1)/n ). Given x a := (x a 1 , . . . , x a 2n−1 ), x a j > 0, a = 1, 2, define x 1 ∨ x 2 := ((x 1 1 ∨ x 2 1 ), . . . , (x 1 2n−1 ∨ x 2 2n−1 )) and analogously x 1 ∧ x 2 . Then, from the continuity and Markov property of the Bessel Bridge process [16,Chapter XI.3] it is clear that p(x 1 ∨ x 2 )p(x 1 ∧ x 2 ) ≥ p(x 1 )p(x 2 ). This is just the FKG inequality, which implies in particular that the probability in (A.10), for any given n, is not smaller than P 2,δ u,v (X 1 ≥ 1). In this section we collect some technical estimates, which in very similar form have been already used in the previous literature. Let us notice at first that, for every x < y and uniformly in ω, Z x,y,ω ≥ e βωy−h K(y − x). (B.1) Also, Eq. (2.2) and the property of slow variation imply that for every ǫ > 0 there exist positive constants d 1 (ǫ), d 2 (ǫ) such that, for every n ∈ N, In Lemma A.1 of [10] it was proven that there exists c 1 , which in the case of bounded disorder can be chosen independent of ω, such that for every x < z < y Z x,z,ω Z z,y,ω ≤ Z x,y,ω ≤ c 1 ((z − x) ∧ (y − z)) c 1 Z x,z,ω Z z,y,ω . (B.3) As it was shown in [10, Proposition 2.7], this immediately implies that there exists c ′ 1 > 0 such that, for every y > x, 1 |y − x| E log Z x,y,ω − f(v) ≤ c ′ 1 log |y − x| |y − x| . Similarly, one can see that This work originated from discussions with Giambattista Giacomin, to whom I am very grateful for several suggestions. Partial supported by the GIP-ANR project JC05 42461 (POLINTBIO) is acknowledged.
7,919.8
2006-11-28T00:00:00.000
[ "Mathematics" ]
An FE investigation simulating intra-operative corrective forces applied to correct scoliosis deformity Background Adolescent idiopathic scoliosis (AIS) is a deformity of the spine, which may require surgical correction by attaching a rod to the patient’s spine using screws implanted in the vertebral bodies. Surgeons achieve an intra-operative reduction in the deformity by applying compressive forces across the intervertebral disc spaces while they secure the rod to the vertebra. We were interested to understand how the deformity correction is influenced by increasing magnitudes of surgical corrective forces and what tissue level stresses are predicted at the vertebral endplates due to the surgical correction. Methods Patient-specific finite element models of the osseoligamentous spine and ribcage of eight AIS patients who underwent single rod anterior scoliosis surgery were created using pre-operative computed tomography (CT) scans. The surgically altered spine, including titanium rod and vertebral screws, was simulated. The models were analysed using data for intra-operatively measured compressive forces – three load profiles representing the mean and upper and lower standard deviation of this data were analysed. Data for the clinically observed deformity correction (Cobb angle) were compared with the model-predicted correction and the model results investigated to better understand the influence of increased compressive forces on the biomechanics of the instrumented joints. Results The predicted corrected Cobb angle for seven of the eight FE models were within the 5° clinical Cobb measurement variability for at least one of the force profiles. The largest portion of overall correction was predicted at or near the apical intervertebral disc for all load profiles. Model predictions for four of the eight patients showed endplate-to-endplate contact was occurring on adjacent endplates of one or more intervertebral disc spaces in the instrumented curve following the surgical loading steps. Conclusion This study demonstrated there is a direct relationship between intra-operative joint compressive forces and the degree of deformity correction achieved. The majority of the deformity correction will occur at or in adjacent spinal levels to the apex of the deformity. This study highlighted the importance of the intervertebral disc space anatomy in governing the coronal plane deformity correction and the limit of this correction will be when bone-to-bone contact of the opposing vertebral endplates occurs. Background Scoliosis is a three dimensional deformity of the spine, involving a side-to-side curvature in the coronal plane and axial rotation of vertebrae in the transverse plane ( Figure 1A). Patients with severe or progressive deformities are generally treated surgically, and surgical correction aims to arrest curve progression and achieve the best possible improvement in deformity through a reduced Cobb angle, while minimizing the risk of surgical complications (Scoliosis Research Society, [1]). However despite improved 3rd generation implant designs, implant related complication rates are still high. A recent meta-analysis [2] reported an overall mean complication rate of 20% for 5,780 adolescent idiopathic scoliosis (AIS) patients who had undergone scoliosis corrective surgery. The anterior single rod correction procedure is one possible surgical technique [3] ( Figure 1B) for treating scoliosis. 
This procedure involves removing the deformed intervertebral discs, implanting material to promote fusion of the intervertebral joint space and securing metal rods to the spinal vertebra using screws [4]. The surgeon achieves an intra-operative reduction in the patient's deformity by applying compressive forces across the fused intervertebral disc spaces via pairs of adjacent screws, while securing the rod to the vertebra. Previous researchers have demonstrated the potential of computational methods [5] and in particular finite element (FE) models to investigate the mechanics of the scoliotic spine during surgery [6][7][8]. FE models which are personalized to include representations of the individual patient's soft and osseous anatomy and spinal loading conditions have the potential to assist surgeons in planning the patient's surgery and in optimizing their treatment in order to obtain the best possible surgical outcomes. Irrespective of the aetiology of the spinal deformity, surgical treatment involves applying biomechanical corrective forces to the spine using implants attached to the spinal anatomy. Implant related complications involve mechanical failure of the spinal tissues, thus an investigation of the biomechanics of the surgically corrected spine lends itself to the use of the FE method, which is able to predict stresses and strains in both implants and spinal tissues. The aim of this study is to use FE models derived from computed tomography data for the thoracolumbar spine and ribcage of AIS patients to investigate the biomechanics of the surgically corrected spine during single rod anterior scoliosis surgery (Figure 1 shows radiographs of an AIS patient's spine, pre-operatively (A) and post-operatively (B), after undergoing a single rod, anterior procedure). The question of interest is how the AIS deformity, specifically the coronal Cobb angle and disc space deformity, is reduced with increasing magnitudes of surgical corrective force. The ability of the FE model to predict tissue-level stresses was also used to predict the surgically induced contact stresses between adjacent vertebral endplates. Methods FE models for eight AIS patients who underwent single rod thoracoscopic anterior scoliosis surgery (Table 1) were individualized to the patient's osseous anatomy using clinically indicated, low-dose (2.0-3.7 mSv radiation dose) computed tomography (CT) scans. These CT scans were obtained pre-operatively for surgical planning purposes [9]. This study involved a retrospective investigation of FE simulated outcomes for this series of patients, who had previously been treated at the Mater Children's Hospital in Brisbane, Australia. In order to determine the effect of varying surgical corrective forces on the predicted deformity correction, these models were analysed using statistical data for intra-operative compressive forces measured in a recent experimental study by our group [10]. The FE models were analysed using ABAQUS 6.9-1 (Dassault Systemes, France) on an SGI Altix XE computational cluster (608 × 64 bit Cores, 1728 GB memory). Patient-specific anatomy and finite element (FE) models for AIS patients Our method for generating three-dimensional patient-specific osseo-ligamentous anatomy and FE model geometry for the thoracolumbar spine and ribcage has been described elsewhere [8,11], so will only be briefly presented here.
Using custom-developed algorithms and image-processing software (Matlab R2007b, The Mathworks, Natick, MA) the co-ordinates for specific bony landmarks on the vertebrae, ribs and sternum/manubrium were manually selected from the thresholded CT datasets. These landmarks were imported into custom FE pre-processing software written in Python (Python 2.5, Python Software Foundation) which used parametric descriptions for the vertebral bodies and posterior bony elements, ribs (including costal cartilage), sternum, intervertebral discs (nucleus pulposus and collagen fibre-reinforced anulus fibrosus), facet joints and ligaments to create an osseo-ligamentous FE model of the thoracolumbar spine and ribcage ( Figure 2A) with anatomy personalized to the individual patient. Seven spinal ligaments were simulated at each vertebral level and were represented as either linear connectors or in the case of the anterior/posterior longitudinal ligaments, spring elements in series and parallel. The ligaments were defined between bony attachments, with no ligament wrapping simulated. Three-dimensional geometry for the vertebral bodies was interpolated between the vertebral endplates [12] and similarly, the intervertebral disc geometry was interpolated from the adjacent vertebral endplates. The curved transverse profile for the articulating surfaces of the facet joints was described using a sinusoidal curve and the curvatures of the ribs were defined using 5th order polynomials, with both derived from user-selected bony landmarks. Interfacial contact between the articulating surfaces of the facet joints was modelled using exponential softened contact (normal contact) and zero-friction tangential sliding. The costo-vertebral joints were represented in detail, since these structures are of key importance in governing the biomechanics of the spine [13,14]. Both the costovertebral and costo-transverse connections were represented and our method for simulating this joint has been described and validated in a previous study [15]. The elements and material parameters describing the simulated spinal structures are detailed in Table 2. There is a paucity of experimental data describing the mechanical behaviour of adolescent spinal tissues, much less tissues from AIS patients. The two main methods available to determine these properties are either direct measurement, which is highly challenging due to difficulties in accessing the tissues and obtaining precise biomechanical data, or inverse determination using data from preoperative flexibility assessments carried out by the patients. A recent biomechanical study measured the three dimensional load-displacement response across spinal joints for two AIS patients [16] using a highly innovative technique to obtain in vivo joint stiffness. However, the results of this study are preliminary with data for only two patients and using this technique, the resulting joint stiffness does not provide sufficient resolution to determine individual tissue behaviour. In our previous studies we have attempted to derive patient-specific soft tissue parameters for the spinal ligaments using clinical data from pre-operative flexibility assessments [17,18]. In these studies, due to the lack of literature for the adolescent tissues, an initial set of 'benchmark' tissue parameters were based on the mechanical properties derived from adult subjects ( Table 2) [19][20][21][22][23][24][25][26][27][28][29][30]. 
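As a concrete illustration of the kind of geometric pre-processing described above (the study's own pre-processing tools were written in Python), a rib centre line defined by a 5th-order polynomial fitted through user-selected landmarks might be generated as in the minimal sketch below; this is not the authors' code, and the landmark coordinates are synthetic placeholders.

```python
import numpy as np

# Synthetic stand-in for rib centre-line landmarks picked from the CT data (mm),
# parameterised by a normalised arc-length coordinate s in [0, 1].
s_landmarks = np.linspace(0.0, 1.0, 9)
landmarks_xyz = np.cumsum(np.random.default_rng(0).normal(size=(9, 3)), axis=0) * 10.0

# Fit a 5th-order polynomial to each coordinate, as described for the rib curvature,
# then resample a smooth centre line from which elements could be meshed.
coeffs = [np.polyfit(s_landmarks, landmarks_xyz[:, i], deg=5) for i in range(3)]
s_fine = np.linspace(0.0, 1.0, 101)
centreline = np.column_stack([np.polyval(c, s_fine) for c in coeffs])

print(centreline.shape)  # (101, 3): a smooth, resampled rib centre line
```

The same pattern (user-selected landmarks in, smooth parametric geometry out) would apply analogously to the other anatomical entities described above.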
Modelling the surgically altered spine The eight patients modeled in the study had previously undergone a single rod, anterior scoliosis procedure and clinical follow-up data was available. In carrying out this procedure the surgeon produces an immediate (intra-operative) reduction in the spinal deformity by attaching a metal rod to the anterior spinal column. The surgery firstly involves the insertion of screws into the vertebral bodies within the primary structural curve. Screws are inserted on the convex side of the primary structural curve, directly into the lateral side each vertebral body. Following this, the discs within the limits of this curve are partially removed (within the limits of endoscopic surgical accesss) and the disc spaces are packed with bone graft to promote bony fusion after surgery. In a step-wise manner, the surgeon then applies compressive forces between the screw heads of adjacent pairs of vertebrae (starting at the most-caudal motion segment in the structural curve), to reduce the level-wise deformity at each motion segment, and then locks the screws onto the rod. Thus, these level-wise corrections produce a cumulative reduction in the overall spinal deformity, which is held in place by the screw heads being locked onto the rod. This surgical procedure was simulated for the eight patients included in this study, by adding the screws and rod to each patient-specific model, and by removing disc material from the models in the same manner as the surgically performed discectomies. Clinical data for the surgical procedure carried out on each patient was used to simulate the surgery in each patient FE modelthe portion of intervertebral disc elements removed from each simulated joint space was representative of the amount of disc material extracted clinically; the clinical spinal levels fused were used to define the vertebrae in which screws were simulated; and the geometry for the simulated surgical instruments was representative of the screw diameters and rod diameter implanted for each patient. As such, eight patient-specific surgically altered spines were simulated. The screws were assumed to be perfectly bonded to the surrounding vertebral bone, so the contact mechanics of the bone-implant interface was not considered in the models. In modelling the discectomies, the fused intervertebral levels were simulated by removing approximately two-thirds of the brick elements representing the anulus fibrosus and by removing the entire hydrostatic fluid cavity representing the nucleus pulposus for these discs. The bone graft material was not simulated in this study, since bony fusion does not occur until 3-6 months after surgery and the material offers minimal mechanical resistance during surgery. The contact interaction between the exposed vertebral endplates at the discectomy levels was modelled using an exponential, softened contact relationship for normal contact and Coulomb friction (μ = 0.3) for tangential sliding. This softened contact relationship simulated a cartilaginous endplate thickness of 0.1 mm, being the distance at which the contact pressure between adjacent endplates became non-zero. Simulating the intra-operative loadcase and boundary conditions Surgical forces There is limited biomechanical data available in the literature describing the surgical forces applied intraoperatively during anterior spinal deformity surgery. As such, in a recent in vivo biomechanical study by members of our group, Fairhurst et al. 
[10] presented intraoperatively measured force data for a series of 15 AIS patients who underwent the single rod anterior corrective procedure. This study presented descriptive mechanical data (mean and standard deviation) for the surgical corrective forces applied intra-operatively at each intervertebral level, normalized by vertebral level relative to the apex of the curve. (This study was performed with approval from the Mater Children's Hospital Ethics Committee). Due to the timing of the two studies, the 15 patients in this previous biomechanical study were not included in the patient series for the current study. While these biomechanical data could not be used to provide personalized force data for the eight AIS patient FE Titanium alloy models in the current study, the data was still invaluable in providing representative values for intra-operatively applied forces in a comparable patient data setdata which was heretofore unavailable in the literature. In keeping with the study by Fairhurst et al. [10], if the apex of the deformity for the patients in the current study was a vertebra (eg. T7), the disc space caudal to this was defined as the apical level (T7T8). Using these data for the mean and standard deviation, three separate compressive force profiles were defined in the current study (Table 3) and these forces were applied to the patient-specific models by normalizing the structural curve in each model using the same method presented by Fairhurst et al. [10]. The three different force profiles were used to investigate the sensitivity of the predicted deformity correction to intra-operative surgical forces. Boundary conditions To simulate the guided sliding movement of the screws along the rod during surgery, a 'no separation' normal contact and frictionless tangential contact definition were defined between the screw head and the surface of the rod. After the surgical force loading step had been applied for each pair of adjacent screws, this tangential contact definition was changed to roughened (bonded) contact to simulate the surgical procedure for locking the screws onto the rod. During the simulations the spine was fully constrained from rigid body motion at the inferior-most vertebral level (L5) and stabilized in the lateral direction at the superior-most vertebra to simulate the constraint provided by the operating table (since the patient is positioned in the lateral decubitus position on the operating table). Rod pre-bend Intra-operatively, the rod is pre-bent manually prior to being attached to the vertebrae [4]. The angle of rod pre-bend varies from patient to patient based on the surgeon's judgement of the achievable deformity correction and is not measured clinically. In the absence of measured values for the rod pre-bend angle in each case, the simulated pre-bend in the models was based on the preoperative coronal Cobb angle for each patient. In this study, a 'prebend' load step was performed in which the screw heads were fixed in space, and then the connector elements between the rod and the screw heads were reduced to zero length (these elements provide an axial link between the connected nodes on the rod and screw and have no associated stiffness), in order to bend the rod to conform to the pre-operative profile of the spine. Following the prebend load step, the fixed boundary constraint on the screw heads was removed in the second loadstep allowing elastic springback of the rod prior to the actual surgery simulation steps. 
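To make the level normalisation concrete, the mapping from forces indexed relative to the apical disc space onto one patient's instrumented levels could be sketched as follows; the force values and level names here are invented for illustration only and are not the Table 3 profiles.

```python
# Hypothetical compressive forces (N), indexed by disc space relative to the
# apical level (0 = apical, negative = caudal, positive = cephalad). These
# numbers are placeholders; the study applied the mean and +-1 SD profiles
# defined in Table 3.
force_by_relative_level = {-3: 250.0, -2: 350.0, -1: 450.0, 0: 500.0,
                           1: 450.0, 2: 350.0, 3: 250.0}

# One hypothetical patient's instrumented disc spaces, ordered caudal to cranial.
instrumented_levels = ["T9T10", "T8T9", "T7T8", "T6T7", "T5T6"]
apical_level = "T7T8"  # e.g. the disc space caudal to an apical T7 vertebra

apex_index = instrumented_levels.index(apical_level)
applied_forces = {level: force_by_relative_level[i - apex_index]
                  for i, level in enumerate(instrumented_levels)}
print(applied_forces)
```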
Analysis As described above, each of the eight patient-specific models were analyzed using three separate intra-operative force profiles (Force profiles A, B and C in Table 3). These 24 analyses were performed using a quasi-static solver (no inertial effects) with the ABAQUS nonlinear geometry capability enabled. The predicted corrected Cobb angle for the instrumented curve was calculated for each analysis and compared with the clinically measured post-operative Cobb angle for each patient (using the 1 week post-operative standing x-ray). In comparing model predictions with clinical measurements, the accepted clinical radiograph measurement variability of ±5 o , [31] was taken into account. Since the intervertebral discs are the primary spinal structures which impart flexibility to the anterior column, anterior surgical correction of the spinal deformity primarily involves reduction of these disc spaces (height and/or wedge angle). To better understand how correction of the scoliosis deformity is achieved, the predicted change in disc space wedge angle in the coronal plane was calculated during each of the simulated surgical procedures ( Figure 3). For each model, the segmental correction (ie. change in disc space wedge angle, Δangle = α initial -α final , Figure 3) at each intervertebral disc space was expressed as a fraction of the total correction of the instrumented curve (referred to as Disc Space Correction Ratio), in order to determine the relative contribution of each disc space to the total coronal plane Cobb angle correction. Note that a positive wedge angle α represents an angle which is opening towards the convex side of the curve, while a negative α represents a wedge angle which opens towards the concavity of the curve. The simulated contact pair separation (normal distance between adjacent endplate surfaces), and also, in cases where the endplates were in contact, the pressure between adjacent pairs of endplates, were analysed at each of the intervertebral disc spaces in the instrumented spine. From this data it was determined whether the disc space wedge angle was closing/becoming negative (ie. non-zero contact pressure, Figure 3) or whether adjacent endplates were touching (i.e. contact separation indicated bone-to-bone contact, Figure 3) due to the simulated surgery. A non-zero contact pressure in the absence of bone-to-bone contact occurred when the endplates were closing, but the normal separation distance between adjacent endplates was within the separation range (0-0.1 mm) defined using the softened-exponential contact relationship. Overall and segmental coronal Cobb correction The predicted results for post-operative Cobb angle for seven of the eight patient-specific models were within the 5°clinical Cobb measurement variability ( Figure 4) for at least one of three force profiles. The predicted corrected Cobb angle for patient five was negative, indicating the applied corrective forces 'over-corrected' the spinal deformity for force profiles A and B. For all patients, there was an increase in the predicted corrected Cobb angle with increasing intra-operative compressive force. When comparing the predicted normalized disc space corrections for each of the three load profiles, the largest proportion of overall correction occurred at the apical intervertebral discs for force profiles A and B, with diminishing correction caudal and cephalic to this level ( Figure 5). 
For Force profile C, the largest proportion of correction was observed at the apical disc in three of the eight patients, and in either the cranial or caudal periapical disc for the remaining five patients (Figure 5). Figure 5 shows the normalized disc space correction for each spinal level within the instrumented curve; the disc space level was normalized relative to the apical disc so that the eight models could be compared, and A, B and C denote force profiles A, B and C. Note that a negative correction ratio indicates the joint space wedge angle had increased compared with the pre-operative angle; however, for patients 2, 3, 4 and 6 this increase was less than 0.5 degrees, indicating the disc space angle was essentially the same after the simulated surgery. Although the largest portion of overall correction was predicted at or near the apical intervertebral disc as presented in Figure 5, this was not necessarily the disc with the largest pre-operative wedge angle (Figure 6). A comparison of the vertebral and intervertebral disc wedge angles in the coronal plane based on the model geometry before and after the surgical correction showed that between 2.6% and 64.5% of the initial coronal deformity (Cobb angle) was due to wedging in the intervertebral discs (Figure 6). Model predictions for patients one, two, three and five showed endplate-to-endplate contact was occurring on adjacent endplates of one or more intervertebral disc spaces in the instrumented curve at the end of the surgical loading steps. For these disc spaces, the disc wedge angle was negative (Figure 3). Note that the initial coronal deformity for patients two, three and five was a result of primarily vertebral wedging rather than disc wedging (Figure 6). For all except patient one, with increasing corrective forces the number of intervertebral disc spaces with contact pressure on the endplates increased (e.g. Patient 2, total of six disc spaces: contact pressure on six disc spaces due to Force profile A, contact pressure on three disc spaces due to Force profile B, contact pressure on one disc space due to Force profile C). The analysis for patient two predicted contact between adjacent endplates for all cleared disc spaces in the instrumented curve due to force profile A (Figure 7). Collating the results for the cumulative change in intervertebral disc space wedge angle during the level-by-level compression steps of the simulated surgery demonstrated two typical response patterns (an example of each is shown in Figure 8). Firstly, in the case of patients two, three and five, the initial cumulative disc wedge angles for these patients accounted for only 2%, 20% and 12%, respectively, of the initial major Cobb angles (Figure 6, angles given above bars in chart). During the simulated surgical steps, the wedge angle in the disc spaces was progressively reduced, resulting in a cumulative reduction in the overall Cobb angle in which the final coronal wedge angle for the majority of the disc spaces was negative (Figure 8A).
Secondly, in the case of the remaining patients (one, four, six, seven and eight), the majority of the disc spaces initially demonstrated a positive wedge angle (α as described in the Methods section) which was progressively reduced with each simulated surgical load step to result in a cumulative reduction in the overall Cobb angle; however, the final coronal wedge angle for the majority of the discs was still positive (Figure 8B). Figure 6 presents the pre-operative coronal wedge angle for both the vertebra (green) and intervertebral discs (yellow), shown cumulatively for each patient; the sum of the vertebra and intervertebral disc wedge angles for all spinal levels in a particular patient gives the overall pre-operative coronal Cobb angle (major Cobb angle shown above the bar), and negative wedge angles mean the disc or vertebra was wedged in the opposite direction to that of the major curve. Discussion Using patient-specific FE models of the osseoligamentous thoracolumbar spine, this study investigates the biomechanical response of eight AIS patients to surgical corrective forces applied during single rod, anterior scoliosis surgery. Each FE model was subjected to three corrective force profiles in the range of experimentally measured values, and the resulting model response was investigated with particular focus on the predicted coronal plane correction occurring in the intervertebral disc spaces following partial discectomy and single anterior rod instrumentation. A limitation of this study is that the passive osseoligamentous models of the spine and ribcage used herein do not provide biomechanical insights on the response of the spine to post-operative loading conditions which involve muscle activation. While the spinal muscles may play a role in passively resisting loads applied to the spine while the patient is anaesthetized [32], the current study assumes this contribution to spinal flexibility is minimal in comparison to that of the ligamentous and cartilaginous tissues of the spine. Post-operatively, the corrected Cobb angle is normally measured clinically using standing radiographs obtained one week after surgery. However, the comparison between the clinical and predicted corrected Cobb angle in the current study (Figure 4) was based on model predictions which were analysed for the surgical loadcase only, and thus assumed the patient was still supine. Ideally, supine radiographs obtained immediately after surgery, while the patient is still recovering and so is not yet load-bearing, would provide a better clinical comparison for the predicted post-operative Cobb angle from the patient-specific models. However, these radiographs were not available for the patients in the current study. Once the rod is surgically attached to the vertebra, it is reasonable to assume that the instrumented region of the spine would experience only small intervertebral motions (<1°), since the main purpose of the surgery is to ensure that motion is sufficiently restricted such that bony fusion can occur between adjacent vertebral bodies. Therefore, the difference in the clinically measured corrected Cobb angle from supine compared to standing radiographs is not expected to be of the magnitude which is observed prior to surgery in the uninstrumented spine.
The use of tissue mechanical properties derived from adult data to simulate adolescent spinal tissues is another limitation of the study, and is a necessary consequence of the paucity of paediatric and adolescent tissue mechanical data available in the literature. However, we note that tissue stiffness (e.g. the force-displacement response of a whole ligament) is a result of both the inherent mechanical response of the ligament tissue itself and the dimensions (in this case length and cross-sectional area) of the ligament. By including patient-specific anatomical landmarks as the ligament attachment points in the models, the patient-specific modeling approach used in the current study incorporates variations in ligament length between patients, and therefore goes some way to simulating patient-specific tissue properties. Another limitation of the current study is that the angle of rod prebend which is introduced prior to attaching the rod to the patient's spine is not measured clinically and is based on the surgeon's judgement. In the current study, this angle of prebend for the simulated surgery was estimated using the pre-operative Cobb angle; however, future studies using this patient series will focus on investigating the sensitivity of model predictions to the prebend angle and plastic prestrain in the rod. With regard to model validation, Figure 4 showed that the predicted Cobb angles after surgical correction were within 5° agreement with the clinical values for seven of the eight patients in the study. However, it is important to keep in mind that the surgery force profiles used in the study were not 'patient specific', since average surgery force data for an experimental measurement series [10] were used to define the model load profiles for all eight patients in the current study. The results from the current study show that increasing the simulated intra-operative forces resulted in a reduction in the predicted corrected Cobb angle. Measurement variability from clinical radiographs results in a wide range of error (±5°), and furthermore, there was large variability in the intra-operatively measured surgical forces, which resulted in a similarly wide range of variation in the predicted corrected Cobb angle. It should be noted that the inter-relationship between these sources of variability may have the potential to obscure patterns in the predicted outputs. For instance, the results for patient five suggest that the average surgery forces applied to the model were higher than those applied intra-operatively for this patient. (Figure caption: the Σ values represent the cumulative sum of the disc wedge angles at the beginning and end of the analysis and equate to the portion of the overall coronal Cobb angle due to disc wedging; the schematics show an anterior view of the spinal column for each patient, with the disc wedge angles delineated according to the legend for the bar chart, highlighting positive, negative and zero disc wedge angles; note that the ordering of the disc wedge angles in the stacked bars does not reflect the anatomical ordering in the spinal column, since in some cases adjacent discs have oppositely signed wedge angles.) While the descriptive data for surgical forces were measured for a series of AIS patients from the same study population as the patients simulated in the current study, the use of intra-operative force data measured for each individual patient would provide a more ideal simulation for individual patient loading.
This reflects a limitation of the future clinical application of patient-specific modeling approaches for all such virtual spine models, in that patient-specific surgically applied forces can only be measured at the time of surgery, therefore actual patient force data can only be simulated retrospectively postsurgery. Aside from modeling considerations, the substantial variation in surgically applied corrective forces warrants further study, and there may be a case for developing technology to provide force feedback to surgeons during implant insertion. The results of this study highlight the importance of the intervertebral disc space anatomy in governing the coronal plane deformity correction which may be achieved in the instrumented curve. Since the partially cleared intervertebral disc spaces are the primary anatomy in the anterior column of the spine imparting flexibility, the maximum correction which may be achieved surgically will be governed by the anatomy of the discs in terms of disc wedge angle and disc height. The limit of this achievable deformity correction will be when bone-to -bone contact of the opposing vertebral endplates occurs, and for different patients, this limit will be achieved with varying magnitudes of surgical corrective forces. One of the strengths of the patient-specific model geometry used here is the ability to capture endplate to endplate contact during the surgical correction, and thus to predict the diminishing return between applied corrective force and segmental correction. Results for the predicted corrected Cobb angle indicate that there is an inverse relationship between the magnitude of the total corrective force and the decrease in corrected Cobb angle and this is a proportional relationship for all except patient two. By increasing the total corrective force by as much as 120% (comparing the total force applied in Profile A to the total for Profile C), this resulted in a reduction in the corrected Cobb angle. For example, for patient three, the corrected Cobb angle for Profile C was 19.1 o and for Profile A was 6.9 o (Figure 4), which represented a 64% reduction in the corrected Cobb angle with increasing corrective force. This percentage reduction in corrected Cobb angle ranged from 32 to 84% when comparing the results for Profile A to Profile C for the eight patients ( Figure 4). Moreover, as stated above, the anatomy of the discs will strongly influence the maximum achievable correction and for some patients, applying increasing magnitude corrective forces will result in bone-to-bone contact in the disc space and unnecessarily load the vertebral bone with a comparatively minor improvement in deformity correction. As such, the interaction of these key biomechanical factors of force, geometry (patient anatomy) and tissue stresses is of key importance in achieving an optimal correction for a patient, with the least risk of excessive loads on the spinal tissues causing possible implant related complications. Herein lies a key advantage of use of patient-specific FE models as tools to assist surgeons in pre-operative planning for deformity surgery. Conclusions Attempts to improve the outcomes of spinal deformity surgery using patient-specific computer models depend strongly on the ability of the models to correctly capture the anatomy, tissue mechanical properties, and applied loading in individual patients for their validity. 
The simulations presented in this study are an initial step in the development of computational tools to predict surgical deformity correction. This study demonstrated a direct relationship between the surgically applied corrective forces and the deformity correction achieved, showing that the majority of deformity correction occurs in the intervertebral disc spaces at or near the apex of the deformity. The study results highlighted the importance of the intervertebral disc space anatomy in influencing the coronal plane deformity correction. By better understanding how the mechanics of a patient's spine is altered during scoliosis corrective surgery, patient-specific models such as these can potentially provide an improved understanding of how to achieve an optimum correction for an individual patient's spine.
7,393.4
2013-05-08T00:00:00.000
[ "Engineering", "Medicine" ]
Role for the Third Intracellular Loop in Cell Surface Stabilization of the α2A-Adrenergic Receptor* Previous studies have shown that α2A-adrenergic receptor (α2A-AR) retention at the basolateral surface of polarized MDCKII cells involves its third intracellular (3i loop). The present studies examining mutant α2A-ARs possessing short deletions of the 3i loop indicate that no single region can completely account for the accelerated surface turnover of the Δ3iα2A-AR, suggesting that the entire 3i loop is involved in basolateral retention. Both wild-type and Δ3i loop α2A-ARs are extracted from polarized Madin-Darby canine kidney (MDCK) cells with 0.2% Triton X-100 and with a similar concentration/response profile, suggesting that Triton X-100-resistant interactions of the α2A-AR with cytoskeletal proteins are not involved in receptor retention on the basolateral surface. The indistinguishable basolateral t 1 2 for either the wild-type or nonsense 3i loop α2A-AR suggests that the stabilizing properties of the α2A-AR 3i loop are not uniquely dependent on a specific sequence of amino acids. The accelerated turnover of Δ3i α2A-AR cannot be attributed to alteration in agonist-elicited α2A-AR redistribution, because α2A-ARs are not down-regulated in response to agonist. Taken together, the present studies show that stabilization of the α2A-AR on the basolateral surface of MDCKII cells involves multiple mechanisms, with the third intracellular loop playing a central role in regulating these processes. The ␣ 2 -adrenergic receptor (␣ 2 -AR) 1 is a member of a large family of G protein-coupled receptors that are predicted to have seven transmembrane-spanning regions (1,2). Three subtypes of ␣ 2 -ARs exist and couple to members of the G i and G o class of G-proteins to mediate a variety of physiological responses (3,4). Receptor localization and stabilization on the cell surface of target cells are two critical contributors to the sensitivity and extent of signaling by G protein-coupled receptors. There is a growing body of evidence that discrete localization of G proteincoupled receptors may play a role in specificity of signaling by these receptors (5)(6)(7)(8). A precedent already exists for the microcompartmentation of signaling molecules such as protein kinase C (9), cAMP-dependent protein kinase (10), Ca 2ϩ /calmodulin-dependent protein kinase II (11), kinases involved in the yeast mating response (12), and NO synthase (13,14) by interaction of these effector molecules with signaling "scaffold" proteins. In polarized cells, receptor localization is essential for vectorial information transfer, as occurs for ␣ 2 -AR regulation of Na ϩ and H 2 O transport in renal (15) and intestinal (16,17) epithelial cells. Madin-Darby canine kidney (MDCKII) cells cultured in Transwell culture dishes have provided an excellent model system for polarized renal epithelial cells. The localization of the ␣ 2A -AR subtype on the basolateral surface of these cells (18) recapitulates the basolateral localization of this receptor in vivo, based on physiological (19) and pharmacological (20) data. Examination of molecular regions of the three ␣ 2 -AR subtypes in polarized MDCKII cells indicates that basolateral targeting of these receptors involves sequences in or near the membrane bilayer (18,21,22). 
In contrast, the large third intracellular loop of the receptor appears to play a role in stabilizing the ␣ 2A -AR on the plasma membrane, because mutant ␣ 2A -ARs that lack 119 amino acids from the large third intracellular loop (⌬3i loop ␣ 2A -AR) have a cell surface half-life (t1 ⁄2 ) of 4.5 h compared with a t1 ⁄2 of 10 h for the wild-type ␣ 2A -AR (21). The present studies have explored the structural features of the ␣ 2A -AR 3i loop responsible for stabilizing the ␣ 2A -AR on the plasma membrane of MDCKII cells. Cell Culture-Madin-Darby canine kidney cells (MDCKII) were obtained from Enrique Rodriguez-Boulan (Cornell Univ. New York, NY) and maintained in Dulbecco's modified Eagle's medium supplemented with 10% fetal calf serum (Sigma), 100 units/ml penicillin, and 100 mg/ml streptomycin (referred to as complete Dulbecco's modified Eagle's medium) at 37°C, 5% CO 2 . For polarity experiments, MDCK II cells were seeded at a density of 1 ϫ 10 6 cells/24.5-mm polycarbonate membrane filter (Transwell chambers, 0.4-m pore size, Costar, Cambridge, MA) and cultured for 5-8 days with a change of medium every 1-2 days. Before each experiment, the integrity of the monolayer was assessed by adding [ 3 H]methoxyinulin to the apical medium and monitoring its leak after a 1-h incubation at 37°C from the apical to the basolateral compartment by sampling and counting the basolateral medium in a scintillation counter (Packard Tricarb). Chambers with greater than 5% leak/h were not evaluated. been previously described (21,24). Briefly, site-directed mutagenesis was used to create a novel NotI restriction enzyme site in the region of the ␣ 2A -AR cDNA encoding the C-terminal end of the putative third intracellular (3i) loop. Cleavage of this mutant ␣ 2A -AR cDNA with NotI restriction enzymes removes the DNA fragment encoding amino acids 240 -359. This ⌬3i loop mutant receptor has 36 amino acids linking transmembrane domains 5 and 6 of the ␣ 2A -AR as shown in Fig. 1A. Oligo-directed mutagenesis in M13 phage was utilized to create incremental deletions of the predicted third intracellular loop of the ␣ 2A -AR. Single oligos were designed against sequences flanking those encoding the amino acids selected for each deletion. The deletion mutations were confirmed by dideoxynucleotide sequencing and then subcloned into the pCMV4 mammalian expression vector. Deletions corresponding to DNA encoding the following amino acids were made in this manner: ⌬aa252-267, ⌬aa268 -285, ⌬aa286 -303, ⌬aa315-326, and ⌬aa327-340. Fig. 1 provides a schematic diagram of the regions encoded by these deleted amino acids. The nonsense loop was designed by taking advantage of the method used for making the original ⌬aa240 -359 (⌬3i loop ␣ 2A -AR). Because two NotI enzyme sites were used to remove the 3i loop, it was possible to subclone this segment of the gene back into the receptor in two orientations. The correct orientation produced a receptor that corresponded to the wild-type ␣ 2A -AR sequence except for two point mutations at the site of the engineered NotI site (K359A, S360A). The opposite orientation also produces an open reading frame with a 3i loop of the same amino acid length but with very little sequence homology to the wild-type receptor (See Fig. 4). We refer to this mutant ␣ 2A -AR as the nonsense 3i loop ␣ 2A -AR. All mutations were verified using dideoxy-DNA sequencing (Sequenase kit, U. S. Biochemical Corp.) of the single-stranded DNA utilizing T7 DNA polymerase with ␣-35 S-dATP. 
Once verified, the mutant inserts were subcloned from M13 into the pCMV4-TAG-␣ 2A -AR expression vector (18) containing an N-terminal hemagglutinin epitope (YPYDVP-DYA) to which antibodies are available commercially (Berkeley Antibody Co.). These plasmid constructs were verified by double-stranded DNA sequencing through the region of the mutation. COS M6 cells were transiently transfected with the plasmid DNA encoding the wild-type and mutant ␣ 2A -ARs, and membranes from the transient transfectants were assayed for [ 3 H]yohimbine binding before developing permanent MDCK cell lines expressing these mutant ␣ 2A -AR. MDCK cell lines permanently expressing the wild-type or mutant ␣ 2A -AR were created as described previously (18) (Table I). Determination of the Half-life of Wild-type or Mutant ␣ 2A -AR on the Basolateral Membrane-To determine ␣ 2A -AR half-life on the basolateral surface, a metabolic labeling strategy was employed. MDCK cells expressing wild-type or mutant ␣ 2A -AR were incubated with [ 35 S]Cys/ Met ("pulse") and then incubated for various periods of time ("chase") before isolation of basolateral ␣ 2A -AR using sequential biotinylation, extraction, immunoisolation, and streptavidin-agarose chromatogra-phy. The procedures utilized have been previously described (21) except for the modifications outlined as follows. Specifically, MDCK cells grown in Transwell culture were metabolically labeled with 1 Ci/l 35 S-Express protein labeling mix for 60 min in Cys/Met-free Dulbecco's modified Eagle's medium (18). After labeling, the cells were washed once with Dulbecco's phosphate-buffered saline (dPBS) and incubated for various periods of time (generally 0, 3, 6, and 18 h) at 37°C, 5% CO 2 in chase medium (complete Dulbecco's modified Eagle's medium supplemented with 1 mM cysteine and 1 mM methionine). At the conclusion of each chase period, ␣ 2A -ARs residing on the basolateral surface of these cells were isolated by biotinylating the basolateral cell surface with biotin hydrazide or sulfo-NHS biotin and subjecting detergent extracts of membranes prepared from these cells to sequential immunoprecipitation with the 12CA5 anti-hemagglutinin epitope antibody followed by streptavidin-agarose chromatography (21). The streptavidin-agarose eluates were separated by SDS-polyacrylamide gel electrophoresis, and the amount of radiolabeled ␣ 2A -AR remaining on the basolateral surface after various durations of chase was determined. The gels were exposed to film, and the gel area corresponding to the radiolabeled ␣ 2A -AR was removed and counted (in 10 ml of scintillation Fluor) in a Packard beta counter. Similar-sized gel slices that did not correspond to any radiolabeled protein band were excised and counted to quantify the background 35 To assess the effects of agonist on ␣ 2A -AR redistribution, MDCKII cells expressing either wild-type or ⌬3i loop ␣ 2A -AR were treated for 24 h with 100 M epinephrine and 100 M ascorbic acid to prevent oxidation of the epinephrine, as described previously (25). Control cells were treated with 100 M ascorbic acid alone. Receptor density was determined by [ 3 H]yohimbine saturation binding. Determining the Triton X-100 Extractability of Wild-type and ⌬3i Loop ␣ 2A -AR-MDCK cells permanently transfected with either wildtype or ⌬3i loop ␣ 2A -AR were grown on Transwells for 7 days. Transepithelial leak of [ 3 H]methoxyinulin was determined as described earlier for confirmation of functional polarization. 
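Before continuing with the extraction protocol, a minimal sketch of how a basolateral half-life can be estimated from pulse/chase counts of the kind described above; the chase times and counts below are hypothetical placeholders (chosen only to resemble the reported ~10 h and ~4.5 h half-lives), and scipy is assumed.

```python
# Minimal sketch (not the authors' code, hypothetical counts): fit background-corrected
# 35S counts at each chase time to a single-exponential decay and report t1/2 = ln(2)/k.
import numpy as np
from scipy.optimize import curve_fit

chase_h = np.array([0.0, 3.0, 6.0, 18.0])              # chase times (h)
cpm_wt  = np.array([5200.0, 4300.0, 3500.0, 1500.0])   # hypothetical wild-type counts
cpm_d3i = np.array([5100.0, 3200.0, 2050.0, 330.0])    # hypothetical delta-3i counts
background = 150.0                                      # counts from blank gel slices

def decay(t, a0, k):
    """Single-exponential loss of labeled receptor from the basolateral surface."""
    return a0 * np.exp(-k * t)

def half_life(t, cpm):
    corrected = cpm - background
    (a0, k), _ = curve_fit(decay, t, corrected, p0=(corrected[0], 0.1))
    return np.log(2) / k

print(f"wild-type t1/2 ~ {half_life(chase_h, cpm_wt):.1f} h")
print(f"delta-3i  t1/2 ~ {half_life(chase_h, cpm_d3i):.1f} h")
```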
To compare the Triton X-100 extractability of wild-type and ⌬3i loop ␣ 2A -AR, the ␣ 2A -ARs in intact cells were covalently modified via a radioiodinated photoaffinity label and then extracted with increasing concentrations of Triton X-100 (%v/v). Briefly, cells were washed with dPBS supplemented with 0.5 mM CaCl 2 and 1.0 mM MgCl 2 (dPBS-CM), and the Transwells were inverted and incubated 1 h at 22°C in the dark with 150 l of dPBS-CM containing 0.2 Ci/well (0.9 nM) 125 I-Rau-Az-Pec. After 1 h, the wells were washed with dPBS containing 1 mM glutathione, suspended in a Rayonet UV photoilluminator, and photolysed for 3 min with 300-nm light. The ␣ 2A -ARs expressed on the basolateral surface were identified by biotinylation with Sulfo-NHS-biotin in triethanolamine buffer for 2 20-min incubations as described above. To determine the Triton X-100 extractability of ␣ 2A -ARs in polarized cells, Transwells with the photolabeled ␣ 2A -AR were washed with dPBS and extracted with increasing concentrations of Triton X-100 using a modification of a previously described protocol (26). The polycarbonate filters were first removed from the Transwell support and placed into a 12-well dish with each 24-mm well containing 190 l of a Triton X-100 extraction buffer (15 mM Tris, pH 8, 120 mM NaCl, 25 mM KCl, 0.1 mM EGTA, 0.5 mM EDTA) with no Triton X-100. The cells on polycarbonate filters were rocked gently for 5 min at 4°C. The filters were then transferred to a well with extraction buffer containing 0.05% Triton X-100 and rocked 5 min. This procedure was repeated with increasing concentrations of Triton X-100 (0.1, 0.2, 0.5, and 1.0%). After exposure to 1% Triton X-100, the residual cellular material, operationally defined as "Triton shells," was scraped into 200 l of RIPA buffer. A set of control Transwells were subjected to the same procedure, except that each successive buffer contained no Triton X-100. The final RIPA extraction buffer from each well was transferred to a 0.6-ml Eppendorf tube, and the wells were washed with 200 l of RIPA buffer. All extracts were brought up to a final volume of 500 l with RIPA buffer, and biotinylated proteins were isolated using streptavidin-agarose chromatography. After an overnight incubation, the streptavidin beads were eluted with 1ϫ SDS sample buffer at 90°C for 30 min. This elution was repeated, and the combined eluates were loaded onto a 10% SDSpolyacrylamide gel. The dried gels were exposed to preflashed Kodak film for 3-5 days. The receptor was identified as a radioactive band migrating at a position characteristic of the ␣ 2A -AR and whose photoaffinity-labeling was blocked in the presence of 10 M phentolamine in separate control studies. No Small Region in the ␣ 2A -AR Third Intracellular Loop Contains All of the Necessary Information for Stabilization of the Receptor on the Cell Surface- We observed previously that deletion of 119 amino acids from the 3i loop of the ␣ 2A -AR (amino acids 240 -359) generates a structure (⌬3i ␣ 2A -AR) that has a basolateral t1 ⁄2 of ϳ4.5 h compared with 10 -12 h for the wild-type ␣ 2A -AR in polarized MDCKII cells. By analogy with the ability of a 21-amino acid insert into the short (D2S) dopamine receptor to create the long dopamine receptor isoform (D2L) and dramatically slow the rate of sequestration (27), the 3i loop of the ␣ 2A -AR was examined to determine whether a single small amino acid sequence could account for the stabilization of the receptor. 
Five ~20-amino acid deletions were made within the α2A-AR 3i loop, as shown schematically in Fig. 1A. Demarcation of the regions selected for individual deletions was based on secondary structural predictions of Chou and Fasman analysis (52); for example, Δaa286-303 and Δaa315-326 are predicted by this analysis to form amphipathic α helices. In addition, the Δaa286-303 deletion removes the LEESSSS sequence recognized for phosphorylation by G protein-coupled receptor kinases (28,29). This was of interest because G protein-coupled receptor kinase-mediated phosphorylation of these receptors promotes association with arrestins, which have been shown to act as adaptors and recruit some G protein-coupled receptors into clathrin-coated pits (30-32). The surface stability of each of the α2A-AR structures examined (Fig. 1A) was determined by pulse/chase metabolic labeling and isolation of α2A-AR on the basolateral surface by sequential biotinylation and streptavidin-agarose isolation of detergent-solubilized receptor (see "Experimental Procedures"). As shown in Fig. 1B, Δaa286- ... Direct Cytoskeletal Interactions Do Not Appear to Account for Stabilization of the α2A-AR via Its 3i Loop-One mechanism that might account for stabilization of the α2A-AR on the basolateral surface of polarized renal epithelial cells could be direct interaction of the receptor with the underlying cytoskeleton. In this case, accelerated turnover of the Δ3i loop α2A-AR could result from loss of these direct cytoskeletal interactions. We utilized differential sensitivity to extraction by Triton X-100 as an indicator of direct and stable association with the cytoskeleton (33-35). This approach has been informative in revealing the association of the polytopic Na+-K+-ATPase (26,36) and of the single transmembrane-spanning CD44 protein (37) with the cadherin-dependent ankyrin-fodrin matrix underlying the basolateral surface of polarized MDCK cells. As shown in Fig. 2A, both the wild-type α2A-AR and the Δ3i α2A-AR are released from polarized MDCKII cells when exposed to 0.2% Triton X-100 in the presence of 2 mM EDTA and 2 mM EGTA (0 mM [Ca2+]o and 0 mM [Mg2+]o). For comparison, proteins directly associated with the cytoskeleton, such as the Na+-K+-ATPase, are not completely extracted by 0.5% Triton X-100 under similar, but slightly more stringent, divalent cation-free extracellular conditions, whereas the α2A-AR is completely extracted (26,36),2 suggesting that the α2A-AR does not interact directly or stably with the cytoskeleton. When Triton X-100 extractions were performed in the presence of 0.5 mM Ca2+ and 1 mM Mg2+, the amount of Triton X-100 required for extraction of ≥60% of the photoaffinity-labeled surface receptors increased from 0.2% (Fig. 2A) to 0.5% (Fig. 2B), but with no difference in extraction efficiency between wild-type and mutant α2A-AR. These data suggest that α2A-AR stability on the basolateral surface is influenced by protein-protein interactions that involve a Ca2+ (likely cadherin)-organized substratum, but that these interactions cannot explain the difference in cell surface stability of the wild-type and Δ3i loop structures, as both are extracted in a comparable manner even in the presence of Ca2+.
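As a rough illustration of how the sequential Triton X-100 extraction series can be summarized, the sketch below converts hypothetical band intensities (not the published data) into cumulative release profiles for the wild-type and Δ3i receptors; the ≥60% criterion used above then amounts to reading off the first detergent step at which the cumulative fraction crosses 0.6.

```python
# Illustrative sketch with hypothetical band intensities: express the counts recovered
# at each Triton X-100 step as a fraction of the total (all steps plus the residual
# "Triton shell") and compare cumulative release for the two receptor constructs.
import numpy as np

triton_pct = [0.0, 0.05, 0.1, 0.2, 0.5, 1.0]           # sequential extraction steps (% Triton)
# hypothetical band intensities per step; the last entry is the residual Triton shell
wild_type = np.array([40, 120, 900, 2600, 600, 150, 90], float)
delta_3i  = np.array([35, 140, 950, 2500, 550, 140, 85], float)

def cumulative_release(intensities):
    """Cumulative fraction of labeled receptor released after each Triton step."""
    total = intensities.sum()
    return np.cumsum(intensities[:-1]) / total           # exclude the residual shell

for name, data in [("wild-type", wild_type), ("delta-3i", delta_3i)]:
    for pct, frac in zip(triton_pct, cumulative_release(data)):
        print(f"{name}: {pct:>4}% Triton -> {100*frac:5.1f}% released")
```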
Specific Amino Acid Sequences Within the Third Intracellular Loop Do Not Appear to Be Required for Stabilization of the ␣ 2A -AR on the Cell Surface-If bulk size of the 3i loop is sufficient for stabilization of the ␣ 2A -AR, then a loop containing the same number of amino acids as the wild-type ␣ 2A -AR should manifest the same surface t1 ⁄2 regardless of the primary sequence within the 3i loop. To test this hypothesis, we constructed a receptor that contains a nonsense 3i loop corresponding to the exact number of amino acids as in the wild-type ␣ 2A -AR 3i loop but with very little sequence homology. The sequences of the wild-type and nonsense ␣ 2A -AR 3i loops are compared in Fig. 3. In four experiments using two different clonal cell lines expressing the ␣ 2A -AR 3i nonsense loop, the half-life of this structure was indistinguishable from that characteristic of the wild-type receptor as shown in Fig. 3. These findings are consistent with a mechanism where the size of the 3i loop structure plays an important role in stability of the ␣ 2A -AR on the cell surface. There are examples of membrane proteins localized in surface microdomains by virtue of so-called "corrals," often established by the cytoskeletal proteins underlying the cell surface (38 -40). Consequently, ␣ 2A -AR surface stability might arise by steric principles, dictated by the size of the 3i loop (Fig. 3). If corrals partitioned the ␣ 2A -AR, and lack of corralling were responsible for accelerated turnover of the ⌬ 3i loop ␣ 2A -AR, then we should expect a more rapid lateral diffusion coefficient and a significantly greater mobile fraction for the ⌬3i loop ␣ 2A -AR. However, Uhlén et al. (41) have previously reported that the difference in surface half-life between ␣ 2A -AR and ⌬3i loop ␣ 2A -AR is not paralleled by a difference in lateral diffusion coefficients for each receptor structure, estimated at ϳ2.2 ϫ 10 Ϫ10 cm 2 /s using fluorescence recovery after photobleaching. Thus, the mechanistic significance of the retention of the ␣ 2A -AR 3i nonsense loop on the basolateral surface for a duration comparable with the wildtype receptor is unexplained at present. Sustained Agonist Exposure in MDCKII Cells Does Not Decrease Receptor Density for Either Wild-type or ⌬3i loop ␣ 2A -AR-One mechanism that might account for accelerated sur-face turnover of the ⌬3i loop ␣ 2A -AR would be enhanced agonist-elicited redistribution and subsequent down-regulation of this mutant receptor compared with the wild-type receptor. Consequently, we examined the effect of prolonged agonist exposure on steady-state ␣ 2A -AR density in MDCKII cells expressing wild-type or ⌬3i loop ␣ 2A -AR. As shown in Fig. 4, treatment of MDCKII cells with 100 M epinephrine for 24 h results in no detectable down-regulation of either the wild-type or the ⌬3i loop ␣ 2A -AR. In fact, there is even a slight increase in receptor density following agonist incubation, perhaps because of ligand-dependent receptor stabilization (42). 3 These findings are consistent with previous reports that the ␣ 2A -AR subtype does not undergo agonist-induced down-regulation in MDCKII cells (43) and Chinese hamster fibroblast cells (28), although this subtype has been reported to down-regulate in Chinese hamster ovary cells (25,44,45). 
In addition, the α2A-AR subtype, in contrast to the α2B-AR subtype, does not manifest agonist-elicited redistribution (46),4 nor is there any evidence for intracellular localization of the Δ3i loop α2A-AR in MDCKII cells either by immunocytochemistry (48) or by cell surface biotinylation.5 In contrast to our findings and the lack of effect of removal of the 3i loop on agonist-elicited α2A-AR redistribution, several muscarinic receptor subtypes have been shown to undergo agonist-elicited sequestration and down-regulation in a manner influenced by the 3i loop; mutations within the 3i loop reduce sequestration (47, 49-51) and deletion of the 3i loop slows the rate of down-regulation for the m2 receptor (47).
(Fig. 2 legend: After exposure to 1% Triton X-100, the residual cellular material, defined as Triton shells, was solubilized with RIPA buffer (Non Ext.). Biotinylated proteins were isolated using streptavidin-agarose chromatography, and the α2A-AR in the eluates was resolved on a 10% SDS-polyacrylamide gel. Control experiments indicated that the radioactive band shown corresponds to 125I-Rau-AzPec photoaffinity-labeled α2A-AR, based on its relative migration on 10% gels and the blockade of its labeling by α2A-AR antagonists. The results compare wild-type α2A-AR (Tag3 clone at 25 pmol/mg of protein) and Δ3i loop α2A-AR (T3 at 3.4 pmol/mg of protein (A) or T66B at 2.8 pmol/mg of protein (B)) and are representative of at least three separate experiments. The extraction profile is not dependent on receptor density, because two cell lines with nearly 10-fold different levels of wild-type α2A-AR expression (Tag3 clone at 25 pmol/mg of protein versus T24 clone at 3.4 pmol/mg of protein) gave the same results.)
(Fig. 4 legend: MDCKII cells expressing either wild-type or Δ3i loop α2A-AR were treated for 24 h with 100 μM epinephrine (plus 100 μM ascorbic acid to prevent epinephrine oxidation); control cells were treated with 100 μM ascorbic acid alone. Receptor density was determined by [3H]yohimbine binding as described under "Experimental Procedures." Results are the mean ± S.E. from six separate experiments with MDCKII cells grown to confluence on 60-mm culture dishes; analysis of MDCKII cells polarized in Transwells gave the same results. No decrease in receptor density was observed; in fact, an increase was noted, by analogy with findings in other cultured cell systems,3 suggesting that ligand occupancy stabilizes receptor density and affirming that epinephrine remains viable during the course of the incubation.)
These findings indicate that specific amino acid sequences are not necessarily required for basolateral retention, suggesting that the bulk of the 3i loop may be sufficient to stabilize the α2A-AR on the basolateral surface.
5,180
1999-06-04T00:00:00.000
[ "Biology" ]
Ferroptosis, a new target for treatment of renal injury and fibrosis in a 5/6 nephrectomy-induced CKD rat model Ferroptosis is a non-traditional form of regulated cell death, characterized by iron overload and lipid peroxidation. Exploration of ferroptosis in chronic kidney disease (CKD) has been extremely limited to date. In this study, we established a rat model of CKD by 5/6 nephrectomy, treated CKD rats with the ferroptosis inducer, cisplatin (CDDP), and the ferroptosis inhibitor, deferoxamine mesylate (DFO), and observed the resulting pathologic changes (injury markers and fibrosis) and ferroptotic biochemical indices. Kidney iron deposition, lipid peroxidation, mitochondrial defects, ferroptosis marker induction, and TUNEL staining positivity were detected in CKD group rats. Further, treatment with CDDP or DFO influenced renal injury and fibrosis by affecting ferroptosis, rather than apoptosis, and ferroptosis occurs in the remnant kidney due to disordered iron metabolism. In conclusion, our study shows for the first time that 5/6 nephrectomy induces ferroptosis in the remnant kidney and clarifies the underlying pathogenesis. Moreover, we demonstrate that ferroptosis is involved in CKD progression and represents a therapeutic target in chronic kidney injury and renal fibrosis. INTRODUCTION The physiological functions of iron include participation in the mitochondrial respiratory chain and hemoglobin synthesis [1], and overload and deficiency of iron are detrimental to metabolic homeostasis. Excess iron triggers free radical production, which damages macromolecules [2], and can also lead to organelle stress [3,4], disrupting cellular structural integrity and tissue homeostasis. Patients with chronic kidney disease (CKD) develop renal anemia and reduced erythropoietin secretion. Decreased appetite, and frequent dialysis and punctures render correction of iron deficiency challenging. Combating renal anemia positively can improve the life-quality and survival of CKD patients; hence, alleviation of circulating iron deficiency has long been a hot research topic. Although renal dysfunction contributes to decreased iron bioavailability, kidney cells are susceptible to iron overload [5][6][7], and tissue iron deposition causes oxidative damage and pathological responses, including fibrosis and inflammation [8][9][10]. Therefore, mitigating tissue iron deposition appears more effective in alleviating oxidative insults than ameliorating circulating iron deficiency. Ferroptosis participates in the folic acid-induced acute kidney injury (AKI) mouse model [19], and is implicated in AKI mediated by pathological factors, including rhabdomyolysis and ischemiareperfusion injury [20]. AKI and CKD are interrelated; AKI promotes CKD development and CKD increases susceptibility to AKI [21]. Therefore, we hypothesize that ferroptosis is also present in CKD. The relationship between CKD and ferroptosis was discussed in our previous study [22]; however, exploration of ferroptosis in CKD remains limited. The 5/6 nephrectomy operation induces CKD, closely mimicking the hyperperfusion, hyperfiltration, and hypertensive status of the residual nephron, which develops chronic interstitial fibrotic lesions. Therefore, we used a 5/6 nephrectomy rat model to explore the role of ferroptosis in CKD and the underlying pathological mechanisms. 
RESULTS CKD rats exhibit iron accumulation, oxidative stress, and lipid peroxidation, which are altered by CDDP and DFO treatment Spontaneous iron deposition was observed in CKD rats and was enhanced or attenuated by CDDP or DFO treatment, respectively (Fig. 1a, c). Treatment with CDDP and DFO also resulted in corresponding changes in T-ROS, L-ROS, 4-HNE, MDA, and LPO (Fig. 1b, d-f, i, and j). GSH and NADPH exhibit anti-oxidative stress and anti-ferroptotic effects due to their involvement in GPX4 and mevalonate biosynthesis, respectively. GSH and NADPH levels were lower in CKD than Sham control group rats, and the reductions were exacerbated by CDDP and reversed by DFO (Fig. 1g, h). These data indicate that CKD rats develop renal iron accumulation, oxidative stress, and lipid peroxidation, which are characteristic of ferroptosis. Typical features of ferroptosis present in CKD rats Given the close association between ferroptosis and oxidative stress, we sought further evidence of ferroptosis in CKD rats. IF analysis demonstrated that GPX4 and SLC7A11 expression was concentrated in tubular cells and decreased sequentially from the Sham, to the CKD + DFO, CKD, and CKD + CDDP groups (Fig. 2a, b). The cystine/ glutamate antiporter (system Xc-), comprising SLC7A11 and Fig. 1 CKD rats exhibit iron accumulation, oxidative stress, and lipid peroxidation, which are altered by CDDP and DFO treatment. CKD rats developed iron accumulation (a, c), oxidative stress (e, g, and h), and lipid peroxidation (b, d, f, i, and j), which were aggravated and alleviated by CDDP and DFO, respectively. **P < 0.01, ***P < 0.001 vs. Sham group; # P < 0.05, ## P < 0.01 vs. CKD group. Scale bar in (a) = 100 μm. Scale bar in (b) = 50 μm. SLC3A2 subunits, requires NADPH to capture extracellular cystine (Cys) for conversion to cytoplasmic cysteine for GSH synthesis; therefore, the reduced GSH synthesis and diminished GPX4 activity in the CKD group may be attributable to NADPH and SLC7A11 deficiency. WB and RT-qPCR assays of GPX4 generated results consistent with those of IF ( Fig. 2c-e), whereas the expression trend of ACSL4 was opposite to that of GPX4, with lowest expression in the Sham group and highest in the CKD + CDDP group (Fig. 2d, f). Evaluation of mitochondrial morphology indicated significant differences among groups ( Fig. 2g-j). MOM were intact and MC were clear in the Sham group, while the mitochondria became smaller, MOM ruptured, and the cristae were reduced or absent in the CKD group. In the CKD + CDDP group, mitochondria were more crumpled and the MOM rupture and MC disappearance more pronounced, while mitochondrial defects were reversed after DFO treatment. These data suggest that specific pathological manifestations of ferroptosis occur in CKD rats. CDDP and DFO affect 5/6 nephrectomy-induced ferroptosis, independent of apoptosis Ferroptosis can cause DNA double-strand breaks, which appear positive on TUNEL staining [23,24]. There were significantly more TUNEL-positive cells in the CKD group than the Sham group, while CDDP and DFO increased and decreased the number of TUNELpositive cells, respectively (Fig. 3a, b). Pro-apoptotic proteins including cleaved-caspase3 and Bax were significantly up-regulated in the CKD group compared to the Sham group, as opposed to the anti-apoptotic protein Bcl-2; however, CDDP and DFO intervention failed to change the expression levels of apoptotic proteins ( Fig. 3c-f). 
These data suggest that ferroptosis and apoptosis both occurred in the remnant kidney, and that CDDP and DFO intervention had no effect on apoptosis. Ferroptosis occurs in the remnant kidney due to disordered iron metabolism, which are altered by CDDP and DFO treatment We performed a relatively comprehensive assay of iron metabolism proteins in the kidneys to elucidate the mechanisms of ferroptosis pathogenesis. Up-regulation of HO-1 likely mediated increased release of divalent iron from heme in the CKD group, whereas up-regulation of FtH and FtL would facilitate nucleation and mineralization of excess divalent iron, which would be sequestrated by FtH (Fig. 4a, b, e, and f). We also measured expression levels of DMT1 and TfR. Surprisingly, levels of DMT1 and TfR decreased following CKD development, were reduced by CDDP intervention, and were significantly restored after treatment with DFO ( Fig. 4a, c, and d), suggesting that, DFO, unlike CDDP, can facilitate restoration of iron homeostasis. Levels of FPN, the only iron export protein in mammals, were down-regulated in the CKD group and the down-regulation was remarkably aggravated and reversed by CDDP and DFO treatment, respectively (Fig. 4a, h), indicating that iron deposition in the remnant kidney of CKD rats may be attributable to FPN downregulation, and that DFO can ease tissue iron overload by upregulating FPN. NCOA4 is an important mediator of ferritinophagy, and WB analysis of NCOA4 indicated that ferritinophagy occurs in CKD and can be enhanced or down-regulated by CDDP or DFO treatment, respectively ( Fig. 4a, g). Hence, ferritinophagy may contribute to ferroptosis in CKD. These data suggest that disordered iron metabolism occurred in the remnant kidney, and that NCOA4 up-regulation and FPN down-regulation may be important contributors to tissue iron accumulation and ferroptosis. Ferroptosis is a potential therapeutic target in chronic kidney injury Levels of SCr, BUN, and 24-UP were significantly higher in the CKD group than the Sham group, and treatment with CDDP exacerbated the decline in renal function, while DFO treatment ameliorated it (Fig. 5a-c). Histopathological analysis supported these effects on renal function (Fig. 5d, f); few pathological injuries were detected in the Sham group, while glomerular injury and tubular epithelial cell expansion or atrophy were observed in CKD rats, and these effects were aggravated by CDDP and partially mitigated by DFO. Expression levels of the marker of kidney injury, NGAL, were low in renal tubular epithelial cells under physiological conditions and significantly elevated after development of renal injury (Fig. 5e, g, and h). NGAL mediates intracellular iron ion transport [25], consistent with the observed iron accumulation in CKD rat residual kidney. Since intervention with CDDP or DFO did not influence apoptosis, together with the data presented in Figs. 1 and 2, our findings suggest that CDDP and DFO influence kidney injury progression by targeting ferroptosis, highlighting ferroptosis as a potential target in treatment of chronic kidney injury. Ferroptosis is a potential therapeutic target in renal fibrosis Masson staining revealed only trace amounts of blue-stained collagen fibers in the Sham group, while in the CKD group, renal fibrosis was manifested as a mass of blue-stained collagen fibers in the tubular interstitium (Fig. 6a, b). Renal interstitial fibrosis was aggravated by CDDP treatment, while DFO had anti-fibrotic effects. 
WB demonstrated that CDDP exacerbated and DFO inhibited deposition of the ECM proteins α-SMA and COL I (Fig. 6c-e). Since CDDP and DFO interventions did not affect apoptosis, these findings suggest that ferroptosis is a potential target for treatments aimed at delaying renal fibrosis progression, and that the TGF-β1/Smad3 signaling pathway is important in this context. DISCUSSION With the substantial improvements in living standards brought about by rapid global economic development, the prevalence rates of numerous metabolic diseases, such as obesity, diabetes, hypertension, hyperlipidemia, and hyperuricemia, have increased, placing a burden on the kidneys and driving growing CKD incidence and prevalence [26,27]; CKD is expected to rise to be among the top five causes of mortality by 2040 [28]. The contrast between the growing number of patients with CKD and the limited treatment options is becoming increasingly stark, and research into the causes of kidney cell death is imperative. Members of the Bcl-2 family are major regulators of the mitochondrial apoptotic pathway [29]. Bcl-2 can influence MOM permeability and cytochrome c release, protecting cells from apoptosis by inhibiting caspase-3 activation and the corresponding downstream cascade [30]. In our study, cleaved caspase-3 and Bax up-regulation, and Bcl-2 down-regulation, were observed in the CKD group, suggesting that both apoptosis and ferroptosis contribute to chronic kidney injury progression and are responsible for pathological injury and interstitial fibrosis; however, neither CDDP nor DFO treatment altered apoptotic protein expression, although these interventions have previously shown pro- and anti-apoptotic effects, respectively [31,32]; differences in dosing frequency, dosage, and/or experimental subjects most likely account for this discrepancy. Iron metabolism in CKD is extremely complex. In our study, expression of DMT1 and TfR decreased with CKD progression, while FtH and FtL increased. Theoretically, these changes are not conducive to ferroptosis. DMT1 has been reported to be up-regulated in the kidneys of unilateral ureteral obstruction-induced CKD mice [33,34], whereas in human kidney biopsy samples DMT1 expression was not consistent across different types of CKD [35], suggesting that different pathological types and modeling approaches may be important reasons for the differences in DMT1 expression. Expression of TfR was low in the CKD group, indicating that the kidney may be under an elevated level of negative feedback control during local iron overload. Increased ferritin expression in CKD may be related to activation of the NF-κB pathway and degradation of iron-responsive proteins by inflammatory cytokines [36-39]. Our study demonstrates that ferroptosis occurs, despite high expression of some negative regulators of this process, because of disruption of iron uptake, storage, and excretion homeostasis (Fig. 7). (Fig. 4 legend: Ferroptosis occurs in the remnant kidney due to disordered iron metabolism, which is altered by CDDP and DFO treatment. Disordered iron metabolism (a-h) in the remnant kidney is responsible for ferroptosis and is aggravated and alleviated by CDDP and DFO, respectively. **P < 0.01, ***P < 0.001 vs. Sham group; #P < 0.05, ##P < 0.01, ###P < 0.001 vs. CKD group.) Hence, more comprehensive analysis of iron metabolism-related proteins in CKD pathology is required.
In patients with CKD, renal iron accumulation and circulating iron deficiency contradict one another, and these opposing iron saturation characteristics have different consequences (Fig. 8). Given the uniqueness of FPN at iron export ports, this molecule is crucial to the exchange of iron between tissue cells and plasma, and improving low FPN induction in CKD is beneficial [40]. There are well-established detection methods and indices for iron deficiency; however, there remains a lack of validated assessment tools for tissue iron accumulation. Hence, development of specific markers for organ iron accumulation disorders would be of considerable significance. Interfering with ferroptosis is an emerging therapeutic approach for multiple pathological disorders. Ferroptosis is intertwined with other forms of cell death; therefore, comprehensive evaluation of iron accumulation, oxidative stress, lipid peroxidation, mitochondria-specific defects, cell death/cell viability, and ferroptosis-specific markers could better characterize ferroptosis development, which is of great importance in related diseases. (Fig. 5 legend: Ferroptosis is a potential therapeutic target in chronic kidney injury. CDDP and DFO aggravate and reverse 5/6 nephrectomy-induced chronic kidney injury, respectively (a-h). **P < 0.01, ***P < 0.001 vs. Sham group; #P < 0.05, ##P < 0.01 vs. CKD group. Scale bar = 50 μm.) Here, CDDP and DFO affected tissue iron deposition, providing fundamental evidence supporting their effects on ferroptotic cell death. To our knowledge, our data are the first pathological evidence that CDDP exacerbates tissue iron deposition. Our research has some limitations. First, we did not assess changes in the LIP. Second, the crosstalk among fibrotic signals during renal ferroptosis remains unclear. These limitations remain to be addressed in future work. In conclusion, we report the first observation of ferroptosis in the 5/6 nephrectomy-induced CKD model and demonstrate that disordered iron metabolism is an important contributor to this process. We identified the induction of NCOA4 in this context for the first time, suggesting the presence of ferritinophagy in CKD rats. We also show that CDDP and DFO can affect ferroptosis in vivo and thereby interfere with renal injury and fibrosis progression via regulation of iron metabolism and the TGF-β1/Smad3 pathway. These findings suggest that ferroptosis is a promising target for the treatment of CKD. MATERIALS AND METHODS Animals and protocols Male Sprague-Dawley rats (7 weeks old; weight ~220 g) were housed in a climate-controlled room (22 °C ± 1 °C, humidity 45-55%, 12-h day/night cycle) and given free access to water and food. After one week of acclimatization, rats were randomly divided into Sham, CKD, CKD + CDDP, and CKD + DFO groups (n = 6 per group). Rats in the CKD + CDDP and CKD + DFO groups were injected intraperitoneally with CDDP (2 mg/kg, batch number 0J0216B02, QILU Pharmaceutical Co., Ltd, China) and DFO (300 mg/kg, HY-B0988, MCE, USA), respectively, once per week for 8 weeks; rats in the Sham and CKD groups were injected with an equivalent amount of saline. At the end of the experiments, 24-h urine, kidney tissue, and serum were collected for analyses. CKD model construction For 5/6 nephrectomy, rats were anesthetized by intraperitoneal injection of 2% sodium pentobarbital (40-50 mg/kg) and immobilized prone on pre-sterilized workbenches after satisfactory anesthesia.
The surgical area was shaved, disinfected with iodophor, and sterile cavity wipes were laid. A longitudinal 1.5-cm surgical incision was made next to the spine at the lower left costal margin, and the subcutaneous fascia and muscles were sequentially incised to expose the left kidney, which was gently lifted out of the abdominal cavity by holding the perirenal fat with forceps; the adrenal gland and perirenal membrane were carefully peeled off. Approximately two-thirds of the parenchyma at the upper and lower poles of the kidney was removed, and the remnant kidney was immediately compressed with a gelatin sponge until hemostasis was achieved, then returned into the abdominal cavity; only Gerota's fascia was stripped in the Sham group. (Fig. 6 legend: Ferroptosis is a potential therapeutic target in renal fibrosis. CDDP and DFO affect fibrosis progression (a, b) and ECM deposition (c-e) in CKD rats by regulating the TGF-β1/Smad3 signaling pathway (f-h). **P < 0.01, ***P < 0.001 vs. Sham group; #P < 0.05, ##P < 0.01, ###P < 0.001 vs. CKD group. Scale bar = 50 μm.) Muscle and skin were sutured layer-by-layer, the surgical incision was disinfected, and rats were returned to their cages when they awoke. After 1 week, a longitudinal 1.5-cm surgical incision was made next to the spine at the right inferior rib cage, and the right kidney was exposed by isolating the subcutaneous tissue layer-by-layer, as in the previous operation. Following ligation of the renal pedicle with 4-0 silk, the renal vessels were cut and the right kidney removed. The abdominal cavity was then closed layer-by-layer once there was no bleeding, and rats were returned to their cages after awakening. In the Sham group, surgical incisions were made without kidney removal. Subcutaneous injection of 0.2 mg/kg buprenorphine was administered every 12 h postoperatively for two days for analgesia. All efforts were made to minimize the suffering of the animals. T-ROS and L-ROS assays For T-ROS measurement, single-cell suspensions were generated from renal tissue samples and the cells were loaded with the fluorescent ROS probe DCFH-DA (E004, Nanjing Jiancheng Bioengineering Institute, China) after quantification of protein levels. Cells were incubated (30 min, 37 °C, in darkness), centrifuged (1000 × g), and the supernatants removed. Cell pellets were re-suspended in PBS and T-ROS was detected using a multifunctional microplate reader (excitation, 490 nm; emission, 530 nm; M200PRO, TECAN, Switzerland). L-ROS levels were evaluated using an Image-iT™ Lipid Peroxidation Kit (C10445, Invitrogen, USA) according to the manufacturer's instructions. Results are expressed as the ratio of fluorescence intensity at 590 and 510 nm, following detection using a multifunctional microplate reader (Synergy H1, BioTek, USA); the 590/510 nm ratio is negatively correlated with lipid peroxidation. Transmission electron microscopy (TEM) Renal tissues were incubated with 2.5% glutaraldehyde (2 h), post-fixed in 1% osmium tetroxide (1.5 h), dehydrated in ethanol and acetone, and subsequently sectioned and stained with uranyl acetate and lead citrate. Representative images showing mitochondrial alterations were captured by TEM (magnification, 40,000×). Fifty mitochondria in renal tubular epithelial cells were randomly selected from ten non-overlapping views for morphological evaluation. MV, MOM, and MC were scored from 1 to 3 as follows: 1, normal MV, intact MOM, regular MC; 2, smaller MV, ruptured MOM, decreased MC; 3, significantly smaller MV, pronounced MOM rupture, absent MC.
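A minimal sketch of how the ultrastructure scoring just described can be summarized per group; the scores below are synthetic placeholders, not measured values, and the 1-3 scale simply follows the criteria listed above.

```python
# Sketch with synthetic scores: 50 mitochondria per group, each scored 1-3 for the
# three criteria (MV, MOM, MC); report the group mean +/- SD for each criterion.
import numpy as np

rng = np.random.default_rng(0)

def mock_scores(center):
    """Hypothetical 50-mitochondria score sets drawn around a group-typical value."""
    return np.clip(np.round(rng.normal(center, 0.5, size=(50, 3))), 1, 3)

groups = {"Sham": 1.1, "CKD": 2.0, "CKD+CDDP": 2.6, "CKD+DFO": 1.5}   # hypothetical centers
for name, center in groups.items():
    scores = mock_scores(center)
    mean, sd = scores.mean(axis=0), scores.std(axis=0, ddof=1)
    print(f"{name:9s} MV {mean[0]:.2f}±{sd[0]:.2f}  "
          f"MOM {mean[1]:.2f}±{sd[1]:.2f}  MC {mean[2]:.2f}±{sd[2]:.2f}")
```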
Kidney histopathology analysis Renal tissue was fixed in 4% paraformaldehyde (24 h), embedded in paraffin, and cut into 4-μm-thick sections, which were deparaffinized by xylol and hydrated in gradient ethyl alcohol. Sections were stained using Hematoxylin-Eosin (HE) and Masson's Trichrome (Masson) kits (G1120, G1340, Solarbio, China). Following dehydration, clearing, and sealing, representative images were captured by microscopy (DP73, OLYMPUS, Japan; magnification, 200×). Renal injury scores were determined based on previously described indices [41], including inflammatory cell filtration and interstitial fibrosis. Each parameter was scored from 0 to 4 in 10 different randomly selected views (0, impairment < 5%; 1, impairment 5-25%; 2, impairment 26-50%; 3, impairment 51-75%; 4, impairment >75%). Masson staining was analyzed semi-quantitatively by calculating the blue-stained areas in 10 random different views. Fig. 7 Disordered iron metabolism and ferroptosis occurred in 5/6 nephrectomy-induced CKD rats. DMT1 and TfR expression were down-regulated, while HO-1 was up-regulated; the latter catabolizes heme iron to ferrous iron, CO, and biliverdin, with the assistance of cytochrome P450, biliverdin reductase (BVR), and NADPH. TfR mediates the import of transferrin-bound iron (TBI) into the cell and reduces ferric iron to ferrous iron via six-transmembrane epithelial antigen of prostate 3 (STEAP3). Regarding iron export, FPN down-regulation blocks iron efflux, leading to elevated intracellular iron concentration. A small amount of intracellular ferrous iron is stored in the labile iron pool (LIP) to maintain physiological metabolism, while the remainder is endogenously chelated by ferritin. Excess intracellular catalytic iron reacts with hydrogen peroxide to initiate the Fenton and Haber-Weiss reactions, leading to a further increase in ROS and attacks on polyunsaturated fatty acids (PUFAs) in the cytomembrane, initiating lipid peroxidation and ferroptosis. In addition, down-regulation of SLC7A11 results in reduced synthesis of GSH from Cys, decreased GPX4 activity, and weakened anti-ferroptotic responses. CO carbon monoxide, Glu glutamate. Prussian blue staining Histological sections were stained with Prussian blue (G1422, Solarbio, China) to detect iron deposition, according to the manufacturer's instructions. Representative images were captured under a microscope (DP73, OLYMPUS, Japan; magnification, 200×). Integrated optical density (IOD) of ferrous iron deposit areas was measured using Image-Pro Plus 6.0 and average densities calculated as IOD/area. Immunofluorescence (IF) Paraffin-embedded renal tissue samples were sliced into 5-μm-thick sections, which were deparaffinized and rehydrated in a graded ethanol series. Antigen retrieval was achieved by boiling sections in antigen recovery buffer (10 min), followed by incubation in goat serum (15 min TUNEL assay Dewaxed and dehydrated paraffin sections were washed with distilled water and processed for TUNEL staining according to the manufacturer's instructions (C1088, Beyotime, China). After sealing, sections were observed under a fluorescent microscope (magnification, 400×) and the numbers of green cells in eight random views counted. Real-time PCR (RT-qPCR) Total RNA was extracted from frozen renal tissue using TRIPure (RP1001, BioTeke Corporation, China), according to the manufacturer's protocol. 
Isolated RNA samples were reverse transcribed into cDNA using super M-MLV reverse transcriptase (PR6502, BioTeke Corporation, China) and then subjected to RT-qPCR using SYBR Mix and the corresponding primers on a PCR cycler (Exicycler 96, BIONEER, Korea). β-actin served as the internal reference and was used for data normalization in the quantitative analysis. Primers (Table 1) were synthesized by GenScript Biotechnology (Nanjing, China). Statistical analysis Data are presented as mean ± SD. Normality and homogeneity of variance were verified, and one-way ANOVA was performed using GraphPad Prism 8.0 (GraphPad Software, San Diego, CA). Tukey's test was used to determine the significance of differences among groups. P < 0.05 was considered significant. DATA AVAILABILITY The data used to support the findings of this study are available from the corresponding author upon request.
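For completeness, the statistical workflow stated under "Statistical analysis" above (one-way ANOVA followed by Tukey's test across the four groups, n = 6 per group) can be reproduced outside GraphPad with scipy and statsmodels; the sketch below uses synthetic data and hypothetical group means.

```python
# Sketch with synthetic data: one-way ANOVA across four groups (n = 6 each),
# followed by Tukey's HSD for all pairwise comparisons.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
groups = {"Sham": 0.9, "CKD": 2.4, "CKD+CDDP": 3.1, "CKD+DFO": 1.7}   # hypothetical means
values, labels = [], []
for name, mu in groups.items():
    values.extend(rng.normal(mu, 0.3, size=6))      # n = 6 rats per group
    labels.extend([name] * 6)

values, labels = np.array(values), np.array(labels)
f_stat, p_value = stats.f_oneway(*[values[labels == g] for g in groups])
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3g}")

# Tukey's test for pairwise group differences (alpha = 0.05)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```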
5,154.8
2022-03-22T00:00:00.000
[ "Biology", "Medicine" ]
Probing momentum-indirect excitons by near-resonance photoluminescence excitation spectroscopy in WS2 monolayer Coulomb-bound electron-hole pairs (excitons) dominate the optical response of atomically-thin transition metal dichalcogenides (TMDs) semiconductors. The photoluminescence spectrum in W-based TMDs monolayers (i.e. WS2 and WSe2) at low temperature exhibits much richer features than Mo-based TMDs monolayers, whose origin is currently not well understood. Herein, by using near-resonant photoluminescence excitation spectroscopy, we probe the scattering events between excitons and phonons with large kˆ-momentum, which provides strong evidence for the momentum-indirect nature of the optical bandgap in monolayer WS2. The scattering between carriers and zone-edge phonons creates excitons at different valleys, among which, the lowest-energy is momentum-indirect. Our findings highlight that more efforts are required to solve the current debate on the inherent bandgap nature of TMD monolayers and the complex photoluminescence spectrum reported on W-based compounds. Introduction Atomically thin layers of group-VI transition metal dichalgcogenides (TMDs) such as MX 2 (M = Mo, W; X = S, Se) and hexagonal lattice feature prominent exciton properties and spin-valley physics. As a result of strong quantum-and dielectric confinement in single atomic layer of TMD materials, the photoexcited electrons and holes are tightly bound via Coulomb interaction to form excitons with the binding energy of hundreds meV [1][2][3][4][5][6]. The large exciton binding energies reinforce the stability of exciton complexes such as charged excitons (trions) [7][8][9][10] and biexcitons [10][11][12] that offers great opportunities to study many-body physics. In addition, the inversion symmetry breaking in TMDs monolayers gives rise to the energy-degenerate but non-equivalent K/K ′ valleys, which are coupled with electron spins [13]. This unique spin-valley coupling enables the use of light helicity to selectively excite valley excitons at K or K ′ valley [13], making TMD monolayers ideal candidates for opto-valleytronic applications [13][14][15]. The presence of neutral excitons and trions has been widely observed in previous optical studies on TMD monolayers [7,8,16]. The substitution of the metal element M, e.g. W into Mo, reverses the energetic order of the optically allowed (bright) and optically forbidden (dark) states at the K/K ′ valley [17]. In particular, W-based monolayers harness the dark exciton band lying at lower energy than the bright band, leading to the poor emission efficiency at low temperature [11]. Instead, the photoluminescence spectrum of WX 2 monolayer is dominated by several features arising at the low-energy side of the exciton energies, which is currently under intense debate among defects, bound excitons and biexcitons [10,11]. For example, the strongest peak that lies below the charged exciton (trion) was previously assigned to biexciton emission [10,11]. However, the discrepancy lies in that the PL intensity of that peak shows a sub-quadratic increase with excitation power and the binding energy (∼52 meV) [11] is much larger than the theoretically predicted value ∼20 meV [18][19][20][21][22]. 
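A standard check of the biexciton assignment questioned above is the power dependence of the peak: fitting the PL intensity versus excitation power to I ∝ P^α on a log-log scale, where α ≈ 2 is expected for a true biexciton. The sketch below uses invented numbers purely to illustrate the fit; it is not the published data.

```python
# Sketch (synthetic numbers): extract the power-law exponent alpha from a log-log fit
# of PL intensity vs excitation power; alpha ~ 2 would support a biexciton assignment.
import numpy as np

power_uW  = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
intensity = np.array([3.0, 8.1, 33.0, 88.0, 240.0, 900.0])   # hypothetical counts

# Linear fit in log-log space: log I = alpha * log P + const
alpha, const = np.polyfit(np.log(power_uW), np.log(intensity), 1)
print(f"power-law exponent alpha ~ {alpha:.2f}")
print("consistent with biexciton scaling" if alpha > 1.9 else
      "sub-quadratic: biexciton assignment questionable")
```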
In addition, the energies of lower energy features in WX 2 monolayers coincidently match the calculated momentum-indirect exciton transitions, which arises from the recombination of electrons and holes in different valleys [23][24][25][26][27][28][29], suggesting the more complicated nature of these low-energy peaks. Therefore, a detailed investigation on the exciton dynamics of WX 2 material is needed. In this work, by using photoluminescence excitation spectroscopy, we probe the momentum-indirect transition in WS 2 monolayers that could explain for the low-energy features in the photoluminescence spectrum of W-based compounds. Under nearresonant excitation condition, multiple scattering processes between excitons and phonons carrying non-zero wavevector are revealed, indicating the presence of indirect excitons whose constituent electrons and holes locate at different valleys, beside the wellunderstood direct excitons with their electrons and holes located at the same valley. Furthermore, we find that phonon scattering contributes to valley depolarization during hot exciton relaxation. Our results advance non-trivially the fundamental understanding of the photoluminescence spectra of W-based monolayers that can shed light to intrinsic exciton properties in TMD monolayer semiconductors. Results and discussion Sample and Optical Characterization of WS 2 Monolayer. The high-quality monolayers of WS 2 were prepared by mechanical exfoliation onto a Si/SiO 2 substrate and extensively characterized by optical spectroscopy at cryogenic temperatures (T = 20 K). Figure 1(a) shows the reflectance-contrast spectrum, in which, three sharp resonances are clearly resolved. Specifically, the features arising at ∼2.09 eV and ∼2.50 eV are attributed to A-exciton (X A ) and Bexciton (X B ) whose constituent electrons and holes are excited at K or K ′ valleys in the Brillouin zone [1,14,16,30]. The energy splitting of ∼400 meV between A and B originates from the spin-splitting of the valence band at K and K ′ points [1,14,30]. In addition, the energy structure observed at ∼2.06 eV is ascribed to a charged exciton state (trion), which is formed by the neutral A-exciton and a free charge (electron or hole) [7][8][9][10]. A schematic of the energy structure of WS 2 monolayer is illustrated in figure 1(b). The energy positions of A-exciton and B-exciton are taken from the reflectance measurement. The continuum band of A-exciton is located at ∼2.44 eV [3], which is close to the energy of B-exciton. The existence of different electronic states around the energy of Aexciton and B-exciton suggests nontrivial exciton behaviors, especially under near-resonant excitation conditions. For example, with excitation energies of ∼2.5 eV (see the black arrows in figure 1(b)), excitons can be directly generated at B resonance or freecarriers are created at the continuum of A. In the first case, B-excitons can either radiatively recombine in a so-called hot luminescence process [16,[31][32][33], or relax non-radiatively to X Γ A (A-exciton with zero center-of-mass momentum). On the other hand, when free carriers are generated at the continuum of A, an electron and a hole can relax and later bind together to form exciton at A to result in photoluminescence. Alternatively, when the excitation energy is well below 2.45 eV (see the red, green, orange arrows in figure 1(b)), X Γ A excitons are directly generated and later radiatively recombine. 
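For orientation, the binding energies implied by the resonance energies quoted above follow from simple differences (approximate, since the quoted values are themselves rounded):

```latex
\begin{align*}
E_b(X_A) &\approx E_{\text{continuum}} - E_{X_A} \approx 2.44\,\text{eV} - 2.09\,\text{eV} \approx 0.35\,\text{eV},\\
E_b(T_A) &\approx E_{X_A} - E_{T_A} \approx 2.09\,\text{eV} - 2.06\,\text{eV} \approx 30\,\text{meV},\\
\Delta_{AB} &\approx E_{X_B} - E_{X_A} \approx 2.50\,\text{eV} - 2.09\,\text{eV} \approx 0.4\,\text{eV}.
\end{align*}
```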
Here the exciton formation, relaxation and recombination processes in WS 2 are intensively investigated by varying the excitation energy across the resonances. Figure 1(c) presents the low-temperature photoluminescence spectrum under ℏω exc = 2.707 eV excitation. Several optical features are resolved, including the radiative recombination of the neutral A-exciton (∼2.09 eV) and the charged state trion (∼2.06 eV). The energy positions are in good agreement with the reflectance contrast spectrum shown in figure 1(a). Interestingly, the PL spectrum is dominated by very intense features (P i ) at the low-energy side of the trion, where the light-absorption is negligible. The origin of the peaks P i (especially P 1 and P 2 ) is currently under debate, which will be discussed later in this work. On the other hand, when the excitation energy is tuned close to the resonance A (ℏω exc = 2.103 eV), additional sharp features are resolved on top of A-exciton with the linewidth of ∼0.4 meV. The behaviors of these emission features are carefully monitored while varying the excitation energy from 2.103 to 2.707 eV across the A and B resonances. In the case of the A resonance, up to eighteen excitation energies from a continuous tunable laser ranging from 2.103 to 2.173 eV (see Supporting Information figure S1 (stacks.iop.org/TDM/7/031002/mmedia)) are used to excite the WS 2 monolayer. Figure 2(a) shows PL spectra acquired with selected excitation energies of 2.103 eV, 2.111 eV, 2.125 eV and 2.136 eV. All the spectra are plotted relative to the energy position of the zero phonon line of X Γ A (ZPL X Γ A ), which corresponds to the direct transition of excitons with zero center-ofmass momentum and is independent of excitation energy. On the other hand, the narrow features clearly shift while changing the excitation conditions. For instance, while changing ℏω exc closer to the A resonance, from 2.136 eV (top panel) to 2.111 eV (middle panel), the peak marked by blue arrow (later identified as 2LA(M)) shifts in accordance with the excitation. The rigid shift of the peak position with excitation energy together with the narrow linewidth approaching the resolution limit, are typically observed in scattering processes between carriers/excitons and phonons that is usually referred to as resonant Raman scattering [34]. We, therefore, attribute these features to near-resonant scattering events between exciton and different phonon modes of monolayer WS 2 , which exhibit much richer information than previous nonresonant Raman study [35][36][37]. In total, we identify up to eight modes shown in figure 2(b), whose energies match well with recent resonant Raman studies [38][39][40] and theoretical calculations [41]. The first-order optical modes A 1 g (Γ) and Dependence on Excitation Energy of Exciton-Phonon Scattering Intensity. In addition to the energy shift, the scattering intensity of the all phonon modes also varies drastically with the excitation energy. The lineshape of the PL spectrum has been fitted as shown in the bottom panel of figure 2(a) (2.103 eV excitation). For the fittings, we used Voigt functions to model each of the different peaks, where blue solid-lines refer to excitonic resonances. Specifically, the blue solid-curve centred at zero represents the ZPL X Γ A . Green solid-lines indicate scattering events between excitons and phonons, while red solid-curve is the cumulative fitting of all curves. 
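Before turning to the fitted intensities, it may help to spell out the criterion used above to separate Raman-like features from luminescence: a phonon-scattering peak keeps a constant shift from the laser (its emission energy moves rigidly with the excitation), whereas an excitonic line such as the ZPL stays at a fixed absolute energy. The sketch below applies that test to two invented peak traces; the ~44 meV shift is only meant to be representative of a 2LA(M)-like mode, not a measured value.

```python
# Sketch (synthetic peak lists): track each narrow peak across several excitation
# energies and classify it by whether its Raman shift or its absolute energy is constant.
import numpy as np

excitation_eV = np.array([2.103, 2.111, 2.125, 2.136])
peak_a = np.array([2.059, 2.067, 2.081, 2.092])   # hypothetical: follows the laser
peak_b = np.array([2.090, 2.090, 2.090, 2.090])   # hypothetical: pinned at the exciton energy

def classify(detected, excitation, tol=1e-3):
    shift = excitation - detected                  # Raman shift in eV
    if np.ptp(shift) < tol:                        # constant shift -> phonon-scattering feature
        return f"Raman-like, shift ~ {1e3*shift.mean():.0f} meV"
    if np.ptp(detected) < tol:                     # constant energy -> luminescence
        return f"luminescence at ~ {detected.mean():.3f} eV"
    return "mixed / resonance-enhanced behaviour"

print("peak A:", classify(peak_a, excitation_eV))
print("peak B:", classify(peak_b, excitation_eV))
```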
From the fitting, the intensities of individual features are extracted (all the spectra are normalized by excitation power). The fitting procedure is repeated for all the spectra, from which the maximum of scattering intensity for each phonon mode among all excitation energies is given in figure 2 (1). The data points are plotted with excitation energies ranging from 2.114 eV to 2.173 eV, as in this energy range the resonance condition is achieved between 2LA(M) phonons and neutral X Γ A excitons. Identification of Exciton-Phonon Scattering Processes Under Near-Resonant Excitation of A- where ℏ is the reduced Planck constant, C is the amplitude of the resonant-scattering event, ℏω exc is the excitation photon energy, ℏω det the detected/scattered photon energy, ℏω ZPL the exciton resonance energy, Γ ZPL is a damping constant that is related to the radiative lifetime of the exciton transition. The intensity of the scattering event is maximal at the resonant condition when the difference between the excitation laser and detection photon energy matches the phonon energy (ℏω ph ), i.e. ℏω exc − ℏω ZPL X Γ . The fitting parameters are C and Γ ZPL , while the resonance energies of excitons and phonons obtained from the modelling in figure 2(a) are fixed. The fitting results in Γ ZPL = (4±1) meV corresponding to an exciton radiative lifetime of ∼1 ps, which is in excellent agreement with experiments [42] and theoretical calculations [43,44]. Moreover, the strongest scattering intensity among different phonon modes comes from the scattering with zoneedge phonons (M-point) (see figure 2(b)). Therefore, we highlight that most of the scattering features resolved in our optical spectra involve zone-edge phonons, suggesting that they originate from the collisions between real excitonic states and phonons. Evidences of Indirect Excitons in WS 2 Monolayer. Figure 3(a) demonstrates the exciton-phonon scattering processes in single-particle (left panel) and exciton (right panel) representations. Excitons are composed of electrons and holes, both of which can be scattered by phonons. The exciton momentum is defined by center-of-mass momentumQ i X given by the momentum difference between the electrons and holesQ i X =k i e −k i h , while the phonon momentum isq i ph (i corresponds to the position in the Brillouin zone). During the events of exciton-photon (light absorption) and exciton-phonon collisions, momentum is conserved and obeys the relationship k photon =Q i X +q i ph . In monolayers TMDs, direct photoexcitation of electron and holes takes place at K and K ′ valleys, generatingQ Γ X excitons (see figure 3). In the following, the generated carriers are scattered by phonons, involving the zone-center (the Γ-point,q Γ ph ∼0) and the zone-edge (the M-point,q M ph ̸ =0) of the Brillouin phonon zone (see blue hexagon at figure 3(a). Phonons at Γ-point (q Γ ph ∼0) scatter the electron-hole pairs generated at the light cone into the zone-centre (Γpoint) of exciton dispersion (see orange symbol and parabola at figure 3(b) and (c)). When only a single phonon mode is involved in the collision, either an electron or a hole is scattered after the photoexcitation. As both electrons and holes have similar effective masses (m ∼ 0.42 m 0 ) [45,46], their first-order phonon-scattering probability is likely comparable, in clear contrast to conventional semiconductors such as II-VI and III-V, where exciton scattering is normally dominated by lighter electrons [34]. 
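The displayed resonance formula referred to above does not appear to have survived the text extraction. A standard outgoing-resonance Lorentzian consistent with the symbols that are defined (C, Γ_ZPL, ℏω_exc, ℏω_ph, ℏω_ZPL) would read I(ℏω_exc) = C/[(ℏω_exc − ℏω_ph − ℏω_ZPL)² + Γ_ZPL²]; this exact form is an assumption, not a quotation from the paper. The sketch below fits such a profile to a synthetic 2LA(M) resonance over the 2.114-2.173 eV window mentioned above.

```python
# Sketch (assumed Lorentzian form, synthetic data): fit the amplitude C and damping
# constant Gamma_ZPL of an outgoing-resonance profile for the 2LA(M) scattering feature.
import numpy as np
from scipy.optimize import curve_fit

hw_zpl = 2.090      # exciton zero-phonon-line energy (eV), from the spectra above
hw_ph  = 0.044      # approximate 2LA(M) phonon energy (eV); representative value

def lorentzian_profile(hw_exc, amplitude, gamma):
    return amplitude / ((hw_exc - hw_ph - hw_zpl) ** 2 + gamma ** 2)

hw_exc = np.linspace(2.114, 2.173, 12)                       # excitation energies (eV)
true = lorentzian_profile(hw_exc, 1e-4, 0.004)
rng = np.random.default_rng(2)
measured = true * (1 + 0.05 * rng.standard_normal(hw_exc.size))   # synthetic intensities

(amp, gamma), _ = curve_fit(lorentzian_profile, hw_exc, measured, p0=(1e-4, 0.01))
print(f"fitted damping constant Gamma_ZPL ~ {1e3*abs(gamma):.1f} meV")
```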
More interestingly, phonons at the M-point have non-zero momentum (q M ph ̸ =0), which implies that the virtually generated excitons are scattered out of the light cone to their dispersion at other high-symmetry points in their Brilluoin zone to satisfy the momentum conservationq M ph ∼ −Q X (see bluish parabolas at figure 3(c)). We find that the first-order scattering of electronholes pairs photoexcited at K or K ′ valleys takes place via two M-point phonons (see figure 2(b)), suggesting that both electrons and holes are scattered to their dispersions. There are several pathways to form an exciton from these carriers in monolayer WS 2 , depending on the exact valley where they reside after being scattered. Photoexcited electrons at K ′ (K)-valleys can be scattered into Λ (Λ ′ )-valleys by one M-phonon because the momentum conservation . Furthermore, since Λ-valley is an energy minimal [47], the electrons preferentially reside on this band. In other way, the M-phonon could scatter the electron from K (K ′ ) to the K ′ (K) valley. In this case, k e -momentum is not strictly satisfied, lowering the probabilities for this scattering channel. For photoexcited holes at K ′ (K)-valleys, the situation is slightly different. By considering just momentum, they could be scattered to the Λ (Λ ′ )-valley. However, this valleyextrema is hundreds of meV lower in energy that the Figure 3(c) shows a schematic structure of the lowest-energy exciton bands as a function of the center-of-mass momentum Q X . The location of the excitonic valleys, the dispersion of the parabolas and relative energies are reproduced from calculations for excitons in TMD monolayers [27,28] and from our scattering features with phonons of non-zero wavevector (see figure 2). In W-based monolayers, forQ X = Γ, two kinds of exciton branches exist due to different spin configuration of electron and holes. When the electron and hole own the same spin orientation, the resultant exciton transition is spin-allowed (orange solid parabola). In contrast, the opposite spin orientation of the constituent carriers gives rise to the spinforbidden exciton band (grey dash parabola). For Q X = K (K-valley), the spin configuration is opposite to the exciton at the Γ-valley. Specifically, the spinforbidden band of X K energetically lies above the spin-allowed branch. The energy splitting between allowed and forbidden spin-states (∆E Γ a−f , so-called (spin) bright-dark splitting) varies for different compounds, being ∼50 meV as recently reported for W-based monolayers. [27,[49][50][51][52] The spin order of each band is reversed in Mo-based monolayers. [17,[49][50][51][52][53] ForQ Λ X (Λ-valley), the calculated lowestenergy exciton branch is spin-allowed. Interestingly, the momentum-indirect exciton X Λ lies energetically below the momentum-direct X Γ , which is due to the larger exciton effective mass at Λ-valley [27]. The energy separation between spin-allowed excitons at Γ-and Λ valleys (∆E Γ−Λ a−a ) in WS 2 monolayer has been calculated in the range of 70 to 100 meV [23,27]. The formation of the excitons at Λ-valley is elucidated by the observation of first-order intervalley scattering events assisted by M-point phonons in our near-resonance experiments. The spin-allowed X Γ A branch possesses a giant oscillator-strength that allows for a transient decay (∼1 ps) of excitons. 
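A quick order-of-magnitude estimate, using the splittings quoted above, of why thermal population of the bright Γ-valley exciton is negligible at the 20 K lattice temperature of the experiments (equilibrium Boltzmann factors only; real populations also depend on relaxation pathways and lifetimes):

```python
# Relative Boltzmann occupation of the higher-lying bright Gamma exciton with respect
# to a lower-lying branch separated by dE, at T = 20 K.
import numpy as np

k_B = 8.617e-5          # Boltzmann constant (eV/K)
T = 20.0                # lattice temperature in the experiments (K)

for dE_meV in (50, 70, 100):   # bright-dark and Gamma-Lambda splittings quoted in the text
    ratio = np.exp(-dE_meV * 1e-3 / (k_B * T))
    print(f"dE = {dE_meV:3d} meV -> N(upper)/N(lower) ~ {ratio:.1e}")
```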
At the same time, the excitons decay into spin- or momentum-forbidden branches at lower energies of ∼50 meV [27,49-52] and ∼70 meV [27], respectively, from which the luminescence is nominally forbidden. The complexity of the excitonic landscape determines the light-emission properties of two-dimensional W-based semiconductors. The lineshape of the PL spectrum in figure 4 has been fitted using Voigt functions to model each of the different peaks, where blue solid lines refer to excitonic resonances, green solid lines indicate scattering events between excitons and phonons, while the red solid curve is the cumulative fit of all curves. The parameters obtained from the fit to the PL spectrum are listed in table S2 in the Supporting Information. The PL spectrum exhibits several features whose origin is currently under intense debate. An overall consensus exists on the high-energy structures, attributed to the radiative recombination of the neutral X^Γ_A (∼2.09 eV) and its charged state, the trion T^Γ_A (∼2.06 eV), whose energy separation is ΔE(X^Γ_A − T^Γ_A) ∼ 30 meV. However, the largest contribution to the PL spectrum comes from the energy structures lying 50–100 meV below the A-exciton (P_1–P_4), at which the light absorption is negligible. They have been attributed to biexciton luminescence [10,11,54] and to localized exciton states [55-57]. Similar PL structures have been resolved in boron-nitride-encapsulated WS2 (and WSe2) monolayers, while only excitons and trions are present in MoS2 (and MoSe2) monolayers [54,58], which strongly suggests the contribution of intrinsic exciton-state emission to the energy features below the A-exciton.

[Figure 4 caption] The PL energy is plotted relative to the ZPL of X^Γ_A. The raw PL spectrum is shown by the black solid line, while the red solid line is the global fit using multiple peaks with Voigt lineshapes, indicating exciton-like (blue solid lines) and exciton-phonon (green solid lines) scattering processes. The inset shows the relative energy of the exciton-like peaks with respect to the ZPL of X^Γ_A. As obtained from the fit, the height of the colour bars indicates the relative contribution of each line to the total PL spectrum.

Our PLE experimental results, otherwise, provide evidence for the indirect nature of the optical transitions from P_1 to P_4. At low temperature, most of the exciton population resides in the lowest-energy and momentum-indirect exciton at the Λ-point. This is confirmed by the observation of (i) first-order scattering events with non-zero-wavevector phonons and of (ii) weak light emission from the zero-phonon line of X^Γ_A (ZPL X^Γ_A) in our near-resonant excitation experiments. For radiative recombination to take place from the indirect X^Λ, the assistance of phonons, carrier doping or disorder scattering might provide additional momentum to the optical transition. For phonon-assisted recombination, different types and combinations of lattice vibrations might contribute [59], leading to various phonon-assisted features. As a result, several lines are expected at energies ΔE^Γ−Λ_a−a ∼ 70 meV below the ZPL of X^Γ_A. A peak then appears at an energy of ∼ΔE^Γ−Λ_a−a − nΔ_ph below the ZPL of X^Γ_A, where nΔ_ph is the energy of the n-phonon mode (n = 1, 2, ...) assisting the optical transition. For light emission arising from the indirect X^Λ, the constituent electrons of these indirect excitons located at the Λ-points (see figure 3(a)) need to be scattered into the light cone at the K- or K′-valleys. There could be two possible scenarios.
Firstly, with the assistance of zone-edge phonons, electrons at the Λ-valleys can be scattered to the K′-valleys and then recombine with K′-valley holes. Due to the strong exciton-(zone-edge)-phonon coupling, this might give rise to several phonon replicas. Interestingly, the energies ΔE_1 = E_P1 − E_P3 and ΔE_2 = E_P3 − E_P4 are ∼26 meV (1LA(M) ∼ 26 meV), as can be observed in figure 4, suggesting evidence for phonon-assisted processes. Secondly, it is also possible that electrons at the Λ-valleys are scattered to the nearest K-valley to recombine with resident K-valley holes via another type of phonon with non-zero momentum, which could lead to a different emission energy. A way to distinguish between the two recombination pathways would be to investigate the polarization response of the PL spectrum under near-resonant excitation and/or under external magnetic fields. Interestingly, as reported in previous studies, different substrates can strongly alter the PL spectrum of TMD monolayers [60-64] and, especially, are important for the phonon behaviour of two-dimensional materials [61,63-68]. Depending on the symmetry of the phonon mode, the substrate could modify both the vibrational energy and/or the electron-phonon coupling strength. It has been reported that the doping level of TMD monolayers can vary for different substrates, and such a change in the carrier concentration can influence the atomic motion, leading to slight variations in the phonon energy [61-65]. As a result, different carrier dopings and/or substrates might lead to energy shifts of the PL resonances related to the phonon-assisted indirect exciton X^Λ and to a change of the energy separation among their phonon replicas. Another scenario is the modification of the phonon amplitudes by the damping induced by the mismatch in acoustic impedance between the monolayer and the substrate. This will be particularly relevant for out-of-plane modes. Theoretical and experimental studies have reported an increase in the acoustic-phonon relaxation times and in the charge mobility by ten times for suspended graphene, as compared to graphene on top of silicon dioxide [66-68]. Therefore, we expect different substrates to have dissimilar interfacial interactions, leading to distinct phonon-phonon and electron-phonon coupling strengths. Consequently, the change in the phonon and carrier dynamics might alter the spectral weights of the phonon-assisted resonances. On the other hand, electrostatic doping might lead to the formation of a negatively charged state of the indirect Λ-exciton, or intervalley Λ-trion (T^Λ). In a simple scenario, the additional charge would be located at either K or K′, depending on the spin-valley configuration (K or K′) of the hole to which the Λ electron is bound. The additional electron will lead to intervalley Coulomb scattering between carriers, providing additional momentum and making the optical transition brighter. The associated peak will appear at an energy of ∼ΔE^Γ−Λ_a−a − Δ(T^Λ) below the ZPL of X^Γ_A. The binding energy Δ(T^Λ) of the intervalley Λ-trion will be smaller than that of T^Γ_A, as the k-space overlap between the T^Λ carriers might be smaller.
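The phonon-replica picture sketched above can be summarized with a short back-of-the-envelope calculation. Only the ∼2.09 eV ZPL, the ∼70 meV Γ–Λ separation and the ∼26 meV 1LA(M) energy are taken from the text; the placement of the replica ladder below X^Λ is an assumption made here for illustration (the text writes the offset as ΔE − nΔ_ph).

```python
# Hedged sketch: checks where phonon replicas of an indirect Lambda exciton
# would land, using the energy scales quoted in the text. The placement of
# the ladder below X_Lambda is one plausible convention, not the authors'.
E_ZPL  = 2.090   # A-exciton ZPL (eV)
dE_G_L = 0.070   # Gamma-Lambda splitting (eV), calculated range 0.070-0.100
E_LA_M = 0.026   # 1LA(M) phonon energy (eV)

for n in range(1, 4):
    E_peak = E_ZPL - (dE_G_L + n * E_LA_M)   # assumed: replicas below X_Lambda
    print(f"n = {n}: expected peak near {E_peak:.3f} eV "
          f"({1e3 * (E_ZPL - E_peak):.0f} meV below ZPL)")
```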
Therefore, besides the features below the ZPL of X^Γ_A that have been attributed to luminescence from biexcitons and localized states, we argue that a multitude of available radiative recombination pathways of the lowest-energy, indirect exciton X^Λ might contribute to the low-energy peaks (P_1 to P_4) in the photoluminescence spectrum of WS2 monolayer. We highlight that multiple phonon-scattering processes could contribute to lines with similar energies, leading to the super-linear power dependencies that are often observed in the literature [11,54]. Biexciton emission with a non-quadratic power dependence (I_PL ∝ P^α, α ∼ 1.4) in WS2 monolayers at energies of ∼50 meV below the ZPL of X^Γ_A, which largely deviates from theoretical predictions [69], might therefore be reconsidered. A super-linear emission with excitation power could also be attributed to an increase of the PL quantum efficiency with increasing excitation power. It is due to the coexistence of non-radiative decay pathways and a non-linear increase of the radiative recombination efficiency. The non-linearity could result from an increase of the relaxation rates or from the saturation of the non-radiative channels [70]. Within the picture drawn here, it is very likely that the non-linearity appears due to the saturation of the non-radiative recombination channel of the indirect exciton X^Λ. We believe that future power-dependent, time-resolved PL experiments could distinguish the origin of the non-linearity in the power law of the low-energy structures (P_1–P_4).

Exciton Relaxation and Valley Depolarization Under Near-Resonant Excitation of B-Excitons. Besides the excitation near the A-exciton, we also investigated the relaxation processes of excitons after resonantly creating B-excitons (see details in the Supporting Information). We found that the relaxation from B to A is quite efficient, and the lack of valley polarization (see figure S6) during the fast cooling of B- to A-excitons suggests that intra-valley and inter-valley relaxation are equally probable, of which the latter causes the valley-pseudospin depolarization of the PL emission. Therefore, the scattering of hot excitons with zone-edge phonons should be considered as a valley depolarization channel in addition to the electron-hole exchange interaction [59].

Conclusion. In summary, by using near-resonant excitation experiments, we prove exciton-phonon scattering events with non-zero wavevector that provide strong evidence for the momentum-indirect nature of the optical bandgap in WS2 monolayer. The scattering between carriers and zone-edge phonons creates excitons at different valleys, among which the lowest-energy band is momentum-indirect. Our findings advance the understanding of the inherent bandgap nature of TMD monolayers and highlight that more efforts are required for a complete understanding of the complex photoluminescence spectrum reported for W-based compounds. In addition to biexcitons and localized states, the low-energy emission features observed at low temperature could arise from momentum-indirect transitions. Moreover, future experiments might be considered on high-quality hBN-encapsulated TMD monolayers. We believe such samples will benefit the clear identification of the different PL resonances. Furthermore, by using a backgate, the charge density could be tuned to clearly distinguish between charge- and phonon-assisted recombination pathways for momentum-indirect excitons.
Furthermore, photoluminescence excitation across the whole spectral range, in combination with polarization-dependent experiments, could provide further evidence on the formation and polarization properties of momentum-indirect excitons. Finally, one could use magneto-optical spectroscopy to obtain g-factors that might vary across the different valley excitons. These further experiments will shed light on the origin of the low-energy features observed in the low-temperature PL spectrum of W-based monolayer semiconductors.

Methods. Monolayer WS2 was obtained by mechanical exfoliation of a high-quality bulk crystal. The thin WS2 flakes were first exfoliated onto a polydimethylsiloxane (PDMS) stamp attached to a glass slide. The monolayers were identified under an optical microscope by their optical contrast with the substrate and by fluorescence measurements at room temperature. Afterwards, the samples were transferred onto a Si/(300 nm)SiO2 substrate. For low-temperature (T = 20 K) photoluminescence excitation spectroscopy (PLE) studies, the monolayer was loaded and cooled in a liquid-He continuous-flow optical cryostat. For PLE measurements on A-excitons, a very narrow excitation source (≤0.08 nm FWHM) was achieved using a tunable jet-stream dye (Rhodamine 6G) laser with lines ranging from 2.103 eV to 2.173 eV that was filtered by a 600 gr/mm single-grating spectrometer with 300 mm focal length. The output monochromatic beam was directed to the cryostat and focused on the sample using a microscope objective (NA 0.45). The photoluminescence, collected by the same objective (so-called back-scattering configuration), was dispersed by a triple-grating spectrometer operating in subtractive mode. The near-resonant PL emission was dispersed by the final stage, a 640 mm focal length spectrometer with a 1800 gr/mm grating, and finally detected by a liquid-nitrogen-cooled CCD camera. Additionally, for polarization-resolved PLE measurements on B-excitons, all the lines (2.707 eV, 2.627 eV, 2.605 eV, 2.541 eV, 2.499 eV, 2.470 eV and 2.412 eV) of an Argon-ion laser were used. The PL signal was dispersed using a 600 gr/mm single-grating spectrometer with 800 mm focal length and finally detected by a liquid-nitrogen-cooled CCD camera.

Author contribution. QX and AGDA started the project, conceived and designed the experiments. SL prepared the TMD micro-sized semiconductor layers. AGDA and DB performed the optical measurements. TTHD and JP intensively discussed the data and the manuscript. DB and AGDA analyzed and interpreted the data and wrote the manuscript with input from all co-authors. DB and AGDA contributed equally to this work.
Processing Parameter DOE for 316 L Using Directed Energy Deposition

The ability to produce consistent material properties across a single or series of platforms, particularly over time, is the major objective in metal additive manufacturing (MAM) research. If this can be achieved, it will result in widespread adoption of the technology for industry and place it into mainstream manufacturing. However, before this can happen, it is critical to develop an understanding of how processing parameters influence the thermal conditions which dictate the mechanical properties of MAM builds. Research work reported in the literature of MAM is generally based on a set of parameters and/or the review of a few parameter changes, and observing the effects that these changes have (i.e., on microstructure and mechanical properties). While these articles provide results with some insight, there lacks a standard approach that can be used to allow meaningful comparisons and conclusions to be made concerning the optimization of the processing variables. This study provides a template which can be used for making comparisons across DED platforms. The tests are performed with a design of experiments (DOE) philosophy directed to evaluate the effect of selected parameters on the measured properties of the DED builds. Specifically, a laser engineered net shaping (LENS) system is used to build multilayered 316L coupons and analyze how build parameters such as laser power, travel speed, and powder feed rate influence the thermal conditions that will define both microstructure and microhardness. A fundamental conclusion of this research is that it is possible to repeatedly obtain a consistent microstructure that contains a fine cellular substructure with a low level of porosity (less than 1.1%) and with microhardness that is equal to or better than wrought 316L. This is mainly achieved by maintaining an associated powder flow to travel speed ratio at the power level, ensuring an appropriate net heat input for the build process.

Introduction

The past few decades have provided great improvement in metal additive manufacturing (MAM) [1-5], but the consistency of the fabricated components' properties continues to be a matter of concern. MAM components undergo thermal cycles that are unique from other fabrication options. Large thermal gradients can occur, where the temperature can vary between 40 °C and 1000 °C within a short distance (mm's) in the build, and it can also experience rapid solidification (~10^3–10^8 K/s) [6,7]. To evaluate the effect of these unique thermal cycles seen in MAM, empirical methods are often used to correlate thermal cycles to a set of processing parameters, without grasping the complexity of the essential foundations of these effects (i.e., what role the heat input has on solidification). This kind of approach in the evaluation of observed results is further compounded by the variety of MAM systems available, with each system having particular nomenclatures and frameworks for operational use. In addition to this, there is the issue of selecting the appropriate material and knowing what influences the composition and/or the physical characteristics of the powders (powder shape, powder distribution, average powder size, oxide content, etc.)
can have on processing parameters. Recently, several articles [4,5] have captured the latest information on the challenges facing directed energy deposition (DED). The main challenge remains having appropriate measurement tools and calibrated processes to better define a common nomenclature from which all interested parties can work. Again, efforts have been made and are assisting in the advancement, but without a comprehensive look at the fundamental aspects, it is difficult to make significant improvements that can lead to widespread adoption.

This study provides a better understanding of the processing parameters as they relate to their influence on the thermal conditions that correlate to microstructure and microhardness. What is not really known is the minimum energy required to produce an "acceptable" build. Here, we define "acceptable" as a sample that has mechanical properties as good as or better than wrought material, with minimal porosity. Approaching it from an energy standpoint rather than from processing parameters enables a standardization process that moves beyond individual systems and looks at the total outcome. This paper is a first step in understanding how the processing parameters and the associated energy are interrelated.

Recently, the authors in [8] developed an approach to create normalized energy processing diagrams for MAM that can provide a priori knowledge of the microstructure. The process maps contain isopleths of normalized equivalent energy density (E*_o) that show the ranges of acceptable microstructures. The concept of normalized equivalent energy comes from the ratio of the dimensionless volumetric heat input per scan line to the dimensionless hatch spacing [8]. This tool is a great starting point as it enables comparisons across system platforms. As such, this concept is used in this study to develop a design of experiments that is traceable to an energy metric.
With this energy metric in mind, it is important to define what the critical processing parameters are and see what process control mechanisms can help yield the necessary property outcomes. To avoid creating a new nomenclature and causing further confusion, it is important to look at previous work. In 2006, the authors of [9] demonstrated that process control could be achieved for DED by monitoring the IR-temperature signal during the process. In their study, three groups of thin walls of 316L were deposited under three different processing conditions: (1) different laser powers, (2) constant set-values, and (3) path-dependent set-values. The process control with a path-dependent set-value ensures a nearly constant melt pool size and solidification conditions, resulting in a homogeneous microstructure, hardness, and a high dimensional accuracy of the deposited sample. While this study was limited to thin walls, it demonstrated that with the appropriate cooling conditions, it is possible to obtain consistent microstructures so long as it is possible to control the heat input. In 2007, the authors of [10] studied the ratio of optimal parameters of layer thickness and power input (P_in) per unit travel speed (T_s) for selective laser melting (SLM). It was shown that the greater the P_in:T_s ratio is, the larger the re-melted line (called "vector") is. This work experimentally showed that there is a limit to the amount of energy one would like to deposit per unit layer so as to minimize the re-melting during subsequent layers. This is the powder bed fusion (PBF) approach used to understand the importance of energy balance when creating layers. They studied the P_in:T_s ratio as a fundamental metric and it appears to be an effective linear strategy to optimize thin wall features. The work carried out in this present study provides the same insight, but with respect to DED and large multilayer deposited samples.

In 2010, the authors of [11] showed that the optimal scanning parameters (laser scan speed, laser power, and capillary instability of segmental cylinders) in the PBF process are a function of the thickness of the deposited powder. They also demonstrated that at higher laser powers, the range of optimal scanning speeds that can be utilized is larger, but that the range narrows as the thermal conductivity of the metal increases. They later looked at the P_in:T_s ratio as a function of the layer thickness and determined an experimental limit for this relationship. They found a strong negative correlation between the thermal conductivity of the bulk material and the range of optimal scanning speed (the speed at which a continuous track with penetration to the substrate can be achieved). This gives a guideline for understanding the impact of the powder flow rate, since in LENS, this parameter plays an important role in determining the optimal scanning speed for the DED process.
In [12], the authors claimed that laser output power and laser travel speed played significant roles in determining the final dimensions of the deposited components. In [13], the authors, through a semi-empirical model, demonstrated that in order to eliminate porosity in LENS builds, the linear mass density (i.e., the optimal mass deposition per unit time) should not exceed a maximum level. The maximum level of linear mass density that should not be exceeded is the ratio of the powder flow rate (dm/dt) to laser velocity (dl/dt), which is defined as the optimal linear mass density. In this paper, there is a limit not to be exceeded, which is the ratio of the powder feed rate to travel speed [P_f:T_s], and this ratio represents the maximum amount of powder which can be deposited per unit length of laser travel at a given laser power setting.

In 2015, the authors of [14] studied the microstructure and mechanical properties of DED 316L stainless steel (SS) and their dependence on thermal history. They observed that finer microstructures are achieved when there are longer local time intervals, enabling higher cooling rates. This paper shows that for the geometries built, a consistent fine cellular substructure was achieved in part by having cooling at the base of the plate in addition to the controlled powder feed to travel speed ratios, limiting the heat input. The results obtained in this paper agree with what was found in [7,15] and confirm that the mechanical properties would be similar to those reported. Now, the main focus is to highlight the design of experiments and the necessary steps that are required to achieve these repeatable results for the large samples built over a period of time (in this study, over two years). This way, others can review this material, repeat the process, share their results and provide feedback on the process established in this article.

Design of Experiment for Repeatable Results

As described in [16], one of the first issues that must be dealt with prior to making any builds is the quality of the powder. They concluded that if the initial powder contains porosity or large void/particle size ratios, then the final builds will also have similar porosity issues. As such, to perform this work over a long period of time (~two years), a large quantity of gas-atomized stainless steel micro-melt 316L (17.9% Chromium, 70.5% Iron, 9.3% Nickel, 0.3% Silicon, 0.3% Sulfur, 1.8% Manganese) was purchased. This powder (−140 + 325 mesh; log-normal size distribution) is used in all the reported experimental data unless noted otherwise. Figure 1 shows a sample image taken with our HITACHI TM-1000 scanning electron microscope (SEM), illustrating that the particles are generally spherical with smaller satellite particles attached (i.e., cohesion of particles during solidification in the atomization process).
All the builds were performed on a LENS 850 M system in which the deposition head moves in the x and y directions and the build plate moves in the z direction (see Figure 2). Two custom-built, in-situ process monitoring devices were utilized for this work: one has the capability to measure powder flow in real time [17], and the other can monitor the energy density during builds [18]. These devices were part of a larger study taking place at NIU's Advanced Research of Materials and Manufacturing (ARMM) Lab on process monitoring and property control for MAM [19], which was sponsored by NIST MSAM. An in-situ acoustic emission sensor in the LENS monitored the powder flow as a function of time (Figure 2), and in this way, the mass flow rate could be independently tracked during the build [17]. The presence of the sensor is critical to determining the appropriate powder flow since the powder flow has a strong influence on the balance of the available energy.
As described in [18], the power measurement calorimetric system (PMCS) has the ability to measure quantities such as the net heat input and the energy transferred during MAM processing, which allows for a very accurate calculation of the energy of the build process (see top right corner of Figure 2). Furthermore, the efficiencies of the MAM processes can be calculated as in [19], which can contribute to a greater understanding of the resultant metallurgical properties of various build parameters. Previous studies [9,20,21] show the importance of controlling and utilizing the net heat input for additive manufacturing, particularly for modeling and simulation. Heat input is known as the amount of energy that is transferred to the base metal by a source of energy per unit of weld length. It can be expressed as HI = P_net/T_s (1), and so the net power P_net is defined as P_net = k·P_in (2), where P_in is the input power generated by the energy source in Watts (i.e., the laser) and P_net is the net power transferred to the substrate in Watts. Another important parameter that is often estimated is the thermal efficiency 'k', which defines the percentage of P_in transferred to the substrate, and finally T_s is the travel speed of the energy source.

With this definition, we can now see the importance of measuring P_net. In addition to its measurement, it also shows how important the thermal efficiency is in determining the net power. Therefore, the influence of P_net is difficult to overlook: at a constant travel speed, P_net (rather than P_in) is directly responsible for all physical and metallurgical changes in the metal being deposited, including bead shape, bead integrity (i.e., porosity), metallurgical characteristics (i.e., microstructure), and mechanical properties (i.e., hardness). With respect to modeling and simulation, P_net is a critical variable that can help in the prediction of thermal conditions; weld pool characteristics; weld geometry, integrity, and properties; and distortion.

The ability to directly measure and know P_net is therefore necessary to improve the overall capabilities of metal additive manufacturing. Finally, although not the main focus of this paper, this creates a direct connection to the concept introduced in [8], which is to use something like the normalized equivalent energy density (E*_o) to help define a priori the outcomes of the build. In this work, the measured energy density and net heat input from the calorimeter are used to qualify the DOE results. As such, it is important to note that all the tests carried out in this study had an energy measurement associated with them, providing another metric that helps define the repeatability of our DOE.

The aim was to develop a universal procedure that any powder-delivered DED user could follow, and the concept was to create a matrix of powder feed to travel speed ratios. This matrix was modified as needed; however, the literature has shown that studies have been performed between 300 W–1000 W, so the idea was to start at a power level in the median range of approximately 645 W.
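As a concrete illustration of the heat-input bookkeeping in Equations (1) and (2), the following minimal sketch uses the ~470 W measured input power quoted later for the 645 W LENS setting; the transfer-efficiency value k = 0.22 is purely an illustrative assumption, since the actual efficiencies are only reported via the calorimeter results.

```python
# Hedged sketch of the heat-input bookkeeping described above.
# k is an illustrative assumption; the text only defines k as the fraction
# of P_in transferred to the substrate.

def net_power(p_in_w: float, k: float) -> float:
    """P_net = k * P_in (Eq. 2): power actually transferred to the substrate."""
    return k * p_in_w

def heat_input(p_net_w: float, travel_speed_mm_min: float) -> float:
    """HI = P_net / T_s (Eq. 1), returned in J/mm."""
    travel_speed_mm_s = travel_speed_mm_min / 60.0
    return p_net_w / travel_speed_mm_s

# Example: ~470 W measured input power, assumed k = 0.22, 508 mm/min travel speed.
p_net = net_power(470.0, k=0.22)
print(f"P_net ~ {p_net:.0f} W, HI ~ {heat_input(p_net, 508.0):.1f} J/mm")
```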
Table 1 shows the experimental matrix set up for this study. Column three shows a range of travel speeds from 127 to 1143 mm/min. Column four then shows the adjusted powder flow rate in g/min for each travel speed so that a consistent powder flow to travel speed ratio can be achieved with each bead. It is important to note that with the in-situ powder flow monitoring, we can detect any deviations from the intended powder flow and, as such, all measured values were within the intended scope of these ratios. Intuitively, one can recognize that there will be a potential optimized region within this matrix at a given power level. Our aim was to find out what this optimized parameter set was for an input power (nominal) of 645 W. This means that on the LENS machine, the power level was set to 645 W. This nominal power was utilized since this is the common practice within the literature; here, it is defined as P_LENS. As such, a total of 90 beads were built by producing 10 beads at each travel speed with the adjusted linear mass density (ratio of powder feed rate P_f to travel speed T_s) ranging from 0.007 to 0.025 g/mm. It should be noted that all beads were 20 mm in length and all measurements were taken at the midway point, taking that as a steady state for equal comparison. A 3 kW IPG Nd:YAG fiber laser was utilized with an initial LENS power of 645 W (listed in Table 1). A series of laser power measurements (using a Molectron PM5 power meter) were taken before and after the builds to determine the actual input power, which, for P_LENS = 645 W, was measured to be ~470 W (P_in); these values are shown in Table 2. Knowing the actual absorbed laser power is important so that when we perform our energy analysis, we can account for the amount of energy actually absorbed during the build. Once the 90 beads were built, it was then necessary to characterize the quality of the individual beads to determine which parameter setting(s) yielded the best result. The determination of what was considered as the best result, yielding an "ideal" bead, is described below.
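Before turning to the bead-selection criteria, the structure of such a parameter matrix can be sketched as below; the nine travel speeds and the three example ratios are illustrative, with only the 127–1143 mm/min span and the 0.007–0.025 g/mm range taken from the text.

```python
# Hedged sketch of a Table 1 style matrix: for each travel speed, the powder
# feed rate is adjusted so that the linear mass density P_f:T_s stays fixed.
travel_speeds_mm_min = [127, 254, 381, 508, 635, 762, 889, 1016, 1143]
ratios_g_per_mm = [0.007, 0.013, 0.025]   # illustrative points in the stated span

for ratio in ratios_g_per_mm:
    for ts in travel_speeds_mm_min:
        powder_feed_g_min = ratio * ts   # g/min needed to hold the ratio
        print(f"T_s = {ts:4d} mm/min, P_f:T_s = {ratio:.3f} g/mm "
              f"-> P_f = {powder_feed_g_min:5.1f} g/min")
```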
From a historical context, it is possible to utilize the welding industry's perspective to define the shape of the bead, since it is well known that it will determine the quality of the weld. The LENS process creates a melt pool while delivering powder to the substrate at this melt pool, which solidifies in a very short period of time. As such, utilizing the contact angle relationship described below is applicable. A contact angle of less than 90° causes the bead to spread over the surface. Spreading over the surface will fill the gaps between two neighbouring beads, which helps decrease the porosity between beads. One can argue that this is also true in AM, but not much work has been done to define what is required. For this study, we define the idealized shape of the beads by calculating the contact angle using Equation (3) (see Figure 3), where h_b is the total height of the bead, w is the width of the bead, and θ is the contact angle [13]. The contact angle is an important parameter for controlling porosity. This definition is used because there is no porosity at the optimal linear mass density [13]. It was also observed that this optimal linear mass density increases with an increase in the laser travel speed. Experimentally, the observations in this study showed that at a specific linear mass density (P_f/T_s ratio), porosity was at a minimum (less than 1.1%). However, by increasing the powder feed rate, the contact angle increases, which results in an improper bead geometry, causing gas entrapment between the beads and increasing the porosity. Vickers hardness was also selected as another criterion for bead selection. Maintaining a uniform hardness is critical to the process selection for MAM. As described in both [6,7], there is a fine cellular substructure that is seen when 316L is used in MAM; as such, our beads should have this fine cellular substructure, which has been shown to provide a higher than normal microhardness when compared to wrought 316L [7]. As we move to the larger builds, the goal is to control the size of the grains and ensure that we can generate a predominant formation of this fine cellular substructure in our builds.

Experimental work started with the single bead analysis. To summarize the physical characteristics considered, the cross-sectional area (see Figure 3) of each bead was analyzed for the following conditions: (a) a contact angle θ that is no larger than 110°; (b) Vickers hardness (>220 HVN); (c) distribution of the fine cellular substructure; and (d) porosity. With these criteria in mind, when possible, an "ideal" bead was selected for each travel speed within the 90-bead experiment.

In addition to the process parameters and the P_f/T_s ratio, the aim of this experimental design is to have another metric, based on the energy of the process, to provide a complete picture. As mentioned, the authors of [8] introduced the concept of a normalized equivalent energy density E*_o that shows the ranges of acceptable microstructures. Essentially, higher values of E*_o mean that the process parameters utilized will lead to an excessive heat input, which will lead to problems such as cracking and/or swelling. The opposite is true at lower values of E*_o, meaning there is not enough energy to melt and/or fuse the powder, which will generate porosity. Looking at the work done in [8], it shows that for PBF, the range for an acceptable microstructure in 316L is between E*_o = 2 to 4.

Figure 3. Schematic of the single bead analysis used to determine the optimal bead shape, as defined by θ and the depth-to-height ratio.
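Equation (3) itself is not reproduced in the extracted text. The sketch below assumes the common circular-cap (circular-segment) approximation for the bead cross-section, which is one plausible form of the h_b–w relation cited from [13]; it should be read as an assumed stand-in rather than the exact expression used in the study.

```python
# Hedged sketch: contact angle of a bead whose cross-section is approximated
# as a circular segment of height h_b and chord width w. This is an assumed
# form, not necessarily Equation (3) of the study.
import math

def contact_angle_deg(h_b_mm: float, w_mm: float) -> float:
    """Contact angle (degrees) for a circular-segment bead cross-section."""
    return math.degrees(2.0 * math.atan(2.0 * h_b_mm / w_mm))

# Illustrative bead dimensions (not measured values from this study).
for h_b, w in [(0.3, 1.2), (0.6, 1.2), (0.9, 1.2)]:
    theta = contact_angle_deg(h_b, w)
    verdict = "acceptable" if theta <= 110.0 else "too steep"
    print(f"h_b = {h_b} mm, w = {w} mm -> theta = {theta:5.1f} deg ({verdict})")
```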
While this normalized equivalent energy density will be confirmed for this study, it should be noted that one of the metrics utilized in this study can be measured in-situ, and that is the associated measured energy (that is, the total energy measured during the build of a sample, Figure 2). In [18], the concept of net heat input and the measured energy were described, and the results are given in [19]. During the builds, the results from the calorimeter were recorded and several outputs were analyzed and are discussed later. They are the laser energy transfer efficiency (%), the net heat input (J/mm), and the measured energy (kJ); this last value is measured directly from the calorimeter (see top right corner of Figure 2), while the others are calculated based on these data and help bring another perspective to the processing parameters. While further work needs to be done in this area, and it is not the main scope of this article, it can be said that experimentally there appears to be significance in the data obtained from the calorimeter and that it could provide a better understanding of processing parameters for MAM.
Once the single beads were carefully vetted and selected based on all the parameters described above, the aim was to see if this relationship would hold for larger multilayered builds. This is another point of departure from the literature, which typically looks at single-wall builds. An example of a multilayer sample using the selected P_f:T_s ratio is shown in Figure 4. For each build number, a set of three samples of size 12.2 mm × 16.5 mm × 28.6 mm (W × H × L) was built, and these samples were then analyzed with the same criteria.
Results and Discussion

Table 2 shows the processing parameters that were chosen for the selected beads from the 90-bead DOE for this study. These beads were chosen to represent a range of powder feed to travel speed ratios (P_f:T_s ratio) across the P_LENS 645 W power setting. Based on the criteria described above, the optimized bead at each travel speed had a linear mass density (P_f:T_s ratio) between 0.011–0.013 g/mm (see column 5, Table 2). Based on this observation, two beads were selected with a linear mass density at 0.011 and 0.013 g/mm (the selected beads are identified as builds #36 & #55, respectively). In order to confirm the performance of the P_f:T_s ratio concept, three more build numbers were chosen: two samples that had a higher than ideal P_f:T_s ratio (builds #7 & #10) and one that had a lower than ideal P_f:T_s ratio (build #81). Only one build with a lower than ideal ratio was possible for comparison. This was mainly due to the fact that porosity was hard to control at the higher travel speeds. It would be expected that ratios higher and lower than the "ideal" condition would not yield good results. Let us recall that identifying and maintaining a proper P_f:T_s ratio (i.e., linear mass density) is critical to producing repeatable structures. The concept of not exceeding a linear mass density to avoid porosity was introduced and discussed in [13], but there was no clear direction as to how to clearly define what the "ideal" conditions are to ensure a repeatable build beyond what was achieved. In this work, by intentionally choosing parameter conditions that were perceived to be outside the "ideal" setting, it is possible to confirm whether this P_f:T_s ratio concept can provide a metric that can be utilized for future work. With this in mind, the samples shown in Figure 4 were built as mentioned, and sets of three samples were built for each build number.

In order to observe the microstructure, the three samples built (shown in Figure 4) per build number were cut at the midsection (see Figure 4, section A-A) so as to represent a steady state during the build process. The microstructure was characterized using our SEM and a Moticam 2000 CCD connected to an Olympus PMG3 microscope. The standard method for microscopy observation that was used is based on the ASTM standards for samples. The samples were mounted and polished to 1 micron and etched with a 10% oxalic acid compound using an electrolysis technique at 6 volts for 10 to 15 s [22]. The microhardness was measured with a Mitutoyo hardness testing machine with a force of 0.2 kgf and a 10 s dwell time. It should be noted that measurements were taken at other locations along the build direction as part of our larger study, and the results were the same as those presented here for the midsection A-A (see Figure 4). Afterwards, the cross sections of the samples were mounted, ground, and etched (see Figure 5). Vickers hardness measurements were carried out on the cross-sectional area of each of the three samples built per build number. As shown below (see Figure 5), nine independent microhardness measurements (three in each section: top, middle, and bottom) were taken on the cross section of the as-built samples.
The general conditions of the multilayered builds for the selected beads are described below, and representative microstructures are shown later. All the multilayered samples are left in the as-built condition. As described in [15], it is possible to see that our samples have elongated grains in the direction of thermocapillary convection from the melt pool and heat dissipation (see Figure 6). Additionally, the EBSD performed on the as-built 316L shows that there is no preferred orientation. These grains range from 100–400 µm and are made up of fine cellular subgrains that are between 2–5 µm in size, which is in agreement with the observations made in [7]. The microstructures for the representative P_f:T_s ratios are shown in Figures 7–9. These clearly show the importance of controlling this ratio and how it influences the final microstructure. Looking at the 508 mm/min travel speed (build #36, Figure 7), it is possible to see a mixture of the fine cellular subgrain structure and elongated grains, giving an average microhardness of 227 ± 9 (see column 6, Table 3). With an increase in travel speed to 762 mm/min and at a P_f:T_s ratio of 0.011 g/mm, a similar structure is seen, which also yields an average Vickers hardness of 229 ± 11 (see column 6, Table 3). Recalling that our target range for P_f/T_s was between 0.011–0.013 g/mm, both builds #36 & #55 yield a similar microstructure, which shows the uniform distribution of energy. At the higher P_f:T_s ratios (builds #7 & #10, see Figure 8), an average Vickers hardness of 215 ± 11 (see Table 3) was observed. Microsegregation, which is a result of re-melting and re-solidification, was also detected at these higher ratios. Microsegregation is a result of excess energy going into the sample, which then leads to a low hardness value caused by the re-solidification of material. At lower P_f:T_s ratios (build #81, see Figure 9), porosity tends to increase due to the lack of energy for depositing and melting the powder. The hardness value of 221 ± 11 is slightly lower than that of the ideal conditions shown above. It should be noted that all the microhardness values obtained from the multilayered builds are higher than that of wrought 316L obtained in the cold-rolled condition (220 HV). Looking at the literature, the microhardness values obtained in this study are within the range found in [9,15,23,24].
Figure 10 shows the fine cellular subgrain structure that was found in almost all the builds, but that was most predominant in the ideal P_f/T_s range of 0.011–0.013 g/mm. It was observed that at 1143 mm/min, the contact angle became too large, and insufficient melting at those interfaces caused porosity. In general, it seemed that the high grain size number as per the ASTM standard is only achievable with the very high solidification rates and an appropriate energy balance. As such, this fine cellular subgrain size yields a consistent hardness, which was experimentally observed (220 to 240 HV) within the experimentally determined optimal P_f/T_s range of 0.011–0.013 g/mm, as described in this article. It is possible that further optimization to increase the hardness could occur, but this was beyond the scope of this current project.
One important component of this work is looking at the energy of the build process, both as calculated in [8] and as obtained from our power measurement calorimetric system (PMCS) (see top right corner of Figure 2), to see if the energy metrics correlate to the results obtained through this experimental design. Looking first at the normalized equivalent energy density E*_o for the DED builds in this study, the following results were obtained: for the high P_f/T_s ratio (0.026–0.031 g/mm) (builds #7 & #10), E*_o was above 9, which agrees with the notion of excessive heat input as described in [8] and thus yields an unfavorable microstructure; looking at the ideal P_f/T_s ratio (0.011–0.013 g/mm) (builds #36 & #55), E*_o was between 3.02–3.69, which is within the range reported in [8], where an acceptable microstructure for 316L was between E*_o = 2–4; finally, for the lower P_f/T_s ratio (0.007 g/mm), E*_o was 2.7, which would appear to be within the acceptable range, and thus further analysis is required.

With respect to the calorimeter results, the measured energy presented in column seven of Table 3 shows a similar trend to that of the normalized equivalent energy density E*_o just presented. That is, for the high P_f/T_s ratio (0.026–0.031 g/mm) (builds #7 & #10), the measured energy was above 300 kJ, which yields an unfavorable microstructure and lower hardness. Looking at the ideal P_f/T_s ratio (0.011–0.013 g/mm) (builds #36 & #55), the measured energy is within the range of 217–244 kJ, which yields an acceptable microstructure and microhardness with porosity below 1.1%. Looking at the lower P_f/T_s ratio (0.007 g/mm), it would appear that this measured energy of 233 kJ would also be acceptable (as it falls between 217–244 kJ); however, we did see large pores (Figure 9) even though the hardness and overall porosity (pore size 100–200 µm) were within an acceptable range.

It appears that an additional component to the energy analysis is required. As such, referring back to [20], where the importance of net heat input was discussed and how small variations can have an impact on cooling rates and thus microstructures, might provide a better understanding. The net heat input values, as defined in Equation (1), are presented in column 9 of Table 3. After analysis, it would appear from these results that there is a minimum net heat input required to obtain satisfactory results, which is 9 J/mm. As such, looking back at build #81, it has a net heat input of 5.46 J/mm, which falls below that minimum criterion and therefore would explain the large pores shown in Figure 9.
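The net-heat-input screening described above can be expressed as a simple check. Only the 9 J/mm threshold and the 5.46 J/mm value for build #81 come from the text; the other entries are placeholders standing in for Table 3, which is not reproduced here.

```python
# Hedged sketch of the net-heat-input screening. Values other than the
# 9 J/mm minimum and the 5.46 J/mm figure for build #81 are illustrative.
MIN_NET_HEAT_INPUT_J_MM = 9.0

builds = {
    "#36": 12.2,   # illustrative placeholder
    "#55": 9.5,    # illustrative placeholder
    "#81": 5.46,   # quoted in the text
}

for build, hi_net in builds.items():
    ok = hi_net >= MIN_NET_HEAT_INPUT_J_MM
    status = "ok" if ok else "below minimum -> porosity risk"
    print(f"build {build}: net heat input {hi_net:.2f} J/mm ({status})")
```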
Finally, if it appears that build #36 and build #55 yield the best results, how does one select which processing parameters to use? It is now possible, with the results from the calorimeter, to calculate the laser energy transfer efficiency (column 8 in Table 3). The laser energy transfer efficiency is a parameter that describes the fraction of energy that is absorbed by the workpiece from the total laser output energy [25]. In this work, the laser energy transfer efficiency is nothing more than the percentage of what was actually measured in terms of energy, in this case provided directly by the calorimeter, relative to what was delivered by the laser. As such, this is an additional criterion that can be utilized to define which processing parameters yield the most efficient process in terms of energy consumption. This is of great interest to industry. Looking at the values, it would appear that build #55 is the most efficient set of processing parameters.

Conclusions

The research conducted in this study demonstrates an experimental approach that defines the powder flow to travel speed ratio P_f:T_s (i.e., the linear mass density) at a given power level which provides a consistent microstructure and microhardness for metallic DED, regardless of travel speed (above a certain T_s). This was seen both in build #36 and build #55, where the P_f/T_s ratio range (0.011–0.013 g/mm) yielded a consistent microstructure with fine cellular subgrains on the order of 2–5 µm, with a microhardness above 227 HVN and porosity under 1.1%. Furthermore, the experimental results show that it is possible to maintain and control the fine cellular subgrains when following the appropriate powder feed to travel speed ratio and maintaining a net heat input above 9 J/mm. This work also shows that by maintaining the appropriate ratio, the microhardness values are consistent in each build, which is something that is critical for the future of MAM. This type of ratio is a metric that could potentially be utilized to help define standards of quality, but further investigations at other power levels and with other materials are required. The authors provide a simple design of experiments that can quickly determine the ideal beads based on these ratios and then yield consistent results when making larger samples. Finally, through the use of the calorimeter, an in-situ process could be utilized to define the minimum heat input and provide insight into the laser energy transfer efficiency. The work concluded here on DED has also found that the single bead methodology provides the possibility to calibrate both incoming materials and DED machines as a systems approach prior to any production perturbations, such as different lots of incoming materials. Future work will be carried out for 316L at additional power levels to validate this approach.

Figure 2. Schematic of the LENS set-up with the AE sensor for powder flow and the power measurement calorimetric system (PMCS) utilized to monitor net heat input, among other things. The deposition head moves in the x and y directions and the build plate moves in the z direction. Top right is the output data from the PMCS, which provides the integrated energy (kJ) as the sample is being built. This information provides a metric for future process control.

Figure 3. Schematic of the single bead analysis used to determine the optimal bead shape, as defined by θ and the depth-to-height ratio.
Figure 5. View of cross-sectional area (A-A from Figure 4) of samples in which three regions for measurements were selected (top, middle, and bottom) and the locations of each individual reading.

Figure 6. (Left) Micrograph of a region in Figure 5, showing the larger grains resulting from thermocapillary convection from the melt pool and heat dissipation; (Right) EBSD of as-received 316L [19].

Figure 9. Multilayered build #81 (High Pf:Ts): large porosity due to the lack of energy available from the increased travel speed. Vickers: 221 ± 11.

Table 1. Distribution of basic parameters in the 90-bead experiment. A 3 kW IPG Nd:YAG fiber laser was utilized with an initial LENS power of 645 W (Listed in Table

Table 2. Measured parameters of selected beads for building the multilayered samples.

Table 3. Measured parameters of multilayered sample builds.
10,714.2
2018-09-07T00:00:00.000
[ "Engineering", "Materials Science" ]
Mean-Field Coupled Systems and Self-Consistent Transfer Operators: A Review

In this review we survey the literature on mean-field coupled maps. We start with the early works from the physics literature and arrive at some recent results from ergodic theory studying the thermodynamic limit of globally coupled maps and the associated self-consistent transfer operators. We also give a few pointers to related research fields dealing with mean-field coupled systems in continuous time, and to applications.

Introduction

Understanding the dynamics of complex systems is at the forefront of research in many areas of science. Examples of complex systems with great impact on our everyday lives are networks of neurons, gene regulatory networks, artificial neural networks, the spread of epidemics, and opinion models. The behavior of these systems is the result of the intricate interactions of their microscopic components. Oftentimes, complex systems are modeled as dynamical systems interacting on a graph/network: each dynamical system represents a fundamental component of the complex system (e.g. a gene, a neuron, an individual) and occupies a node of the graph, whose edges prescribe the interactions between components. The literature on dynamical systems coupled on networks is vast; the tutorial [PG16] presents a broad survey.

In this review we focus on mean-field coupled systems in discrete time (globally coupled maps) and their thermodynamic limits (self-consistent transfer operators). Mean-field models are characterized by a very large number of components coupled by weak pairwise interactions whose strength scales as the inverse of the number of coupled units. Numerical simulations show that these systems exhibit a great variety of behaviors, some of which are reminiscent of complex systems. Most rigorous arguments deal with thermodynamic limits of coupled systems, where the number of components tends to infinity. In this limit, the global state is given by a probability measure describing the distribution of the infinitely many components in phase space, and its evolution is given by a nonlinear evolution law prescribed by a self-consistent transfer operator.

Below we report the main observations and results available on globally coupled maps. The literature on the topic is vast, and a complete review of all the contributions to the subject seems hopeless. Rather than aiming at completeness, the objectives of this paper are to: give some history and context to the study of mean-field coupled maps, survey the advances in the study of self-consistent transfer operators made in the last decade, and provide pointers to research topics having an affinity with mean-field coupled maps.
Organization of the review.In Section 2 we review the literature on globally coupled maps from the origins in the physics literature (sections 2.1-2.4) and the ergodic theoretical approaches to study these systems in (Section 2.5).We also review other coupled systems in discrete time, in particular maps coupled on lattices and heterogeneous networks (Section 2.6).In Section 3 we focus on the thermodynamic limits and the study of self-consistent transfer operators.We review various rigorous approaches to study existence and stability of fixed states (Section 3.1) and their linear response (Section 3.2).We then look at situations where the self-consistent operators have more complicated attractors, and the available studies are mostly numerical (sections 3.3-3.4).We conclude the section reviewing a recent development on propagation of chaos for globally coupled maps.Finally in Section 4 we give some pointers to works dealing with mean-field models in continuous time and other related topics.Among others, we give very quick (and superficial) overviews of: interacting particle systems, systems of coupled oscillators, mean-field models on adaptive and higher-order networks, highlighting the connections with globally coupled maps. Globally Coupled Maps At the end of the '80s beginning of the '90s, globally coupled maps (GCMs) arose as high-dimensional models of complex systems having simple equations, but whose dynamics exhibited a great variety of behaviors.Loosely speaking, the equations describing the evolution of N identical coupled maps, also called units or sites, have the form where x i (t) characterizes the state at each site and belongs to M , a set with some additive operation (often an interval or T = R\Z).The map f is called the local or uncoupled dynamics.Each term1 N h(x i , x j ) gives the pairwise additive interaction that the i-th unit receives from the j-th one.Different formulations can be found in the literature some of which will be discussed through this review.This model stemmed from a similar setup where maps are coupled on a lattice: Given d ≥ 1 and Λ ⊂ Z d finite or infinite, where Λ i ⊂ Λ prescribes a set of neighbors of i.The above, is a continuous variable version of a spin system, and was introduced as a model to study chaos and pattern formation in spatially extended systems after coarse graining.Systems as in (2) are called coupled map lattices (for a brief review see Section 2.6.1). Globally coupled maps are coupled map lattices where the set of neighbors Λ i is the whole collection of units, and where ε is required to scale as N −1 .One reason behind this normalization invokes energy considerations: If a unit has to spend some "energy" to influence another unit and has only a finite amount of "energy" to spend, this will have to be distributed among all the interactions made.If interactions are identical, it will have to be distributed equally.This assumption is chiefly made for the equations to be well defined for N → ∞.If the interaction strength scales as N −1 , in the limit N → +∞ one hopes that the mean-field interaction term in (1) converges to a number depending only on the global distributions of the {x i (t)} N i=1 , and not on their particular state, making the equation that defines the dynamic of x i (t) virtually only dependent on x i (t) and identical across units. 
Synchronization, phase ordering, and turbulence It does not come as a surprise that globally coupled maps exhibit a great range of behaviors that can vary changing the parameters that define the local dynamics and/or the coupling among units.The asymptotic behavior of the orbits can also drastically change depending on the initial condition, which suggests that these systems possess a large number of different attractors. One of the first instances where the above observations were reported is [Kan89a] 1 .Here the globally coupled system considered is with x i (t) ∈ [−1, 1] and ε ≥ 0, and corresponds to the application of an uncoupled map f : [−1, 1] → [−1, 1], and of x j (t) i = 1, ..., N a diffusive mean-field interaction where the state of each unit tends to get closer to the average of the states of all the units.Notice that for ε = 0 the maps are uncoupled and evolve according to f , while for ε = 1, after one time step the system synchronizes instantaneously and each coordinate takes the same value equal to the average state.In between these values a great variety of behaviors can arise.In [Kan89a], f belongs to the logistic family which is known to exhibit intricate bifurcation patterns alternating periodic and chaotic attractors.With this choice, varying ε ≥ 0 and 0 ≤ a ≤ 2 and simulating the dynamics for different initial conditions, various behaviors are observed [Kan90a] which have been organized in terms of their synchronization patterns.Two units i and j are synchronized if x i (t) = x j (t) 2 .A cluster, is a maximal subset of the units, {i 1 , ..., i k } ⊂ {1, ..., N }, whose units are synchronized, i.e. x i 1 (t) = ... = x i k (t). One says that an orbit exhibits 1. Synchrony: if there is only one cluster, i.e. all the units are synchronized; 2. Ordered phase: if there is a "small" number of clusters each of which contains a "large" fraction of the units; 3. Partially ordered phase: there is a large number of clusters with few of them containing a large fraction of the units, and many of them containing few units; 4. Turbulent/chaotic phase: there is no discernible organization into clusters. For an (a, ε)-bifurcation diagram showing the emergence of these different behaviors see e.g. Figure 1 in [Kan89a].As expected, small values of ε favor the turbulent phase, while larger values of ε favor the emergence of coherent structures as in (1) or (2).One can classify states from (1)-(3) with respect to the number and size of clusters by associating to the sate an m-tuple (k 1 , ..., k m ) with k 1 + ... + k m = N that denote the presence of m clusters with cluster i containing k i units. For fixed parameters, one observes a wide range of dynamics already among states having the same number of clusters, only differing for the clusters' size.For example, if m = 2 one can cook up the parameters so that if k 1 ≈ k 2 ≈ N/2, the motion is periodic with the units in each cluster switching between two different values X 1 * , X 2 * at each time-step in antiphase, i.e. when cluster 1 is close to X 1 * , cluster 2 is close to X 2 * and vice versa.However, moving units from cluster 2 to cluster 1 and thus increasing the value of k 1 at the expense of k 2 , one assists to a period doubling cascade with the states of the first cluster jumping periodically between 2 n values (see e.g. 
Figure 13a in [Kan90a]).Further increase of k 1 destroys the two clusters and starts a transition to the turbulent regime.The above picture is explained by the following observations: Letting be the states of the units in each cluster, and substituting in (3), the evolution equations for X 1 (t) and X 2 (t) are 2 Different notions of synchronization exist.We do not address these differences here, but we point the reader to [PRK02].On synchronization, see also [BKO + 02] and [ADGK + 08]. with ε 1 := ε k 1 N and ε 2 := ε k 2 N .Thus, changing k 1 produces a change in ε 1 , ε 2 and a bifurcation in the 2D system above that can explain the observed behavior.Let us stress that these bifurcations are for fixed values of (a, ε) varying k 1 , i.e. they are observed in the same system only changing the initial condition. For a general account on the bifurcations with respect to the (a, ε) parameters see [BJP99] and in particular Figure 1 therein which captures the variety of attractors numerically observed in the system.All of the above suggest that these systems have a large number of attractors.A similar situation was observed in [WH89] in a system of coupled continuous time oscillators where this coexistence phenomenon has been termed attractor crowding.An important feature of attractor crowding is that attractors increase factorially in number with the system size - [WH89] estimates (N − 1)! -and get closer in phase space so that a small perturbation of an orbit can drive the system from one attractor to the other giving high-versatility expected to have implications on the system's function (see also Sect.4.3 below on applications). Parameters (a, ε) can be chosen to give rise to the following interesting phenomenon called posi-nega switching [Kan89a, Kan90a]: Start from an initial condition in a two-clusters state (k 1 , k 2 ) with k 1 ≈ k 2 , where the dynamic of each cluster follows a periodic orbit of period 2 in antiphase.Now, assume that perturbing the state of a single map in the second cluster of a fixed quantity δ brings it to a region where the map eventually joins the first cluster.This leads to an increase of k 1 and a period doubling cascade (see Figure 2 in [Kan89a] and Figure 3 (c) in [ST00]) that roughly corresponds to the period doubling cascade in the logistic map.Further increase of k 1 makes each cluster split with the units separating and undergoing chaotic motion.At this point, there is no more division into clusters.However, if we keep adding the fixed perturbation to the maps that used to belong to the second cluster, at some point the two clusters reform and recover their periodic switching dynamic.What is most surprising is that clusters reform so that maps that were in cluster 1 before the chaotic phase join the same cluster, and similarly for maps originally in cluster 2 suggesting that the system keeps some "memory" of the cluster subdivision. In [Kan91], a similar picture to the one in (3) has been presented with coupled maps having x i (t) ∈ [0, 1] governed by the equations In contrast with the previous setups where the emergence of ordered, partially ordered phases, and the richness of periodic orbits stemmed from the interplay between the richness of the local dynamics and the diffusive coupling, here it is only due to the coupling. 
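To make the phases and cluster counts discussed above concrete, the sketch below simulates a globally coupled map and reports the sizes of the synchronized clusters. Equation (3) is only described in prose here, so the diffusive form x_i(t+1) = (1-ε) f(x_i(t)) + (ε/N) Σ_j f(x_j(t)) with a logistic local map f(x) = 1 - a x² is our assumed reconstruction, and the parameter values are purely illustrative; the cluster decomposition is what distinguishes the synchronized, ordered, partially ordered, and turbulent phases.

```python
import numpy as np

def step(x, a, eps):
    """One step of the globally coupled map, assuming the diffusive mean-field form
    x_i(t+1) = (1-eps) f(x_i(t)) + (eps/N) * sum_j f(x_j(t))
    with logistic local dynamics f(x) = 1 - a*x**2 (a common parametrization;
    the exact form used in the cited works may differ)."""
    fx = 1.0 - a * x**2
    return (1.0 - eps) * fx + eps * fx.mean()

def cluster_sizes(x, tol=1e-6):
    """Group units whose states coincide up to `tol` and return the cluster sizes."""
    xs = np.sort(x)
    sizes, current = [], 1
    for d in np.diff(xs):
        if d < tol:
            current += 1
        else:
            sizes.append(current)
            current = 1
    sizes.append(current)
    return sorted(sizes, reverse=True)

rng = np.random.default_rng(0)
N, a, eps = 200, 1.8, 0.3               # illustrative parameters only
x = rng.uniform(-1.0, 1.0, N)
for _ in range(5000):                   # discard the transient
    x = step(x, a, eps)

sizes = cluster_sizes(x)
print("number of clusters:", len(sizes))
print("largest cluster sizes:", sizes[:5])
```

Repeating the run with different initial conditions (and different ε) is how one observes the coexistence of many attractors described above; grouping the units into two clusters of sizes k1 and k2 and tracking the cluster states reproduces the reduced two-dimensional picture with effective couplings ε1 = ε k1/N and ε2 = ε k2/N.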
We also mention [CG98] which studies logarithmic maps, f a (x) = a + ln |x|, coupled as in (3).Here synchronization and collective behaviors are observed, but there is no subdivision of the units into clusters (there is only one cluster), and the state x(t) = x 1 (t) = x 2 (t) = ... of the synchronized units either evolves around a periodic orbit, or undergoes chaotic motion.For certain values of (a, ε), a turbulent phase is detected.See Figure 4 in [CG98] for a bifurcation diagram.Some works focused on the case of uniformly hyperbolic local dynamics given by uniformly expanding maps or tent maps in the parameter regimes having a unique mixing absolutely continuous invariant measure.In [Jus95b], the author studies globally coupled system with x i (t) ∈ [0, 2π] governed by the equations where f depends on a parameter a ∈ R, and is given by It is immediate to see that for a = ε = 0, the system is a product of N uncoupled doubling maps, therefore it has a unique absolutely continuous invariant mixing probability measure on [0, 2π] N .Nowadays, it is well known that this picture is stable under small perturbations, and therefore persists for small a and ε.At the time, this was claimed in [Jus95b] using Markov partitions.In the same paper, it is shown that larger values of ε lead to synchronization, i.e. the synchronization manifold is a stable invariant set.The above picture shows that the dynamics undergoes bifurcations shifting from a regime where the behavior is dictated by the hyperbolic local dynamics, to a situation where the behavior is dictated by the coupling.The presence of a unique absolutely continuous measure in the small coupling regime was rigorously proved in [Kel97] for systems of coupled tent maps.In our notation, Keller showed that for x i (t) ∈ [0, 1] evolving according to the equations and α satisfying certain assumptions ensuring uniform hyperbolic behavior, the system has an invariant absolutely continuous mixing measure.Again, the results are inferred using Markov partitions and coding.More on the study of GCMs in the context of ergodic theory can be found in Section 2.5. In the next section we review some further results on the turbulent phase when N → ∞, and a phenomenon that influenced the study of globally coupled maps. 
Violation of the law of large numbers In [Kan90b] and [Kan92], Kaneko observed a phenomenon that he termed violation of the law of large numbers later also referred to as nonstatistical behavior [PC92,SBAL92].The observation is the following.Consider a system of globally coupled maps as in (3) with logistic local dynamics as in (4) and where the parameter a is chosen so that for small enough ε, the system exhibits turbulence/chaos.The mixing character of the local dynamics suggests that in the limit N → ∞, the maps should be uncorrelated3 and the mean-field coupling term should satisfy the law of large numbers and converge to a fixed value.For a finite system, one then would expect the time series h N (t) to be close to this fixed value plus some fluctuations going to zero as N → ∞.To measure the size of the fluctuations, one can pick the Mean Square Deviation (M SD) defined as where • denotes the integral with respect to P N which is the (unknown) distribution of h N .In other words, M SD(N ) is the variance of h N .In practice, M SD(N ) can be estimated from the time series {h N (t)} t≥0 .If {x i (t)} N i=1 were uncorrelated, one would expect M SD(N ) to decay as N −1 .Surprisingly, numerical simulations showed that after an initial decrease proportional to N −1 , for larger N the quantity M SD(N ) stabilizes at a fixed small value (10 −1 -10 −3 ) depending on the parameter a -see Figure 2 in [Kan90b].These fluctuations were deemed due to some "coherence" among units that would persist in the limit N → ∞. Shortly after, in [PK94], a different point of view was put forward, and the observed lack of decay of M SN (N ) was imputed to the lack of stationarity in the system.The conclusion in [PK94] was that in the limit N → ∞ the system can be out of the equilibrium and wonder between different states on which the mean-field takes different values that account for the fluctuations observed in the time series of h N (t).More precisely, rather than describing the state of each map x i (t), one can investigate their distribution given by the measure and its evolution where f at (y) is a map from a parametric family and with a 0 ∈ R. One can consider equations ( 7)-(8) beyond the current setup substituting the empirical distribution µ N (t) with any measure µ(t).In particular, if µ(t) = lim N →∞ µ N (t)5 , one can think of equations ( 7)-(8) as describing the evolution of the system's state in the thermodynamic limit.In [PK94], evidence has been found that the dynamics of µ(t) is not necessarily asymptotic to a fixed point, i.e. the thermodynamic limit does not necessarily have a stable equilibrium state.Instead, one can imagine that for µ(0) in some class of measures, the orbit µ(t) can evolve towards, for example, a periodic orbit (with period > 1), or even more complicated attractors.In this situation, one could expect that for N sufficiently large, µ N (t) would also be close to the attractor shadowing its dynamics and the orbit µ N (t) would appear as a noisy version of an orbit on the attractor.Going back to the study of the mean-field h N (t), this suggests that rather than fluctuations around the expected value of the mean-field h N , one should consider fluctuations with respect to f (y) dµ(t)[y].An example in [PK94] shows numerically that the violation of the law of large numbers can be resolved taking this point of view.The available examples of this kind usually arise when the local maps f belong to a family of maps with nontrivial bifurcation structure. 
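The mean square deviation described above can be estimated directly from a simulated time series of the mean field. The following minimal sketch does this for several system sizes, again assuming the diffusive-logistic form of the coupled map used in the earlier sketch (the precise model and parameters in [Kan90b] may differ). If the law of large numbers held, MSD(N) would decay like 1/N, so N·MSD(N) would stay roughly constant; the "violation" shows up as N·MSD(N) growing with N.

```python
import numpy as np

def run_mean_field(N, a, eps, T=20000, burn=2000, seed=0):
    """Simulate the diffusive globally coupled map (assumed form, see earlier sketch)
    and return the time series of the mean field h_N(t) = (1/N) sum_j f(x_j(t))."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, N)
    h = np.empty(T)
    for t in range(burn + T):
        fx = 1.0 - a * x**2
        m = fx.mean()
        if t >= burn:
            h[t - burn] = m
        x = (1.0 - eps) * fx + eps * m
    return h

a, eps = 1.9, 0.1                       # illustrative parameters in a turbulent-looking regime
for N in (100, 1000, 10000):
    h = run_mean_field(N, a, eps)
    msd = h.var()                        # MSD(N) = <h^2> - <h>^2 estimated from the time series
    print(f"N = {N:6d}   MSD(N) = {msd:.3e}   N*MSD(N) = {N * msd:.3e}")
```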
Examples where (7)-( 8) have a trivial attractor given by an attracting fixed point are also available.In fact, if the local maps have uniformly hyperbolic propertiese.g.they are smooth with uniform expansion -and the coupling strength is small, the system is expected to have a unique equilibrium in the thermodynamic limit close to the SRB measure of one of the f a , and µ(t) converges to this equilibrium exponentially fast, provided that the initial condition µ(0) is picked inside a suitable set of measures with some smoothness.The first instance where a claim of this kind has been rigorously proved is [Kel00].This was followed by many other results on the study of self-consistent operators which we are going to review in Section 3.1. Mean-field fluctuations and self-consistent transfer operators The observations of the violation of the law of large numbers, opened the way to the study of the nontrivial evolution of the mean-field h N (t) in the limit for N → ∞.The main starting point for this study are the equations (7)-(8) that define a nonlinear self-consistent evolution law on the space of measures.The generator of this evolution law is also called self-consistent operator or nonlinear Perron-Frobenius operator.This object was introduced in the setting of globally coupled maps first in [Kan92], while another version was already used in [Kan89d] in the context of coupled map lattices.In continuous time, an analogous nonlinear evolution is known as a nonlinear Fokker-Planck (e.g.[DZ78]). In [Jus95a], the author studied the self-consistent evolution in the case where the uncoupled map f : [−1, 1] → [−1, 1] belongs to the family of tent maps f a (x) = 1 − a|x| or to the logistic family (4).He investigated: the linearization of (7)-(8) around fixed points, the presence of periodic orbits for the evolution, and formal conditions implying stability.Interestingly, the conditions are reminiscent of those appearing in the study of linear response for the uncoupled map f , suggesting that structural and/or statistical stability are requirements for the existence of stable equilibria in the thermodynamic limit.This is also supported by numerical evidence showing that in the logistic family, when the parameter a is selected in an area where linear response for f fails, even a very small change in the coupling strength ε can produce notable effects in the observed dynamics.Similar considerations and further evidence have been also put forward in [Kan95] and [CM98]. Nonstatistical behavior has been observed also in heterogeneous systems [SK97], i.e. systems where the local dynamics are not identical as in (1), but each map has different local dynamics. In [Jus97], the presence of stable periodic orbits for (7)-( 8) is investigated for a system of coupled tent maps.A bifurcation digram in the parameters (a, ε) is obtained exhibiting a period-doubling cascade (see Figure 2 in [Jus97])6 . Rather then on the self-consistent evolution of measures, some works focus only on its effect on the evolution of the mean-field h N (t) -for the relation between evolution of measure and h N (t) recall the second equality in (8).In [EP95], an analysis of the self-consistent equations is used to estimate that in a system of globally coupled tent maps, h N (t) has nontrivial fluctuations for certain values of the height of the tents and, most surprisingly, for any value of the coupling strength ε > 0. 
Numerical and analytic considerations estimate the fluctuations at e −Cε −2 .A similar analysis is carried out for logistic maps in [EP97] estimating the size of the fluctuations at the much larger order of magnitued O(ε).Some insight on the origin of the fluctuations can be obtained from the return plots depicting h N (t + 1) versus h N (t) -see [CM98] for coupled tent maps and [SK98b] for coupled logistic maps.These present a variety of characteristics depending on the local maps and strength of interactions.For example, the points (h N (t), h N (t + 1)): i. can be concentrated on a finite collection of points, see Figure 1b in [CM98], that can occur e.g. when (7)-( 8) have an attracting periodic orbit; ii. can lay close to a one-dimensional curve, like a circle, in which case {h N (t)} t≥0 shows quasi-periodic behavior, see Figure 1a in [CM98] and Figure 1b in [SK98b]; iii. can present more complicated, but still low-dimensional structure, for example laying on what looks like the projection of a 2D torus in an higher dimensional space to the plane (h iv. or they can lack any evident structure whatsoever, see Figure 1a in [SK98b]. Lyapunov exponents for the mean-field dynamics In order to characterize the patterns in i.-iv., several authors have put forward different approaches defining Lyapunov exponents associated to the time-series {h N (t)} t≥0 . Lyapunov exponents of the self-consistent equations In [Kan95], Kaneko investigated the Lyapunov exponents of the self-consistent equations in (7) for a system of globally coupled tent maps.Starting from a measure µ(0), a small perturbation is applied yielding µ ′ (0) = µ(0)+δν.Then the orbits µ(t) and µ ′ (t) are compared, and the top Lyapunov exponent is estimated for several perturbations δν.Situations as in point i. presented a negative exponent, confirming the presence of a periodic attractor for the self-consistent equations, while situations like iii. and iv.yielded a positive exponent.The estimated values of these exponents varying the parameters (a, ε) can be found in Figure 10 from [Kan95]. Top Lyapunov exponents for the finitely many coupled tent maps and their selfconsistent equations have also been studied in [Mor97]. Collective Lyapunov exponents A different type of analysis has been proposed in [SK98a] where the focus is on the Lyapunov exponent of equations (3) when perturbing an initial condition along the direction of the mean-field (5) only.More precisely, considering an initial condition (x 1 (0), ..., x N (0)), a perturbation of this initial condition along the diagonal direction is obtained putting One then obtains a Lyapunov exponent studying the rates of divergence (or convergence) of h N (t) and h ′ N (t) which are the mean-field along the original and perturbed trajectory respectively.Crucially, the Lyapunov exponent estimated with this particular type of perturbation is independent of N , for N sufficiently large, and can be much smaller than the value of the top exponent for the whole system.This exponent is believed to detect information about the collective motion of the coupled system that is emergent in the thermodynamic limit. 
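A rough numerical version of the collective exponent of [SK98a] can be obtained by shifting every coordinate of an initial condition by the same small amount (our reading of the "diagonal" perturbation, since the formula is not reproduced here), evolving both copies for a short horizon, and averaging the exponential separation rate of the two mean-field time series. The sketch below uses the same assumed diffusive-logistic coupled map as in the earlier snippets; all names and parameters are illustrative.

```python
import numpy as np

def gcm_step(x, a, eps):
    fx = 1.0 - a * x**2                  # assumed diffusive-logistic GCM, as in the earlier sketches
    return (1.0 - eps) * fx + eps * fx.mean()

def collective_lyapunov(N=2000, a=1.9, eps=0.1, delta=1e-7, tau=20,
                        samples=200, burn=1000, seed=1):
    """Average rate at which the mean fields of a trajectory and of a uniformly
    shifted copy (x_i -> x_i + delta for every i) separate over tau steps.
    One possible reading of the construction in [SK98a], not a faithful reproduction."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, N)
    for _ in range(burn):
        x = gcm_step(x, a, eps)
    rates = []
    for _ in range(samples):
        y = x + delta                    # perturbation along the diagonal direction
        for _ in range(tau):
            x, y = gcm_step(x, a, eps), gcm_step(y, a, eps)
        sep = abs(y.mean() - x.mean())
        if sep > 0:
            rates.append(np.log(sep / delta) / tau)
        # the reference trajectory x keeps running and provides the next base point
    return float(np.mean(rates))

print("collective Lyapunov exponent (rough estimate):", round(collective_lyapunov(), 4))
```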
Ergodic theory of Globally Coupled Maps One of the first papers investigating the dynamic of globally coupled maps in the context of ergodic theory is [Kel97], where conditions for existence and stability of a unique absolutely continuous invariant measure were established for a finite system of globally coupled tent maps.In [Kel00], a similar result was proved for the thermodynamic limit with infinitely many globally coupled tent maps.Here the time evolution is given by a self-consistent transfer operator, and existence and uniqueness of a fixed measure was showed providing the first rigorous results on STOs. In [Jär97], it was proved that N analytic uniformly expanding weakly7 coupled maps admit a unique a SRB measure µ N , independently of N , and µ N converges to a limit µ for N → ∞. [BKZ09] studies a STO undergoing a pitchfork bifurcation.Here the STO is of the type in (12) with a family of piecewise uniformly expanding maps with two onto branches and all maps preserving Lebesgue measure.It is shown that for small values of the coupling, Lebesgue is a stable fixed point for the STO.Increasing the coupling strength, Lebesgue looses stability, and two stable fixed measures appear. [SB16] studies existence and stability of fixed points for STOs arising from systems of coupled doubling maps with piecewise linear interactions for different regimes of coupling strength.These results were generalized in [BKST18] to a wider class of uniformly expanding maps.In [ST21] the authors studied linear response of fixed points for smooth uniformly expanding maps with smooth interactions.In [Gal22], a general functional analytic framework to study fixed points of self-consistent transfer operators and their stability is provided.In [BLS22], STOs arising from coupled Anosov diffeomorphisms have been considered.A more careful discussion of the above results is given in Section 3 where we focus on self-consistent transfer operators. Most of the existence results in the papers above require small coupling strength.Increasing the coupling is expected to destroy the stability of the fixed points, and to eventually lead to clustering and synchronization for larger coupling strength.Rigorous studies of the bifurcations happening in between are unavailable. Before the onset of clustering and the related decrease in dimensionality of the attractors, the system can undergo a bifurcation via breaking of ergodicity where multiple attractors of full dimension form, i.e from a situation where the finite dimensional system has a unique absolutely continuous invariant probability (a.c.i.p.) measure, to a situation where the system has multiple a.c.i.p. measures supported on disjoint sets of positive Lebesgue measure and full dimension. Ergodicity breaking is often related to the breaking of some symmetries of the system.For example, in [Fer14] Fernandez considered a system of N doubling maps coupled via piecewise affine diffusive interactions and, for N = 3, provided numerical evidence and rigorous arguments showing that, by increasing the coupling strength, the unique a.c.i.p. 
measure of the system breaks into multiple asymmetric ergodic measures having support of positive Lebesgue measure.It is important to notice that this happens when the system of coupled maps is still uniformly expanding.The discontinuities in the coupling are therefore to be considered responsible for the bifurcation.Breaking of ergodicity in a system of 3 globally coupled maps has been rigorously studied also in [SB16], and in a system of 4 coupled maps has been studied in [Sél18].[Fer20], [FS22] study algorithms to obtain computer assisted proofs of breaking of ergodicity for piecewise affine uniformly expanding coupled maps in any dimension.The algorithms provide a way to check existence of forward invariant sets given by unions of polytopes that, given the expansivity assumptions, will be granted to support a.c.i.p. measures. Other systems of coupled maps In this section we briefly review other types of interacting systems in discrete time.We do not aim at completeness, but rather at highlighting some interesting aspects and pointing to some research trends and works that are relevant for the study of globally coupled maps. Coupled Maps Lattices The system described by the equations (2) is an example of CML.The main difference with globally coupled maps, is that there is a notion of distance among maps (e.g.given by a lattice structure) and interactions are local, i.e. they are only among nearby maps or the interaction strength decays with the distance as, for example, in where Λ ⊂ Z d , |i − j| is the distance between nodes i and j, and or decays sufficiently fast so that j∈Λ ψ(|i − j|) is summable. Perhaps the most important difference with globally coupled maps is that even in the case of infinite Λ, each map feels a nonzero -O(ε) -influence from some of the other maps, while in globally coupled maps the interaction strength among any two given units goes to zero when N → ∞ and only the cumulative effect of many interactions has an effect on the dynamics. CMLs originated as discretized models of continuous spatially extended systems such as fluids and systems of chemical reactions with diffusion.The book [KT01] reviews the behavior of CML as investigated in the physics literature.Most of what is reported below can be found there. As for the study of globally coupled maps, the local dynamics mostly employed in the study of CMLs are logistic and tent maps to capture chaotic dynamics with intricate bifurcation structure.Numerical studies ([Kan89b, Kan89c, GS91, AP93, G + 94, AN94, LL94, BKK94, CM95, KP95, BV96, CMPS98, FGV02]) showed that CMLs exhibit a great variety of behaviors: i. Periodic behavior.In this state, the lattice is divided in various connected domains grouping nearby sites.Within each domain, the sites have periodic dynamic with the same period.These states are observed for example in coupled logistic maps on a 1D lattice with parameter in the doubling cascade window. The subdivision into domains depends on the initial condition.Different initial conditions lead to different domains with possibly different characteristic periods.The number of possible domain configurations scales exponentially with the system size (this is analogous to the attractor crowding discussed in Section 2.1). ii. Spatial bifurcations.Starting from a configuration as described in i. 
and increasing the parameter of the logistic map slightly, one observes that the domains tend to remain intact, but the dynamics within each domain bifurcates.In particular, it first goes through a period doubling cascade, until it eventually becomes chaotic.Thus one ends up with orbits that are periodic on some domains and chaotic on others. iii.Spatiotemporal Intermittency.Further increase of the parameter can lead to destruction of the domains.In this case the dynamic looks non-stationary with each site alternating between stretches of time where it exhibits quasi-periodic behavior, and abrupt switches to erratic motion (temporal intermittency).Furthermore, at the same instant of time, some sites exhibit periodic behavior, while other show irregular dynamics (spatial intermittency). iv. Fully Developed Chaos.Further increase of the parameter for the local maps makes every site undergo chaotic motion.The orbits at each site become uncorrelated on large scales.In some cases transition to chaos happens for effect of the coupling. v. Travelling Waves.These appear in the range of parameters for the logistic map discussed at points i. and ii., if the coupling strength is increased.In this case the domains are not invariant anymore, but they can move across space. CMLs with unidirectional coupling, e.g. on a 1D lattice where each node receives an interaction only from its neighbors on the left, exhibit interesting phenomena not observed in systems with more general coupling [SJ01,KZ01]. Coupled map lattices have been extensively studied also using tools from ergodic theory.In this branch of the literature, the local dynamics are usually uniformly hyperbolic (e.g.uniformly expanding) and the coupling strength is weak.[CF05] reviews early works in its introduction, and collects also several papers on the topic. A seminal work is [BS88].Here a 1D lattice of coupled uniformly weakly expanding maps is considered.The evolution equations look like where f is a uniformly expanding map.It was expected that if there are only finitely many coupled maps (e.g. on a finite periodic 1D lattice) and ε was sufficiently small so that the resulting map was expanding, then the system had a unique absolutely continuous invariant measure.The question was if also the infinite system (e.g. on Z), admitted a unique SRB8 measure for ε small but different from zero.In [BS88] this question was answered in the affirmative with the use of symbolic dynamics. SRB measures for coupled map lattices where the local dynamics has an hyperbolic attractor were studied using approaches from thermodynamics (e.g.polymer expansions Results on the spectral properties of Perron-Frobenius operators for various types of CML can be found in: [Kel96]; [BDEI + 98]; [FR00] for analytic coupled maps using a cluster expansion; [Rug02]; [Jia03] using the thermodynamic formalisms fo transfer operators; Finally in [KL06] a general framework for the study of Perron-Frobenius operators of coupled expanding maps has been put forward.In this paper the authors construct Banach spaces on the infinite dimensional phase space and a direct proof of the presence of a spectral gap is provided.The argument exploits the uniform expansion of the uncoupled dynamics, and the local nature of the interactions. 
In [MVM97] stochastic stability of the Gibbs states is investigated, while [JdlL00] studies linear response.Some works study finer statistical properties of CMLs: [Bar02] investigates limit theorems; [BA04] studies large deviations; escape rates in coupled map lattices with holes are studied in [BF11] using symbolic dynamics, and in [FGGV18] using the perturbation theory of transfer operators (with applications to synchronization). Increasing the coupling strength, the picture with a unique SRB measure is destroyed and one witness the appearance of: multiple Gibbs states [KKN92, LMM95, Bla97, GM00, BK06]; coherent structures [BLL90,Bla13].See [BLMMR92] for an example with simple uncoupled dynamics and coupling, where a full picture concerning bifurcations is rigorously established.Phase transitions and bifurcations in CML are rigorously studied also in [dM10] [BT98]. [Jus98, AF00, Jus01] focus on the topological properties of piecewise affine CMLs (rather than the measure theoretic ones presented above) using symbolic dynamics. In [KL09] and [BS22], the authors study maps coupled by collisions.Here uniformly hyperbolic maps are coupled to each other by rare but very strong interactions: on most of the phase space the system is uncoupled apart from a small set where the interactions can be large. Coupled map networks Coupled maps with more general types of coupling structures have been considered in the literature.Usually the maps are assumed to occupy the vertices of a graph and the presence of an edge prescribes an interaction.These systems have been considered in [KY10] and were termed coupled map networks.Here the coupled maps are smooth and uniformly expanding, and the interactions (among maps connected by an edge) are piecewise affine.The equations describing the evolution of N coupled maps on the 1D torus can be written as where M ij is a matrix of weights associated to each directed edge from node j to node i.The paper provides sufficient conditions involving the matrix (M ij ) for the resulting dynamics to be piecewise hyperbolic. In [PvST20] the authors study uniformly expanding coupled maps on heterogeneous networks with evolution equations where (A ij ) is the adjacency matrix of an heterogeneous graph, i.e. having most of the nodes making very few connections (low degree nodes) and a few nodes (called hubs) being connected to a large number of nodes9 .The parameter ∆ is the maximum in-degree of the network.Here it was showed that a mean-field reduction can be made for the dynamics of the hub nodes where the average of the interaction is substituted by an expectation, and the reduction holds for times exponentially large in the system's size. For another treatment of the effect of the structure of interactions in shaping the dynamics see [ABM10]. Self-Consistent Transfer Operators As we have argued above, self-consistent transfer operators (STOs) arise as thermodynamic limit of coupled maps.More generally, to define a self-consistent operator acting on measures one needs to specify a mapping that to each measure associates a linear operator on measures; the STO than acts taking a measure, associating the corresponding linear operator to it, and applying this operator to the measure itself.This is made precise in the following definition. Definition 3.1.Given a Borel space (X, B)10 , let's denote by M(X) the set of finite signed Borel measures on X and V ⊂ M(X) a subspace. Denote by End p (V ) the set of linear endomorphisms of V preserving the total measure of X, i.e. 
such that for every A ∈ End p (V ) and µ ∈ V , A[µ](X) = µ(X). A mapping T : V → End p (V ) defines the self-consistent operator T : V → V as with the above notation standing for the operator T (µ) applied to the measure µ. T is a nonlinear selfmap of V .Depending on the context, the object defined above has been given different names.In the context of nonlinear Markov chains it is referred to as nonlinear Perron-Forbenius operator. The main goal is to study the properties of T from knowledge of the mapping T .Notice that given any map P : M(X) → M(X) such that P(µ)(X) = µ(X) for all µ ∈ M(X), there exist (many) mappings T : M(X) → End p (M(X)) such that the associated self-consistent operator T equals P. It is therefore crucial to restrict to some specific classes of T to obtain self-consistent operators amenable to study.Below we list some possible setups. • Average of self-consistent operators.Consider a measurable map γ : X → End p (V ), and define T : V → End p (V ) as and The above can be interpreted as an average of transfer operators, where the average is with respect to the measure it's applied to. • Nonlinear Markov Chains.As a particular example of the above, consider P : B × X × X → R + 0 such that for every x, y ∈ X, P (•, x, y) : B → R + 0 is a probability measure and for all P (B, •, •) is measurable for all B ∈ B. P should be interpreted as a y dependent transition probability.Define T : M(X) → End p (M(X)) as • Globally mean-field coupled maps with full permutation symmetry. For simplicity let X = T = R\Z, and consider functions f : T → T and h : T × T → R and the system of globally coupled maps given by where x i (t) describes the state at time t of the i-th unit.Defining for any µ ∈ M(T), f µ : T → T as where (f µ (N) t ) * denotes the push-forward of f µ (N) t . This leads to the definition of T : M(T) → End p (M(T)) as T (µ)[ν] = (f µ ) * ν and Under some continuity assumptions on f and h, one can see that if µ t+1 converges weakly to T µ (see e.g. the introduction of [ST21] for a discussion).In this sense, T describes the thermodynamic limit of the system. • Globally coupled maps without symmetry.Again let X = T = R\Z, given functions f i : T → T and h ij : T×T → R, consider the system of globally coupled maps given by For any µ ∈ M(T N ) let F µ,i : T → T be   dµ(y 1 , .., y j ) mod 1 and F µ : T N → T N given by F µ = (F µ,1 , ..., F µ,N ).Then define T : The corresponding T is an extension of the one at the point above and becomes the previous one in the case where f i = f and h ij = h for all i, j ∈ [1, N ], and µ = µ 1 ⊗ ... ⊗ µ 1 is a product measure with all identical factors. • Parametric families of maps.Given a parametric family of maps on X, {f γ } γ∈Γ , and γ : M(X) → Γ, then one can define T (µ)[ν] = (f γ(µ) ) * ν and Below, we are going to illustrate some of the main available techniques for the analysis of self-consistent transfer operators and their stable fixed points (Section 3.1).We will then discuss numerical and rigorous results on linear response for fixed points of STOs and globally coupled maps (Section 3.2), and some further directions in the study of STOs when their attractors are different from fixed points (Section 3.3). 
Stability for fixed points of STOs In this section we describe the main frameworks used to study stability and convergence to fixed points of STOs arising from globally coupled maps.The objective is not to give the results in their most general formulations, but restrict to a simple example where only the core ideas of each reviewed framework are highlighted. The example is the following: Consider a system of coupled maps with x i (t) ∈ T and where f (x) = 2x mod 1 (the doubling map), and h is some smooth coupling function.The corresponding STO is with P the linear transfer operator of the doubling map, which on L 1 (T) acts as and L ε,µ the transfer operator associated to the mean-field coupling, i.e. the transfer operator of the map g µ (x) := x + ε h(x, y)dµ(y) mod 1. We are going to review three methods to study fixed points of the family of STOs above.The first two are devised for the case of small coupling, while the last one treats some situations that can arise in the case of strong coupling.These are: a functional analytic approach that extends the spectral gap properties of linear operators to STOs (Section 3.1.1);an approach with convex cones that studies the contraction properties of STOs with respect to the Hilbert projective metrics (Section 3.1.2);and an approach devised to study synchronized states (Section 3.1.3). Functional analytic approach This is the approach that under different forms was used e.g. in [Kel00, SB16,BKST18]. The strategy consists of the following steps: Step 1. Use Schauder's fixed-point theorem to prove that T ε has a fixed point µ * . Step 2. Use the spectral gap of the family of linear operators PL ε,µ and continuity of µ → PL ε,µ to show that the fixed point is attracting when ε is sufficiently small.Since we are only going to deal with absolutely continuous measures, if µ has density ϕ we use notations: L ε,ϕ and g ϕ . Step 1.Consider the set where |ϕ| Lip = sup x =y ϕ(x)−ϕ(y)| |x−y| denotes the Lipschitz semi-norm.The first thing to notice is that for ε > 0 sufficiently small, there is L for which B L is forward invariant under action of T ε . Furthermore T ε is continuous. Lemma 3.2.With the parameters as in Lemma 3.1, T ε is is continuous in the C 0 topology 11 . Proof.See Appendix A. Since B L is a convex, compact (in C 0 ) set, and T ε is continuous, by Schauder's fixed point theorem T ε has a fixed point ϕ * ∈ B L . 11 By C i topology we mean the topology generated by the norm Step 2. This step is a bit more involved.First of all, one needs to modify Step 1. a bit to find a forward invariant set of functions more regular than just Lipschitz, for example C 2 with uniformly bounded first and second derivative: In particular, arguments as those presented in Step 1 allow to conclude that the set B L 1 ,L 2 is forward invariant for suitable values of L 1 and L 2 , and ϕ * is in fact C 1 and has Lipschitz derivative with Lipschitz constant bounded by L 2 .Then, for every function with α ∈ (0, 1), K ≥ 0 depending on L 1 and L 2 .Equation ( 15) is a spectral gap condition for the linear operator PL ε,ϕ * which is implied (for |ε| sufficiently small) in a standard way by the uniform expansion and smoothness of the map f • g ϕ * . Equation ( 16) is a continuity relation for the family of operators {L ε,ϕ } and is proven in Lemma 3.3. Lemma 3.3.Assume that h is C 2 and ϕ * is as above, then there is Proof.See Appendix A. 
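As a complement to the functional-analytic argument above, the following numerical sketch iterates the self-consistent operator of (13) for the doubling map coupled through g_µ(x) = x + ε ∫ h(x, y) dµ(y) mod 1. The density is represented on a uniform grid and pushed forward by a crude histogram (Ulam-like) discretization; the specific coupling h and all parameters are illustrative, and this is only meant to show what "iterating T_ε" looks like in practice, not to reproduce the spectral-gap argument.

```python
import numpy as np

M, K = 512, 8                                # grid bins on [0,1) and sample points per bin
grid = (np.arange(M) + 0.5) / M              # bin centers
pts  = (np.arange(M * K) + 0.5) / (M * K)    # sub-sampling points used for the push-forward

def coupling(x, y):
    """An illustrative smooth coupling h(x, y); the section leaves h generic."""
    return np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

H = coupling(pts[:, None], grid[None, :])    # h evaluated on (sample point, grid point) pairs

def stco_step(phi, eps):
    """One step of T_eps: push the piecewise-constant density phi forward under
    f o g_phi, where f is the doubling map and
    g_phi(x) = x + eps * int h(x, y) phi(y) dy  (mod 1).
    Crude histogram discretization, for illustration only."""
    integral = H @ phi / M                                   # int h(x,y) phi(y) dy at each sample point
    images = (2.0 * ((pts + eps * integral) % 1.0)) % 1.0    # f o g_phi applied to the sample points
    weights = np.repeat(phi, K) / (M * K)                    # mass carried by each sample point
    new_phi = np.zeros(M)
    np.add.at(new_phi, np.minimum((images * M).astype(int), M - 1), weights)
    return new_phi * M                                       # back to a density with integral ~ 1

eps = 0.05
phi = 1.0 + 0.3 * np.cos(2 * np.pi * grid)   # a smooth initial density
for _ in range(100):
    phi = stco_step(phi, eps)
print("sup-distance of the iterate from the Lebesgue density:", np.max(np.abs(phi - 1.0)))
```

For small ε the iterates settle close to a fixed density near Lebesgue, in line with the statement that the fixed point ϕ* is attracting in the weak-coupling regime; the residual distance here only reflects the discretization error of the histogram scheme.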
The cone approach In this approach, rather than studying the STO with respect to the norm of some Banach space, one considers its action with respect to the Hilbert projective metric on some convex cones of functions (see e.g.[Liv95]).To study the STO in (13), we can restrict its action to the cone of log-Lipschitz functions for some a > 0, which is endowed with the Hilbert projective metric θ a .The peculiarity of the Hilbert metric is that any linear application between two convex cones is a contraction with respect to their Hilbert metrics.For this reason convex cones have been used to study the contraction properties of transfer operators of hyperbolic maps. It is immediate to check that For what concerns L ε,ϕ we have the following Proof.See Appendix A. Eq. ( 18) and Eq. ( 19) together imply that for suitable values of a and ε 12 , there is λ ∈ [0, 1) such that If T ε were linear, this would be enough to conclude that T ε is a contraction, and standard arguments would lead to existence of a fixed point together with uniqueness and stability results.However, T ε is nonlinear, so an extra step is needed to conclude the argument.In [ST21] we used the explicit expression for the Hilbert metric θ a to prove that T ε : V a → V λa is a contraction when |ε| is sufficiently small.With some additional arguments one can use this fact to conclude that there exist a fixed density ϕ * ∈ V a and that Synchronized states and the study of their stability Let's consider the explicit choice for interaction function h(x, y) = sin(2πx) cos(2πy). Notice that the measure δ 0 is a fixed point for T ε , in fact and since 0 is a fixed point for f and g δ 0 , T δ 0 = f * (g δ 0 ) * δ 0 = δ 0 . 12 More precisely if which can always be realized for a sufficiently large. The state δ 0 can be seen as a synchronized state in the thermodynamic limit: Letting (x 1 , ..., x N ) ∈ T N be the state of the finite-dimensional system, if lim when N increases, the fraction of states x 1 , ..., x N that are further than any η > 0 from zero must go to zero, i.e. lim Below we argue that if ε is in a certain range, then δ 0 is stable in the following sense: There is To show this, one can start by noticing that fixing ε in ( 1 2π , 3 2π ), 0 is an attracting fixed point for the map f δ 0 , and there are λ ′ ∈ [0, 1) and By continuity, there is ∆ 0 > 0 sufficiently small and λ ∈ [0, λ ′ ) such that for any ∆ < ∆ 0 and µ with support contained in [−∆, ∆], |f ′ µ | < λ on [−∆, ∆] and therefore the measure and for n → ∞, L n ε µ converges weakly to δ 0 .In [ST22], the picture above is generalized to the case where multiple clusters of coupled maps interact and the number of maps in each cluster goes to infinity.For example, taking a setup with two clusters, the states of the maps in each cluster are given by x 1 , ..., x N ∈ T and y 1 , ..., y N ∈ T and the evolution equations are where f (1) and f (2) are the local dynamics in the first and second cluster, while h 11 , h 22 , h 12 , h 21 are respectively the interactions: among sites in the first cluster, among sites in the second cluster, from cluster 2 to cluster 1, and from cluster 1 to cluster 2. 
The STO associated to the infinite limit of the system above is given by T : where Notice that when µ = (δ x , δ y ), then and therefore the map G : prescribes the evolution of (δ x , δ y ).In [ST22], sufficient conditions involving G are given for the STO T to have stable fixed synchronized states.That paper considers also setups with multiple clusters where the STO has a stable fixed point which is a product of delta and absolutely continuous measures: this means that some clusters are in a synchronized state, while others are in a turbulent state.The clusters can be chosen so that the equations describing the system have full symmetry giving rise to what is sometimes called a chimera state (see Section 4.1.3). Linear response Given a high-dimensional system composed of many interacting units, how does its behavior change if the dynamics of its components is perturbed slightly?In particular, if the dynamics of the components is perturbed with a perturbation of magnitude ε, is the change in the global behavior of the system still of order ε?If yes the system is said to have linear response.Linear response of high-dimensional systems has important relations to the study of climate models [Luc18].It is generally believed that high-dimensional systems exhibit linear response of their physical relevant measures.This is in conjunction with the chaotic hypothesis of Gallavotti and Cohen [GC95] stating that high-dimensional systems are akin to Axiom A systems, for which linear response is known to hold [Rue09].In the works we review below the question of linear response is addressed in some setups of globally coupled maps and STOs. Linear response for attracting fixed points of STOs Some of the works mentioned above give sufficient conditions for the stable fixed point ϕ ε * of a parametric family of STOs T ε to be differentiable in ε, and provide a linear response formula. In [ST21], the cone approach yields differentiability of ε → ϕ ε * from an interval (−ε 0 , ε 0 ) to C 1 densities.The main idea is to consider curves γ : (−ε 0 , ε 0 ) → C k (T, R) -for a sufficiently large k -and the action T on these curves given by Loosely speaking, the strategy consists in restricting to an invariant class of curves for the action T , and use Schauder's fixed point theorem to show existence of an invariant curve γ * for T with the sought after differentiability property and such that γ * (ε) is in the invariant cone for L ε .By the discussion in Sect.3.1.2,this fixed curve γ * must satisfy γ * (ε) = ϕ ε * and therefore ϕ ε * has a differentiable dependence on ε.Once differentiability has been established, one can exhibit a linear response formula for the derivative of ϕ ε * with respect to ε. In [Gal22], sufficient conditions are given in terms of the spectral properties of the linear operator for the uncoupled system, and of the derivative of the nonlinear family of STOs {T ε } ε with respect to ε.The main requirements are that: i) T ε : B i → B i for i ∈ {w, s, ss} corresponding to three Banach spaces B w ⊃ B s ⊃ B ss with norms ii) the resolvent of the linear operator P, (Id −P) −1 is bounded on densities with zero integral from B w ; iii) an assumption that loosely speaking requires that T ε is Lipschitz in ε13 and differentiable in ε at ε = 0. 
Under these assumptions ε → ϕ ε * is differentiable at zero with respect to the weak norm • w , more precisely lim Linear response for heterogeneous systems The setup and results below can be found in [WG18,WG19].Here linear response is studied for systems of different (finitely or infinitely many) interacting units belonging to a family of maps that does not necessarily exhibit linear response: the state of the global system at time t is x(t) = (x 1 (t), ..., x N (t)) ∈ M N , and the time evolution is given by The maps f a i are a version of the logistic map with parameter a i14 ; h Φ(x) is a mean-field interaction term where is a mean-field parameter; and εg(x) is a perturbation depending on the parameter ε w.r.t. which linear response is investigated.Different units can have different values of a i which are assumed to be drawn i.i.d. with respect to some (smooth enough) probability distribution.Crucially, depending on the distribution of the parameters a i , the physical measure(s) of the maps f a i may present or fail to exhibit linear response. Considering a global observable15 and letting µ ε on M N be a physical invariant measure of the system, one wonders whether ε → E µε [Ψ N ] is differentiable.For example, in the case where there is no mean-field coupling, i.e. h Φ = 0, then where µ ε,i are the physical measures for the maps f a i (x) + εg(x).It is immediate that if the maps f a i all satisfy linear response when perturbed adding εg(x) to their equations, then so will the global uncoupled system.More surprisingly, it is shown in [WG18] that even if the local dynamics don't satisfy linear response, the global finite-dimensional system does, provided that the distribution of the a i is sufficiently smooth.In the same paper, it is shown that for some singular distributions of the a i , linear response fails for the global system.Another interesting finding in [WG19] is that when there is a mean-field coupling among the units, i.e. h Φ = 0, the authors bring numerical and analytical evidence showing that even if the microscopic units exhibit linear response, in the thermodynamic limit, the global system can fail to do so.This can be rephrased by saying that even if the microscopic components leading to the definition of a STO as in (10) exhibit linear response, the dynamic of the STO might not satisfy linear response.Examples are brought where the STO describing the thermodynamic limit exhibits a fixed point or limit cycle attractor and shows linear response under perturbations, and examples of STOs with more complicated attractors that do not satisfy linear response. 
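In practice, the question of whether ε ↦ E_µε[Ψ_N] is differentiable is often probed by a finite-difference experiment of the kind sketched below: simulate the coupled system at several perturbation strengths, estimate the long-time average of a global observable, and check whether the response is approximately linear near ε = 0. The concrete local family, coupling, and observable in the snippet are toy placeholders chosen by us, not the ones in [WG18, WG19].

```python
import numpy as np

def simulate_average(eps, N=500, T=20000, burn=2000, seed=0):
    """Long-time average of a toy global observable Psi_N = (1/N) sum_i x_i for a
    heterogeneous system x_i(t+1) = a_i x_i (1 - x_i) + eps*sin(2 pi x_i) + weak mean-field.
    All concrete choices (local maps, perturbation, observable, coupling) are illustrative."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(3.6, 3.9, N)                 # i.i.d. map parameters drawn from a smooth distribution
    x = rng.uniform(0.0, 1.0, N)
    total, count = 0.0, 0
    for t in range(burn + T):
        mean_field = x.mean()
        x = a * x * (1.0 - x) + eps * np.sin(2 * np.pi * x) + 0.01 * (mean_field - x)
        x = np.clip(x, 0.0, 1.0)                 # keep the toy dynamics inside [0, 1]
        if t >= burn:
            total += x.mean()
            count += 1
    return total / count

eps_values = np.array([-0.02, -0.01, 0.0, 0.01, 0.02])
averages = np.array([simulate_average(e) for e in eps_values])
slope = np.polyfit(eps_values, averages, 1)[0]   # finite-difference estimate of d E[Psi_N] / d eps
print("E[Psi_N](eps):", np.round(averages, 5))
print("estimated linear-response coefficient:", round(slope, 4))
```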
Towards the study of more complicated attractors and behaviors With few exceptions, the situations described in the previous sections can rigorously deal with three cases: small coupling, so that the STO is close to a linear transfer operator; evolution of states close to delta measures for which the STO is close to a finite-dimensional map; or a combination of the two situations.General strategies that deal with genuinely nonlinear infinite-dimensional operators are lacking.For example, one wonders if there is a general framework to study STOs with stable fixed point in a regime that is "far" from linear or a finite-dimensional map, and also if it is possible to rigorously study bifurcations of STOs where a stable fixed point looses stability.Furthermore, being a nonlinear transformations of an infinite-dimensional space, STOs are expected to have attractors and dynamics more complicated than periodic dynamics.This gives rise to the question if it is possible to study STOs with multi-dimensional attractors where the dynamics on the attractor and in a neighborhood is amenable to rigorous analysis.Below we give an example of an innocent looking system of globally coupled maps (perhaps the simplest possible) whose associated STO exhibits very complicated behavior. Mean-field coupled rotations Consider a system of coupled maps where each unit evolves according to a 1D rotation of an angle that depends on the state of all the units via a mean-field.Fix a continuous map h : T → R and define the system of globally coupled maps and the associated STO This system has a very simple formulation, but as we are going to show, can produce complicated behavior for the associated self-consistent transfer operator.Consider the one parameter family of rotations {R θ } θ∈T with R θ : T → T and R θ (x) = x + θ.Notice that this collection forms a group that acts on the measures in M(T) as Given a measure µ, denote by C µ its centralizer, i.e. If C µ = {R 0 = Id}, we say that µ has no rotation symmetries.It follows immediately that if µ has no rotation symmetries, then the orbit of µ under the R θ action, {R θ µ} θ∈T , has a natural 1D torus manifold structure.Let's denote by The following proposition is immediate. Proposition 3.1.Consider µ ∈ M(T) with no rotation symmetries.Then i) T (T µ ) ⊂ T µ ; ii) T on T µ acts as the map From the above proposition it follows that the space of measures M * with no rotation symmetries forms an open subset of M(T) which, by the above proposition, is foliated into 1D tori on which the dynamics of T has great variability.To see this, take for example an absolutely continuous measure without symmetries having density ϕ.Equation (24) becomes where * denotes the convolution, and varying ϕ, i.e. moving from one invariant circle to the next, various maps compatible with the regularity of h can be found16 . Numerical Studies of STOs As the dynamics of STOs can be very complicated to study from a rigorous point of view, computational approaches are crucial to get information on these objects.Here we give two examples where the attractors of STOs and their bifurcations have been studied numerically. 
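Before turning to those two examples, a minimal numerical sketch of the mean-field coupled rotations above may be useful. On an invariant torus T_µ the self-consistent operator acts as the circle map θ ↦ θ + ∫ h(x + θ) dϕ(x) mod 1 (the convolution form of (24)); the snippet below builds this induced map for an explicit density ϕ without rotation symmetries and a continuous h (both chosen by us for illustration) and iterates it, so that one can see how different choices of ϕ and h produce rotation-like, periodic, or more complicated behavior for the mean field.

```python
import numpy as np

M = 4096
x = (np.arange(M) + 0.5) / M                     # grid on the circle [0, 1)

def h(x):
    """Illustrative continuous coupling; the section only requires h to be continuous."""
    return 0.15 * np.sin(2 * np.pi * x) + 0.03 * np.sin(4 * np.pi * x)

phi = 1.0 + 0.8 * np.cos(2 * np.pi * x) + 0.2 * np.sin(6 * np.pi * x)   # density with no rotation symmetry
phi /= phi.mean()                                 # normalize so that int phi = 1

def alpha(theta):
    """Rotation angle applied to R_theta mu:  int h(x + theta) phi(x) dx."""
    return float(np.mean(h((x + theta) % 1.0) * phi))

def F(theta):
    """Induced circle map on the invariant torus {R_theta mu}: theta -> theta + alpha(theta) mod 1."""
    return (theta + alpha(theta)) % 1.0

theta = 0.123
orbit = []
for _ in range(2000):
    theta = F(theta)
    orbit.append(theta)
print("last iterates of the induced circle map:", np.round(orbit[-5:], 4))
```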
In Section 3.1 we reviewed situations where one can prove that the thermodynamic limit of some coupled systems has a unique equilibrium measure, provided that the coupling strength is sufficiently small. Increasing the coupling strength beyond a certain threshold, one does not expect this picture to persist, and one wonders what kinds of bifurcations the system can undergo. In [Sél21] this question has been addressed employing a mix of rigorous arguments and numerical evidence.

Given ε ∈ R, consider the parametric family {f_γ}_{γ∈R} of self-maps of [0,1], where the parameter γ is evaluated at an average of µ (for F(x) = x, the center of mass of µ). For fixed µ, the resulting map is a β-transformation giving a perturbation of the doubling map. The self-consistent transfer operator is denoted T_ε. If ε = 0, then f_µ = 2x mod 1 independently of µ, the Lebesgue measure is the unique absolutely continuous invariant measure, and it is an attracting fixed point for T_0. Notice also that Lebesgue is a fixed point of T_ε for any value of ε. In [Sél21] it has been proven that for ε > 0 there is another measure, absolutely continuous with respect to Lebesgue, that is fixed by T_ε, and numerical simulations suggest that this measure is a stable fixed point for T_ε while the Lebesgue measure loses stability.

In Section 3.2.2 we reviewed numerical evidence showing that in the thermodynamic limit of coupled systems, linear response might fail. This is in disagreement with the chaotic hypothesis of Gallavotti and Cohen, which claims that high-dimensional systems are expected to exhibit Axiom A behavior. Motivated by this observation, the work in [Wor22] presents an example of an STO for which numerical evidence suggests the presence of a homoclinic tangency, implying robust non-uniformly hyperbolic behaviour.

The system considered there has x(t) = (x_1(t), ..., x_N(t)) ∈ [−1, 1]^N, with coupling Φ_N as in (23) and local dynamics given by a nonlinear perturbation of the doubling map on [−1, 1]. Notice that, with the choice of functions made there, the maps f_α are all uniformly expanding with a uniform lower bound on the expansion. Nonetheless, it is shown that the STO in the thermodynamic limit has a fixed point that is not attracting but has some unstable directions which, according to numerical evidence, are homoclinic to some stable directions.

The thermodynamic limit problem for GCMs

The following question now arises: to what extent does the thermodynamic limit given by an STO describe the finite-dimensional system? This is a particularly relevant question having in mind applications to systems composed of a number of units that, although large, has an order of magnitude much smaller than, e.g., systems from statistical physics. For example, while a macroscopic sample of any gas or solid-state system is composed of ∼10^23 molecules, the brain has "only" ∼10^10 neuronal cells, with some substructures (e.g. nuclei and bulbs) counting ∼10^3 neurons.

In [Tan22], we provide quantitative estimates for the convergence to the thermodynamic limit in the case where the system of coupled maps is uniformly expanding, and give sufficient conditions involving the expansion and interaction strength ensuring that the limit approximates the finite-dimensional system for all times up to an error of order N^(−γ) with γ < 1/2, where N is the number of coupled units. As a corollary, one can show that in the limit the system exhibits propagation of chaos (see Section 4.1.1 for more about propagation of chaos).
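A numerical sketch in the spirit of [Sél21]: iterating the self-consistent transfer operator of a coupled doubling map on a grid and watching whether a non-constant density emerges. The concrete coupling f_µ(x) = 2x + ε m(µ) mod 1 with m(µ) = ∫ sin(2πy) dµ(y), the grid discretization, and the value of ε are illustrative assumptions; the exact model in [Sél21] may differ, and whether this particular toy destabilizes Lebesgue depends on those choices.

```python
import numpy as np

# Self-consistent transfer operator for f_mu(x) = 2x + eps*m(mu) mod 1,
# with m(mu) = int sin(2*pi*y) d mu(y).  Lebesgue (phi = 1) is a fixed point for every eps,
# since 2x + c mod 1 preserves Lebesgue for any constant c.
M = 2048                          # grid points on [0, 1)
x = (np.arange(M) + 0.5) / M

def sto_step(phi, eps):
    c = eps * np.sum(np.sin(2 * np.pi * x) * phi) / M       # mean-field parameter m(mu)
    # transfer operator of x -> 2x + c (mod 1): preimages are (x - c)/2 and (x - c)/2 + 1/2
    pre1 = np.mod(x - c, 1.0) / 2
    pre2 = pre1 + 0.5
    idx1 = (pre1 * M).astype(int)
    idx2 = (pre2 * M).astype(int)
    return 0.5 * (phi[idx1] + phi[idx2])

phi = 1.0 + 0.3 * np.cos(2 * np.pi * x)    # start away from the constant density
for _ in range(400):
    phi = sto_step(phi, eps=0.9)

# If the iterates settle on a non-constant density, the Lebesgue fixed point is not attracting.
print(phi.min(), phi.max())
```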
Furthermore, in that paper globally coupled maps lacking symmetry were introduced: the evolution equations for a system of N different coupled maps involve, for each coordinate i, the remaining coordinates x̂_i(t) (all coordinates but the i-th one). To describe the behavior of this system when N is finite but large, the STO acting on M(T^N) defined in (11) was introduced. The heuristic behind this definition is that, considering a product probability measure µ = µ_1 ⊗ ... ⊗ µ_N, concentration results suggest that when N is large and (x_1, ..., x_N) is sampled according to µ, the average in (25) can be approximated by its expectation with respect to µ with high probability.

To prove the result above, it has been shown that these globally coupled maps preserve a class of measures that are close enough to being products, so that they satisfy the usual concentration properties of independent bounded random variables, and with respect to which the approximation in (26) holds with high probability, thus allowing one to greatly simplify the evolution equation. Roughly speaking, this implies that, picking an initial condition with respect to a measure from this class, with high probability the evolution is indistinguishable from the application of the STO up to a small error. Crucial to this argument is the study of the evolution of conditional measures on non-invariant foliations with leaves along the coordinate directions.

Mean-field models in continuous time and other topics

The objective of this section is to give some pointers to other branches of the literature on the study of mean-field interacting systems beyond the study of coupled maps. Some of the topics we mention are established fields of research, and our exposition is going to be very superficial, with no pretense of completeness.

Mean-field interacting particle systems and propagation of chaos

As a prototypical example of mean-field models in continuous time, consider the Markov process (X^N_1(t), ..., X^N_N(t)) ∈ R^{Nd} describing N identical entities coupled through a mean field plus noise:

dX^N_i(t) = (1/N) Σ_{j=1}^N h(X^N_i(t), X^N_j(t)) dt + dW_i(t),   (27)

where h : R^d × R^d → R^d is a coupling function and (W_i(t))_i are N independent Brownian motions. It is well known that if h is Lipschitz and one picks an initial condition such that (X^N_i(0))_{i=1}^N are i.i.d., then the SDE admits a solution. Other important models of interacting particles are deterministic (there is no dW_i(t) in the equations) or have a singular h.

The main objective is to study the above system when N → ∞. To this end, one can introduce the empirical measure of the particles, in terms of which (27) can be rewritten. This is the continuous-time analogue of the discrete-time evolution given in (8) in Section 2.2.

The weak coupling among the components and the choice of initial condition with i.i.d. components suggest that in the limit N → ∞, for every k ∈ N and t ≥ 0,

(X^N_1(t), ..., X^N_k(t)) → (X_1(t), ..., X_k(t))   (28)

in distribution, where the (X_i(t)) are i.i.d. processes solving

dX(t) = ∫ h(X(t), y) dµ_t(y) dt + dW(t),   (29)

where µ_t is the distribution of X(t). These equations are also referred to as McKean-Vlasov equations. When (28) holds, the system is said to exhibit propagation of chaos. Notice that the self-consistent transfer operator is the discrete-time analogue of the generator of (29). The presence of noise and the exchangeability of the system (full permutation symmetry) are fundamental ingredients used to prove propagation of chaos in the setup above.

In general, to make the above picture rigorous one has to check that i. the system of coupled SDEs in (27) is well posed; ii.
the self-consistent SDE in (29) describing the candidate limit is well posed; iii. the limit in (28) holds.

Various approaches to prove (28) exist, among which coupling methods were the first to be employed, while large-deviation estimates and entropy bounds are among the most recent. The full permutation symmetry of the system is central to the above analyses.

Notice that in discrete time: point i. corresponds to the definition of the finite-dimensional map, and ii. corresponds to the existence of the mean-field map, so there is nothing to prove; an effort has to be put into iii., especially to gain explicit rates of convergence (and this is the topic of Section 3.5); while most of the analysis of STOs presented in previous sections corresponds, in the continuous-time setup, to the study of the solutions of (29).

For more on interacting particle systems: [Szn91] is a classical reference on propagation of chaos; [Spo12] and [CIP13] give accounts of the study of interacting systems in statistical mechanics; [GOS+19] contains a nice introduction to the topic together with a discussion of applications and some recent contributions; [CD21] is a very thorough review containing an account of various setups of interacting particle systems, detailed definition(s) of propagation of chaos and different approaches to prove it.

Coupled Oscillators

Oscillating systems are ubiquitous in nature and in artificial systems (see Section 4.3) and often interact with one another. An example of coupled oscillators is given by the system of N coupled differential equations

θ̇_i(t) = ω_i + (1/N) Σ_{j=1}^N h_{ij}(θ_j(t) − θ_i(t)),

where the ω_i are called natural frequencies and give the angular velocities at which the oscillators would rotate if uncoupled. They are in general all different and are usually assumed to be real i.i.d. random variables whose distribution has some density g. This model originated as a simplification of the dynamics of weakly coupled, almost identical ODEs having an attracting limit cycle (see [Win67]). It was first intensely studied by Kuramoto in the case where h_{ij} is the sine function, yielding what is known as the Kuramoto model [Kur84]:

θ̇_i(t) = ω_i + (K/N) Σ_{j=1}^N sin(θ_j(t) − θ_i(t)).   (30)

The main starting observation is that when the coupling strength K ≥ 0 is small, the difference in the natural frequencies causes the oscillators to spread and the system appears to be disordered. When K is increased above a certain threshold, the system synchronizes.

A first attempt to study this phenomenon involved the introduction of the variables r and ψ defined by

r e^{iψ} = (1/N) Σ_{j=1}^N e^{iθ_j},

with respect to which the equations in (30) can be written (after a change of variables) as θ̇_i = ω_i + K r sin(ψ − θ_i). Notice that r = 0 implies a "disordered" distribution of the angles θ_i, while r = 1 implies θ_1 = θ_2 = ... = θ_N, i.e. a fully synchronized state; therefore this parameter gives important information on the state of the system. The evolution of r(t) has been rigorously studied in [SM91] for N → ∞. In this limit, the state of the system at time t was assumed to be described by the collection of densities ρ_{ω,t}(θ) := ρ(θ, ω, t) (this corresponds to the state of the GCM in the thermodynamic limit), i.e. the density of the distribution of the oscillators having natural frequency ω at time t, with the function ρ satisfying a PDE originating from a continuity equation, which gives an evolution law corresponding to the STO in the GCM setup.
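A short simulation of the Kuramoto model (30), tracking the order parameter r to exhibit the transition from incoherence to synchronization as K crosses its threshold. The Cauchy frequency density, the integration step, and the system size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def kuramoto_order_parameter(K, N=2_000, dt=0.01, T=4_000):
    """Euler integration of the Kuramoto model (30); returns the time-averaged
    order parameter r over the second half of the run."""
    omega = 0.5 * rng.standard_cauchy(N)         # natural frequencies, density g (illustrative)
    theta = rng.uniform(0, 2 * np.pi, N)
    rs = []
    for t in range(T):
        z = np.mean(np.exp(1j * theta))          # r e^{i psi} = (1/N) sum_j e^{i theta_j}
        r, psi = np.abs(z), np.angle(z)
        theta += dt * (omega + K * r * np.sin(psi - theta))
        if t > T // 2:
            rs.append(r)
    return np.mean(rs)

for K in (0.5, 1.0, 2.0, 4.0):
    print(K, round(kuramoto_order_parameter(K), 3))
```

For a Cauchy density of half-width 0.5 the mean-field prediction places the synchronization threshold at K = 1, so r should remain small for the first value of K and grow for the larger ones.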
The study of the Kuramoto model has generated a large body of work. We direct the reader to the excellent paper [Str00]. For other reviews see also: [ABV+05]; [PR15] for generalizations and other models of coupled oscillators; [RPJK16] for works on oscillators coupled on various types of networks.

Chimeras

Roughly speaking, a chimera is a state of a system of coupled units where part of the units exhibit coherent motion, e.g. are synchronized, while another part exhibits erratic, incoherent behavior (this is analogous to the partially ordered phase described in Section 2.1). Most surprisingly, chimeras were observed early on in systems of coupled oscillators having full permutation symmetries, suggesting the presence of symmetry breaking [AS04, AS06].

There is no general consensus on what constitutes a chimera. A review of different characterizations can be found in [Hau21], while [BA16] proposed a rigorous mathematical definition. In some cases chimera states are expected to arise as stationary states of the system [MPA16], while in others they arise only as long transients [WO11].

Coupled Systems and their Symmetries

Given X ⊂ R^N and a vector field f : X → X, the ODE ẋ = f(x) has the linear transformation γ : R^N → R^N as a symmetry if f(γx) = γf(x). Permutation symmetries are an example, where one or more coordinates can be swapped without changing the vector field. When the vector field describes the continuous-time evolution of units coupled on a graph, it is likely that the symmetries of the graph correspond to symmetries of the dynamics, and therefore contain information on the time evolution [GS03, FG09]. In fact, they can explain synchronization and coherence patterns as well as the bifurcations that these patterns undergo when the parameters of the system change. This is a notable example of how the interaction structure can influence the dynamics.

For a review see [GS15]. See also [AADF11, RS14, RS15] and references therein. For a study of the role of symmetry in a system of identical coupled oscillators see [AS92]. For a study of the role of symmetries on mean-field limits see [BS21].

Different types of couplings

Interactions shape the dynamics of complex systems and can produce behavior drastically different from that of the local dynamics. It was noted in the previous section that the coupling structure (who is connected to whom) influences the resulting dynamics. The particular form of the coupling (e.g. the function h in (9)) also plays an important role (see e.g. [SPMS17]).

In contrast with what we have presented so far, in this section we review works where the coupling changes with time (adaptive networks) and where the coupling can arise among multiple units (higher-order networks) rather than from pairwise interactions.

Adaptive networks

In this type of network, the interaction among the units changes according to the internal state of the system. For example, some links in the graph of interactions could be severed or added with time, or, more generally, the coupling strength among different nodes could be increased or decreased. These networks capture many phenomena in real-world systems; a prime example is plasticity between neurons, by which signal transmission at synapses is strengthened or weakened to modify the dynamics (plasticity is at the base of development, learning, and memory).
For a mean-field reduction approach to adaptive networks related to plasticity see [DBB22] and references therein. For an application to networks on power grids see [BYS21]. For much more on adaptive networks and examples arising in real-world systems see [GB08, GS09, Say14]; for an earlier reference see [SB81].

Higher-order networks

All the coupled systems considered so far have been characterized by pairwise additive interactions, meaning that the interaction term is given by the sum of all the interactions between a node and each other node in the network. In contrast, systems coupled on higher-order networks ([Bia21, BGHS21]) have interaction terms where the interaction can also be among 3 or more units at the same time. For example, one can imagine a situation where the interaction strength between two nodes is modulated by a third node in the network. In this case, the interaction term depends on the coordinates of all three nodes.

As an example, the Kuramoto model discussed in Section 4.1.2 arises as a first-order approximation of a system of interacting limit cycles with only pairwise interactions; in contrast, [BAR16] contains a derivation of the higher orders, where the interaction terms depend on multiple oscillators. Higher-order networks have recently been shown to arise also as a result of the choice of coordinates [NOEE+22]. Mean-field coupled higher-order networks and their thermodynamic limits have been studied in [BBK22a, BBK22b, GJKM22].

Mean-field coupled models as models of real-world systems

In this section we give some pointers to reviews and selected works on modeling real-world systems via mean fields.

Globally coupled oscillators and maps arise as models of several physical systems. Some lists of applications can be found in the introductions to [Kan89b, Kan91, Str00]. Among these, one finds Josephson junction arrays, charge density waves, nonlinear optics, coupled lasers, and microwave oscillators.

Mean-field coupled maps and flows have received particular attention for their ability to reproduce behavior observed in systems of biological origin. This was noted very early on, among others, by Kaneko [Kan94] (see also the more recent [Kan15]), who reviewed several biological systems in which coherent structures like those described in Section 2.1 were observed to arise as the result of the interaction of many components. Another feature of GCMs that recalls the functioning of some biological systems is the presence of a great variety of attractors that the system can visit under perturbation due to external forcing. This characteristic can allow for some computational mechanisms: as the external factors change, orbits are sent to a different attractor that encodes a particular stimulus or some features of the stimulus.
The use of mean-field models to simulate the behavior of globally coupled neurons has a long list of contributions. Concerning map-based models of neurons (i.e. models in discrete time), a review is given in [ICS11], which contains a list of studies on globally coupled maps describing the evolution of ensembles of neurons. Here we mention Rulkov maps [Rul02], which model chaotic bursting, a firing pattern where stretches of high-frequency spiking alternate (in an erratic fashion) with stretches where the neuron is at rest. A system of coupled Rulkov maps consists, for i = 1, ..., N, of a fast variable x_i and a slow variable y_i, with parameters α ≈ 4.2 and σ = β ≈ 0.001. The fast variable x_i describes the membrane voltage of the neuron, while the slow variable y_i is an internal variable responsible for the switching between the resting and chaotic phases. Mean-field coupled Rulkov maps and their synchronization are studied in [Rul01].

A review of models of coupled oscillators that capture some aspects of neuron dynamics can be found in [BGLM20]. Here we mention Ermentrout and Kopell's theta model, a continuous-time 1D model of tonic spiking activity in neurons. A single neuron is described by an ODE on the unit circle depending on a resting potential r < 0 and on the current I(t) arriving at the neuron at time t. When I(t) < |r|, the system has an attracting fixed point and the dynamics is at "rest". At I(t) = |r| the system undergoes a saddle-node bifurcation, and for I(t) > |r| orbits rotate around the circle, corresponding to a "tonic spiking" phase. Coupling theta models one obtains equations of the same form in which I_i(t) now depends on {θ_j(t)}_{j=1}^N; for instantaneous synapses, for example, it is given by an average over j of a pulse-like function of the phases θ_j(t). A study of the dynamics is undertaken using approaches similar to those described in Section 4.1.2 for the Kuramoto model [KYY+14, Lai18].

The Kuramoto model can also arise as a phase-reduction model of coupled neurons [Izh99]. Other works study mean-field coupled models of integrate-and-fire neurons (called population density models in the computational neuroscience literature) and can reproduce some characteristic oscillations recorded in brain activity that are known as rhythms [NT00, HNT01]. For a review of population density models see [BH15].

Mean-field coupled models have also been proposed to reproduce the behavior of gene networks. A general strategy to obtain mean-field models for gene regulatory networks has been proposed in [AK06]. An interesting mean-field model we mention is one where the expression of a group of genes is regulated by a common repressor field. This gave rise to a simplified model of mean-field coupled degrade-and-fire oscillators with interesting features, like clustering, which is amenable to rigorous analysis and classification of its periodic attractors and their basins [FT14, Fer18].

Related to applications is also the problem of recovering models for coupled systems from observational data. This is a particularly hard task for mean-field coupled systems, where the very small size of the interactions hinders reconstruction of the connections among units via model-based methods, and the erratic dynamics makes model-free estimation (e.g. correlation analysis) ineffective. For some contributions to this problem that specifically address mean-field coupled systems and the issues that arise in this setup see [ETvSP20].
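A sketch of mean-field coupled Rulkov maps. The map form used below (x fast, y slow) is a commonly quoted version of the chaotic bursting Rulkov map; the precise slow-variable update and coupling used in [Rul01, Rul02] may differ in detail, and the coupling strength is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

alpha, sigma, beta = 4.2, 0.001, 0.001   # parameters quoted in the text
eps = 0.05                               # mean-field coupling strength (illustrative)
N, T = 500, 5_000

x = rng.uniform(-1, 1, N)                # fast variables (membrane voltage)
y = np.full(N, -2.8)                     # slow variables
mean_x = np.empty(T)

for t in range(T):
    mf = np.mean(x)                      # mean field felt by every neuron
    x_new = alpha / (1.0 + x**2) + y + eps * mf
    y_new = y - sigma * x - beta
    x, y = x_new, y_new
    mean_x[t] = np.mean(x)

# Collective bursts appear as large slow oscillations of the mean field mean_x.
print(mean_x[-10:])
```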
A Some proofs from Section 3.1

Proof of Lemma 3.1. For a fixed ϕ ∈ B_L, g_ϕ(x) = x + ε ∫_T h(x, y)ϕ(y) dy. For |ε| small enough, g_ϕ is a diffeomorphism, and a computation shows that the resulting map is Lipschitz with Lipschitz constant 1 + O(ε), and the result follows.

Proof of Lemma 3.2. P is evidently continuous. For ϕ → L_{ε,ϕ}ϕ, the triangle inequality splits the difference into the two contributions (31) and (32). For the term in (32), the relevant quantity is O(ε), where the constant depends only on the derivatives of h and on ε, and is uniformly bounded when ε is bounded. For the term in (31), one obtains bounds with constants K_2 and K_3 independent of x, ϕ_1, and ϕ_2; in the first inequality we used that |g'_{ϕ_i}|(x) is uniformly bounded away from zero in both x and ϕ_i. Putting together all the inequalities above, the lemma is proved.

Proof of Lemma 3.3. Since P is bounded in the ‖·‖_{C^1} norm, it is enough to prove that there exists K' ≥ 0 such that the corresponding estimate holds for all ϕ ∈ B_{L_1,L_2}. Now, ‖(L_{ε,ϕ} − L_{ε,ϕ*})ϕ*‖_{C^0} has already been bounded in (34). We proceed with a bound on the first derivative. For (37), the bound on the first term follows as in (34), and the second term can be bounded with computations analogous to (35). Similar estimates imply that the expression in (38) can be bounded by O(ε) ‖ϕ − ϕ*‖_{C^0}.

Proof of Lemma 3.4.
18,656.4
2022-11-21T00:00:00.000
[ "Physics" ]
On generalized quasi-topological cubic-quartic gravity: thermodynamics and holography

We investigate the thermodynamic behaviour of asymptotically anti-de Sitter black holes in generalized quasi-topological gravity containing terms both cubic and quartic in the curvature. We investigate the general conditions required for physical phase transitions and critical behaviour in any dimension and then consider in detail specific properties in spacetime dimensions 4, 5, and 6. We find for spherical black holes that there are at most two and three physical critical points in five and six dimensions, respectively. For hyperbolic black holes we find the occurrence of Van der Waals phase transitions in four dimensions and reverse Van der Waals phase transitions in dimensions greater than 4 if both cubic and quartic curvature terms are present. We also observe the occurrence of phase transitions for fixed chemical potential. We consider some applications of our work in the dual CFT, investigating how the ratio of viscosity to entropy is modified by the inclusion of these higher curvature terms. We conclude that the presence of the quartic curvature term results in a violation of the KSS bound in five dimensions, but not in other dimensions.

Introduction

In recent years there has been an increased appreciation of the role that higher-curvature theories of gravity play in our understanding of several areas of physics, including supergravity and string theory, holography, cosmology, and black holes. These studies are motivated both by a desire to understand the ultraviolet behaviour of gravity and by a realization that terms non-linear in the curvature generically appear in perturbative calculations, particularly in string theory [1]. Furthermore, an analysis of this class of theories provides us with new insights into general relativity (or Einstein gravity) and may even provide new empirical tests of gravitational physics [2][3][4][5][6][7][8][9][10][11].

Originally proposed nearly a century ago [12, 13], a revival of interest in these theories in the theoretical physics community occurred once significant effort began to be expended on constructing a quantized theory of gravity. Adding terms quadratic in the curvature to the Einstein-Hilbert action was found to yield a power-counting renormalizable theory [14], and later a Gauss-Bonnet term was found to appear in the low-energy effective action of string theory [15]. More recently it has been shown from a variety of perspectives [16][17][18] that a proper investigation of dual theories beyond large N in the context of the AdS/CFT correspondence conjecture [19, 20] entails the inclusion of higher curvature terms.

A key challenge presented by a generic higher-curvature theory of gravity is that the equations of motion are greater than second order in the derivatives, leading to a number of pathological properties such as the appearance of ghost degrees of freedom and other instabilities. However, there exist a few classes of theories in which such pathologies are ameliorated and in certain cases are absent. The best known example is the Lovelock class of gravitational theories [21]. This class yields second-order equations of motion in arbitrary dimensions, with the Einstein-Hilbert term being one of several terms that constitute Lovelock theory in a given dimension. In this sense Einstein gravity can be regarded as a special case of Lovelock gravity.
Since this class of theories is ghost-free [15], they are viable candidates for generalizations of Einstein gravity in higher dimensions d ≥ 2k + 1 for a Lovelock theory that is kth order in the curvature. For dimensions d < 2k + 1 such terms play no role in the equations of motion. Hence only the k = 1 (Einstein gravity) term has non-trivial field equations in four dimensions, and so one must look beyond Lovelock gravity to obtain interesting higher-curvature theories with implications in (3 + 1) dimensions.

In the past several years considerable progress has been made along these lines. A broader class of quasi-topological gravity theories [16, 22] has been constructed that retains many of the nice properties of Lovelock gravity under certain symmetry restrictions. They are non-trivial in any dimension d ≥ 5 regardless of the order in the curvature. Cubic-curvature quasi-topological gravity, for example, exists in d ≥ 5 whereas cubic Lovelock gravity exists in d ≥ 7. Furthermore, their field equations, while generally greater than second order, become second order under the imposition of spherical symmetry. Moreover, the linearized equations of motion of quasi-topological gravity coincide with those of Einstein gravity on maximally symmetric spacetime backgrounds up to an overall prefactor [23], ensuring that negative-energy excitations do not propagate to asymptotic regions of constant curvature.

Even more recently, a more general class of higher-curvature gravity theories has been discovered that is of considerable interest both holographically and phenomenologically. This is because they are free of ghosts on constant curvature backgrounds, solutions of their field equations yield a metric that depends on a single metric function in the spherically symmetric case, and they are dynamically non-trivial even in four dimensions. First obtained in (3 + 1) dimensions at cubic order in the curvature [24] (a theory known as Einsteinian cubic gravity or ECG), they were found to have generalizations to any dimension [25] and to quartic powers in the curvature [26]. Generalizations to any power in the curvature exist [27] but have not been explicitly constructed.

This class of theories, referred to as Generalized Quasitopological Gravity (GQG), has several remarkable features. The constraint that spherically symmetric solutions depend on only a single metric function is found to also eliminate ghosts upon linearization of the theory on a constant curvature background [25]. Very recently it has been shown that they have a well-posed initial value problem for cosmological solutions and the potential to provide a late-time cosmology arbitrarily close to the Λ-Cold Dark Matter scenario whilst having a purely geometrical inflationary period in the early universe with a graceful exit [28][29][30]. While the field equations can be solved exactly in certain special cases [31], it is possible to analytically investigate the thermodynamics of black holes even in the generic case where analytic solutions are not available [32, 33]. Charged black branes have an interesting phase structure that is absent for both their Lovelock and quasi-topological black brane counterparts [34]. The Kovtun-Son-Starinets bound on the ratio of shear viscosity to entropy density always holds [35], and small asymptotically flat black hole solutions were found to be stable [27], which may have implications for the information loss problem.
Further holographic applications of this class of theories have been carried out, with discussions of the a-theorem, the universal stress-tensor two-point function and a universal relation for central charges [36][37][38]; more recently an extensive investigation of the holographic properties of the cubic case without massive modes was carried out, including holographic central charges, energy flux, Renyi entropies, and the shear viscosity to entropy ratio [39]. Other recent work has shown that the shadows of GQG black holes have potentially interesting phenomenological signatures [9, 10].

The thermodynamics of GQG black holes has yet to be fully explored. Previous studies of asymptotically flat solutions and AdS black holes have appeared in restricted contexts [25, 27, 34, 40], but a full study combining both cubic [25] and quartic GQG [26] has yet to be carried out. The purpose of this paper is to conduct such a study for both spherical and hyperbolic charged black holes. Our investigation will be carried out in the context of black hole chemistry, in which the cosmological constant is taken to be a thermodynamic variable [41, 42] that is interpreted as pressure in the first law of black hole mechanics [43, 44]. An extensive amount of work over the past six years has been carried out in this subject [45] and has indicated that black holes can exhibit a broad range of phase behaviour that has been observed in other areas of physics. Examples include triple points [46], re-entrant phase transitions [47], polymer-like behaviour [48], and even superfluid-like phase transitions [49][50][51], as well as a deep analogy between charged anti-de Sitter black holes and Van der Waals fluids [52]. Higher-curvature gravity theories have, using this approach, likewise been seen to have a very rich thermodynamic structure [33, 48-50], with even more results surveyed in a recent review [45].

Our paper is organized as follows. In section 2 we present charged static, spherically symmetric AdS black holes in the cubic-quartic GQG theory. This includes an asymptotic solution, a near-horizon solution, and then their match in the form of a numerical solution. In section 3 we investigate the thermodynamic properties of charged black holes in cubic-quartic generalized quasi-topological gravity by applying the black hole chemistry formalism. In section 4 we classify the phase structure and critical points for these black holes from the perspective of black hole chemistry, working in the fixed charge ensemble. In section 5 we consider the thermodynamics in the fixed potential ensemble. We analyze the four-dimensional case in detail, and then present relevant results in higher dimensions. In section 6 we present some results of holographic hydrodynamics to understand these theories in the context of the AdS/CFT correspondence. We summarize our work in section 7 and present some directions for further research.

Charged black hole solutions in cubic-quartic GQG

We begin by setting up charged static, spherically symmetric AdS black holes obtained from the equations of motion that follow from a combination of cubic and quartic terms of generalized quasi-topological gravity (GQG).

Construction of equations of motion

Consider the class of static radially symmetric metrics with radial coordinate r and time coordinate t.
Choosing coordinates so that the radius of a (d − 2)-sphere behaves as r^{d−2}, higher-curvature gravity theories up to quartic order are obtained by applying the condition g_{tt} g_{rr} = −1 in order to get a single metric function. They result in both Lovelock and quasi-topological curvature terms as well as GQG curvature terms. Concentrating only on the properties of GQG theory, we put aside the Lovelock and quasi-topological terms (for a discussion see e.g. [62, 66]) and consider Einstein gravity accompanied by cubic and quartic generalized quasi-topological terms, with minimal coupling to an Abelian gauge field. The action in d-dimensional spacetime is given in (2.1) [26], where the cosmological constant is Λ = −(d−1)(d−2)/(2ℓ²) and the quartic generalized quasi-topological term [26] is given in appendix A. The rescaled cubic coupling μ̂ and quartic coupling λ̂ are defined in terms of the arbitrary coupling constants µ and λ; the rescaling is done to simplify the field equations.

As per our requirements for radially symmetric metrics, we employ the ansatz

ds² = −N(r)² f(r) dt² + dr²/f(r) + r² dΣ²_{(d−2),k},   (2.4)

and we find that the field equations of GQG yield N(r) = constant [25]; we shall set N(r) = 1 for simplicity. (The choice N = 1/√f_∞ has been used [16] to normalize the speed of light on the boundary, i.e. to get c = 1 in the dual CFT. Here we set N = 1 and note that, by a time reparametrization of the metric, c = 1 on the boundary can be obtained if desired.) Here dΣ²_{(d−2),k} describes the (d−2)-dimensional line element of the transverse space, where k = +1, 0, −1 stands for spherical, flat and hyperbolic geometries of a surface of constant scalar curvature. As an investigation of the k = 0 case has previously been carried out [34], we shall in the sequel consider only non-planar black holes.

For a maximally symmetric space, the metric function in (2.4) becomes f_{AdS}(r) = k + f_∞ r²/ℓ², where ℓ is the AdS radius related to the cosmological constant. The quantity f_∞ = lim_{r→∞} f(r) ℓ²/r² is obtained by solving the polynomial equation (2.6), which is independent of the choice of k in the transverse section. When at least one coupling is non-zero, the higher curvature terms drive f_∞ away from unity. Since we require the same asymptotics as AdS space, we only pick positive real solutions of this polynomial. The effective radius of the AdS space is given by ℓ_eff = ℓ/√f_∞. In fact, the negative of the derivative of eq. (2.6) with respect to f_∞, denoted P(f_∞) in (2.7), yields the prefactor of the linearized equations of motion [25], and to prevent the appearance of ghosts in the particle spectrum we require P(f_∞) > 0.

For charged black holes, we include a Maxwell field F_{ab} = ∂_a A_b − ∂_b A_a, with the electromagnetic one-form defined as A = qE(r) dt (2.8); inserting this expression into the Maxwell equation determines the electric field E(r). The specific choice of prefactor makes for greater simplification later on in the field equations; we choose the constant term in the potential to be zero.

The field equation for the action (2.1) yields the relation (2.10), where m is a constant of integration and where the contributions (2.11) and (2.12) [25] are respectively generated by the cubic and quartic generalized quasi-topological terms. The parameter m has scaling dimension [length]^{d−3}, and we will see that it appears in the formula for the mass of the black hole.
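A small numerical sketch of the vacuum-selection step described above: find the positive real roots of the embedding polynomial for f_∞ and keep only those with P(f_∞) > 0. The explicit polynomial h(f) = 1 − f + μ̂ f³ + λ̂ f⁴ used below is an illustrative normalization, not necessarily the exact form of eq. (2.6) in the paper's conventions; only the selection logic is the point.

```python
import numpy as np

def admissible_f_inf(mu_hat, lam_hat):
    """Positive real roots of an assumed embedding polynomial h(f) = 1 - f + mu f^3 + lam f^4,
    filtered by the no-ghost condition P(f) = -h'(f) > 0."""
    coeffs = [lam_hat, mu_hat, 0.0, -1.0, 1.0]       # numpy wants highest degree first
    good = []
    for f in np.roots(coeffs):
        if abs(f.imag) > 1e-10 or f.real <= 0:
            continue                                 # need a positive real root for AdS asymptotics
        f = f.real
        P = 1.0 - 3.0 * mu_hat * f**2 - 4.0 * lam_hat * f**3   # P(f) = -h'(f)
        if P > 0:
            good.append((round(f, 6), round(P, 6)))
    return good

print(admissible_f_inf(mu_hat=-0.02, lam_hat=0.005))
```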
While exact solutions to the field equation seem hard to find (except in special cases [31]), studying the far and near-horizon behaviour of the metric perturbatively is still feasible, and it permits us to analytically obtain the thermodynamic quantities associated with black hole solutions. Specifically, we shall utilize information from the near-horizon expansion to describe the thermodynamics of the black holes.

Degenerate vacuum solutions

Before discussing solutions with nonzero mass and charge, we first consider solutions with degenerate vacua. These are analogous to the Lovelock-Unique-Vacuum (LUV) solutions [78][79][80][81], and occur when the field equation (2.10) has solutions of the form (2.5) with multiple degenerate values of ℓ_eff = ℓ/√f_∞ for q = m = 0, or alternatively when (2.6) has multiple degenerate solutions. It is straightforward to show that there is a one-parameter family of doubly-degenerate solutions to (2.10) for q = m = 0, given by (2.13), valid for any f_∞ and ℓ. There is also a triply-degenerate solution (2.14) with f_∞ = 2. Note that for d = 8 and d = 6 these solutions do not exist. There are no fully degenerate solutions of the LUV type to (2.10). This situation could presumably be altered if we were to add the quadratic Lovelock term (the Gauss-Bonnet term) to (2.1), but we shall not pursue this here.

Far region solution

In the asymptotic limit, the metric function takes the form of the expansion (2.15). Inserting this expansion into eq. (2.10) and requiring that it be satisfied at each order in a 1/r expansion yields the coefficients (2.16), where P(f_∞) was introduced in (2.7). We have presented the six leading terms and displayed the schematic structure of the next-order corrections to f(r)_asymp. It is obvious that for µ → 0 and λ → 0, P(f_∞) → 1, and we recover, as expected, the solution in Einstein gravity.

Degenerate (multiple) solutions of (2.6) will yield weaker falloff behaviour for nonzero m and q, as is the case with LUV-type solutions [81]. Setting m = q = 0, f = f_AdS turns out to be an exact solution of the cubic theory [31] and is also an exact solution of the field equations of the cubic-quartic theory (2.1) provided (2.6) holds. The thermodynamic behaviour of this class of solutions is uninteresting (essentially it is the same as that of the BTZ black hole [82]). We shall briefly consider this situation in section 4.1, where we shall see that it is connected with a maximal pressure.

To find the asymptotic behaviour in the degenerate case, and for simplicity, we consider the cubic and quartic parts of the action separately. In both cases, only for doubly-degenerate vacuum solutions (referred to as the critical state), in which P(f_∞) in (2.7) vanishes, do we get a non-vanishing coupling. For higher orders of degeneracy (where all derivatives of eq. (2.6) are zero up to a specific order) we find vanishing coupling. For the cubic case, where the associated critical coupling is given in (4.5) and f_∞ = 3/2, higher derivatives of P(f_∞) at this point are nonzero up to the third derivative of (2.6). The asymptotic behaviour for degenerate vacua can be studied by inserting a correction δf(r) to the asymptotic AdS metric into the equation of motion (2.10), with vanishing quartic coupling for now. For the parameters at the critical state we get a differential equation for δf(r). To find an asymptotic solution to this equation we write the correction as a power law of the form A r^x, where the amplitude A and the exponent x are determined upon insertion into (2.10).
Setting q = 0 (for simplicity), we find, as before, that in six dimensions there is no non-vanishing solution. For other dimensions two values of A are allowed (both signs of the square root). Our result for x is compatible with the result given in [83] in the case of double degeneracy. Finding subleading terms requires including more terms in the initial expansion of δf(r).

For the quartic case the critical state only exists at the quartic coupling given in (4.9) and f_∞ = 4/3, which yields two degenerate vacuum states: P(f_∞) vanishes, while the derivatives of eq. (2.6) up to the fourth are nonzero at this critical value. Repeating a procedure similar to the cubic case, at leading order for large r and A ≠ 0 we obtain a solution for which in d = 8 there is no nonzero A. For d < 8 there are two solutions for A; for d > 8 there is no real value for A unless m < 0. We obtain the same falloff in the radial coordinate as in the cubic case, since it is governed by the order of the degeneracy.

A more detailed study of 'excitations' of the degenerate cases (2.13) and (2.14) with nonzero m and q is somewhat more complicated than in the Lovelock case, since (2.10) is no longer a simple polynomial; we leave this for future investigation.

The homogeneous solution in the far region is found by inserting f(r) = f(r)_asymp + ϵ f_h(r) into eq. (2.10), where ϵ parameterizes the strength of these corrections. Substituting this expression into the field equation, we get an inhomogeneous second-order differential equation for the function f_h(r). At leading order in ϵ, assuming that µ ≠ 0 and λ ≠ 0, the homogeneous part of the equation at large r takes the form given in (2.22), and we note that it is independent of the value of k; for vanishing µ and λ it yields the well-known AdS Reissner-Nordström (RN) solution. For d = 6 there is an ambiguity in this expression for special values of the couplings; for this particular case the applicable equation is modified, and the explicit form of γ² can be read from (2.24).

The solution of (2.22) in the case of γ² > 0 is expressed in terms of the modified Bessel functions I and K of the first and second kinds, with constants A and B. In the limit of large r, f_{h+} ∼ A r^{5/2} exp(2γr), and so we must set A = 0 to ensure that the AdS boundary conditions are satisfied. As a result, no ghost excitations can propagate to infinity. We shall see shortly that the contribution of the second term can be dismissed. The solution for the particular case (2.24) can be obtained in a similar fashion using its corresponding value of γ². Notice that k does not appear in the asymptotic solution, and the numerator of γ², in conjunction with the positivity condition (2.7), ensures freedom from ghosts [25, 26]. Indeed, positivity of the numerator of relation (2.23) gives the same no-ghost condition as (2.7), and one only needs to check whether the denominator is positive as well.

To have the correct asymptotics we must choose (µ, λ) and the mass parameter in such a way that γ² > 0. To see this, note that if γ² < 0 the homogeneous solution asymptotically takes a form involving the Bessel functions J and Y of the first and second kinds. In this situation, in any dimension the solution oscillates rapidly and its amplitude becomes larger than r²/ℓ² at large r. It therefore does not approach AdS asymptotically, and so we set C_1 = C_2 = 0 to remove this homogeneous part of the solution.
For the rest of our considerations, to avoid any oscillating behaviour near infinity we restrict the solutions by the constraint γ² > 0. Finally, we note that the particular solution (2.16) decreases polynomially in 1/r and is the dominant part of the total solution f(r) = f_{h+} + f(r)_asymp for sufficiently large r; we therefore neglect the term f_{h+} in eq. (2.25) in the sequel.

Near horizon solution

To construct the solution near the horizon we consider an expansion of the metric function in powers of (r − r_+), where T is the Hawking temperature of the black hole. It is found by imposing the regularity condition on the Euclidean sector of the complexified manifold (under t → iτ) and reads T = f'(r_+)/(4π). Substituting the near-horizon expansion of the metric function into the field equation and imposing that it hold at each order of (r − r_+), we obtain, from the zeroth- and first-order terms, the relations (2.30) and (2.31), which specify the mass parameter m and temperature T in terms of the horizon radius and coupling constants. We shall use these equations for our thermodynamic investigation later on. Continuing to higher-order terms, one is able to find all other series coefficients in terms of a_2, which is a free parameter whose value is determined by the boundary condition at infinity.

We pause to comment that in a general non-linear theory of gravity a spherically symmetric metric depends on two functions, and the mass and the temperature are determined by two parameters. However, it is a special feature of the generalized quasi-topological theories (as well as of the Lovelock gravity theories) that both the mass and the temperature are determined in terms of one parameter, the horizon radius, as (2.30) and (2.31) indicate. This is an important feature of these theories, since it allows us to analyze thermodynamic behaviour without full knowledge of the solution [26, 32, 33, 35, 40].

Now that we have constructed the asymptotic and near-horizon solutions, we next find a numerical solution that interpolates between them. For this purpose we define the rescaled metric function g(r) in (2.32), normalized so that g(r) → 1 as r → ∞. Here f_∞ is a positive real root of eq. (2.6). Choosing some specific values for the coupling constants, electric charge and horizon radius, with ℓ = 1, we find the associated values of the mass parameter and temperature from (2.30) and (2.31). To numerically solve the second-order differential equation (2.10) we need to specify initial values for the metric function f and its first derivative. We use the value of f close to the horizon to set up the seed solution for g. We then fix the value of a_2 to the desired order, using the shooting method, such that the numerical solution for g approaches unity asymptotically. As there are several branches of solutions, we select the one that tends to the Reissner-Nordström (RN) solution in the µ → 0, λ → 0 limit; otherwise we get other solutions that are not physically interesting. Furthermore, because the differential equation is stiff, the solution can be obtained only to a certain precision. For our choice of a_2 the asymptotic solution up to O(r^{−12}) is precise to one part in 1,000 or better. In order to exhibit the behaviour of the solution while varying the cubic and quartic couplings individually, we performed the computation for the cubic and quartic parts of the equation separately to see the impact of each of these terms.
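A schematic sketch of the shooting procedure described above. The full field equation (2.10) is lengthy and not reproduced here, so the right-hand side below is a placeholder with the same structure (a stiff second-order ODE for g with one free near-horizon coefficient a2); the integration window, bracket, and placeholder equation are assumptions for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(r, y, params):
    g, gp = y
    # placeholder second-order equation g'' = F(r, g, g'); replace with the true eq. (2.10)
    gpp = -(3.0 / r) * gp - params["mu"] * (g - 1.0) / r**2
    return [gp, gpp]

def shoot(a2, r_plus=10.0, r_max=300.0, params={"mu": 0.05}):
    # near-horizon seed g ~ a2*(r - r_plus)^2 translated into initial data just outside r_plus
    r0 = r_plus * 1.001
    y0 = [a2 * (r0 - r_plus) ** 2, 2 * a2 * (r0 - r_plus)]
    sol = solve_ivp(rhs, (r0, r_max), y0, args=(params,), method="Radau", rtol=1e-10)
    return sol.y[0, -1] - 1.0            # mismatch from the target g(infinity) = 1

# bisection on the free coefficient a2; the bracket must be chosen so that shoot() changes sign
lo, hi = -1.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(lo) * shoot(mid) <= 0:
        hi = mid
    else:
        lo = mid
print("a2 ~", 0.5 * (lo + hi))
```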
Figure 1 illustrates the numerical solution in four dimensions and shows that, at fixed quartic coupling, increasing the charge drives the horizon inward, whereas at fixed electric charge, larger values of |λ| displace the event horizon outward. The right panel demonstrates the difference between the numerical result for the metric function and its corresponding asymptotic behaviour from (2.16); we see that these converge at sufficiently large r. We also plotted the corresponding graphs for cubic gravity and find that similar behaviour takes place as the charge and/or the cubic coupling are varied.

[Figure 1 caption: Top left: the rescaled metric function (2.32) in four dimensions for quartic gravity; the red line shows the solution for Einstein gravity (quartic coupling λ = 0) with zero electric charge, and the other curves use different values of these parameters. Top right: the difference between the numerical solution and its corresponding large-r analytic solution in quartic gravity, showing asymptotic convergence. Bottom left: the rescaled metric function (2.32) in four dimensions for cubic gravity (λ = 0); the red line is again the Einstein-gravity solution with zero charge. Bottom right: the difference between the numerical solution and its corresponding large-r analytic solution in cubic gravity, again showing asymptotic convergence. In all cases ℓ = 1 and r_+ = 10; the mass parameter m is defined in (2.30).]

To see the behaviour of the metric function in four dimensions as r → 0, we expand the field equations in powers of r. In general we have the expansion f(r) = r^s (a_0 + r a_1 + r² a_2 + ···) (2.33). Considering only the quartic curvature part of (2.10), for small r the first term in the above expansion is the dominant contribution to the metric function. To find s, we use the same numerical procedure described before, and plot the behaviour of r f'(r)/f(r) near r = 0. We find that in four dimensions s vanishes. However, in higher dimensions, depending on the choice of parameters, s takes a non-integer negative value, except in six dimensions where it becomes positive for the choice of physical parameters prescribed in the next section. A similar argument was given in [84] for the cubic part. Therefore in four dimensions the metric is regular at the origin. However, the Kretschmann scalar R_{abcd}R^{abcd} ∼ r^{−4}; the spacetime is still singular, but its singularity is softer than its counterpart in Einstein gravity, for which R_{abcd}R^{abcd} ∼ r^{−6}. We also find that the associated plots for the five-dimensional metric function (2.32) are similar, provided the parameters are chosen to satisfy a physical constraint that we will discuss in the next section.

Thermodynamic properties

Our aim is to study the effects of including cubic and quartic generalized quasi-topological terms on the known behaviour of Einstein black hole thermodynamics in four and higher dimensions. We begin by investigating the first law and Smarr relation, where we apply the black hole chemistry formalism [45] by taking the cosmological constant Λ and the couplings µ, λ as thermodynamic variables. We then study the physical constraints and determine in which domains of the couplings and the charge they are satisfied. We also elucidate the critical behaviour of these black holes.

First law and Smarr relation

As discussed in section 2.4, equations (2.30) and (2.31) provide the relations for obtaining the mass and temperature of the black holes without requiring knowledge of an exact solution.
Since the explicit form of the temperature is complicated, we apply the second equation implicitly to verify that the first law of thermodynamics is satisfied. We utilize the Iyer-Wald formalism [85, 86] to compute the entropy, where ε̂_{ab} is the binormal to the horizon, normalized as ε̂_{ab} ε̂^{ab} = −2, the induced metric on the horizon is γ_{ab}, and γ = det γ_{ab}. From the action (2.1) we find the entropy (3.2), where Σ_{(d−2),k} is the volume of the submanifold with line element dΣ_{(d−2),k}. For k = 1 this is the volume of the (d−2)-dimensional sphere and is finite, although for k = 0 and k = −1 one needs to perform some kind of identification to define a finite volume. Identifying the pressure as [45, 87]

P = −Λ/(8π) = (d−1)(d−2)/(16πℓ²),   (3.4)

the other thermodynamic quantities follow, and [88] the mass is given by (3.5). It is straightforward to show that these quantities satisfy the extended first law of black hole thermodynamics,

dM = T dS + V dP + Φ dQ + Ψ_µ dµ + Ψ_λ dλ,

where V is the thermodynamic volume conjugate to the pressure, and Ψ_µ, Ψ_λ are the respective thermodynamic conjugates of the couplings µ, λ. Furthermore, a scaling argument [44] applied to the various thermodynamic quantities above yields the Smarr formula, which can be shown to hold for these quantities.

To investigate the critical behaviour of these black holes, an equation of state is required. This is obtained by expressing ℓ² in (2.31) in terms of the pressure. Hence we obtain the equation of state (3.9), with the various parameters defined in (3.10), where v is the specific volume [89]. As in previous studies [33, 34], we see from (3.9) that there is a non-linear dependence of the equation of state on the temperature. For future reference we choose the free parameters to be e, β_3 and α_4; these are independent of the choice of k. The explicit form of the Gibbs free energy is given in (3.11), where we pulled out an overall positive factor to simplify the expression; the explicit form of the other parameters is given in eq. (3.10). The equilibrium state is the one that minimizes the Gibbs free energy G at fixed temperature and pressure.

Physical constraints

We now explicate the constraints on the cubic and quartic couplings required for physical solutions. Generalized quasi-topological theories have the property that only the massless graviton propagates on constant curvature backgrounds, provided the parameters are appropriately constrained. To ensure this, the effective Newton constant of gravity must have the same sign as in Einstein gravity. This implies that the prefactor in the linearized equations of motion about the AdS solution is positive [25], i.e., P(f_∞) > 0, with P(f_∞) defined in eq. (2.7) and the value of f_∞ given by a solution of eq. (2.6), which must be positive in order to obtain an asymptotically AdS solution. The same relation occurs if we require γ² > 0 (see the discussion after (2.26)). In terms of the rescaled parameters in eq. (3.10) and the pressure given in (3.4), the no-ghost constraint (2.7) becomes (3.12), and we note that in the limit β_3 → 0, α_4 → 0 (or µ → 0, λ → 0) we reach the Einstein branch of the theory. Disregarding solutions with γ² < 0 (since they are not asymptotically AdS), we obtain from (2.23), upon using eq. (3.10), the condition (3.13), where P(f_∞) is given in (3.12). It is well known that in higher-curvature gravity the black hole entropy can be negative in some regions of parameter space, perhaps indicative of an instability [90, 91]. Imposing the requirement of positive black hole entropy yields (3.14). When the temperature and specific volume are positive, in each dimension the values of the couplings must be chosen to satisfy the above inequality.
We search for the domains in parameter space where these conditions are valid for various charges and coupling constants in the next section.

Thermodynamics in the canonical ensemble

Equipped with the field equations and relevant thermodynamic relations, we consider first the fixed charge ensemble. We aim to investigate the phase structure and critical points of these black holes. The equation of state in terms of the rescaled parameters e, β_3 and α_4 introduced in eq. (3.10) is given by (4.1) for any d ≥ 4; we note that only e² appears everywhere, so our results are valid for both positive and negative charge. Phase transitions occur if the equation of state demonstrates some oscillatory behaviour, with P(v) having at least one minimum and one maximum. This in turn depends on the signs of the coefficients of the different powers of v, as these determine how many roots exist in the equations for the critical volume and temperature. To obtain a critical point, the conditions

∂P/∂v = 0,   ∂²P/∂v² = 0   (4.2)

must hold. We also find that when µ = 0, in four and five dimensions λ must be negative, whereas in higher dimensions λ must be positive, in order to get physical points that satisfy all the constraints mentioned in the previous section. To obtain the critical volume and temperature in terms of the charge and couplings, we solve these equations in various dimensions. As the explicit form is lengthy, we do not present the results explicitly; in practice it is easier to solve the equations parametrically for T_c and v_c in terms of the other parameters in certain dimensions.

AdS_4 vacua and maximum pressure

We consider here the structure of the AdS_4 vacua of (2.1) in four-dimensional spacetime, with curvature scale 1/ℓ²_eff = f_∞/ℓ². Setting the action length scale to ℓ = 1 (implying a fixed pressure of 3/(8π)), we analyze solutions of (2.6), considering the cubic and quartic couplings separately for simplicity. Starting with only a non-zero cubic coupling µ, we exclude both µ < µ_c and µ > 0: the first of these yields a negative real-valued solution, and the second implies γ² < 0 in (2.23). Both regions are unphysical and we exclude them from further analysis. Note that from the linearized equations of Einstein gravity a relation holds between the effective Newton constant G_eff and G. We see that for f²_∞ = −ℓ⁴/(4032µ), G_eff → ∞, and inserting this value into eq. (4.3) yields the critical coupling µ_c (noted previously [31]). At this point the discriminant of the cubic changes sign, and there are distinct branches of solutions for µ < µ_c and µ > µ_c. Repeating the same approach in higher dimensions, the overall behaviour of f_∞, shown in figure 2, is similar to that in four dimensions, with a dimension-dependent critical coupling playing the role of µ_c.

More generally, we can reconsider the above discussion for arbitrary values of the pressure P. In four and five dimensions there is a maximum value of the pressure that results from the condition that the discriminant ∆ > 0, which is a constraint on the parameter space for physically acceptable solutions. In general we obtain an explicit expression for this maximum pressure in terms of β_3, which in four and five dimensions is positive and given in (3.10). For d = 6 the pressure is unbounded, and for d ≥ 7 a similar procedure does not yield an upper bound on the pressure.
Turning now to the quartic case, equation (2.6) becomes (4.7) in d = 4 with ℓ = 1, and figure 2 (right) indicates three possible real solutions. The discriminant of (4.7) vanishes for λ = λ_c = −81ℓ⁶/1024. Again, we note that G_eff becomes infinite, or equivalently P(f_∞) vanishes, for f³_∞ = −3ℓ⁶/(16λ), and (4.7) then implies λ = λ_c. Similar to the previous case, for λ_c < λ < 0 we have ∆ < 0 and there are two positive real solutions for f_∞; only the smaller of these has positive P(f_∞) and γ². For λ < λ_c, ∆ > 0 and there are no real solutions for f_∞; for λ > 0, although there is one real positive solution to (4.7), it implies γ² < 0 and is therefore physically inadmissible. Requiring λ_c < λ < 0, we conclude that there is a maximum value of the pressure, given by (4.8), where α_4 is given in (3.10). Only for d = 4, 5 is α_4 > 0 and P_max > 0, yielding an upper bound on the pressure; in higher dimensions there is no bound on the pressure. In what follows, we concentrate on several specific dimensions and investigate the thermodynamic behaviour in some detail. In figures 3, 6, and 11 given in the following sections we adhere to the colour coding explained in table 1.

Critical behaviour in four dimensions

Our next task is to determine how many of these possible critical points are actually physical, and to study their critical behaviour. We proceed by examining each value of d in succession. Here we consider the effects of both cubic and quartic GQG in d = 4. The equation of state (3.9) becomes (4.10). It is obvious that for small v (i.e. for small black holes) the term cubic in T (coming from quartic GQG) dominates. By taking different linear combinations of (4.2) it is possible to obtain an equation linear in T; the resulting critical temperature and volume then satisfy equations (4.11) and (4.12), which can be solved numerically for any choice of parameters for T_c, v_c.

For simplicity, consider the behaviour of the critical temperature and volume with only the quartic coupling active. Equations (4.11) and (4.12) then give the critical temperature (4.13), while the critical volume v_c satisfies (4.14). We see that the critical temperature is singular if the critical volume is such that the polynomial quartic in v²_c in the denominator vanishes. An exception to this is if α_4 = 6400/(729π⁶e⁶): the numerator in (4.13) also vanishes and T_c remains finite. Note that this only occurs if k = 0, in accord with earlier work on black branes in GQG [34]. However, the corresponding T_c and v_c become imaginary in the case k = −1, so for hyperbolic black holes this singularity is absent. For k = 1, if β_3 = 0 and α_4 = 6400/(729π⁶e⁶), the resulting values of T_c, v_c correspond to a critical point with standard critical exponents (see (4.15) and the discussion following), and the phase transition in the vicinity of this point is a standard first-order VdW transition, similar to what is depicted in figure 4. In studying the behaviour at this point, we note that the equations of state need to be solved for these specific parameter values instead of using (4.13) and (4.14), since the latter become invalid if the denominator of T_c vanishes. If both couplings are non-zero, the denominator of T_c in (4.11) is again quartic in v², and a similar procedure can be employed to write a formula for α_4 in terms of β_3 and e. Doing so, we find that for any values of the parameters the only solution is α_4 = 0; moreover, in cubic gravity the critical temperature does not have a singularity in four dimensions [84]. In other words, this apparent thermodynamic singularity occurs only in Einstein-quartic GQG (for which the cubic coupling vanishes).
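The critical-point conditions (4.2) can be solved numerically for any equation of state P(v, T). As a check of such a procedure, the sketch below applies it to the Einstein limit β_3 = α_4 = 0 in d = 4, where the charged AdS equation of state P = T/v − 1/(2πv²) + 2q²/(πv⁴) and its critical values are known analytically [52]; the same routine can then be pointed at the full cubic-quartic equation of state (4.10).

```python
import numpy as np
from scipy.optimize import fsolve

q = 1.0

def P(v, T):
    # d = 4 charged AdS (Einstein) equation of state, the beta_3 = alpha_4 = 0 limit
    return T / v - 1.0 / (2 * np.pi * v**2) + 2 * q**2 / (np.pi * v**4)

def dP(v, T, n, h=1e-5):
    # n-th numerical derivative of P with respect to v at fixed T (5-point stencils)
    vs = v + h * np.arange(-2, 3)
    stencils = {1: np.array([1, -8, 0, 8, -1]) / (12 * h),
                2: np.array([-1, 16, -30, 16, -1]) / (12 * h**2)}
    return stencils[n] @ P(vs, T)

def equations(x):
    v, T = x
    return [dP(v, T, 1), dP(v, T, 2)]      # dP/dv = 0 and d2P/dv2 = 0

v_c, T_c = fsolve(equations, x0=[5.0, 0.04])
print("numeric :", v_c, T_c, P(v_c, T_c))
print("analytic:", 2*np.sqrt(6)*q, np.sqrt(6)/(18*np.pi*q), 1/(96*np.pi*q**2))
```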
Due to the complexity of the equation of state, it is not possible to find an explicit bound on the couplings and the electric charge by applying the positivity constraints (3.12) and (3.13). However, we can numerically investigate whether these physical constraints are satisfied whilst varying the cubic and quartic couplings for a given fixed charge. The corresponding pattern is given in figure 3, where we also check for positivity of the entropy (3.14). The physical critical domain is the part of the parameter space for which the physical constraints discussed in section 3.2 are satisfied, with the property that a phase transition occurs. Looking at the left part of figure 3, for k = 1, this region has β3 < 0 (or µ < 0) for all values of α4, with the exception of the axis β3 = 0, where only for α4 ≥ 0 are the associated phase transitions physical (the positive axis is green). The point β3 = α4 = 0 is also green, recovering the result that in the limit of vanishing cubic and quartic couplings a charged AdS black hole still has physical critical points [52]. On the vertical axis with α4 < 0, we have γ² < 0. The entire physical region has only one critical point, whereas in the unphysical region it is possible to have either one, two or three critical points depending on the given values of (β3, α4). As we make the fixed value of e smaller (see for example the top right diagram in figure 3), we find that there are at most two possible critical points but only one of them is physical. Summarizing, in the presence of charge, critical points exist for all (β3 < 0, α4) (see figure 3). The centre of the parameter space is the k = 1 Reissner-Nordström-AdS solution, for which all constraints are satisfied in any dimension.

Whenever critical points exist we find that the critical exponents (see footnote 5) are α = 0, β = 1/2, γ = 1, δ = 3, (4.15) which are the standard values from mean field theory, even when both numerator and denominator vanish in (4.13). These are typically obtained by considering the equation of state near the critical point [89], writing v = vc(φ + 1), T = Tc(τ + 1), (4.16) and expanding in powers of (φ, τ). Since here we do not have a closed form for the critical quantities, we insert numerical values for the parameters into the equation of state to obtain Tc and vc; the resulting expansion, of the form p = 1 + Aτ + Bτφ + Cφ³ + ..., yields (4.15). We explicitly illustrate the occurrence of the phase transition by drawing a P − v graph in figure 4, setting e = 1, β3 = −4e⁴ and α4 = 5e⁶, parameters for which there is a physical critical point. We see clear Van der Waals behaviour, with two distinct phases for T < Tc that coalesce at T = Tc and become indistinguishable for T > Tc. For sufficiently low temperature, the curve tends to negative pressures; however, only positive values of the pressure are physical. The coexistence line is plotted in the right half of figure 4, illustrating the critical point at the end of a line of first-order phase transitions between large and small black holes. The Gibbs free energy as a function of temperature is shown in figure 5, exhibiting the typical swallowtail characteristic of Van der Waals behaviour. It is notable that for quite small values of the pressure we still observe a swallowtail shape whose size grows rapidly.

Footnote 5: The critical exponents quantify how physical quantities behave in the vicinity of a critical point [52].
For t = T/Tc − 1, the exponent α characterizes the behaviour of the specific heat at constant volume, Cv ∝ |t|^(−α). The exponent β governs the difference between the volume of a large black hole Vl and that of a small black hole Vs along an isotherm, Vl − Vs ∝ |t|^β. The behaviour of the isothermal compressibility κT is given by the exponent γ, κT ∝ |t|^(−γ). The exponent δ characterizes the difference |P − Pc| ∝ |V − Vc|^δ on the critical isotherm T = Tc. Computing the specific heat, we find that two stable branches of black holes exist, with the physical one at the global minimum of G. For the allowed regions of parameter space in figure 3 for k = −1, the phase diagrams are qualitatively the same as in figures 4 and 5.

Critical behaviour in five dimensions

In this section we consider five dimensional solutions. The equation of state becomes eq. (4.19); the corresponding critical temperature and the equation satisfied by the critical volume follow from it, and, as before, finding an explicit closed form for both the critical temperature and volume is not feasible. However, for vanishing cubic coupling there is a considerable simplification; solving (4.2) with β3 = 0 yields the critical temperature (4.20), where vc satisfies an equation that is a cubic polynomial in v²c and can be solved exactly, and we must have k = 1 so that tc > 0. At most there are two real solutions for any given choice of parameters that are physically acceptable. Figure 6 plots the number of critical points as a function of (β3, α4) with fixed charge. For k = 1, unlike in four dimensions, we see that only if both couplings are non-zero do we get two physical critical points in a certain region of parameter space, shown in dark green. The occurrence of two physical critical points for spherical black holes in five dimensions has to our knowledge not been seen previously. On the axes β3 < 0, α4 = 0 and β3 = 0 (i.e. on the vertical axis), the critical points are physical only for positive values of α4 (or λ < 0) greater than some specific lower bound on the quartic coupling, whereas for β3 > 0, α4 = 0 and for β3 = 0, α4 < 0 they are unphysical, having γ² < 0. Physical critical points exist for most of the region β3 < 0, except for small values of |β3| and large enough values of |α4|, where γ² < 0. The case α4 = 0 is discussed in [84]. Summarizing, we have observed for the first time the occurrence of two physical critical points for spherical (k = 1) black holes and one critical point for hyperbolic (k = −1) black holes; both couplings must be nonzero for this to take place. Again we see that even if e = 0 there are physical critical points. For k = 1 we get regions with either one or two physical critical points, and only one for k = −1. In the latter case, there are some regions having negative mass (white region), and both couplings must be non-zero in order to get physical critical points. However, in regions of parameter space having two physical critical points there is new behaviour. We illustrate this in figure 7, which shows that there are two first order phase transitions. The transition at the first critical temperature is standard VdW behaviour. But the second phase transition, at the higher critical temperature, is that of 'reverse VdW' behaviour: it is a transition from one phase below this temperature to two distinct small/large black hole phases above it. Note that for a sufficiently large temperature the pressure becomes negative and the asymptotic structure of the spacetime is no longer AdS.
Consequently there is an upper bound on the temperature of AdS black holes. We illustrate this in figure 7. Note that curves having P < 0 over a finite range of T can be given physical meaning via the equal-area law [92]. Referring to figure 7, we see that although P < 0 corresponds to a different asymptotic structure from that of AdS, the equal-area law implies that P never actually attains these negative values, but rather remains constant and positive as the phase transition takes place. The coexistence line of the two distinct small/large phases has a critical point at a minimal value of T, in contrast to that of a standard VdW phase transition. This phenomenon has been previously observed for black branes [34] and in cubic gravity [84]. (Figure 7 caption: in each plot, red lines depict the parts of the curves for which the specific heat is negative, blue lines indicate negative entropy, and purple lines indicate that both the specific heat and the entropy are negative. Top left: the low-temperature region, for which there is a standard VdW phase transition, with P = 1.2Pc (dotted, blue), P = Pc (dotted, black) and P = 0.6Pc, 0.2Pc (solid, black and red). The remaining graphs pertain to the high-temperature region, for which there is a reverse VdW phase transition; top right: P = 0.5Pc, bottom left: P = Pc, bottom right: P = 0.99Pc.)

The existence of a standard VdW phase transition followed by a reverse VdW transition at higher temperature is shown in figure 7. One might anticipate that, with an appropriate choice of parameters, these two critical points (illustrated in the top right diagram of figure 7) could merge, yielding an isolated critical point (see the discussion for the six dimensional case in the next section). We find that this only occurs for either negative entropy and/or negative mass; these unphysical conditions persist in a neighbourhood of the merged critical point. In figure 8, we illustrate the behaviour of the Gibbs free energy both in the low-temperature region containing a standard VdW transition and in the high-temperature region containing a reverse VdW transition. In the former case, we obtain the standard swallowtail behaviour for P < Pc. In the latter case, for P sufficiently smaller than Pc there are no phase transitions, as the upper right part of figure 8 indicates. As P → Pc we obtain swallowtail behaviour, shown in the lower right part of figure 8. For P = Pc, in addition to negative specific heat we get regions with negative entropy (shown by the blue curve) and with both negative specific heat and negative entropy (shown by the purple curve); these are all unstable. For larger temperature the black hole solutions become stable (black line). Figure 8 shows that, in addition to a reverse VdW transition, at high temperatures there are three black hole solutions with positive specific heat, with one having a minimal Gibbs free energy. For the k = −1 hyperbolic black hole in d = 5 we get only a reverse VdW transition, in which higher temperatures have two distinct phases, as depicted in figure 9. The Gibbs free energy in figure 10 exhibits swallowtail behaviour for pressures slightly below the critical pressure. However, for larger values of the pressure one of the branches corresponds to solutions with negative mass (depicted by the green curve) and the phase transition is no longer physical.
For even lower pressure (bottom left diagram in figure 10) we see that unstable black holes with negative specific heat have lower free energy than those with positive specific heat. It is reasonable to expect that there is then a zeroth order phase transition between the two black curves in this figure. For even smaller P, the unphysical parts of the branches shrink and (apart from a small red region) become stable. As in the k = 1 case, there are again three black hole solutions with positive specific heat, with one having a minimal Gibbs free energy.

Critical behaviour in six dimensions

Turning now to six dimensions, we note from (2.6) that for the linearized field equations the contribution of the cubic term drops out [24] (in eight dimensions it is instead the quartic term that drops out). This yields some simplification, but the analysis is still somewhat complicated. The equation of state takes the form (4.24). The explicit expression for the critical temperature Tc is a lengthy ratio of polynomials in vc (with terms up to v¹⁸c involving β3, α4, k and e), accompanied by a companion equation yielding the critical volume; as these expressions are cumbersome we do not display them in full. Setting the cubic coupling to zero, the associated critical temperature is given by (4.27) and the critical volume obeys the relation (4.28). There is a singularity in the critical temperature, but it is not associated with physical critical points: if α4 < 0 we get γ² < 0 in the first case, and the critical temperature or critical volume becomes negative in the second case. We plot in figure 11 the possible critical points in the (β3, α4) plane. Physical critical points appear only for β3 < 0, and there can be as many as three for certain ranges of (β3, α4) if k = 1, but only one for the hyperbolic (k = −1) case. Other possible critical points have one or both of γ² < 0 and S < 0. For vanishing charge we also have a region of at most two critical points for k = 1 and just one critical point for k = −1; for vanishing α4 there are still two physical critical points if k = +1, studied in detail in [84]. Clearly the maximal number of critical points for a given value of (β3, α4) depends on both horizon geometry and dimension. We do not need both couplings to be non-zero to obtain physical critical points; however, in the absence of charge, or when k = −1 (with or without charge), both couplings must be nonzero. The appearance of three physical critical points in the d = 6, k = +1 case stands in contrast to previous studies. This remarkable feature has not been observed before, and its occurrence relies on the electric charge and the cubic and quartic couplings all being nonzero. Figure 12 shows that there is a reverse VdW transition in between two standard ones, one at low temperatures T < Tc1 and the other at high temperatures T > Tc3, with the critical temperature Tc2 of the reverse transition satisfying Tc1 < Tc2 < Tc3. The curves in figure 12 correspond to phase transitions that obey Maxwell's equal-area law [92], with the actual pressure remaining positive during the phase transition (despite the curve indicating that P becomes negative over a finite range of T), as noted earlier. At sufficiently low temperatures the P − v curves cross the horizontal axis, yielding unphysical behaviour since the asymptotic structure is no longer AdS. Critical points and coexistence lines are also displayed in figure 12.
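To illustrate the equal-area construction invoked above, the following minimal sketch computes the coexistence pressure for a sub-critical isotherm. As a stand-in for the full GQG equation of state, it uses the d = 4 charged AdS (Einstein-limit) form with charge e = 1; the temperature T = 0.9 Tc and the function names are illustrative choices, not taken from the paper.

```python
# Minimal sketch of Maxwell's equal-area construction: find the coexistence
# pressure P* at which the areas enclosed above and below the horizontal line
# P = P* by a sub-critical isotherm are equal.  Stand-in equation of state:
# d = 4 charged AdS (Einstein limit), P(v, T) = T/v - 1/(2*pi*v^2) + 2/(pi*v^4)
# with charge e = 1; T = 0.9*T_c is an illustrative temperature.
import numpy as np
from scipy.optimize import brentq

def P(v, T):
    return T / v - 1.0 / (2.0 * np.pi * v**2) + 2.0 / (np.pi * v**4)

def int_P(v, T):
    # Antiderivative of P(v, T) with respect to v.
    return T * np.log(v) + 1.0 / (2.0 * np.pi * v) - 2.0 / (3.0 * np.pi * v**3)

T_c = np.sqrt(6.0) / (18.0 * np.pi)
T = 0.9 * T_c

# Spinodal points (dP/dv = 0):  T*v^3 - v^2/pi + 8/pi = 0.
spin = np.roots([T, -1.0 / np.pi, 0.0, 8.0 / np.pi])
spin = np.sort(spin[np.abs(spin.imag) < 1e-9].real)
spin = spin[spin > 0]
P_lo, P_hi = sorted(P(spin, T))

def outer_roots(Pstar):
    # Smallest and largest positive roots of P(v) = Pstar (small/large black hole).
    r = np.roots([Pstar, -T, 1.0 / (2.0 * np.pi), 0.0, -2.0 / np.pi])
    r = np.sort(r[np.abs(r.imag) < 1e-9].real)
    r = r[r > 0]
    return r[0], r[-1]

def area_mismatch(Pstar):
    vs, vl = outer_roots(Pstar)
    return (int_P(vl, T) - int_P(vs, T)) - Pstar * (vl - vs)

eps = 1e-6 * (P_hi - P_lo)
P_star = brentq(area_mismatch, P_lo + eps, P_hi - eps)
vs, vl = outer_roots(P_star)
print(f"coexistence pressure P* = {P_star:.6f} (positive), v_small = {vs:.3f}, v_large = {vl:.3f}")
```

The same construction, applied isotherm by isotherm, traces out coexistence lines of the kind shown in figure 12.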
Numerical analysis confirms that for typical values of the parameters each of the three critical points is characterized by mean field theory critical exponents, a hallmark of the end point of a first order phase transition. Note that the two critical points at high temperature are joined by a coexistence line. For certain choices of the couplings and charge these two disjoint lines merge into each other, and an isolated critical point appears at the merge point. This new critical point is characterized by critical exponents that differ from the mean field theory ones. This phenomenon was first observed in Lovelock and quasi-topological gravity [48,62,66], where isolated critical points were found to occur for hyperbolic horizons and massless AdS black holes. A thermodynamic singularity, at which the pressure remains constant for any temperature and isotherms cross and reverse, was also found to occur at this point. However, for Lovelock and quasi-topological black holes accompanied by conformal scalar hair, isolated critical points have been observed in five and higher dimensions for massive black holes without coinciding with a thermodynamic singularity [50,51]. More recently, in cubic GQG alone, isolated critical points have been found in six dimensions with no scalar hair for spherical black holes [84]. In all of these cases there are initially only two critical points that converge to one isolated critical point. Here we see for the first time three critical points, two of which merge to form an isolated critical point. Furthermore, these isolated critical points do not correspond to any thermodynamic singularity, i.e. the P − T curve does not have a zero slope. From the relation (4.17) we find that the coefficient B vanishes, and the associated critical exponents take modified values that differ from the standard critical exponents appearing in (4.15) but are in accord with previous studies [48,62,66]. On either side of the isolated critical point the black holes satisfy all physical constraints, as the bottom right diagram in figure 12 indicates. The behaviour of the free energy with respect to temperature for spherical black holes is illustrated in figure 13. At low temperatures, for P = Pc the solution is stable, and for P < Pc a standard VdW transition occurs. For the two other critical points Pc2, Pc3 there is a region of the curve that has negative specific heat at low temperatures (red lines). For pressures Pc2 < P < Pc3 there are reverse and standard transitions, shown by the rather sharp swallowtails depicted in the bottom graphs of figure 13 from left to right respectively. In contrast to the lower temperature (P < Pc) swallowtails depicted in the upper diagrams in figure 13, these swallowtails have positive specific heat everywhere apart from a few short segments. For regions of parameter space having two physical critical points, as depicted in figure 14, there is a first order standard VdW phase transition between small and large black holes at low temperatures and then a reverse VdW transition at higher temperatures. The right graph shows that, for fixed charge and α4, varying β3 yields an appropriate value of the cubic coupling such that the two coexistence lines meet, giving an isolated critical point with the non-standard critical exponents given in (4.30). Again we see that there exists a range of temperatures on either side of the isolated critical point for which the black holes satisfy all physical requirements.
Finally, in regions of parameter space with one physical critical point we get a standard VdW phase transition for k = 1. However, if k = −1 we find a reverse VdW transition similar to what we described for the d = 5 hyperbolic black hole.

Critical behaviour in more than six dimensions

Increasing the value of d further, we find for seven dimensions that the qualitative features remain similar to those in d = 6. Quantitatively, however, the parameter regions with only a single critical point get larger, whereas regions with two or three physical critical points get smaller. No further features emerge and so we shall not consider this case further. For d = 8 we again find up to three critical points; the structure of the associated phase transitions is similar to that presented for d = 6 in the previous subsection, so we shall not consider this case further either. Finally, we compute the ratio of critical quantities for any value of d. In general this must be done numerically, but we can obtain an analytic expression for small values of the couplings for which the physical constraints hold. Here we set the cubic coupling to zero and compare results with the critical behaviour in Einstein gravity for spherical black holes. To leading order the critical quantities are corrections to the Einstein-gravity values, expressed in terms of the critical volume in Einstein gravity. We see a dimension-dependent deviation from the critical values in Einstein gravity: the critical temperature and pressure increase and the critical volume decreases relative to Einstein gravity, except in four and six dimensions, where the critical pressure and temperature are smaller and the critical volume is larger. In four dimensions these corrections are negligible. We finally obtain the critical ratio Pcvc/Tc, (4.32), and we see that in the limit of vanishing quartic coupling these results reduce to those of charged k = 1 black holes in Einstein gravity [89]. The effect of the quartic curvature term on the Van der Waals ratio is to increase it above the Einsteinian value in any dimension. In four and five dimensions these corrections are negligible for small values of the coupling and large enough charge. However, in higher dimensions we see considerable deviation, and the Van der Waals ratio is no longer a 'universal' quantity as it is in Einstein gravity: it depends on the parameters of the theory as well as the dimension under consideration.

Thermodynamics in the grand canonical ensemble

We now consider the grand canonical ensemble, in which we have fixed potential instead of fixed charge. From the viewpoint of AdS/CFT holography, fixed potential on the gravity side is related to fixed chemical potential on the CFT side. We first consider d = 4 and then discuss properties for generic dimensions, employing the approach of ref. [93]. We shall consider only the quartic term in the action; the cubic case was studied in [84]. We expect that when both couplings are nonzero a pattern similar to that of the fixed charge case will emerge for the number of critical points, though we shall not perform that analysis here.

Four dimensions

For fixed potential one needs to find expressions for the mass and the temperature by again solving the equations of motion for the metric function near the horizon, since this choice of ensemble alters how the equations depend on the horizon radius.
The first two leading-order terms in the expansion result in formulas for the mass and temperature, parameterized by the quartic coupling and r+ (eq. (5.1)), with the second equation yielding the equation of state (5.2), where we define v = 2r+. The explicit forms of the critical quantities follow from the equation of state (5.2) (eq. (5.3)), and the corresponding critical pressure is obtained by inserting the relations for Tc and vc into equation (5.2). Note that a constraint must hold so that the critical quantities remain real. Solving equations (5.1) to determine the explicit forms of M and T, and choosing solutions that approach the Einstein branch in the limit λ → 0, we find that only for k = +1 (spherical) black holes are there physical critical points. Admitting a positive mass, while imposing the condition (2.6) with zero cubic coupling, we use the formula (4.8) to obtain an upper bound 0 < P ≤ Pmax. The black hole entropy at the critical point follows with r+c = vc/2, where vc is introduced in (5.3) and Z is given in (5.4). Explicit numerical computation for k = +1 and λ < 0 indicates that the critical entropy is always positive; this will not hold for other values of these parameters. In order to study the phase structure of the black hole solutions, we obtain the formula for the free energy in the grand canonical ensemble, which we plot in figure 15. We again observe standard Van der Waals behaviour. However, we also see that the free energy is a decreasing function of the temperature for small T, vanishing as T → 0, in contrast to the fixed charge case; a large branch of the curve, however, has negative entropy. The P − v diagram in figure 15 illustrates that a phase transition happens for temperatures slightly below the critical temperature. However, for low enough temperatures the pressure becomes negative, and we do not consider this unphysical case. Another way to observe the phase transition is via the coexistence line in the P − T plane, shown in the top right of figure 15. For low enough temperatures the entropy becomes negative, denoted by the blue solid line. Instead of finding the equation of state for fixed potential, we can also perform the analysis at fixed pressure, as is commonly done in holography. The phase diagram of temperature versus potential in figure 16 again describes a first order phase transition between small and large black holes, with the coexistence line terminating at the critical point.

Higher dimensions

Here we look into the thermodynamic properties of black hole solutions in the fixed potential ensemble in generic dimension. The equation of state in d dimensions takes an analogous form. To compare results between the two ensembles with the cubic coupling set to zero, we note that in the fixed potential ensemble for spherical black holes in quartic GQG we get physical critical points for α4 > 0 in four and five dimensions. However, in six dimensions no physical critical points exist. This is in contrast with the fixed charge ensemble, where (for α4 > 0) we get single physical critical points in d = 4, 6 and two physical critical points in d = 5 (as well as regions with a single physical critical point). In addition, for fixed potential and k = −1 hyperbolic quartic black holes, while there are candidate critical points for d = 4, 6, these all have γ² < 0 (since α4 < 0), and so there are no physical critical points in these dimensions (and likewise none for d = 5).
This situation is the same as for the fixed charge ensemble, confirming that both the cubic and quartic couplings must be nonzero to obtain physical critical points.

Holographic hydrodynamics

One of the applications of the AdS/CFT correspondence is the computation of the ratio of shear viscosity to entropy density η/s. We investigate in this section this ratio for the quartic theory. It is well known that for field theories dual to Einstein gravity the shear viscosity to entropy density ratio is η/s = 1/(4π), and it has been suggested that this value is a universal lower bound, holding for any matter [94]. This conjecture, η/s ≥ 1/(4π), is the so-called KSS bound. However, higher-derivative contributions can cause violations of this bound [95]. We therefore evaluate η/s for field theories dual to the quartic generalized quasi-topological theory for general d to see whether the KSS bound is satisfied. Here we employ planar black hole solutions, and we define a new coordinate z = 1 − r²+/r² to compactify the region outside the horizon. This gives the metric (6.2), in which the metric function g(z) vanishes at z = 0 and g(1) = f∞. Expanding g(z) near the horizon, (6.3), we solve the field equations to determine the coefficients g(i)0. As we discussed previously, the second derivative of the metric function near the horizon, g(2)0, is not determined by the field equations; however, its value can be chosen such that the numerical solution approaches its associated asymptotic solution. It is easy to check that, through the coordinate transformation, the parameters g(i)0 are written in terms of the parameters ai appearing in the near-horizon expansion (2.28). The explicit form of a3 for the planar black holes follows from section 2. Using methods described in [96], we perform a shift on the metric (6.2) with a small perturbation parameter. Computing the Lagrangian for the perturbed metric and performing a series expansion in this parameter, we obtain an effective Lagrangian in which λ̃ is the coupling in the action, related to the coupling λ appearing in the equation of motion via (2.3). The shear viscosity is then given by the residue at z = 0 of the O(ω²) part of √(−g)L, eq. (6.8), whose explicit form for the case at hand involves λ̃ as it appears in the action (2.1). Taylor expanding η/s about λ = 0, we obtain an expression involving ȧ2(0), which denotes the derivative of a2 with respect to λ evaluated at λ = 0. To compute the value of η/s numerically, one needs to determine the parameter a2 for a given choice of the other parameters, with the value of the temperature known from the expansion near the horizon. In the above computation we expressed the parameter a3 in terms of a2 using the second-order expansion near r = r+ of eq. (2.10). The Taylor expansion yields an ȧ2(0) term that is left undetermined at this level. Only when the first-order-in-λ term in the above series expansion is non-negative, which occurs in higher dimensions for some positive coupling (and in d = 4 for certain negative coupling), can we conclude that the KSS bound η/s ≥ 1/(4π) holds in the quartic generalized quasi-topological theories at small coupling. The generalized quasi-topological term can cause the entropy density of black branes to change sign [34]. Hence there is a point in parameter space at which the entropy density vanishes and the ratio η/s exhibits a pole. Using the first-order near-horizon expansion from (2.10), the corresponding quartic coupling λp is given by (6.12), where the subscript 'p' stands for 'pole'.
It is interesting to note that in four dimensions λp is equal to the value λc = −81ℓ⁶/1024 that we encountered in the critical limit of the theory in section 4.1; however, this coincidence does not hold in higher dimensions. In any dimension, however, it leads to the Einstein branch of the theory. The leading-order term in (λ − λp) describes the behaviour of the entropy near zero, and the corresponding expansion for the shear viscosity is given in (6.14). It is interesting to note that the entropy density vanishes linearly as λ → λp. By contrast, the shear viscosity does not vanish in this limit, as it contains a term consisting of powers of a2 evaluated at the pole. Explicit numerical evaluation shows that this term does not vanish as λ → λp. Consequently the ratio of shear viscosity to entropy density has a pole at λ = λp. Figure 17 shows that there is a smooth curve connecting η/s = 1/(4π) (for λ = 0) and η/s = ∞ (for λ = λp). Since including quartic quasi-topological or Lovelock terms in the action does not alter the black hole entropy from its Einstein gravity value [34], only the quartic GQG contribution is responsible for the occurrence of this pole. A previous study of cubic GQG found similar behaviour for η/s [84]. To determine how the ratio η/s can be recast in terms of λ, we use the Padé approximant method to evaluate the parameter a2. For our considerations involving the quartic term the computations become cumbersome very rapidly at higher orders, so we present only a few corresponding curves in four, five and six dimensions in figure 17. From this figure it is clear that η/s starts from 1/(4π) for λ = 0 and then grows until it diverges, as expected, at λ = λp. For four dimensions this figure illustrates that the KSS bound for η/s holds. From the relations (2.30) and (2.31) the explicit formulae for the temperature and mass follow (eq. (6.15)); we see that T is independent of λ in five dimensions. In figure 17 we plot η/s for the values of λ for which the physical constraints are satisfied. To determine these, one needs f∞ > 0 for an asymptotically AdS spacetime and P(f∞) > 0 to satisfy the no-ghost criterion. The behaviour of f∞ in four dimensions is given in the right diagram of figure 2; similar behaviour is observed in d = 5, but with slightly different contributions from the different solution branches. In four and five dimensions we find that λ must be negative so that γ² > 0. The expression for the mass in (6.15) sets a lower bound on the coupling. In conjunction with the condition γ² > 0 in (2.23), required for a well-defined asymptotic region, we must have 0 > λ > λp = −ℓ⁶/16 for any choice of AdS length and black hole horizon, where the lower bound λp corresponds to zero mass and vanishing entropy; note that |λp| is smaller than |λc| given in (4.9). We note from figure 17 that the KSS bound is violated for λ/λp > 0, or in other words for all physically acceptable values of λ, since λp < 0. In six dimensions, solving equations (2.30) and (2.31) yields four different solutions for the mass and temperature; however, only one branch satisfies the criteria for a physical solution. Here the mass is positive for 0 < λ < λp and vanishes (along with the entropy) at λ = λp; from eq. (2.24) we find an expression that is positive both for λ > 0 and for sufficiently negative λ.
On the other hand, even though the overall structure of the vacuum solutions looks like what is exhibited in the right graph of figure 2, in this case the upper branch with negative λ is associated with the existence of ghosts; it is therefore excluded from further consideration. The lower negative-λ branch yields negative γ². Hence only positive values of λ yield physical solutions with γ² > 0. However, there is an upper bound on the positive coupling that is enforced by positivity of the mass. From figure 17, starting from the same value of the ratio at zero coupling, for small |λ| the ratio in four dimensions is initially larger than for d = 6 but then declines to smaller values. Both curves blow up as the pole is approached. In these cases the KSS bound holds, and the mass remains positive for 0 < |λ| < |λp|. We anticipate similar behaviour in seven and higher dimensions, namely that positive coupling is required for correct asymptotic behaviour and physical mass, and that the KSS bound is satisfied. It is known that there is an upper bound on the coupling restricting the existence of acceptable CFT duals, so the whole range λ ∈ (0, |λp|) cannot possess a holographic interpretation [97,98]. We postpone addressing this issue to future investigations. If both cubic and quartic couplings are nonzero, there is still a point in parameter space at which the entropy vanishes. This is a singular point; it cannot be removed, since a2, which appears at zeroth order in the expansion of η (cf. eq. (6.14)), is evaluated at the values of the couplings at the pole. Corresponding results for cubic gravity are discussed in [84].

Discussion

We have investigated charged static spherically symmetric AdS black holes for both spherical (k = 1) and hyperbolic (k = −1) geometries in generalized quasi-topological gravity (GQG). These theories are of notable interest since this class of solutions has a single metric function, analogous to Lovelock and quasi-topological gravity at the same order. We have considered both cubic and quartic GQG to see how these additional terms modify the results of Einstein gravity in four, five and six dimensions. Although the metric function cannot be obtained analytically, it is feasible to find both the asymptotic and near-horizon behaviour of the metric perturbatively. We then apply the shooting method to verify that these solutions match in the intermediate region. The near-horizon expansion characterizes the mass and temperature of the black holes, and therefore the thermodynamics of the black hole can be completely understood despite the lack of an exact solution. Furthermore, our numerical considerations demonstrate that, for either fixed cubic or fixed quartic coupling, increasing the electric charge correspondingly decreases the horizon radius. On the other hand, at fixed charge, enlarging the coupling has the effect of increasing the horizon radius. Investigating the solutions near the origin r = 0 in four dimensions, we find that the curvature scalar singularity is softened. Taking the cosmological constant and the cubic and quartic couplings as thermodynamic variables, we examined the thermodynamic properties of the given spherically symmetric configuration in detail, including verification of the extended first law and Smarr relation.
However, not all solutions obey standard physical requirements (positive mass, positive entropy, AdS asymptotics and the no-ghost condition), so we constructed the physical constraints between the couplings and the charge that give the parameter domain yielding physical critical points. In some regions of parameter space the black hole entropy can be negative. The sign of the entropy depends on the spacetime dimension and on the temperature through the horizon radius. In the fixed charge ensemble, or for neutral black holes, there are situations (for small black holes) in which the entropy becomes negative as r+ → 0. One can add the absolute value of this amount to the entropy to ensure that the vacuum state has zero entropy; under these circumstances the associated free energies are shifted by the same amount, so the thermodynamic properties are not affected. Another way to shift the entropy to a positive value is by adding an explicit Gauss-Bonnet contribution to the action [32,99] or by including the volume form of the induced metric on the horizon in the Lagrangian [100]; in either case, similar to the shift mentioned above, one must ensure S → 0 as M → 0. More generally, in even dimensional spacetimes adding Euler densities to the action will shift the entropy by an arbitrary constant without changing the solution of the field equations (see appendix A of [84] for the case of Gauss-Bonnet gravity in four dimensions). This arbitrary constant must be chosen such that the entropy vanishes for AdS spacetime. We leave the more general problem of how to deal with negative entropy in any dimension to future work.

Working in both the fixed charge and fixed chemical potential ensembles, we classified the phase structure and critical points for these black holes. In the fixed charge ensemble in four dimensions, we found that even for zero charge there are still physical critical points for k = 1 when at least one of the couplings is non-zero (in contrast to Einstein gravity), and also for k = −1 when both couplings are nonzero. A first order VdW phase transition between small and large black holes was seen. For the first time we have observed critical points for a neutral hyperbolic black hole in any dimension, provided both the cubic and quartic couplings are nonvanishing. This emphasizes the importance of the non-linearity of GQG in inducing new phase behaviour. In five dimensions we observed the occurrence of two physical critical points when both cubic and quartic couplings are nonzero, even for vanishing charge. These respectively correspond to the end point of a standard VdW transition and the starting point of a reverse VdW transition. However, for the reverse VdW transition the pressures are smaller than the second critical pressure, and so one cannot choose parameters such that these two end points merge to obtain an isolated critical point: the pressure along the second line of first-order phase transitions does not increase with temperature, unlike along the first coexistence line. For hyperbolic black holes in five and six dimensions there are regions in parameter space that yield negative mass. In the d = 6, k = +1 case, we noted the existence of three and two critical points under the respective conditions that three or two of the parameters (the charge and the two couplings) are non-zero.
Since both coexistence lines are increasing functions of pressure with respect to temperature, it is possible to find parameter choices for which the critical points merge into an isolated critical point. We obtained the critical temperature and volume in terms of the electric charge and couplings in various dimensions up to first order in the quartic coupling λ. For spherical black holes in quartic gravity, the universal relationship of Einstein gravity given by the first term in (4.32) receives dimension-dependent corrections; it is no longer universal. These corrections are negligibly small in four and five dimensions, but in higher dimensions they cause the critical ratio (4.32) to increase. We also analyzed the existence of physical critical points in the grand canonical ensemble in the quartic theory. Obtaining the relevant thermodynamic quantities, we confirmed the presence of a first order phase transition in four dimensions that is absent in the corresponding situation for Schwarzschild black holes, in both cases, i.e. whether the chemical potential or the pressure is held fixed. We also investigated phase transitions in higher dimensions. Our study of black hole thermodynamics will, of course, be modified by the inclusion of Lovelock terms in d > 4 and by standard quasi-topological terms [16,22,101]. While it is conceivable that such terms, in combination with those of the quartic theory, will yield new phase behaviour, their inclusion would considerably broaden the parameter space and lengthen the analysis; since our goal was to isolate the thermodynamic behaviour of black holes in the quartic theory, we leave a study of how the Lovelock terms affect our results to future work. Finally, in the context of AdS/CFT, we computed the ratio of shear viscosity to entropy density η/s for field theories dual to the quartic generalized quasi-topological theory in all dimensions and concluded that in four dimensions the KSS bound holds for choices of the quartic coupling yielding positive mass and temperature. In striking contrast, we found that this behaviour does not hold in five dimensions: the range of coupling required for a physical solution entails a violation of the KSS bound. However, the bound remains valid in four and six dimensions, and we anticipate it is satisfied in higher dimensions. It would be interesting to see if including yet higher-order terms in the curvature can yield black holes satisfying both the KSS bound and all physically reasonable requirements in five as well as other dimensions. Further constraints could be imposed by considering other conditions in the corresponding CFT, such as causality and positivity of energy flux; their implications require further investigation.

The near-horizon expansion for the metric function is f(r) = 4πT(r − r+) + a2(r − r+)² + Σ_{i=3}^∞ ai(r − r+)^i. (B.1) As discussed earlier, the field equations at each order determine the parameters in the expansion in terms of a2, but a2 = f″(r+)/2 itself remains undetermined.
One approach for fixing this free parameter is to use the shooting method, choosing a2 such that the numerical solution for the metric function reaches the known asymptotic solution at large r. Another method [9] entails considering a2 = g(λ). Continuing to higher orders, we find that g^(n)(0) is specified at any arbitrary order provided a_{n+3} does not have a singularity as λ → 0 and a_{n+2} reduces to the expected Einstein value in the same limit. Because the coefficients of the derivatives increase rapidly, the Taylor series does not have a non-zero radius of convergence and the analytic expression for a2 is not valid. However, inserting the g^(n)(0) terms (which implicitly contain derivatives of the temperature) into a Padé approximant yields a satisfactory result. The coefficients of the Taylor series diverge due to the existence of a pole at positive λ; this problem comes from the fact that in our calculations we consider the temperature as a function of the coupling, although it is not a real analytic function. Computation of the Padé approximants at higher orders is quite tedious, although, according to figure 18, for small coupling one can use a low-order Padé approximant to obtain an acceptable outcome. For example, the [2|2] Padé approximants for a2 in various dimensions can be expressed in terms of x = λ/λp (a schematic construction of such an approximant is sketched below).

Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
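The following minimal sketch illustrates the [2|2] Padé construction described in the appendix above. The Taylor coefficients used here are placeholder values chosen for illustration only; in the actual computation they would be the coefficients g^(n)(0)/n! obtained from the field equations, with the approximant evaluated in terms of x = λ/λp.

```python
# Minimal sketch of a [2|2] Pade approximant built from the first few Taylor
# coefficients of a_2 = g(lambda) about lambda = 0.  The coefficient values below
# are placeholders for illustration only, not those of the actual solution.
import numpy as np
from scipy.interpolate import pade

# Hypothetical Taylor coefficients: g(lam) ~ c0 + c1*lam + c2*lam^2 + c3*lam^3 + c4*lam^4
taylor_coeffs = [1.0, -0.8, 0.9, -1.5, 2.8]

# [2|2] Pade approximant: numerator and denominator both of degree 2.
p, q = pade(taylor_coeffs, 2)

lam = 0.05  # a small value of the coupling
print("Taylor sum :", np.polyval(taylor_coeffs[::-1], lam))
print("[2|2] Pade :", p(lam) / q(lam))
```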
Gaze Estimation Using Neural Network And Logistic Regression

Currently, a large number of mature methods are available for gaze estimation. However, most regular gaze estimation approaches require additional hardware or platforms with professional equipment for data collection or computing, which typically involve high costs and are relatively tedious. Besides, their implementation is particularly complex. Traditional gaze estimation approaches usually require systematic prior knowledge or expertise for practical operation. Moreover, they are primarily based on the characteristics of the pupil and iris, using pupil shapes or infrared light and iris glint to estimate gaze, which requires high-quality images shot in special environments and additional light sources or professional equipment. We herein propose a two-stage gaze estimation method that relies on deep learning and logistic regression, which can be applied to various mobile platforms without additional hardware devices or systematic prior knowledge. A set of automatic and fast data collection mechanisms is designed for collecting gaze images through a mobile platform camera. Additionally, we propose a new annotation method that improves the prediction accuracy and outperforms the traditional gridding annotation method. Our method achieves good results and can be adapted to different applications.

INTRODUCTION

Gaze estimation is a technique for detecting and obtaining the direction and position of the observed gaze through hardware and software algorithm analyses [1][2][3][4]. For instance, as illustrated in Fig. 1, a gaze estimation-based application installed in a device would return the predicted gaze location on the screen when the user is looking at the screen. Recently, gaze estimation has been widely used in many scientific research fields, especially in human-computer interaction [5]. With gaze estimation technology, a system was developed around the needs for region-of-interest decompression and display in the context of large image interpretation and analysis in science and medicine [6]. Other application fields cover advertising recommender systems [7], assisted driving [8], psychology [9], the military [10] and other fields [11,12]. As the number of scenarios using gaze estimation technology increases, more convenient, fast and low-cost methods for gaze estimation are required. Gaze estimation also has many applications in medical research. An example is the vision screening test, a popular test used in hospitals for screening potential vision problems and eye disorders [13]. Typically, during the test, a doctor monitors the eye movement of children when the children are attracted to the test pictures. However, many factors, including the non-cooperation of children, limitations of the viewing angle and the varying medical experience of doctors, can affect the results of the vision screening test. For the vision screening test, gaze estimation can be utilized to obtain more accurate results in a fast and convenient way. For instance, using the gaze estimation method proposed herein, without human intervention, mobile devices can automatically recognize the gaze direction of children, or assess whether the gaze of children is located on the testing pictures through the gaze captured by the camera of mobile devices, and can thus aid doctors in obtaining more accurate test results. In this study, we aim to develop a gaze estimation method specialized for mobile devices such as mobile phones.
The main reasons for targeting mobile devices are threefold. First, mobile phones are widely used not only for communication but also in daily life, including making online shopping payments, entertainment, telecommuting and identity recognition. Next, mobile phones receive frequent hardware updates, and most smart phones are capable of supporting computationally intensive algorithms. Moreover, the rapid development of digital cameras in phones has tremendously improved image quality, which enables us to collect reliable gaze images conveniently and economically. Mobile phones thus provide a convenient platform for gaze estimation and have become popular in recent years. Gaze has shown its advantages in human-computer interaction. When one uses a mobile device, gaze serves as a tool for hands-free interaction and has been used as an input modality for tasks including desk control [14], target selection [15] and notification display [16]. Using gaze for interaction control is faster than using the mouse for pointing on the screen; the mouse tends to lag behind gaze by more than 100 ms on average, according to experiments on the correlation between eye and mouse movements [17]. The gaze estimation technique can also be used to recommend relevant or similar goods to a potential customer if the user's gaze is detected to rest on some item. When a user is playing games on a mobile phone, the captured gaze can help the user control the direction of the target movement. The gaze captured by mobile phone cameras can help users unlock their phones using password entry [18] and open an application without using hands. Deep learning is an extremely powerful technique that has developed quickly and is widely used in computer vision [19][20][21], including gaze estimation [22,23]. iTracker [24] is a convolutional neural network (CNN) for gaze estimation, a typical method based on deep learning. Hence, we used the deep learning technique in our proposed gaze estimation method. Furthermore, there have been many open-source datasets for gaze estimation, such as GazeCapture [24], TabletGaze [25], a comprehensive head pose and gaze database [26], ETH-XGaze [27] and RT-GENE [28]. However, these datasets were collected from mixed devices or tablets instead of single phone devices. Also, these datasets lack images in which the participants' gaze is located outside the screen. We wish to develop a gaze estimation method mainly based on mobile phones that exhibits good precision and is applicable to various applications in daily life. Therefore, we collected our own gaze dataset using mobile phones. The appearance and operating system of the device used in this study are similar to those of the most popular daily-used mobile devices. Besides, the participants were all Chinese. In the collection process, we added temporal information and collected images in which the gaze was located outside the screen, which can be used to improve the model. Motivated by existing methods, we propose a gaze estimation method that combines a neural network and logistic regression. The proposed method relies on a two-stage process. In the first stage, a convolutional neural network with a logistic regression layer processes the input gaze pictures and outputs estimated probability vectors of the bin annotation labels in both the horizontal and vertical directions. In the second stage, an additional logistic regression is used to refine the prediction from the neural network.
The proposed two-stage method is different from existing gaze estimation methods, although they share certain similarities. To be specific, for the first stage, a similar method treating the screen horizontally and vertically has been investigated in TabletGaze, where the gaze labels of the data also include both horizontal and vertical coordinates on the screen. Different from TabletGaze, we treat the horizontal and vertical gaze directions separately and split the screen into bins. The labels for the horizontal and vertical directions are bin-index vectors of 1s and 0s instead of single coordinates. The gaze location in TabletGaze is obtained directly by regression, while in the proposed method it is obtained through regression on the probability vectors output by the CNN followed by thresholding. In another paper, Liu [29] studied a logistic regression layer following a CNN for gaze estimation, which also shares certain similarities with our method. In Liu's work, the last classification layer of the CNN is replaced with a logistic regression layer used for classification over all raw image pixels. Liu's method is thus closer to the gridding method, which requires a large number of labels; in contrast, the logistic regression layer following the CNN in our proposed method is used to produce probability vectors corresponding to the bins instead of performing a direct binary classification. In this article, we develop a data-driven model for gaze estimation based on a two-stage process without using hand-engineered features. Specifically, we propose an annotation method that annotates the gaze location by splitting the screen into bins horizontally and vertically instead of using pixels. Compared with traditional annotation methods, the proposed annotation method converts a complex multi-class problem into a binary classification problem over multiple bins, controls the number of labels well and improves the accuracy. The first stage of the proposed method is similar to existing works; the additional logistic regression in the second stage is our major contribution, which further processes the output probability vectors to refine the gaze prediction.

Data collection

For gaze estimation, we first designed a set of automatic and fast data collection mechanisms for collecting gaze data consisting of ordinary images captured by the camera of the designed mobile platform. In this study, we collected 200 frames for each of the 550 participants. The participants were all Chinese, with ages ranging from 20 to 35 years. Collection took about 10 minutes for each participant. The data collection was implemented using a Samsung Galaxy S8+ with a screen resolution of 2220 × 1080 pixels. The screen size and device type are aligned with popular devices used in China. The lighting environment of the data collection mimics natural office working scenarios. To collect gaze images, we preset fixed points on the mobile device screen, which enabled us to obtain the ground truth of the gaze points on the screen easily and conveniently. We set up a collection program in advance that provided instructions to the participants. The participants followed the instructions to prepare for the next step or to look at the screen. We obtained frames from the camera of the mobile device when the participants looked at the fixed points on the screen.
Specifically, once the collection program started, the fixed points began to appear in order on the screen. There were 25 gaze points in total, and each point was presented for 15 seconds on the screen. The camera captured pictures while the participants were instructed to look at the points. To guarantee the quality of the collected images, every step was performed after a voice prompt. Before displaying a new gaze point, there was a break of about 5 seconds. To avoid participant fatigue, after every five gaze points the screen automatically turned black to give the participants a short break of about half a minute. In addition, the participants were encouraged by voice prompts to change their head pose and move their head to different positions relative to the camera. Scenes with different head poses or body postures simulate the different postures in which people use phones in daily life. The gaze capture scene is displayed in the left panel of Fig. 2. Due to the different sitting postures of participants, the distance from participant to screen was not fixed, ranging from 25 to 60 cm. An illustration is shown in the right panel of Fig. 2. We also captured images in which the gaze was located outside the screen to improve the accuracy of gaze estimation. Besides, we captured the data frames in temporal sequence so that time-series information can be added to our model in future research.

Data annotation and pre-processing

Data annotation is necessary for gaze learning tasks. A commonly used method is the gridding annotation method, which divides the screen into grids. Denote the width and height of the screen as W and H, respectively, and set the grid width as g; here, W and H are measured in pixels, and we select an appropriate g such that W and H are divisible by it. We set the value of the grid containing the gaze location to 1 and the others to 0. This method produces (W × H)/g² grid labels, which may cause a computationally intensive problem in practice or result in a complex model and hence affect the accuracy of the model output. Considering the drawbacks of the regular gridding annotation method, we propose a new annotation method to control the number of labels for the gaze image. The proposed annotation method divides the screen into bins instead of grids; hence, the amount of calculation is reduced significantly, and the predicted gaze location is labeled separately in the horizontal and vertical directions. We divide the screen into horizontal and vertical bins of equal length b (see Fig. 3), selecting the bin length such that W and H are divisible by it. This annotation method produces (W + H)/b bins, with corresponding bin references (indices). In this annotation method, all bin references to the left of the gaze location in the horizontal direction are labeled as 1. Similarly, all bin references above the gaze location in the vertical direction are labeled as 1. The remaining bin references are labeled as 0 in the horizontal and vertical directions. Compared with the proposed annotation method, the gridding annotation method yields an annotation vector of larger dimension, producing a larger number of annotation values. The proposed method yields a lower-dimensional annotation vector but requires more processing steps to obtain the predicted gaze location.
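As a concrete illustration of this bin annotation, the sketch below labels a gaze point on a 2220 × 1080 screen. The bin length b = 60 and the convention that the bin containing the gaze point is itself labeled 1 are illustrative assumptions, not specifications taken from the text.

```python
# Minimal sketch of the proposed bin annotation, assuming a 2220 x 1080 screen and
# a bin length b that divides both W and H.  For a gaze point (x, y) in pixels, the
# horizontal label has 1s for all bins up to (and, by assumption, including) the
# gaze bin, and similarly for the vertical label; the remaining entries are 0.
import numpy as np

def bin_labels(x, y, W=2220, H=1080, b=60):
    assert W % b == 0 and H % b == 0, "bin length must divide screen dimensions"
    M, N = W // b, H // b                 # numbers of horizontal / vertical bins
    horiz = np.zeros(M, dtype=int)
    vert = np.zeros(N, dtype=int)
    horiz[: int(x) // b + 1] = 1          # bins up to and including the gaze bin
    vert[: int(y) // b + 1] = 1
    return horiz, vert

h, v = bin_labels(x=1500, y=300)
print(h.sum(), "of", len(h), "horizontal bins set;", v.sum(), "of", len(v), "vertical bins set")
```

With b = 60 this yields 37 + 18 = 55 label entries per image, compared with 37 × 18 = 666 grid labels at the same resolution, which is the label-count reduction discussed above.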
With the proposed annotation method, we essentially convert a multi-class problem into a binary-class problem over multiple bins. We herein focus on the new annotation method. For the training of the neural network, we use only the face and eye patches, so we first pre-process the frames, as illustrated in Fig. 4. For each image, we detected the two eyes and the face on the frame using Haar cascade classifiers [30] from the Open Source Computer Vision Library (OpenCV) [31] and obtained the top-left corner position, width and height of the two eye boxes and the face box. After detection, we cropped the left-eye, right-eye and face images from the frame according to the box positions and sizes, and resized the cropped images to the same scale for convenience of input. The next two subsections introduce the proposed two-stage procedure of gaze estimation, which consists of using the neural network to output the estimation in the two directions and refining the estimation with logistic regression.
Stage-I: process gaze image using neural network
In this subsection, the neural network model is used to extract the features of the eyes and face and to output coordinate probability vectors. Frames of the face, left eye and right eye were cropped from the original pictures. Subsequently, these three parts were used as input to the corresponding convolutional layers to extract high-level features. The first two convolutional layers for the left eye and right eye shared the same weights. After two convolutional layers and pooling layers, the left-eye and right-eye features were each passed through three further convolutional layers, followed by a common fully connected layer. The face frame was input to five convolutional layers followed by two fully connected layers. Finally, all of the features above were concatenated and passed through two fully connected layers, as shown in Fig. 5. The inputs comprised left-eye, right-eye and face patches of size 227 × 227. The sizes of the last two fully connected layers were 128 and M + N. Each convolutional layer included batch normalization [32], which permits much higher learning rates and reduces sensitivity to initialization, and rectified linear units [33], and hence could better learn features and preserve relative-intensity information through multiple layers of feature detectors. Finally, with a sigmoid function, we obtained the vectors output by the neural network. We further divided them into an M × 1 dimensional vector and an N × 1 dimensional vector for the horizontal and vertical directions, respectively, where M and N are the numbers of horizontal and vertical bins. Each vector, produced by the sigmoid activation function, represents the probability that the corresponding bin is predicted to be 1. Here, we obtain probability vectors instead of predicted coordinates. It is noteworthy that a traditional neural network that predicts the coordinates directly outputs a value ranging from 0 up to the screen width or height, which is a wide output range compared with that of the proposed model; our method converts the prediction into a binary classification problem and hence avoids this issue.
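For readers who want a concrete picture of Stage-I, below is a simplified PyTorch sketch of the three-branch network described above (shared early eye convolutions, separate later eye branches, a face branch, and a joint head ending in M + N sigmoid outputs, with M = 111 and N = 54 as used later for our dataset). The channel counts, kernel sizes, pooling choices and layer names are illustrative assumptions; only the overall branch structure, the 227 × 227 inputs, the 128- and (M + N)-unit fully connected layers and the sigmoid output follow the text.

```python
import torch
import torch.nn as nn

class TwoStreamGazeNet(nn.Module):
    """Sketch of Stage-I: shared early eye convolutions, separate later eye
    branches, a face branch, and a joint head with M + N sigmoid outputs."""

    def __init__(self, M=111, N=54):
        super().__init__()
        conv = lambda c_in, c_out: nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=5, stride=2, padding=2),
            nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))
        # First two convolutional blocks shared by both eyes
        self.eye_shared = nn.Sequential(conv(3, 32), nn.MaxPool2d(2),
                                        conv(32, 64), nn.MaxPool2d(2))
        # Three further blocks per eye, not shared
        self.eye_left = nn.Sequential(conv(64, 64), conv(64, 96), conv(96, 128),
                                      nn.AdaptiveAvgPool2d(1))
        self.eye_right = nn.Sequential(conv(64, 64), conv(64, 96), conv(96, 128),
                                       nn.AdaptiveAvgPool2d(1))
        self.eye_fc = nn.Linear(2 * 128, 128)          # common FC layer for both eyes
        # Face branch: five convolutional blocks, two FC layers
        self.face = nn.Sequential(conv(3, 32), conv(32, 64), conv(64, 96),
                                  conv(96, 128), conv(128, 128),
                                  nn.AdaptiveAvgPool2d(1))
        self.face_fc = nn.Sequential(nn.Linear(128, 128), nn.ReLU(inplace=True),
                                     nn.Linear(128, 64), nn.ReLU(inplace=True))
        # Joint head: 128-unit FC layer, then M + N sigmoid outputs
        self.head = nn.Sequential(nn.Linear(128 + 64, 128), nn.ReLU(inplace=True),
                                  nn.Linear(128, M + N), nn.Sigmoid())
        self.M, self.N = M, N

    def forward(self, left_eye, right_eye, face):
        l = self.eye_left(self.eye_shared(left_eye)).flatten(1)
        r = self.eye_right(self.eye_shared(right_eye)).flatten(1)
        eyes = torch.relu(self.eye_fc(torch.cat([l, r], dim=1)))
        f = self.face_fc(self.face(face).flatten(1))
        p = self.head(torch.cat([eyes, f], dim=1))
        return p[:, :self.M], p[:, self.M:]            # horizontal / vertical probability vectors

# Example usage with 227 x 227 patches
net = TwoStreamGazeNet()
x = torch.randn(2, 3, 227, 227)
ph, pv = net(x, x, x)
print(ph.shape, pv.shape)   # torch.Size([2, 111]) torch.Size([2, 54])
```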
Stage-II: fitting network output using modified logistic regression
Based on the neural network output, we use an additional logistic regression step to refine the estimation results in the second stage. Because the annotation of an image is a vector composed of an ordered subsequence of 1s followed by 0s, we apply logistic regression with a modified sigmoid function to process the output of Stage-I, matching the activation function of the neural network output layer, i.e. the sigmoid function. The same model is applied separately to the output vectors in the horizontal and vertical directions. The purpose is to find the boundary position between the two subsequences, so that the logistic regression model can be used for classification and the target position obtained from the classified data. We use the vector output by the neural network for one direction, such as the horizontal direction, as the data for fitting the sigmoid function, see Fig. 6. The fitting data are values ranked from large to small, because the labeled data consist of a subsequence of 1s followed by a subsequence of 0s. According to these characteristics, we modify the sigmoid function as in equation (1), in which the sum of x_i p_i is the mathematical expectation of the output variable from the neural network, p_i represents the probability variable of the neural network output, and K is the number of variables in one direction. Through x, we process the variable values by mean normalization. Meanwhile, to match the fitting data with the shape of the curve, the sign of the variable x in the function is flipped so that the curve is mirrored left to right. In equation (1), a is a tuning parameter that affects the steepness of the fitted curve but not the output results; therefore, it can be set freely according to our needs. The method above is applied to the output vector in the horizontal and vertical directions, respectively. The process of finding the target point in the horizontal direction is shown in Algorithm 1. We set the median value between the maximum and minimum values of the value range as the threshold for classification with the logistic regression. We iterate through the components of the probability vector and compare each with the threshold until the first point crossing the threshold is found. We define this point as the mutation point and take it as the target point. The process for finding the target point in the vertical direction is analogous. In Fig. 7, the relative points are mapped to points on the screen. We obtain the actual position coordinates by revivifying the relative position point and calculate the predicted gaze position coordinates T_x and T_y, the predicted gaze positions in the horizontal and vertical directions, respectively. Recall that M and N represent the numbers of bins in the horizontal and vertical directions, and W and H represent the width and height of the screen.
EXPERIMENT
In this section, we introduce the datasets on which our method is evaluated and detail the settings of our experiments. We then present our results with the two annotation methods, evaluate the performance of the proposed method on our own dataset and on the GazeCapture dataset, and analyze the prediction errors of our method.
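As a rough illustration of the Stage-II procedure just described, the sketch below fits a left-right flipped sigmoid to one direction's probability vector, thresholds it at the midpoint of its value range, and maps the resulting "mutation point" bin back to a screen coordinate. The exact forms of the modified sigmoid in equation (1) and of the revivification equations are not legible in this copy, so the flipped_sigmoid parameterization and the bin-centre mapping used here are assumptions, and the function and variable names are ours.

```python
import numpy as np
from scipy.optimize import curve_fit

def flipped_sigmoid(i, c, a):
    """Decreasing (left-right flipped) sigmoid over bin indices i.
    c is the transition location, a controls the steepness (a free tuning parameter)."""
    return 1.0 / (1.0 + np.exp(a * (i - c)))

def stage_two_direction(p, length, n_bins):
    """Fit the flipped sigmoid to one direction's probability vector p,
    threshold at the midpoint of its value range, and return the predicted
    coordinate (centre of the first bin dropping below the threshold).
    `length` is the screen width or height in pixels, `n_bins` is M or N."""
    idx = np.arange(len(p), dtype=float)
    # Fit c (transition point) and a (steepness); initial guesses are arbitrary.
    (c, a), _ = curve_fit(flipped_sigmoid, idx, p, p0=[len(p) / 2.0, 1.0], maxfev=5000)
    fitted = flipped_sigmoid(idx, c, a)
    threshold = (fitted.max() + fitted.min()) / 2.0       # midpoint of the value range
    below = np.flatnonzero(fitted < threshold)
    k = below[0] if below.size else len(p) - 1            # "mutation point" bin index
    bin_len = length / n_bins
    return (k + 0.5) * bin_len                            # centre of the predicted bin (assumed mapping)

# Example: a noisy step-like probability vector for the horizontal direction
rng = np.random.default_rng(0)
p_h = np.clip(np.r_[np.ones(60), np.zeros(51)] + 0.1 * rng.standard_normal(111), 0, 1)
T_x = stage_two_direction(p_h, length=2220, n_bins=111)
print(round(T_x, 1))   # predicted horizontal gaze coordinate in pixels
```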
Setup
For training the gaze model, we divided the gaze dataset into training and test sets. The participants were divided into 11 groups: six groups for the training set, two groups for validation and three groups for the test set, as listed in Table 1. The participants were assigned to groups in sequence so as to balance the gender proportion and the proportion of participants wearing glasses in each group; each group contained 50 participants. For comparison, we also evaluated our method on the GazeCapture dataset. This dataset was captured with iPhones and iPads and contains 1,490,959 frames from 1471 subjects. The dataset is divided into training, validation and test sets of 1271, 50 and 150 subjects, respectively. The network input contained three parts: the left-eye frames, right-eye frames and face frames, each of size 227 × 227. For our gaze dataset, we set the numbers of horizontal and vertical bins to 111 and 54, respectively. For the GazeCapture dataset, we set the bin number to 100 in both the horizontal and vertical directions. The model was implemented in Python with the PyTorch framework and the compute unified device architecture (CUDA). In the training procedure, the initial learning rate was 0.001, and a stochastic gradient descent optimizer with a momentum of 0.9 and a weight decay of 0.0001 was used. Model training ran on Ubuntu 18.04 with 12 3.2 GHz i7-8700 CPU cores, 32 GB memory, and two GPUs, an NVIDIA GeForce RTX 2070 and a Tesla K40.
Results and comparison
We evaluate the accuracy of the proposed method via the error from the ground-truth location to the predicted gaze location on the screen. We preset fixed points on the mobile device screen and regard these point locations as the ground truth when participants watch them. We compute the error as the Euclidean distance between the two locations, error = sqrt((x0 − Tx)^2 + (y0 − Ty)^2), where (x0, y0) represents the ground-truth location coordinates and (Tx, Ty) is the predicted gaze location. To assess the proposed annotation method, we compared its performance with the gridding annotation method. For the gridding annotation method, the logistic regression of Stage-II is not required after the neural network, so the target point is obtained from the neural network directly. We evaluate the maximum, minimum and mean errors of the models using the two annotation methods; the results are reported in centimeters. Note: GazeMP denotes our collected dataset; M1 and M2 denote the models with the gridding annotation method and the proposed annotation method, respectively; the maximum and minimum errors were evaluated by group; the results of M3 are those reported in the GazeCapture paper [24]. In Table 2, the first row gives the results of M1 and M2 on our collected dataset. We evaluated the maximum and minimum errors by participant group: they are the largest and smallest group-mean errors over all groups. The mean error of M2 was smaller than that of M1, and M2 also yields better results in terms of the maximum and minimum errors. The neural network in M2 has fewer outputs than that in M1, which reduces the number of predicted targets. In addition, M2 uses the sigmoid function, for which a high relevance exists between the variables, while M1 does not.
Hence, M2 can be expected to improve the prediction accuracy. We examined the participant groups with large errors in our dataset and found that the groups with large prediction errors contained more participants wearing glasses. It may be necessary to balance the proportion of participants wearing glasses across the training and test datasets and across all participant groups. We also evaluated the two annotation methods on the GazeCapture dataset [34] for comparison; the results are shown in the second row of Table 2. The mean error of M1 was 2.42 cm, whereas the mean error of M2 was 2.23 cm, which is better than M1; the maximum and minimum errors of M2 were also better than those of M1. Neither M1 nor M2 achieved ideal performance on the GazeCapture dataset compared with the results reported in the GazeCapture paper [24]. For the GazeCapture data, the proposed annotation method also outperforms the gridding method on both the iPad and iPhone platforms. Specifically, the mean gaze prediction errors of M1 and M2 on the iPhone data were 2.20 and 2.09 cm, respectively, much better than on the iPad data. One possible explanation is that the gaze location in our proposed method is predicted to lie in a square area whose side equals the bin length, so the error is related to the bin length. Our predicted gaze location is the center point of this square area and, because the bin number we set was the same on the iPad and the iPhone, the bin width on the iPad is larger than on the iPhone, which may cause the larger error on the iPad.
Prediction error analysis
The results indicate the gaze prediction errors at different locations, see Fig. 8. According to the error distribution, larger errors primarily occurred when the points were close to the margin of the screen, and the error increased with the distance to the screen center. The result for the center point was not the best of all prediction results; good predictions were obtained for points close to the center point. Observing the error distribution, we found poor gaze predictions for points close to the margin of the screen, which is related to our collected dataset: we collected only a small number of frames of people looking at points around the screen margin, which might adversely affect the gaze prediction there. Our gaze estimation method did not perform perfectly in terms of prediction error, and possible explanations are as follows. Our dataset was collected on a mobile phone with our designed data collection mechanism, whereas the GazeCapture dataset was captured on mixed platforms, including iPhones and iPads with different screen sizes. Besides, we collected high-quality images in our dataset and designed the annotation method according to our data: when annotating, we adjusted the bin width according to the screen size of our collected dataset instead of setting a fixed number of bins as on the GazeCapture dataset, which may affect the prediction results. In addition, the predicted gaze point is located at the center of a square area with side equal to the bin length, which makes our method more robust. In the future, we will include information on facial coordinate position [35], head pose [36,37] and time-sequence information, such as eye optical flow, as inputs to the neural network to further reduce the prediction error and hence improve the prediction accuracy of the proposed model.
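One way to see why the error scales with the bin length, as discussed above, is a worst-case quantization bound. Under our illustrative assumption that the correct bin is identified and the true gaze point may lie anywhere inside the predicted b × b cell whose centre is returned:

```latex
e_{\max} = \sqrt{\left(\frac{b}{2}\right)^{2} + \left(\frac{b}{2}\right)^{2}} = \frac{b}{\sqrt{2}} \approx 0.71\, b
```

Because GazeCapture is annotated here with a fixed 100 bins per direction on both devices, the physically larger iPad screen implies a larger bin length b in centimeters and hence a larger irreducible error, consistent with the better iPhone results reported above.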
CONCLUSIONS
In this article, we proposed a two-stage gaze estimation method based on a neural network and logistic regression, where the neural network is used to process the input pictures and output predicted probability vectors of gaze labels, and the logistic regression is used to refine the prediction from the neural network. We also designed a dataset collection mechanism and built our own dataset. The proposed method can be widely used on various mobile devices without additional hardware or systematic prior knowledge. We demonstrated that the two-stage gaze estimation method combined with the new annotation approach significantly improves the gaze estimation accuracy. Furthermore, by changing the number of annotation bins, the bin length can be adjusted to the accuracy needs of different applications.
Data Availability
Currently, the data underlying this article cannot be shared publicly owing to privacy concerns and ongoing study. Once the project has ended, the data will be made publicly available, with ethical authorization, in the near future. The original data should be used only for scientific research purposes. The implementation of the proposed method will also be made publicly available on GitHub.
Funding
National Natural Science Foundation of China (11901013, 12075011); Beijing Natural Science Foundation (1204031, 7202093); Fundamental Research Funds for the Central Universities.
6,605.2
2021-05-12T00:00:00.000
[ "Computer Science" ]
Quantitative Assessment of the Arm/Hand Movements in Parkinson’s Disease Using a Wireless Armband Device We present an approach for quantitative assessment of the arm/hand movements in patients with Parkinson’s disease (PD), from sensor data acquired with a wearable, wireless armband device (Myo sensor). We propose new Movement Performance Indicators that can be adopted by practitioners for the quantitative evaluation of motor performance and support their clinical evaluations. In addition, specific Movement Performance Indicators can indicate the presence of the bradykinesia symptom. The study includes seventeen PD patients and sixteen age-matched controls. A set of representative arm/hand movements is defined under the supervision of movement disorder specialist. In order to assist the evaluations, and for progress monitoring purposes, as well as for assessing the amount of bradykinesia in PD, a total set of 84 Movement Performance Indicators are computed from the sensor readings. Subsequently, we investigate whether wireless armband device, with the use of the proposed Movement Performance Indicators can be utilized: (1) for objective and precise quantitative evaluation of the arm/hand movements of Parkinson’s patients, (2) for assessment of the bradykinesia motor symptom, and (3) as an adequate low-cost alternative for the sensor glove. We conducted extensive analysis of proposed Movement Performance Indicators and results are indicating following clinically relevant characteristics: (i) adequate reliability as measured by ICC; (ii) high accuracy in discrimination between the patients and controls, and between the disease stages (support to disease diagnosis and progress monitoring, respectively); (iii) substantial difference in comparison between the left-hand and the right-hand movements across controls and patients, as well as between disease stage groups; (iv) statistically significant correlation with clinical scales (tapping test and UPDRS-III Motor Score); and (v) quantitative evaluation of bradykinesia symptom. Results suggest that the proposed approach has a potential to be adopted by physicians, to afford them with quantitative, objective and precise methods and data during clinical evaluations and support the assessment of bradykinesia. inTrODUcTiOn Contemporary approach to evaluation of the patient's condition in Parkinson's disease (PD), as well as assessment of the rehabilitation effectiveness, is based on the clinical assessment tools and evaluation scales, such as Hoehn and Yahr (HY) (1) and Unified Parkinson's Disease Rating Scale (UPDRS) (2). However, although beneficial and commonly used, those scales are descriptive (qualitative), primarily intended to be carried out by a trained neurologist, and are prone to subjective rating and imprecise interpretation of patient's performance. Recent developments in the field of affordable sensing technologies have a potential to improve and support traditional evaluation techniques, aiming at defining quantitative movement indicators to assist practitioners and clinicians. Various types of wearable sensors have been proposed in the literature for the measurement and assessment of the arm/hand movements: accelerometers (3,4), gyroscopes (5,6), magnetic sensors (7,8), force sensors (9,10), and inertial sensors (11). However, these sensor systems only modestly contribute to the arm/hand movement assessment. 
Specifically, the use of one or two isolated sensors in motion acquisition restricts the movement quantification, due to the limited amount of the collected data. More informative sensors are the ones that measure muscle activity, and the standard approach for obtaining the muscle activity information is the placement of the surface Electromyography (EMG) electrodes on the skin, which detect the electrical potential generated by muscles. The main drawback of the standard EMG electrodes is the wired connection with a device for EMG signal representation. Consequently, muscle activity tests are available only in the hospital environment. The analysis of the muscle activity is reported in some recent studies concerning PD (12)(13)(14). The authors in Ref. (12,13) particularly observe the muscles' behavior during deep brain stimulation. They report that Parkinson's disease symptoms change the EMG signal properties and suggest that EMG analysis is able to detect differences between the deep brain stimulation settings. The authors in Ref. (14) use the EMG data, along with the readings from the accelerometer, to successfully differentiate essential tremor from Parkinson's disease. However, all these studies collect the EMG data using surface electrodes relying on the wired system. The authors have suggested many different features to characterize the EMG signals in the time domain (13)(14)(15)(16)(17)(18)(19)(20)(21) and frequency domain (15,16,19,21). The two most common approaches for the EMG signal analysis are the wavelet transform (14,21) and the window approach (15,19). In our study, we have adopted the window approach and the features suggested in the literature that emphasize the amplitude characteristics of the EMG signal. Such choice has been convenient for our case as it will be explained in detail in the Results section. In our previous studies (22,23), we have used a visionbased sensor (Kinect device) to quantify full-body movements (gait and large-range upper body movements) and a sensor glove (CyberGlove II device) to quantify hand movements of Parkinson's patients. We proposed novel scores called Movement Performance Indicators that were extracted directly from the sensor data and quantify the symmetry, velocity, and acceleration of the movement of different body/hand parts. Our approach for the hand movement characterization, based on the sensor glove data, has demonstrated significant results and ability to support the diagnosis and monitoring evaluations in PD (23). Still, due to the high cost, it does not fit into our concept of a low-cost rehabilitation system for movement analysis. Another limitation arises from the right-hand design of the sensor glove device. This implies that only right-hand movements can be tested; and hence, only right side affected patients are taken into account. Consequently, left-right side analysis cannot be conducted as an important indicator of the disease progression. In this study, we focus on quantification of the arm/hand movements from measurements acquired with a wireless wearable armband device-the Myo sensor, 1 in order to investigate whether the armband sensor can assess fine movements and be used as a suitable alternative to the sensor glove. This device is placed on the forearm and outputs Electromyography (EMG) data from eight channels. EMG data provide insight into the muscle activity information. Impaired muscle activity and restriction of motor functions are common characteristics of PD. 
The armband device also contains a three-axis accelerometer and a three-axis gyroscope, which output acceleration and angular velocity information (Inertial Measurement Unit (IMU) data), respectively. Accelerometers and gyroscopes have been widely tested in studies related to PD and have shown significant potential toward quantification of PD symptoms (14, 24-26). The authors in Ref. (24) use accelerometers, while the authors in Ref. (26) use both accelerometers and gyroscopes, to observe gait characteristics in PD patients. They state that freezing-of-gait episodes can be detected using the sensor data, along with feedback about gait performance. The study (25) focuses on the quantification of bradykinesia from the finger-tapping movement using two gyroscopes placed on the fingers. Although the results of bradykinesia quantification using gyroscope data are promising, the analysis is limited to one movement and two sensors. The overall conclusion is that signals from the accelerometer and gyroscope demonstrate meaningful patterns in patients' movements and reveal the presence/intensity of the disease motor symptoms. As in the case of EMG signals, we concentrate on accelerometer and gyroscope signal features that take into account the signal amplitude characteristics. The wireless armband device has been launched very recently, and only a few conceptual studies report preliminary results concerning its inclusion into medical protocols (27-29). However, to the best of our knowledge, it has not previously been used in any study regarding the quantification of arm/hand movements in PD assessment. Our study goes beyond the scope of the conceptual studies published so far by introducing comprehensive processing modules and interpretation of the sensor measurements from the armband device. We propose new scores for arm/hand movement characterization denoted as Movement Performance Indicators (hereinafter, MPIs). The MPIs are intended to support diagnosis and monitoring evaluations, as well as the assessment of motor symptoms, with a special emphasis on bradykinesia. (Table 1 lists the tested movements: 1. Rotation of the Hand with Elbow Extended, RH-EE; 2. Rotation of the Hand with Elbow Flexed at 90°, RH-EF; 3. Object Grasping, Pick and Place with Easy Load, GPP-EL; 4. Object Grasping, Pick and Place with Heavy Load, GPP-HL; 5. The Proximal Tapping Task, TT-P; 6. The Distal Tapping Task, TT-D.) The MPIs we propose are built upon both domain-specific knowledge (provided by a movement disorder specialist) and data analysis. They are primarily designed in accordance with clinically relevant aspects and tested against official clinical tests and scales. We thus propose an affordable, reliable and portable sensor system, along with an approach for movement quantification, with the potential to be used as a support for conventional motor performance evaluations and to enable home rehabilitation. In this article, we present extensive experiments and analysis conducted to address the following aspects: (1) quantitative evaluation of the arm/hand movements of Parkinson's patients, (2) objective assessment of the bradykinesia motor symptom and (3) investigation of whether the armband sensor can be an adequate low-cost alternative to the sensor glove, given the latter's high cost. Aspects (1) and (2) are worth investigating in the treatment of Parkinson's disease, but their direct assessment is not possible considering the limited resources and standard techniques used by doctors.
Participants Seventeen Parkinson's disease patients (age = 63.5 ± 8.3, 2 disease duration = 4.7 ± 2.5, HY 3 disease stage = 2.59 ± 0.93, UPDRS-III 4 = 31.82 ± 15.43 during ON-period) have been tested in this study. Patients are examined during their first ON-period in the morning. For ten patients, the right hand is affected by the disease, while seven patients have the left hand affected. A control group is formed by sixteen age-matched volunteers without any history of neurological or movement disorder. All subjects have been examined under the same conditions and instructed by a neurologist and therapists. This study was approved by the local ethics committee according to the Declaration of Helsinki. After the experimental procedures were explained, all subjects signed written informed consent forms. experimental Protocol The experimental protocol, designed by the movement disorder specialists (Table 1; Figure 1), includes six exercises performed with the left and right hand: four arm/hand movements and two tapping test movements, well-established experimental paradigm designed for bradykinesia assessment (30). The tested movements are chosen to closely reflect the patient's activities of daily living that engage forearm muscles. The movements have been performed with the left and right hand, respectively, and acquired using the armband sensor. The subjects were instructed to perform the movements as fast as possible. The medical procedure adopted in PD analysis includes a set of movements/exercises, in order to allow doctors to make a qualitative evaluation of the disease stage and progress. The first two exercises emulate the bulb screwing/unscrewing in two variations: Rotation of the Hand with Elbow Extended (RH-EE, Figure 1A) and with Elbow Flexed at 90° (RH-EF, Figure 1B). Those movements were acquired during the period of 10 s. The following two exercises relate to the object Grasping, Pick and Place in the case of Easy Load (GPP-EL, Figure 1C) and Heavy Load (GPP-HL, Figure 1D). Those movements were repeated five times. The last two exercises represent the tapping test. The test consists of the proximal and distal tapping tasks using a specially designed board as the one proposed in Ref. (30). The Proximal Tapping Task refers to the alternate pressing of two large buttons located 20 cm apart with the palm of the hand, during the 30 s interval (TT-P, Figure 1E). The Distal Tapping Task is related to the alternate pressing of two closely located buttons (3 cm apart) with the index finger while the wrist is fixed on the table during 30 s (TT-D, Figure 1F). The acquired data consist of: (i) EMG data from 8 channels (sensor data rate 200 Hz) and (ii) three-axes IMU data-acceleration and angular velocity (sensor data rate 50 Hz). The armband sensor consists of eight EMG channels labeled as shown in Figure 2A. During the experiments, the sensor was placed in the same position for every subject ( Figure 2B, right hand). It can be seen that for the right-hand channels 3, 4, and 5 cover the upper forearm (extensors muscles), channels 7, 8, and 1 are placed on the lower forearm (flexors muscles), channel 2 covers the external forearm muscles, while the channel 6 is placed on the internal forearm muscles. As for the left hand, extensors and flexors are covered with the same groups of channels, while the channels 2 and 6 are replaced between internal (channel 2) and external (channel 6) forearm muscles. 
Data Processing
In this section, we explain the design of the seven basic measurements on which the MPIs are grounded. The choice of basic measurements is based on the properties of the sensor signals in the time domain (signal amplitude). The readings from the EMG electrodes, as well as the outputs from the accelerometer and gyroscope, are used for movement characterization. Before the basic measurements are calculated, the signals are preprocessed to remove measurement noise and to perform temporal segmentation. In our experiments, all signals were filtered with a regular Butterworth low-pass filter. The cutoff frequencies and filter order were chosen in accordance with the signal sampling rate and the frequency characteristics of the meaningful signal content. EMG signals are filtered using a 4th-order filter with a cutoff frequency of 20 Hz. For the accelerometer and gyroscope signals, the cutoff frequency is set to 5 Hz and the filter order to 3. The segmentation procedure is required to remove the non-informative signal parts at the beginning and end of the signals. For this purpose, a threshold based on the signal energy in the time domain has been adopted (0.4 times the maximum signal energy). Since EMG signals are highly non-stationary, the most common approach to processing them is the window approach (15, 19). This method implies the temporal segmentation of the signal into sliding windows and calculation of the basic measurements for each separate window (Figure 3). The same technique has been applied to the signals obtained from the accelerometer and gyroscope. The main benefit of the window analysis is to characterize the temporal evolution of the basic measurements during the movement. Different window and overlap lengths were tested and the results were not sensitive to these choices. We set the window length to 200 ms for EMG signals and 800 ms for accelerometer and gyroscope signals. The length of the overlapping segment usually amounts to 25-50% of the window length, as suggested in Refs (15, 19); we choose an overlap of 25% of the window size, hence 50 ms for EMG signals and 200 ms for accelerometer and gyroscope signals.
Quantification of the EMG Signals
Various measurements have been proposed in the literature for characterization of the EMG signal (15-19). Our choice of suitable basic measurements from the EMG signal relies on its amplitude properties; hence, we tested the amplitude-based measurements most often used in the literature. Thus, we quantified the obtained EMG signals using the Mean Absolute Value (Emg-mav) (1), Variance (Emg-var) (2) and Waveform Change (Emg-wc) (3). In equations (1)-(3), Wn represents the window length, expressed in signal samples.
Quantification of the Signals from an Accelerometer and Gyroscope
The accelerometer (ACC) and gyroscope (GYRO) signals are quantified using the same time-window approach as for the EMG signals. The choice of basic measurements is different, in accordance with the signal characteristics and the properties of its transformations (such as the signal derivative). The accelerometer and gyroscope signals are not processed in their original form.
Instead, the basic measurements are extracted from their time derivatives, since the signal derivative enlarges the differences between controls and patients. The extracted basic measurements are the Simple Square Integral (SSI) and the Range (RAN), given by equations (4) and (5), respectively, where ẋ(t) represents the accelerometer or gyroscope signal derivative. The basic measurements specified above are directly related to the signal amplitude: a larger amplitude indicates a larger value of the basic measurements defined by equations (4) and (5).
Data Analysis
The MPIs are designed to emphasize the largest differences between patients and controls. We investigate whether the EMG data from particular channels are more discriminative than others. A comparative statistical analysis between patients and controls across the six collected movements and eight EMG channels has been conducted using the Wilcoxon rank sum test. In addition, we consider the difference of the group mean values as an indicator of the difference between the groups of interest. The same statistical test is conducted for the accelerometer and gyroscope sensor data. These have three axes and, depending on the particular movement, the data from one axis are more relevant than the data from the remaining two. Consequently, for each movement, the corresponding axis of interest is adopted based on the statistical analysis using the Wilcoxon rank sum test and a comparison between group mean values.
Reliability Analysis
In order to test the reliability of the extracted MPIs, the split-half method for reliability analysis (31) has been applied. The split-half method divides the conducted tests into two parts and correlates the scores on one half of the test with the scores on the other half. Thus, the split-half method estimates the reliability based on the repetitions within the same trial. The reliability of the extracted MPIs is assessed using the Intraclass Correlation Coefficient (ICC) (31). The ICC takes values in the range [0-1], with values closer to 1 indicating higher reliability.
Dimensionality Reduction
Finding lower-dimensional representations that still preserve the most relevant information contained in the original data is key for many machine learning and data mining applications. It results in reduced data needs and computational cost, and often even increases the predictive performance of the learned models. Therefore, we have used two popular approaches for dimensionality reduction and feature selection, LDA (32) and LASSO regression (33), to find the most relevant MPIs. LDA is a dimensionality reduction approach that finds the most discriminative principal components (linear combinations of features), but it can also rank the features by their importance. LASSO regression performs feature selection by assigning zero weights to less relevant features, giving them zero influence on the targeted outcome. Theoretically, LASSO regression is better suited to non-Gaussian data than LDA, but in practice they have similar predictive performance. Both algorithms have the same computational complexity, cubic in the number of features.
Classification
We want to investigate how the designed MPIs can be used to differentiate between the groups of interest. We analyze two distinct classification problems in order to support diagnosis (patients against controls) and progress monitoring (disease stages).
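The displayed formulas for equations (1)-(5) do not survive in this copy, so the sketch below implements the standard textbook forms that the measurement names suggest (mean absolute value, variance, waveform change as the summed absolute sample-to-sample difference, simple square integral and range of the time derivative), together with the windowing described above (200 ms windows with a 50 ms overlap for EMG at 200 Hz; 800 ms windows with a 200 ms overlap for IMU data at 50 Hz). These forms, the placeholder signals and the function names are our assumptions and may differ in detail from the published equations.

```python
import numpy as np

def sliding_windows(x, win, hop):
    """Yield successive windows of `win` samples; hop = window length minus overlap."""
    for start in range(0, len(x) - win + 1, hop):
        yield x[start:start + win]

def emg_features(w):
    """Assumed standard forms for the EMG basic measurements over one window."""
    mav = np.mean(np.abs(w))                 # Emg-mav, eq. (1)
    var = np.sum(w ** 2) / (len(w) - 1)      # Emg-var, eq. (2)
    wc = np.sum(np.abs(np.diff(w)))          # Emg-wc (waveform change), eq. (3)
    return mav, var, wc

def imu_features(w, dt):
    """Assumed forms for the ACC/GYRO basic measurements, computed on the time derivative."""
    dw = np.diff(w) / dt                     # numerical time derivative
    ssi = np.sum(dw ** 2)                    # Simple Square Integral, eq. (4)
    ran = dw.max() - dw.min()                # Range, eq. (5)
    return ssi, ran

# Example: EMG at 200 Hz -> 200 ms windows (40 samples) overlapping by 50 ms (10 samples),
#          i.e. a hop of 30 samples; IMU at 50 Hz -> 800 ms windows (40 samples)
#          overlapping by 200 ms (10 samples), i.e. a hop of 30 samples.
emg = np.random.randn(2000)          # placeholder for one filtered EMG channel
gyro = np.random.randn(500)          # placeholder for one filtered gyroscope axis
emg_mav = [emg_features(w)[0] for w in sliding_windows(emg, 40, 30)]
gyro_ran = [imu_features(w, 1 / 50)[1] for w in sliding_windows(gyro, 40, 30)]
print(len(emg_mav), len(gyro_ran))
```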
The diagnosis task is posed as discriminating the PD patients from the healthy controls, based on the measured values of MPIs, which is a well-known binary classification problem. We define the monitoring task as discerning among the three severity stages in PD patients, which is the multiclass classification problem. Multi-class disease stage classification problem we reduced to three simple binary classification problems, one for each stage, in a common "one vs all" manner (34). To obtain the desired classifiers for diagnostic and monitoring purposes, we employed six common classification approaches: Logistic Regression, Decision Trees, Support Vector Machines (with RBF kernel), K-nearest neighbors (with number of nearest neighbors k = 10), Naive Bayes, and Neural Networks (multilayer perceptron with two hidden layers containing four nodes each). Comparison between Right and Left Side To investigate which MPIs illustrate the differences in the performance of the left and right hand at patients and similar performance of the both hands in controls, statistical comparison has been performed. The choice of statistical tests depends on the data distribution. We performed the Kolmogorov-Smirnov test to assess the normal distribution hypothesis. The test rejected the normal distribution hypothesis with a 0.05 significance level. Consequently, two-sided Wilcoxon rank sum test is applied between the MPI values obtained with the left and right hand. There are forty-two MPIs in total for each hand-seven different (1). The corresponding MPI is considered as relevant for the left-right side analysis between patients and controls if it satisfies the following conditions: (i) patients group: (a) if the difference between the MPI values for the left and right hand is statistically significant (p < 0.05) and (b) the left-hand MPI values are larger than the right-hand MPI values (for the right side affected patients) and the opposite for left-side affected patients and (ii) controls: if the difference between the MPI values for the left and right hand is not statistically significant (p > 0.05). The same statistical tests were conducted for the left-right side analysis between disease stages. Statistical investigation is based on the following conditions: (i) the difference between the MPI values of the left and right hand is statistically significant (p < 0.05); (ii) the left-hand MPI values are larger than the right-hand MPI values (for the right side affected patients) and the opposite for left-side affected patients; and (iii) MPI values decrease with more severe disease stage, while their differences between the left and the right hand increase. Correlation Analysis The correlation analysis is carried out between the proposed MPIs and tapping test (30) and UPDRS-III clinical scale (2). The tapping-test outcomes and UPDRS-III values are obtained as a result of a neurologist's evaluation. The tapping test consists of two tapping tasks-proximal and distal tapping task explained in the Section 2.2. In the case of UPDRS-III, we take into account the general UPDRS-III score (items 18-31 of UPDRS scale (2)) and UPDRS-III subscore related to the examination of the bradykinesia in the hand movements (items 23-25 of the UPDRS scale (2)). Correlations were calculated using Spearman correlation coefficient ρ (higher values of ρ indicate better correlation), along with the p-value. 
If the correlation coefficient ρ is in the range [0.5-1] and p-value less than 0.05, the corresponding MPI is correlated with the tapping test (positive correlation). On the other side, the correlation coefficient ρ between −1 and −0.5 and p-value less than 0.05, indicate the correlation of the particular MPI with UPDRS-III scale (negative correlation). Figure 4 illustrates the mean absolute value and the standard deviation graph of Emg-mav basic measurement (1) calculated for patients and controls across eight EMG channels for RH-EE movement. The results underline the largest mean value differences between controls and patients on the channel 2 in the case of the right-hand movements and channel 6 for the left-hand movements. Figure 2 shows that those electrodes cover the same group of external forearm muscles in the case of both hands. In addition, channels 3 and 4 (right-hand movements) and channels 4 and 5 (left-hand movements) highlight the large differences, as well (external and upper flexor muscles). The data from all channels demonstrated statistically significant difference between patients and controls (p < 0.01). However, in the following analysis, we take into account channels that emphasize the largest difference between group mean values and consequently, the extraction of the basic measurements has been performed only for the signals from channel 2 for the right-hand movements and from channel 6 for the left-hand movements. The same results are confirmed for remaining EMG basic measurements (2 and 3) and all other collected movements. Figure 5 illustrates the mean absolute value and the standard deviation graph of Acc-ran and Gyro-ran basic measurement (5) calculated for patients and controls across three axes for RH-EE movement. The results underline the largest mean value differences between controls and patients on the Y-axis for Acc-ran and on the X-axis for Gyro-ran in the case of both, right-and left-hand movements. The same analysis is performed for the other ACC and GYRO basic measurement (4) and all other collected movements. In contrast to EMG channels, the axis of interest for ACC and GYRO basic measurements is different across movements, but for the particular movement, the axis of interest is the same for right and left-hand movements. Preliminary comparison between PD and controls The data from all axes demonstrated the statistically significant difference between patients and controls (p < 0.01). However, in the following analysis, for each movement, we take into account the axis that emphasizes the largest difference between group mean values. In total, we have extracted seven basic measurements ( Table 2) for each movement. We characterize twelve movements-six different movements (Table 1 and Figure 1) were performed by both left and right hand. Consequently, based on the seven basic measurements calculated for each movement, we obtained a total set of 84 Movement Performance Indicators (MPIs) for all movements (seven basic measurements times twelve movements). The design of these MPIs was grounded on the information provided by neurologists and therapists with the goal of delivering quantitative information about subject's performance. In the following sections, we will reveal which MPIs are the most relevant and informative, from the viewpoint of the particular clinical aspects. Quantitative assessment of Bradykinesia symptom In this section, we investigate whether our proposed MPIs can reveal the presence of bradykinesia symptom in patients. 
Two main properties of bradykinesia symptom are (1) slowness of the movements and (2) the progressive decrease in amplitude of sequential movements (so-called "sequence effect"). Figure 7 illustrates the bradykinesia pattern, relying on the designed MPIs. The difference in movement speed between patients and controls is demonstrated for GPP-HL movement since this movement was repeated five consecutive times during the experiment. Figure 7A shows the temporal evolution of the EMG-mav over window segments, for patients and controls, during the GPP-HL movement. The patients have demonstrated slower movements-they needed more time to perform five consecutive movements than controls. In order to investigate the presence of "sequence effect" in the context of our proposed basic measurements, we analyze their evolution during the movement performance. We focus on the TT-P and TT-D movements since those movements are recorded in the period of 30 s, which enables enough sensor data for sequence effect analysis. Figure 7C demonstrates the temporal evolution of Emg-mav basic measurement during TT-P movement for right-hand affected patient (third disease (1)). The decrease of Emg-mav basic measurement over time is slow, but constant ( Figure 7C). Such outcome suggests the presence of bradykinesia symptom. The bradykinesia symptom is visible from the time evolution of ACC and GYRO basic measurements, as well. Figure 7B illustrates the temporal evolution of the Gyro-ran over window segments, for patients and controls, during the GPP-HL movement. The result is the same as in the case of EMG data-slower movements at patients are confirmed based on the evolution of Gyro-ran basic measurement over time. Bradykinesia "sequence effect" is confirmed based on the ACC and GYRO basic measurements, as well. However, the decreasing pattern is different from EMG data. ACC-ran values are significantly larger in the first-half period compared to the second-half period ( Figure 7D). Finally, GYRO-ran basic measurement ( Figure 7E) shows the constant and significant drop in values over time. Dimensionality reduction and MPis selection We applied Linear Discriminant Analysis (LDA) (32) to determine the most relevant MPIs for the decision-making process based on the clinical group parameter, between patients and controls (diagnosis support) and between disease stages (monitoring support). The implementation of the LDA method is based on the procedure described in detail in our previous research (23). Information index plots (Figures 8A,B) show the importance of the MPIs for classification tasks from the ones most important toward less important MPIs. The LDA method results that, for keeping 80% of information from the original data set, it is sufficient to select first 13 out of 84 MPIs for both conditions: patients/controls ( Figure 8A) and disease stages ( Figure 8B). The selected MPIs are listed in Table 3. Information index plots also demonstrate that some MPIs have the negligible impact on the classification tasks. After the first 50 MPIs, adding more MPIs will not bring significant information. In order to verify the results obtained by LDA, we have used the LASSO regression analysis (33), which performs both feature selection and regularization, in order to enhance the classification accuracy. Using the LASSO regression, the response variable (corresponding class of the interest-patients/controls or disease stage) is modeled as a linear combination of the MPIs (model parameters). 
The model parameters with the strongest dependence on the response variable will have higher coefficients, while the less relevant ones are driven toward zero. The MPIs selected in this way, for the classification criterion between the groups of interest, are listed in Table 3. Table 3 shows that the 13 most relevant MPIs (out of 84) are Gyro-ssi, Gyro-ran and Emg-mav, extracted mostly from the object grasping, pick and place movements (GPP-EL and GPP-HL) and the tapping test movements (TT-P and TT-D). The list of the most relevant MPIs is not the same for LDA and LASSO regression, but the majority of representative MPIs are selected by both methods (marked in bold in Table 3). Such a result can be a consequence of the adjustment of the regularization parameter λ ∈ [0.01-0.5] during LASSO regression. This parameter determines the strength of the penalty: as λ increases, more coefficients of the model are reduced to zero, and hence more parameters (MPIs) are excluded from the model.
Classification: Diagnosis and Monitoring Evaluations
Classifiers were built for four tasks: (i) PD patients vs controls (PD vs C); (ii) stage I vs stages II and III PD; (iii) stage II vs stages I and III PD; and (iv) stage III vs stages I and II PD, using two sets of MPIs: (a) the original (full) set of 84 MPIs and (b) the set of 13 MPIs selected by LDA in Table 3. As a criterion of classification success, the area under the ROC curve (AUC) is calculated (35). The ROC curve is the graph of the true positive rate (TPR) against the false positive rate (FPR), and the AUC is the area under this curve. AUC values indicating high-performance classifiers are in the range [0.80-1]. The performance of each classifier is assessed in a 10-fold cross-validation procedure, and the results are provided in Table 4 in the form of a mean (standard deviation) calculated over the 10 folds. Table 4 shows that the AUC values for all employed classification approaches are very high (near or equal to the perfect score of 1), suggesting that reliable decisions can be made using the proposed MPIs. The most difficult task appears to be discerning the stage II patients from stages I and III, based on the selected subset of 13 features. However, the K-Nearest Neighbor and Neural Network classifiers achieve consistently high performance under all tested conditions. Also, using only the 13 features instead of all 84 results in just a slight reduction in performance, providing further evidence of the informativeness of the selected MPIs.
Left-Right Side Analysis
The results of the statistical analysis suggest that 14 of the 84 MPIs are relevant for the left-right side analysis between patients and controls (Table 5). This indicates that the EMG MPIs for the grasping, pick and place movements are the most relevant for the left-right side analysis, as well as the MPIs extracted from the rotation of the hand movement while the elbow is flexed. Figure 9A illustrates the mean and standard deviation graph for controls and right-side affected patients for the Acc-ssi MPI (RH-EF movement). The mean MPI values are almost the same for controls, while in patients the mean MPI value for the left-hand movement is larger than for the right-hand movement. Such an outcome is expected, since the right side is affected by PD and consequently has lower performance. The results of the statistical analysis suggest that 11 of the 84 MPIs are relevant for the left-right side analysis between disease stages (Table 5).
It turns out that the ACC and GYRO MPIs for RH-EF, GPP-EL, and GPP-HL are the most common MPIs to evaluate the difference in performance between left and right hand across the disease stages. Assessment of the Arm Movements in Parkinson's Disease Frontiers in Neurology | www.frontiersin.org August 2017 | Volume 8 | Article 388 Figure 9B illustrates the mean and standard deviation graph across disease stages for Gyro-ran MPI (GPP-HL movement). It can be seen that the mean MPI values decrease from the first to the third stage and their difference between the left-and the right-hand increases. Such result suggests that differences in the performance of the left and right hand become larger with the disease progression. It can be seen that in the case of the leftside affected group (first stage) the MPI values are greater for the right hand. The situation is opposite for the right-side affected group of the second and third disease stage. In both cases, MPI values are greater for the hand less affected by the disease, which is an expected outcome. correlations with clinical scales In this section, we want to investigate whether the proposed MPIs are correlated with clinical test and scales. This is particularly important for the possible inclusion of the proposed MPIs into medical protocols. All MPIs that satisfy correlation conditions (explained in the Section 2.4.5) for the tapping test and UPDRS-III scale are listed in Table 6. Scatter plots in Figure 10 illustrate a few examples of the correlation between MPIs and clinical parameters, where the line represents the regression curve. It can be seen that the selected MPIs have a positive correlation with the tapping test (Figures 10A,B), more concretely with the number of taps in two cases of the tapping test (procedure of the tapping test is explained in the Section 2.2). This is expected since the patients who have higher values of MPIs potentially can achieve a larger number of taps within defined time interval (30 s). On the other side, our MPIs have a negative correlation with the UPDRS-III general score ( Figure 10C) and subscore for bradykinesya (Figure 10D), since the lower values of our MPIs and higher values on UPDRS-III scale indicate a more severe state of the patient, i.e., more advanced disease stage. Results of the correlation analysis regarding the tapping test ( Table 6) have shown that the most correlated MPIs for both tapping tasks are the ones extracted from the tapping test movements (TT-P and TT-D). Such result is expected, since the same movements are tested during clinical protocol and our sensor measurements. Those MPIs refer to all ACC and GYRO MPIs of both, left-and right-hand movements. In addition to the tapping test movements, ACC and GYRO MPIs from the right-hand RH-EE and GPP-EL movements, as well as from the left hand RH-EF movement have high values of Spearman correlation coefficient ρ. MPIs extracted from EMG signals are mostly poorly correlated with tapping test (ρ < 0.5, p > 0.05), except EMG MPIs in the case of the left-hand RH-EE, RH-EF, and TT-P movements ( Table 6). Results of the correlation analysis regarding the UPDRS-III scale for the general score and bradykinesia subscore highlight mostly the same MPIs in both cases ( Table 6). The most correlated MPIs are the ones extracted from the rotation of the hand movements (RH-EE and RH-EF), Table 6. 
In addition to the rotation of the hand movements, the MPIs from the right-hand GPP-HL and TT-P movements, as well as the MPIs from the left-hand TT-P and TT-D movements, have high (absolute) values of the Spearman correlation coefficient ρ (recall that higher values of ρ indicate better correlation).
Discussion and Conclusion
In recent studies, the use of an armband device has been considered for medical and rehabilitation applications, especially for physiotherapy healthcare (27) and recovery after stroke (28). The authors in Ref. (27) use the MYO Diagnostics application for medical diagnosis and to understand how comfortable subjects feel while performing the movements with the armband device. The study (28) proposes a low-cost rehabilitation system for recovery after stroke, which consists of an armband device and a data glove; the authors present just the concept of a rehabilitation system based on a virtual environment and gaming to enhance the patient's motivation. Both studies (27, 28) lack the signal processing, feature extraction analysis and decision-making procedure behind the interface. In Ref. (29), the authors propose a multi-sensory gesture-based occupational therapy system consisting of a Kinect v2, a Leap Motion sensor and a Myo armband device. The system is intended to support everyday activities in the home environment and to encourage patients to practice and obtain feedback about their movement performance during usual daily routines. Again, as in Refs (27, 28), only the concept of the system is presented, along with general implementation details. The lack of sensor signal analysis and processing toward the extraction of meaningful signal features, as well as of clinically oriented approaches based on the sensor movement data, are the main drawbacks of the related studies. We have used a wireless armband sensor to acquire the arm/hand movements defined by the PD protocol. We propose a set of 84 Movement Performance Indicators (MPIs) to characterize the acquired movements. We conducted a thorough analysis of the properties of these MPIs to identify their importance in terms of relevant clinical aspects (Table 7): (i) reliability; (ii) classification between patients and controls and between disease stages (support to diagnosis and monitoring, respectively); (iii) left-right side analysis between controls and patients, as well as between disease stage groups; and (iv) correlation with clinical scales (tapping test and UPDRS-III). The overall conclusion is that the Gyro-ssi and Gyro-ran MPIs are relevant according to all clinically relevant criteria. Particular EMG MPIs are important for the classification aspect and the left-right side analysis, while the ACC MPIs are of interest for the left-right side analysis and the correlation with clinical scales. This study complements our previous research (23) with an approach for quantitative movement analysis based on arm/hand movement data acquired with an EMG sensor. Our results show that the proposed approach has the potential to be adopted by therapists to enhance objectivity and precision during diagnosis/monitoring evaluations and bradykinesia assessment. At the same time, it opens the possibility of a low-cost assessment tool for patients in the mild to moderate PD stages (I-III according to the modified HY clinical scale).
The armband electromyographic sensor is worn on the forearm and collects data from four groups of muscles: the flexor, extensor, internal, and external forearm muscles (Section 2.2, Figure 2). One very important conclusion is that the external forearm muscles of both hands in PD patients demonstrated the lowest performance of all forearm muscles in the sense of muscle activity compared with the control group. This result suggests that the external forearm muscles are the most affected by Parkinson's disease. Such a result is derived from our sensor data but requires additional clinical testing and confirmation. In Parkinson's disease, one side of the body is more affected than the other. Furthermore, the first symptoms of the disease are observed on a particular body side. As the disease progresses, both sides become affected, but the side on which PD symptoms were first detected is always affected more. A quantitative assessment of the difference between the left and right side of the body would be significant information for neurologists, since they cannot evaluate it directly or with subjective clinical scales. Consequently, we investigated the differences in movement performance between the left and right hand, relying on the proposed MPIs. Our finding is that those differences are negligible in control subjects, while they can become quite large for Parkinson's patients, depending on the disease stage. The collected sensor data, analyzed through the designed MPIs, revealed bradykinesia patterns in the patients' movement data. The slowness of the movement and the sequential drop of the amplitude over time (the so-called "sequence effect") are visible in the temporal evolution of the MPIs. Such results indicate the potential of the proposed MPIs to be used by therapists for quantitative assessment of bradykinesia. Finally, we conclude that the sensor data collected from the wireless armband device successfully addressed the same set of relevant aspects of PD as the sensor glove data in our previous research (23). Moreover, in this study we performed the left-right side analysis, which is not feasible with the sensor glove data due to its right-hand design. Consequently, our results suggest that the wireless armband sensor can be a possible alternative to the high-cost data glove that we used in our previous research. However, the experimental setup, the tested movements, and the extracted Movement Performance Indicators (MPIs) differ in accordance with the sensor choice. The advantage of the sensor glove data over the armband device is the quantification of fine finger movements. One limitation of the study is the collection of the sensor measurements during the ON stage only. It would be worthwhile to investigate the movement data characteristics during the OFF stage. The number of subjects and tested movements could be extended in the future. Finally, the MPIs proposed in this study are the result of signal processing in the time domain. Additional MPIs could be extracted from the frequency domain of the sensor signals. In future work, we will focus on another important aspect of Parkinson's disease: balance and stability. We are considering using a low-cost device with pressure sensors for balance quantification. Furthermore, we plan to test our system on patients recovering from stroke.
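The left-right comparison discussed above can be made concrete with a simple asymmetry index. The sketch below is illustrative only (hypothetical MPI names and values) and is not the computation used in the study:

```python
# Minimal sketch of a normalized left-right asymmetry index per MPI.
def asymmetry_index(left: float, right: float) -> float:
    """Value in [-1, 1]; 0 means symmetric performance between the hands."""
    denom = abs(left) + abs(right)
    return 0.0 if denom == 0 else (left - right) / denom

# Hypothetical MPI values for one subject.
mpi_left = {"Gyro-ran GPP-HL": 0.42, "Acc-ssi RH-EF": 1.10}
mpi_right = {"Gyro-ran GPP-HL": 0.61, "Acc-ssi RH-EF": 1.05}
for name in mpi_left:
    print(name, round(asymmetry_index(mpi_left[name], mpi_right[name]), 3))
```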
Ethics Statement This study was carried out in accordance with the recommendations of the Declaration of Helsinki, and the Ethics Committee of the Medical Faculty of the Military Medical Academy, University of Defence (Belgrade, Serbia) approved the present study. After the experimental procedures were explained, all subjects signed written informed consent forms. Author Contributions SS, TI, and JS-V designed the study; SS and TI collected the data; SS processed the data; SS and IS analyzed the data; SS wrote the manuscript; and TI, IS, VP, AR, and JS-V revised the manuscript. Acknowledgments The authors would like to thank all volunteers who were willing to participate in this study.
9,797.2
2017-08-11T00:00:00.000
[ "Engineering", "Medicine" ]
Water Usage Charge in Brazil: Emergy Donor-Side Approach for Calculating Water Costs The Federal Water Law 9433 enacted in 1997 provided the legal framework for the water usage charge. The aim of this paper is to interpret the legal terms through the emergy environmental accounting approach in order to establish the donor-side costs related to water usage under the user-pays and polluter-pays principles. The procedure was performed for agricultural activities at the Jundiai-Mirim Basin, located in São Paulo State, Brazil. The proposed procedure resulted in costs comparable to those already implemented at the watershed under study by means of other economic procedures. Introduction In Brazil, payment for water had already been foreseen by the first water resources management legislation, the Federal Water Code, enacted in 1934. But it was the Federal Water Law (FWL) 9433 enacted in 1997 [1] that gave the legal framework for the water usage charge. The Law also defines water as a scarce resource that has economic value, and recognizes the existence of multiple water uses and user rights. Only users who exploit water with economic benefits are subject to charges. Another essential characteristic given by this legal document is the legitimation of the River Basin Committee to arbitrate, in the first instance, conflicts in the watersheds, to implement methodologies to establish water charge values, and to propose those values to the National Water Agency (ANA). Complementarily, the National Environmental Policy, through Law 6938/81, had already imposed "recuperation and/or compensation to polluters and predators for caused damages as well as payment by users for the environmental resources when used for economic purposes", reflecting the polluter-pays and the user-pays principles [2]. Some basins have already implemented the management instruments targeted by the Water Resource Management Policy (WRMP), including the fixation of the water usage charge. The PCJ basin (Piracicaba, Capivari, Jundiai basin) is an example of a basin that has made progress in implementing all the instruments necessary for water management. The PCJ basin adopted a weighted-coefficients methodology to fix the usage charges. The objective of this work is not to make a judgment on the criterion adopted but to offer a proposal capable of unifying criteria among the different basins, with a background theory to sustain the accounting. Capturing the economic value of natural resources is not easy, and some economic instruments have been proposed with more or less success [3]. Motta [4] recognized the difficulty of fixing natural resource market prices that properly reflect the value assigned to them. Furthermore, he emphasizes [4] that each analyst will assume different hypotheses according to the valuation object, data availability, and knowledge of the ecological dynamics of the resource. Emergy accounting, based on thermodynamics and systems theory, provides an approach to evaluate the contribution of natural resources to society by calculating the biosphere work directed to generate them and make them available. In this way, all types of natural resources can be evaluated and quantified on the same basis: biosphere cost in the form of solar energy. The state of water resources and the pressure exerted by human activities in Chinese cities [5], as well as the value of water resources in Chinese rivers [6] and in an Italian watershed [7], were studied using emergy.
The methodology was proposed to capture the recovery cost of water usage within the Water Framework Directive (WFD) definitions [8]. Differently from the Brazilian water administration, the WFD provides a framework of definitions with the purpose of identifying the different kinds of water use costs (service, resource, and environmental costs). The Brazilian directive enables a broader interpretation of the terms involved, since the extent of the user-pays principle (UPP) and of the polluter-pays principle (PPP) is not completely defined. The kinds of water usage subject to charges include only those related to water grants. Whether externalities should be included in the PPP or only direct pollution costs depends on the analyst's expertise or interpretation, social interests, or basin committee considerations, making it difficult to propose a unified method to quantify charges. Different legal interpretations arise from the broad approach of the definitions, and in order to establish a criterion, in the present work the extent of the environmental effects due to usage included the externalities caused by the enterprises' operation. The aim of this paper is to make an interpretation of the WRMP in terms of the emergy theory in order to establish the costs related to water usage under the UPP and PPP. Since the scenarios of usage are diverse, only the agricultural case is shown and discussed here; quantification for the other scenarios is underway. Data from the Jundiai-Mirim basin are employed for the calculation. The Jundiai-Mirim watershed, a management unit (unit no. 5) that belongs to the PCJ basin, was selected as a case study because the macro-basin has a well-organized system of usage charges. Emergy Accounting Emergy is defined as the available energy of one kind previously used up, directly and indirectly, to make a service or a product [9]. A complete assessment of the methodology cannot be provided here, but the reader can refer to published reports [9]; [10]. In contrast to economic valuation, which has a user-side approach taking into account the users' willingness-to-pay, emergy accounting provides a donor-side approach, quantifying the cost to nature of generating a service. From that cost quantification, the methodology can translate cost into currency flows, creating the interface with the economy. In order to calculate the emergy of a resource, the quantity expressed in units of energy is determined and multiplied by its corresponding "transformity". The transformity, expressed in seJ/J, is the factor that converts energy inputs into emergy, and it represents all the past environmental work necessary to obtain one joule of a given resource. When inputs are expressed in mass or money, the specific emergy or the emergy-money ratio (EMR) is used, respectively, to convert values into solar emergy joules (seJ). The emergy-money ratio used in this study was calculated by dividing the annual emergy (in seJ/y) of the São Paulo state economy by its gross national product (R$/y). Conversely, conversion of emergy to currency is accomplished by dividing emergy values by the EMR corresponding to the economy where the study is conducted. The units derived from this division are denoted emR$ (em-real; the real is the Brazilian currency) and serve to make an analogy with currency. The emergy flows are classified into three categories of resources: R for renewable resources, N for non-renewable resources, and F for the inputs provided by the economy.
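The bookkeeping just described, converting energy, mass, or money inputs to emergy via transformities or specific emergy values and converting emergy back to em-currency through the EMR, can be sketched as follows. The transformity and rainfall figures are placeholders; the EMR value is the one quoted in the results below:

```python
# Minimal sketch of emergy conversion; all numeric inputs are placeholders.
def to_emergy(quantity: float, unit_emergy: float) -> float:
    """Emergy (seJ) = quantity (J, kg, or $) x unit emergy value (seJ per unit)."""
    return quantity * unit_emergy

def emergy_to_emcurrency(emergy_sej: float, emr_sej_per_currency: float) -> float:
    """Convert emergy to em-currency (here emR$) by dividing by the EMR."""
    return emergy_sej / emr_sej_per_currency

EMR_SP = 1.7e12                  # seJ/$, Sao Paulo state EMR quoted in the paper
rain_chemical_energy = 1.0e15    # J/year, placeholder annual rain chemical energy
transformity_rain = 3.1e4        # seJ/J, placeholder transformity
em_rain = to_emergy(rain_chemical_energy, transformity_rain)
print(emergy_to_emcurrency(em_rain, EMR_SP), "emR$/year (illustrative)")
```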
The theory In order to carry out the emergy flux determination, the planetary baseline of 15.83 × 10^24 seJ/year was adopted [11]; [12]. Transformity values that were calculated using another baseline were corrected and properly reported during the calculation. Interpreting Law Terms through the Emergy Approach When the emergy approach is used to interpret the terminology employed by the WRMP, two kinds of costs emerge: usage costs, related to the UPP, and those derived from the PPP concept. UPP-related costs of water correspond to the quantity of water diverted and used to carry out the enterprise, in this case agricultural production. Although it is true that a portion of the diverted water is not directly used and returns to the water body or infiltrates, it will nonetheless be modified compared with its initial conditions. The natural water cycle is responsible for the presence of water in the basin, so the emergy costs can be assimilated to the emergy of the water itself. Geopotential energy and chemical potential energy are the two main components of the emergy of water, both derived from rainfall onto the watershed area. In this way, C_UPP (expressed in seJ/m³), the emergy usage cost, is defined as the emergy flow (Em_rain) related to the geopotential or chemical potential aspects of rain, distributed over the whole volume of water (w_rain) within the watershed: C_UPP = Em_rain / w_rain. The division of C_UPP by EMR_SP converts emergy to equivalent monetary values, expressed in emR$. PPP-related costs of water are considered here as those related to the alteration of the physical and biological aspects of water bodies due to the human activities of water usage. The usage of water in agricultural activities involves not only water but also other inputs, nonrenewable resources and market goods, that generate a load on the used land [13]. The excess of local emergy density created as a consequence of the load from water usage is then distributed through the surface and ground water along the watershed, causing interference with water bodies. These effects are accounted for by C_PPP = Em_(N+F) / w_disch (expressed in seJ/m³), where Em_(N+F) is the annual emergy flow of nonrenewable and purchased inputs involved in agricultural activities in the region and w_disch is the volume of water discharged by the watershed (the portion of water that is not involved in evapotranspiration). The C_PPP expression implies the distribution of the load caused by the convergence of N and F inputs per area over the total volume of discharged water. Analogously, division of C_PPP by EMR_SP gives an emR$ value that can be treated as currency and compared with actual prices. Calculation considerations For the emergy flow (Em_rain) calculations, the higher of the two flows, geopotential and chemical potential, was selected in order to avoid double accounting, according to the emergy algebra [9]. In this case, the chemical potential emergy flows were used. Since no data about the volume of groundwater in the watershed are available, for the w_disch estimation the hydrological balance of the region was used by means of the specific discharge value of 10.0 l/s·km² from [14]. The Em_(N+F) value derives from the annual areal emergy intensity (expressed in seJ/ha·y) of each kind of agricultural activity in the region multiplied by the area occupied by each activity (in ha). Two kinds of calculations were adopted in order to estimate the Em_(N+F) value.
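A minimal sketch of the two cost indicators defined above, C_UPP = Em_rain/w_rain and C_PPP = Em_(N+F)/w_disch, with conversion to emR$/m³ by division by EMR_SP, is given below. The Em_(N+F) figure is a placeholder, while the specific discharge and area follow the values given in the text:

```python
# Minimal sketch of the UPP and PPP cost indicators; Em_(N+F) is a placeholder value.
EMR_SP = 1.7e12            # seJ/$ (value quoted in the paper)

def cost_upp(em_rain_sej: float, w_rain_m3: float) -> float:
    """Usage (UPP) cost in seJ/m3: rain emergy spread over the rain volume."""
    return em_rain_sej / w_rain_m3

def cost_ppp(em_nf_sej: float, w_disch_m3: float) -> float:
    """Pollution (PPP) cost in seJ/m3: N+F emergy load spread over the discharge volume."""
    return em_nf_sej / w_disch_m3

# Discharge volume from the specific discharge used in the paper:
# 10 l/s per km2 over 117.5 km2, accumulated over one year.
w_disch = 10e-3 * 117.5 * 3600 * 24 * 365       # m3/year
c_ppp = cost_ppp(em_nf_sej=3.0e19, w_disch_m3=w_disch)   # Em_(N+F) is a placeholder
print(round(c_ppp / EMR_SP, 2), "emR$/m3 (illustrative)")
```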
From these two kinds of Em_(N+F) calculations, two values of the cost arise, which can be considered the upper and lower limits of the cost interval. It is difficult to evaluate the extent and intensity of the damage exerted by human activities on water bodies and on nature in general. Also, establishing the area that has a direct influence on damage and disturbance is not trivial. In order to establish an interval of influence, two regions were considered to carry out the calculation: the activities occurring at the permanent protection areas (PPA) and the activities over the whole watershed. Activities occurring in both regions generate a load due to the increased intensity of nonrenewable resource use. Although it seems that those occurring at the PPA will create more disturbance, activities over the whole watershed will certainly also contribute. Jundiai-Mirim Micro-watershed The Jundiai-Mirim River watershed belongs to São Paulo state, Brazil. It has an area of 117.50 km² and is located within parallels 23° 00' and 23° 30' S and meridians 46° 30' and 47° 15' W. The Jundiai-Mirim River is 16 km long and is a tributary of the Jundiai River. Diverse anthropogenic activities occur in the basin. Results and Discussion The evaluation of the UPP-related costs of the resource was performed by calculating the chemical potential energy on a yearly basis. Table 1 shows C_UPP in emergy values and its conversion to currency (the table footnotes reference [12] and EMR_SP = 1.7E+12 seJ/$, from [15]). To estimate the PPP-related costs of water, the disturbance caused by the agricultural activities was calculated. Table 2 shows the upper and the lower values of the interval. The upper limit of the load due to agricultural activities is almost 7 times greater than the lower one. To obtain C_PPP, division by the discharge water volume (10 l/s·km² × 117.5 km²) was done, as shown in Table 3. To convert these values into currency, they were divided by EMR_SP (1.7E+12 seJ/$, from [15]); see Table 3 [17]. Table 3 reports the C_PPP values expressed in emergy, and the corresponding values expressed in currency, for the agricultural activities performed in the PPA (lower limit) and over the whole watershed (upper limit); in Table 3, C_PPP = Em_(N+F)/w_disch with the water discharge estimated as 10 l/s·km² × 117.5 km², and EMR_SP = 1.7E+12 seJ/$, from [15]. The values calculated here for water charge pricing lie within the interval from 0.91 to 1.04 emR$/m³ for agricultural uses. They are computed as the sum of the two emergy costs after conversion to monetary values. Comparison with the results derived from the emergy approach performed for a Spanish basin [8] shows comparable but lower values for the present case study. On the other hand, the procedure already implemented at the PCJ committee fixed basic unitary prices from 0.01 to 0.10 R$/m³ for water catchment and organic load, respectively. Concluding Remarks The emergy approach offers a tool to aid in the estimation of the water usage charge. It provides a systemic point of view that, in the present case, results in costs comparable to those already implemented in the whole PCJ macro-basin, in which the Jundiai-Mirim micro-basin is located. Although charging for water due to usage and pollution is still a controversial topic, the present work evidences that the whole biosphere contributes, through the concentration of free natural resources, to maintaining the hydrological cycle and offering eco-services to anthropogenic activities.
3,053.4
2016-09-03T00:00:00.000
[ "Environmental Science", "Economics" ]
High-Resolution, Low-Delay, and Error-Resilient Medical Ultrasound Video Communication Using H.264/AVC Over Mobile WiMAX Networks In this study, we describe an effective video communication framework for the wireless transmission of H.264/AVC medical ultrasound video over mobile WiMAX networks. Medical ultrasound video is encoded using diagnostically driven, error-resilient encoding, where quantization levels are varied as a function of the diagnostic significance of each image region. We demonstrate how our proposed system allows for the transmission of high-resolution clinical video that is encoded at the clinical acquisition resolution and can then be decoded with low delay. To validate performance, we perform OPNET simulations of mobile WiMAX medium access control and physical layer characteristics that include service prioritization classes, different modulation and coding schemes, fading channel conditions, and mobility. We encode the medical ultrasound videos at the 4CIF (704×576) resolution, which can accommodate clinical acquisition that is typically performed at lower resolutions. Video quality assessment is based on both clinical (subjective) and objective evaluations. I. INTRODUCTION Continuous advances in medical video coding, together with the wider availability of current and emerging wireless network infrastructure, provide the key technologies that are needed to support m-health video communication technologies in standard clinical practice. Over the past decade, demand for mobile health systems has been growing [1]-[3]. Demand is driven by the need for responsive emergency telematics, remote diagnosis and care, medical education, as well as for mass population screening and emergency crisis management. Advancements in mobile health systems are expected to bring greater socioeconomic benefits, improving the quality of life of patients with mobility problems, the elderly, and people residing in remote areas, by enhancing their access to specialized care. Moreover, they will provide a critical time advantage that can prove life-saving in life-threatening emergency incidents. Current research in m-health video communication systems includes modality-aware (m-aware), diagnostically driven systems, which adapt to the underlying wireless transmission medium [3]. Diagnostically driven systems often rely on the use of diagnostic regions of interest (ROIs) [4]-[6]. Adaptation to the wireless network's characteristics includes diagnostically relevant selection of the source encoding parameters and error control for addressing inevitable transmission errors. Clinical video quality assessment (VQA) methods are vital for the systems' objective of communicating reliable medical video to the medical expert [4], [7], [8].
In terms of wireless infrastructure, thus far, m-health video systems have been primarily based on 3G wireless networks [3], [5].Given the limited upload data rates supported by these channels (up to 384 kb/s), the associated source encoding parameters were bounded to CIF resolution video size.As documented in [4] and [6], medical video resolution directly impacts the clinical capacity of the transmitted video.For atherosclerotic plaque ultrasound video, shifting from QCIF (176×144) to CIF (352×288) resolution enables the assessment of plaque type [6], providing critical clinical information to the medical expert for assessing the possibility of a plaque rupture, leading to stroke.Some recent studies that have briefly highlighted the benefits associated with streaming higher resolutions can be found in [4] and [9]- [11].However, these studies are based on a limited number of cases, while the clinical aspect has not been extensively addressed.Moreover, these previous studies did not address individual network parameters' issues associated with clinical capacity of high-resolution video transmission. As a result, there is a strong demand to investigate new 3.5G and 4G wireless technologies [12] facilitating medical video communication at the clinically acquired video resolution.Ultimately, the goal is to deliver sufficiently high resolutions and video frame rates with the low-delay and low packet loss rates (PLR) that can approach the experience of in-hospital examinations. In this study, we investigate the added clinical value of highresolution (4CIF-704×576) medical video communications over mobile worldwide interoperability for microwave access (WiMAX) networks for emergency telemedicine.The efficacy of the proposed end-to-end ultrasound video communication scheme is validated based on scalable clinical criteria.For this purpose, the clinically validated approach introduced [12] is extended from the CIF resolution to the higher resolution of 4CIF and the lower resolution of QCIF.In [6], diagnostically relevant selection of encoding parameters based on video region's clinical importance was used (see Fig. 1).Here, we extensively validate different medium access control (MAC) and Physical layer features of mobile WiMAX channels that can support efficient emergency telemedicine m-health systems.Most importantly, we clinically evaluate ultrasound videos transmitted using different network parameters configurations.The goal of the network study is to provide recommendations for resilient network parameter selection that will accommodate different emergency scenarios and varying network state. We summarize the primary contributions of this paper over previously published work (see, e.g., [6]) in three different areas. 1) Robust video encoding at three different resolutions: We consider QCIF, CIF, and 4CIF resolutions and carefully discuss the clinically validated criteria associated with each spatial resolution.For each case, we measure improvement in terms of the reduction in bitrate, which can be used for increasing the peak signal-to-noise ratio (PSNR) of the reconstructed video as compared to standard H.264/AVC encoding.For this purpose, we employ the BD-PSNR algorithm, which estimates the average bitrate gains for equivalent PSNR levels for a total of 240 cases of QCIF, CIF, and 4CIF resolutions.Beyond the cases considered in [6], we consider an additional 1500 4CIF transmission cases over Mobile WiMaX. 
2) Relationship between spatial resolution (QCIF, CIF, 4CIF) and clinical diagnosis: We summarize the clinical criteria that can be addressed at different resolutions (see Section IV-C). This relationship is validated through separate evaluation of the different criteria. For example, the use of 4CIF (704×576) resolution allows the use of new clinical criteria that are closer to the standards used for in-hospital exams. For the current application, the additional resolution allows us to visualize atherosclerotic plaque morphology, not just the plaque type. 3) Medical video communications over mobile WiMAX networks: We propose an emulation framework for mobile WiMAX medical video communication based on the OPNET modeler. For this purpose, real ultrasound video encodings are used to generate video trace files imported into OPNET to model wireless video transmission. Following transmission, the communicated video packets are mapped back to the original files for decoding. The latter method allows us to realistically measure the effect of each investigated configuration setting, both objectively and, most importantly, subjectively (clinical evaluation). Based on the previously described method, we provide recommendations for mobile WiMAX network parameter utilization for maximizing the communicated video's clinical capacity. Typical emergency telemedicine scenarios are constructed as a function of three different channel modulation and coding schemes, a hybrid signal attenuation model, various distances from the base station (BS), and diverse mobility patterns. The mobile WiMAX network's performance is validated using quality-of-service (QoS) measurements such as PLR, packet delay, and the PSNR of the reconstructed video bitstreams, for an overwhelming number of 1500 video cases. The rest of this paper is organized as follows: Section II provides a brief overview of mobile WiMAX networks and outlines their characteristics that relate to video transmission. Section III describes the undertaken methodology, while Section IV provides the experimental evaluation. Finally, Section V gives some concluding remarks. II. MOBILE WIMAX FOR VIDEO COMMUNICATIONS WiMAX was first standardized for fixed wireless applications in 2004 by the IEEE 802.16-2004 (802.16d) standard, and then for mobile applications in 2005 by the IEEE 802.16e standard [14]. The current WiMAX standard, 802.16m, also termed IEEE WirelessMAN-Advanced, met the ITU-R IMT-Advanced requirements and is considered to be a 4G technology. In what follows, we describe the WiMAX features provided at the physical layer (PHY) and the MAC layer. A.
Physical Layer Features The primary features of the physical layer include adaptive modulation and coding (QPSK, 16-QAM, 64-QAM), hybrid automatic repeat request (hARQ), and fast channel feedback.WiMAX uses scalable orthogonal frequency division multiple access that divides the transmission bandwidth into multiple subcarriers.The number of subcarriers ranges from 128 for 1.25 MHz channel bandwidth and extends up to 2048 for 20-MHz channels.In this manner, dynamic QoS can be tailored to an individual application's requirements.In addition, orthogonality among subcarriers allows overlapping leading to flat fading.In other words, multipath interference is addressed by employing OFDM, while available bandwidth can be split and assigned to several requested parallel applications for improved system's efficiency.The latter is true for both downlink (DL) and uplink (UL).A multiple-input multiple-output antenna system improves communication performance, including significant increases in data throughput and link range, without additional bandwidth or increased transmit power. B. MAC Layer Features The most important features of the MAC layer include QoS provision through different prioritization classes, direct scheduling for DL and UL, efficient mobility management, and security.The five QoS categories are described in [14] and [15].Based on each application's requirements, we have an appropriate QoS class with its corresponding UL burst and data rate.For real-time video streaming, the best option is to use the real-time polling service (rtPS) QoS class.The rtPS class specifies the minimum sustained data rate, the maximum traffic burst, the maximum tolerated latency, and a traffic priority, which the WiMAX air interface scheduler is designed to accommodate [15]. Mobility management is well addressed in 802.16e and current 802.16m standards, which was an issue in 802.16d primary standard for fixed connections.With a theoretical support of serving users at 120 km/h in 802.16e, established connections provide adequate performance for vehicles moving with speeds between 50 and 100 km/h. III. METHODOLOGY We investigate high-resolution medical video communication performance over mobile WiMAX networks based on realistic clinical scenarios.The aim is to model realistic scenarios that can be used to evaluate the challenges associated with developing mhealth video systems for emergency telemedicine.Such a system is illustrated in Fig. 2. The key concept is to communicate the patient's video (trauma or ultrasound) to the hospital premises, for remote diagnosis and assistance with in-ambulance care, moreover for better triage and hospital admission related tasks (e.g., surgery chamber preparation). For the scenario depicted in Fig. 2, the medical ultrasound video transmission is launched once the paramedics have stabilized the patient, utilizing equipment residing in the ambulance.The simulated scenario models a typical route from the emergency incident to the hospital premises and highlights the technological challenges associated with the wireless communication of ultrasound video of adequate diagnostic quality. In what follows, we provide more detailed descriptions of each block component of the proposed medical video communication framework in the context of the scenario depicted in Fig. 2. A. 
Preprocessing This step typically involves video resolution and frame rate adjustments to match the available channel bandwidth (upload data rate) and end-user device capabilities. In this study, high-bandwidth mobile WiMAX networks allow the investigation of the transmission of 4CIF video resolution at 15 frames/s. B. Diagnostically Relevant Encoding The proposed system uses a diagnostically relevant (m-aware) and resilient encoding scheme that has been described in [4] and [6]. The key idea is to associate video ROIs with clinical criteria. Each video slice is then assigned a quality level based on its diagnostic significance. These quality levels are implemented by adjusting the values of the quantization parameter, as demonstrated in Fig. 1. In this manner, significant bitrate savings can be achieved by compressing the background (the non-diagnostically important region). The basic ROI approach can be extended to different medical imaging modalities and is already gaining ground in the literature [3], [4], [9]. For atherosclerotic plaque ultrasound videos, the correspondence between the ROIs and the clinical significance is as follows (see Fig. 1). 1) Plaque region for visualizing plaque type, morphology, and motion: This is the primary ROI. By visualizing the plaque type and morphology, we can assess the stability of the plaque (e.g., darker plaques tend to be more dangerous). Plaque motion patterns can also help in assessing plaque stability. The use of 4CIF resolution over WiMAX networks is particularly critical for this region (see Fig. 1 and Table VI). 2) Surrounding plaque region for visualizing stenosis: A high degree of stenosis is used as a strong predictor of the risk of stroke. 3) Near and far wall regions for visualizing wall motion: The interest in visualizing the near and far walls comes from the need to compare motion patterns with the plaque. Unstable plaques can have different motion patterns than the whole plaque. 4) ECG region for visualizing the ECG waveform: The ECG is used to help visualize plaque and stenosis changes through different parts of the cardiac cycle (e.g., during systole and diastole). In this study, we consider three different video resolutions, namely QCIF (176×144), CIF (352×288), and 4CIF (704×576), for the encoding setup depicted in the left column of Table I. The objective is to include scalable screen resolutions that are widely used in the literature today, in addition to investigating high-resolution encodings over mobile WiMAX networks. In the latter case, only 4CIF resolution with the recommended diagnostically acceptable QP setting of 38/30/28 (see also [6]) is used. A series of ten videos encoded at 15 frames/s is used to evaluate the proposed concept. The H.264/AVC error resilience tool flexible macroblock ordering (FMO) type 2 is used to implement variable-quality slice encoding. Baseline profile, universal variable length coding for entropy coding, an IPPP encoding structure with an Intra update frame interval of 15 frames, and a total of 100 frames per video summarize the encoding parameters. The JM H.264/AVC reference software [16] has been used for encoding. For the mobile WiMAX video transmission experiments, the obtained results are averaged over ten simulation runs for each scenario. Redundant slices (RS) at the encoder (one every four coded frames) and simple frame copy error concealment at the decoder are used to recover from packet losses. C. Mobile WiMAX Video Transmission We investigate high-resolution video communication performance based on the scenario illustrated in Fig. 2.
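Returning to the slice-level quantization of Section III-B above, the assignment of a QP per region can be sketched as follows. The rectangular ROI geometry and the mapping of the 38/30/28 levels to background, secondary ROIs, and plaque are illustrative assumptions, not the authors' encoder configuration:

```python
# Minimal sketch: a per-macroblock QP map from rectangular ROI boxes, in the spirit
# of FMO type 2 slice groups. Geometry and QP-to-region mapping are hypothetical.
MB = 16  # macroblock size in pixels

def build_qp_map(width, height, roi_boxes, roi_qps, background_qp):
    """Return a per-macroblock QP map; roi_boxes are (x0, y0, x1, y1) pixel rectangles."""
    mbs_x, mbs_y = width // MB, height // MB
    qp_map = [[background_qp] * mbs_x for _ in range(mbs_y)]
    for (x0, y0, x1, y1), qp in zip(roi_boxes, roi_qps):
        for my in range(y0 // MB, min(mbs_y, (y1 + MB - 1) // MB)):
            for mx in range(x0 // MB, min(mbs_x, (x1 + MB - 1) // MB)):
                qp_map[my][mx] = min(qp_map[my][mx], qp)  # lower QP (more important ROI) wins
    return qp_map

# 4CIF frame with a hypothetical plaque ROI (QP 28) inside a wider wall/stenosis ROI (QP 30),
# over a heavily compressed background (QP 38).
qp = build_qp_map(704, 576, roi_boxes=[(250, 180, 430, 330), (180, 120, 520, 400)],
                  roi_qps=[28, 30], background_qp=38)
```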
In the transmission experiments, our aim is to realistically model the varying state of the wireless channels that contributes to ultrasound video degradation when transmitting from the ambulance to the hospital. For this typical scenario, we investigate the use of different channel modulation and coding schemes, signal attenuation due to different signal propagation models, mobility, distance from the BS, bandwidth availability through subcarrier scalability, and QoS prioritization classes. A synopsis of the parameters associated with the scenario of Fig. 2 appears in Table II, while the total number of processed videos is illustrated in the right column of Table I. The ambulance travels with speeds ranging from 60 to 100 km/h and traverses locations situated near the effective coverage zone of the BS (distance range: 150 m to 1.3 km). The multipath channel model is set to ITU Vehicular A, while a vehicular path loss model is also considered, with a shadow fading correction of 12 dB (OPNET [17] implements path loss and multipath fading differently, and allows shadow fading correction for increasing the possible signal attenuation combinations during simulations). Three different channel modulation and coding schemes are investigated, namely QPSK 1/2, 16-QAM 3/4, and 64-QAM 3/4, with the number of subcarriers set to 512, except for QPSK 1/2, for which it is set to 2048. 1) Scenario 1: For a more realistic evaluation, the ultrasound video traffic sent through the network is modeled via trace files generated using real ultrasound video encodings. In Scenario 1, the ultrasound videos are looped over the entire route to examine the wireless channel's performance by measuring the average QoS parameters such as PLR, end-to-end delay, and delay jitter. 2) Scenario 2: In Scenario 2, ultrasound video traffic is initiated at four different locations (see Fig. 2) selected to highlight the wireless channel's ability to provide reliable medical video communications at different distances from the BS. Following the wireless transmission and the QoS measurements, the successfully streamed packets are mapped back to the original RTP files, decoded, and evaluated by the relevant medical expert. In addition to the wireless network's QoS measurements, the VQA ratings described below summarize the evaluation setup. D. Video Quality Assessment VQA includes objective and subjective evaluations. Objective VQA is given in terms of the video quality metric (e.g., PSNR) computed over the specified video slices using the clinical criteria. Clinical evaluation is performed by the relevant medical expert for the clinical criteria provided in Table VI. Ratings are given in the range of 1-5. A rating of 5 is the highest possible, and it signifies that the diagnostic information in the decoded video is of essentially the same quality as in the original video. A rating of 4 indicates that there is a diagnostically acceptable loss of minor details. At the lowest end of the scale, a rating of 1 signifies that the decoded video is of unacceptably low quality. IV. RESULTS AND DISCUSSION In this section, we discuss the experimental evaluation of the proposed medical ultrasound video transmission framework. We present results in terms of video encoding, medical video transmission over mobile WiMAX channels, and clinical evaluation. A.
Diagnostically Relevant Encoding To demonstrate the efficiency of the proposed diagnostically relevant encoding scheme, we provide a comparative evaluation of: 1) FMO with constant QP video slices and 2) FMO with variable QPs and RS for communications in noisy environments.For each method, we had four sets of quantization levels for the video slices as depicted in Table I. Table III depicts the associated bitrate gains of the proposed variable quality slice encoding scheme when compared to the conventional uniformly encoded medical video.Fig. 3 uses boxplots to illustrate the bitrate requirements of the medical ultrasound video dataset, for the two investigated encoding schemes.Bitrate gains for equivalent perceptual quality are computed using the BD-PSNR algorithm [18], based on the four rate points shown in Fig. 3.The average bitrate demands reductions are 42.3% for 4CIF, 39.8% for CIF, and 34.7% for QCIF resolution videos.Bitrate gains are functions of the area occupied by the diagnostic ROIs, and most of the savings come from compressing the background (see Fig. 1).Here, the medical video dataset comprises videos with diagnostic ROIs ranging between 44% and 72% of the entire video [4]. B. Mobile WiMAX Medical Video Transmission 1) Scenario 1: Table IV records the averaged QoS measurements of all video transmission simulations.QPSK 1 2 channel modulation scheme provides for a more robust performance as the PLR measured are in the order of 1%.This is also highlighted in Fig. 4(a) for the video shown in Fig. 1(a).16-QAM 3 / 4 and 64-QAM 3 / 4 depict comparable performance with PLR extending up to 5%.At these PLR, the reconstructed ultrasound videos still yield acceptable diagnostic performance, due to the use of RS and FMO error-resilience features.By examining the PLR standard deviation in Table IV, however, we observe that PLR for 16-QAM 3 / 4 and 64-QAM 3 / 4 vary significantly and can reach unacceptably high rates.This is more clearly visualized in Fig. 4(b) and (c), where it is obvious that significant packet losses occur at large distances to the BS.This is not the case for QPSK 1 / 2 scheme, which exhibits a robust performance throughout the simulation irrespective of the varying channel conditions.The reasoning is that QPSK 1 / 2 requires lower signal-to-noise ratio (SNR) compared to 16-QAM 3 / 4 and 64-QAM 3 / 4 [19] to maintain a quality connection.This is clearly depicted in Fig. 4, where the upload SNR (measured each second) fluctuations experienced by packets traversing from the mobile station to the BS do not result in packet losses for QPSK 1 / 2 as in the rival channel modulation and coding schemes.This benefit comes at the expense of the channel's capacity as depicted in Table II.QPSK 1 / 2 conveys information at 1 bit/symbol/Hz, as compared to 16-QAM 3 / 4 and 64-QAM 3 / 4 , which provide 3 and 4.5 bits/symbol/Hz, respectively (mobile WiMAX capacity is given at mega symbols per second-Msps).As a result, QPSK 1 / 2 utilizes 2048 subcarriers at 20 MHz to meet the channel capacity required to transmit 4CIF resolution medical video, whereas 16-QAM 3 / 4 and 64-QAM 3 / 4 only require 512 subcarriers. 
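The BD-based bitrate gains reported above can be reproduced with the standard Bjontegaard-delta procedure, sketched below. The four-point rate-distortion curves shown are hypothetical and only illustrate the mechanics; this is not the authors' implementation:

```python
# Minimal sketch of the Bjontegaard-delta bitrate computation: fit log-rate as a cubic
# polynomial of PSNR for each curve, integrate over the overlapping PSNR range, and
# report the average bitrate difference in percent at equal quality.
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Average bitrate difference (test vs. reference) at equal PSNR, in percent."""
    lr_ref, lr_test = np.log10(rates_ref), np.log10(rates_test)
    p_ref = np.polyfit(psnr_ref, lr_ref, 3)
    p_test = np.polyfit(psnr_test, lr_test, 3)
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_diff = (int_test - int_ref) / (hi - lo)
    return (10 ** avg_diff - 1) * 100.0

# Hypothetical four-point rate (kb/s) / PSNR (dB) curves for one sequence.
r_uniform = [520.0, 840.0, 1310.0, 1980.0]; q_uniform = [34.0, 36.1, 38.0, 39.6]  # uniform QP
r_roi     = [330.0, 540.0,  850.0, 1290.0]; q_roi     = [34.1, 36.0, 38.1, 39.5]  # ROI-based QPs
print(round(bd_rate(r_uniform, q_uniform, r_roi, q_roi), 1), "% bitrate change at equal PSNR")
```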
Average end-to-end delay of transmitted packets for all three channel modulation and coding schemes examined is less than 22 ms, which is well within the acceptable bounds for medical video streaming applications [19] (300 ms, but preferably less than 100 ms). Similarly, delay jitter is negligible for the presented scenario. This is partly due to the fact that in this particular scenario no background traffic is modeled, while the RTP packets do not traverse multiple nodes to reach their destination. However, even in the previously described circumstances, the use of service prioritization classes (as detailed in Section II) allows mobile WiMAX networks to meet the individual QoS requirements of each service. 2) Scenario 2: To better examine the performance of the system for transmission near the boundaries of the BS's effective coverage zone, a second scenario was implemented where the mobile station transmits only one video loop at each of the four different distances from the BS depicted in Fig. 2 (in Scenario 1, each video is looped until the end of each simulation). In this way, the actual quality of the transmitted video can be computed, both objectively and subjectively, as well as the specific QoS measurements at these locations, allowing accurate assumptions as to the extent (as a function of the distance from the BS and mobility) to which the investigated channel modulation and coding schemes can be used. Results are shown in Table V and Fig. 5. As expected, QPSK 1/2 attains diagnostically acceptable QoS measurements in all four locations, with consistently low PLR and high PSNR scores around 39 dB [see the leftmost boxplots of Fig. 5(a)]. Here, we use the term diagnostically acceptable to refer to the fact that the attained plaque ROI PSNR values are above 35 dB, which was shown to qualify for clinical practice in [6]. 16-QAM 3/4 and 64-QAM 3/4 deliver diagnostically acceptable performance comparable to QPSK 1/2 only for the first location, which is situated closer to the BS [see location "1" in Fig. 2 and the leftmost boxplots of Fig. 5(b)]. As the mobile station (ambulance) moves away from the BS and signal attenuation increases, the quality of the video can be significantly degraded. At 1 km from the BS, diagnostically acceptable average PSNR ratings are still obtained. The experienced PLR and the associated PLR standard deviation, however, suggest that ultrasound video of unacceptable clinical quality is transmitted on some occasions. This is more obvious at location "2" (1.1 km from the BS) and for 64-QAM 3/4, as shown in Fig. 5(c). Here, it is important to note that the results depicted in Fig. 5(a) and (b) are boxplots based on PSNR averages of ten simulation runs of the ten videos comprising the ultrasound video dataset. On the other hand, Fig. 5(c) depicts boxplots reporting the PSNR ratings for each of the ten simulation runs, for the video depicted in Fig. 1(a). The latter case demonstrates the extreme channel conditions a mobile station is likely to experience when transmitting at the edge of the effective coverage zone of the BS. As shown in Fig. 5(c), an increased number of packet losses and a high PLR standard deviation, as already discussed above, often result in diagnostically unacceptable PSNR ratings.
The latter observation is verified during the clinical evaluation (see the clinical evaluation section below), where different video instances are clinically validated. At the furthest location, the high PLR makes the transmitted video of limited clinical interest. Consequently, distances greater than 1 km from the BS provide a boundary case for this scenario. The results from the objective evaluation significantly extend the findings of previous studies in the literature [4], [9]-[11]. Here, additional experimentation allowed investigating higher mobile speeds up to 100 km/h (compared to 50 km/h [4], [9]), distances between 150 m and 1.3 km from the BS (compared to 500 m [4], [9]), hybrid vehicular multipath and path loss propagation models (compared to free space [10], [11] and vehicular models [9]), and subcarrier scalability up to 2048 (compared to 512 [9] and 1024 [4]), for a dataset composed of ten ultrasound videos and an overwhelming number of investigated cases. Most importantly, the clinical capacity of the communicated ultrasound videos was supported by the clinical evaluation provided below. C. Clinical Evaluation 1) High-Resolution Encoding: Table VI depicts the added clinical value linked with higher-resolution medical video communication. A total of ten videos were displayed at both CIF and 4CIF resolutions. The medical expert was asked to comment on the clinical content of these two resolutions. Based on previous knowledge, CIF resolution provided for evaluating the clinical criterion of plaque type, something which was not feasible with the lower QCIF resolution. The findings verified the hypothesis that higher resolution is associated with communicating a larger amount of clinical information. Detailed assessment of plaque morphology is made possible for 4CIF resolution medical video. This was not always the case with the lower CIF resolution. However, the key finding in these experiments is that 4CIF resolution closely matches the clinical capacity of the original video. Similar to [7] and [8], the medical expert was also asked to rate whether the encoded video contained the same amount of clinical information as the original video. The medical expert concluded that the clinical information precision found in 4CIF resolution ultrasound video is comparable to that of the ultrasound device's monitor. It is associated with better assessment of the plaque motion and the movement of plaque components, which leads to confident assessment of plaque type and plaque morphology, aligned with diagnosing the possibility of plaque rupture. Moreover, it facilitates better visualization of the intima of the near and far walls, of the plaque components, and of the fibrous cap where this is applicable. In general, it is expected to reduce interobserver variability. On the other hand, a frame rate higher than 15 frames/s may be required to rival in-hospital examination. 2) Scenario 2: Table VII summarizes the clinical evaluation at the investigated locations of the previously described Scenario 2. It evaluates the capability of mobile WiMAX networks to communicate high-resolution medical ultrasound video. Here, we present mean opinion scores of a representative sample of the 120 instances (3 channel modulation and coding schemes × 4 locations × 10 simulation runs = 120 instances) of the video shown in Fig. 1(a).
Videos decoded after transmission using QPSK 1/2 attain the highest clinical ratings. This is aligned with the objective assessment depicted in Table V and the associated high PSNR scores. 16-QAM 3/4 and 64-QAM 3/4 attain diagnostically acceptable ratings (marginal at 1.1 km from the BS) in all but the most distant (last) location, where they fail to qualify for clinical practice. Clearly, when the distance from the BS exceeds 1 km, a switch to a more robust channel modulation scheme will prevent the clinical quality from falling below what is acceptable. V. CONCLUDING REMARKS This paper proposes an H.264/AVC-based framework for the wireless transmission of atherosclerotic plaque ultrasound video over mobile WiMAX networks. The depicted diagnostically driven encoding scheme shows that equivalent clinical quality can be obtained at significantly reduced bitrate demands. When combined with recent postprocessing error concealment techniques [11], it can provide additional diagnostic resilience. Comprehensive experimentation showed that low-delay, high-resolution 4CIF ultrasound video transmission is possible over mobile WiMAX networks, even at speeds of 100 km/h and distances of 1 km from the BS. The investigated channel modulation and coding schemes verified that QPSK 1/2 is the most robust scheme, especially when transmitting from locations with low SNR. On the other hand, 16-QAM 3/4 and 64-QAM 3/4 provide higher network capacities and are preferable when the transmitting station is closer to the BS. The performance of the system in terms of the transmitted video's quality was evaluated using both objective and subjective evaluations. Clinical validation verified the capacity of mobile WiMAX networks to provide robust, clinically acceptable 4CIF ultrasound video transmission, thus enabling the transmission of ultrasound video at resolutions close to the originally acquired video resolution. Ongoing research includes video transmission simulations over long-term evolution (LTE) and LTE-Advanced wireless channels using the OPNET network simulator, as well as real-time setups. In the future, we also want to investigate how diagnostic encoding based on the emerging High Efficiency Video Coding (HEVC) standard [20] can lead to more efficient, error-resilient encoding [21]. Moreover, the proposed framework is currently being validated for use in other medical video modalities. Fig. 2. Typical topology for simulating medical video transmission over mobile WiMAX networks. The ambulance travels with speeds ranging from 60 to 100 km/h following the course delineated by the black line. Vertices "1"-"4" depict the locations close to the BS's effective coverage zone used during Scenario 2. Fig. 3. Boxplots depicting bitrate requirements for equivalent perceptual quality of the two investigated encoding schemes, using four QPs and the QCIF, CIF, and 4CIF video resolutions. Fig. 4. Packet losses as a function of the distance from the BS (illustrated via simulation time) and SNR for the investigated channel modulation and coding schemes (for a typical video looped over the entire route of Fig. 2, video duration = 7 s, Scenario 1). (a) Packet losses for QPSK 1/2. (b) Packet losses for 16-QAM 3/4. (c) Packet losses for 64-QAM 3/4. Fig. 5.
Scenario 2 QoS evaluation: (a) boxplots depicting the average PSNR ratings for the whole dataset and for each channel modulation and coding scheme, as a function of the distance from the BS; (b) boxplots depicting the average PSNR ratings for the whole dataset and for each distance from the BS, as a function of the investigated channel modulation and coding schemes; (c) boxplots depicting the PSNR ratings for the ten random emulations of a single video (the video shown in Fig. 1 and used in Fig. 4) for each channel modulation and coding scheme, as a function of the distance from the BS. Fig. 6 depicts video image examples of the investigated schemes. The artifacts in Fig. 6(c) demonstrate the limits in trying to visualize plaque morphology at this larger distance. Even at 4CIF resolution, the plaque morphology cannot be visualized with 16-QAM 3/4 at a distance of 1.3 km. Table I: Total number of processed videos in this study. Table II: Mobile WiMAX network configuration parameters. Table III: Average bit rate gains of diagnostically relevant encoding while maintaining the same diagnostic quality, for FMO ROI RS versus FMO 1. Table IV: QoS measurements for Scenario 1. Table V: QoS measurements for Scenario 2. Table VI: Relationship between clinical criteria and video resolution. Table VII: Clinical evaluation for the investigated channel modulation and coding schemes as a function of the distance from the BS.
6,988.8
2013-03-07T00:00:00.000
[ "Computer Science" ]
Covering minimal separators and potential maximal cliques in $P_t$-free graphs A graph is called $P_t$-free if it does not contain a $t$-vertex path as an induced subgraph. While $P_4$-free graphs are exactly cographs, the structure of $P_t$-free graphs for $t \geq 5$ remains little understood. On one hand, classic computational problems such as Maximum Weight Independent Set (MWIS) and $3$-Coloring are not known to be NP-hard on $P_t$-free graphs for any fixed $t$. On the other hand, despite significant effort, polynomial-time algorithms for MWIS in $P_6$-free graphs [SODA 2019] and $3$-Coloring in $P_7$-free graphs [Combinatorica 2018] have been found only recently. In both cases, the algorithms rely on deep structural insights into the considered graph classes. One of the main tools in the algorithms for MWIS in $P_5$-free graphs [SODA 2014] and in $P_6$-free graphs [SODA 2019] is the so-called Separator Covering Lemma that asserts that every minimal separator in the graph can be covered by the union of neighborhoods of a constant number of vertices. In this note we show that such a statement generalizes to $P_7$-free graphs and is false in $P_8$-free graphs. We also discuss analogues of such a statement for covering potential maximal cliques with unions of neighborhoods. Introduction By P_t we denote a path on t vertices. A graph is H-free if it does not contain an induced subgraph isomorphic to H. We are interested in classifying the complexity of fundamental computational problems, such as Maximum Weight Independent Set (MWIS) or k-Coloring for fixed or arbitrary k, on various hereditary graph classes, in particular on H-free graphs for small fixed graphs H. As noted by Alekseev [1], MWIS is NP-hard on H-free graphs unless every connected component of H is a tree with at most three leaves. Similarly, 3-Coloring is known to be NP-hard on H-free graphs unless every connected component of H is a path [11]. On the other hand, it would be consistent with our knowledge if all the remaining cases actually led to polynomial-time solvability. This still remains unknown in spite of intensive work on the subject, which we review next. That MWIS is polynomial-time solvable on P_4-free graphs (also known as cographs) follows from the observation that these graphs have bounded cliquewidth. A polynomial-time algorithm for MWIS on P_5-free graphs was proposed by Lokshtanov et al. [17], which was followed by a polynomial-time algorithm on P_6-free graphs due to the current authors [13]. Both these works heavily rely on the approach of potential maximal cliques, which is of central interest in this work. It is also known that MWIS is polynomial-time solvable on claw-free graphs [19,21] and on fork-free graphs [18], where the claw is K_{1,3} and the fork is the claw with one edge subdivided once. For coloring problems, it is known that 3-Coloring can be solved in polynomial time on P_7-free graphs [3] and 4-Coloring can be solved in polynomial time on P_6-free graphs [8]. It is also known that for every fixed k, k-Coloring is polynomial-time solvable on P_5-free graphs [15]. While the polynomial-time algorithms presented above are rather limited in generality, much more encouraging results are known if the requirement of polynomial running time is relaxed. In a very recent breakthrough, Gartland and Lokshtanov [10] have shown a quasi-polynomial-time algorithm, with running time n^{O(log^3 n)}, for MWIS on P_t-free graphs, for every fixed t.
The running time has been improved to n^{O(log^2 n)} by Pilipczuk et al. [20], who also observed that the same technique can be used to give an n^{O(log^2 n)}-time algorithm for 3-Coloring on P_t-free graphs, for every fixed t. Let us note that these results were established after the announcement of this work; however, they were inspired by techniques introduced in an earlier line of work on subexponential-time algorithms for the problems in question [2,6,12]. It is still unknown whether MWIS can be solved in quasi-polynomial time on H-free graphs whenever every connected component of H is a tree with at most 3 leaves; however, Chudnovsky et al. [7] have given both a subexponential-time algorithm and a quasi-polynomial-time approximation scheme in this setting. The state of the art presented above shows a large gap between the cases where quasi-polynomial-time algorithms are known and the cases where actual polynomial-time solvability has been established. It seems that there is a certain lack of a deeper understanding of the structure of P_t-free graphs for larger values of t, which prevents us from properly exploiting this structure in algorithm design. In this note we take a closer look at one property that appeared important in the algorithms for MWIS on P_5-free and P_6-free graphs [17,16,13], namely the possibility to cover a minimal separator with a small number of vertex neighborhoods. Let G be a graph. A set S ⊆ V(G) is a minimal separator if there exist two distinct connected components A and B of G − S with N_G(A) = N_G(B) = S; such components are called full components of S. A set F of non-edges of G is a chordal completion of G if the graph G + F is chordal (i.e., does not contain an induced subgraph isomorphic to a cycle on at least four vertices). A set Ω ⊆ V(G) is a potential maximal clique (PMC) if there exists an (inclusion-wise) minimal chordal completion F of G such that Ω is a maximal clique of G + F. Potential maximal cliques and minimal separators are tightly connected: for example, a graph is chordal if and only if every minimal separator is a clique, and if Ω is a PMC in G, then for every connected component D of G − Ω the set N_G(D) is a minimal separator with D being one of the full components. A framework of Bouchitté and Todinca [4,5], extended by Fomin, Todinca, and Villanger [9], allows solving multiple computational problems (including MWIS) on graph classes where graphs have only a polynomial number of PMCs. While P_5-free graphs do not have this property, the crucial insight of the work of Lokshtanov, Villanger and Vatshelle [17] allows modifying the framework to work for P_5-free graphs and, with more effort, for P_6-free graphs [13]. A simple insight about the structure of P_5-free graphs, which is crucial in [17], is the following lemma. Lemma 1 ([17]). Let G be a P_5-free graph, let S be a minimal separator in G, and let A and B be two full components of S. Then for every a ∈ A and b ∈ B it holds that S ⊆ N(a) ∪ N(b). The above statement is per se false in P_6-free graphs, but the following variant is true and turned out to be pivotal in [13]: Lemma 2 (Lemma 4.5 of [13], Lemma 18 in the arXiv version). Let G be a P_6-free graph, let S be a minimal separator in G, and let A and B be two full components of S. Then there exist nonempty sets A' ⊆ A and B' ⊆ B such that |A'| ≤ 3, |B'| ≤ 3, and S ⊆ N(A') ∪ N(B'). That is, every minimal separator in a P_6-free graph has a dominating set of size at most 6, contained in the union of two full components of this separator. In Section 3 we extend the result to P_7-free graphs as follows. Theorem 3. Let G be a P_7-free graph and let S be a minimal separator in G. Then there exists a set S' ⊆ V(G) of size at most 22 such that S ⊆ N_G[S']. Section 5 discusses a modified example from [16] that witnesses that no statement analogous to Theorem 3 can be true in P_8-free graphs.
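As a small illustration of the definitions above, the following sketch (using networkx; not part of the paper) checks whether a vertex set is a minimal separator by listing its full components:

```python
# Minimal sketch: S is a minimal separator of G exactly when at least two components
# of G - S are "full", i.e., their neighborhood equals S.
import networkx as nx

def full_components(G: nx.Graph, S: set):
    """Components of G - S whose neighborhood is exactly S."""
    rest = G.subgraph(set(G.nodes) - S)
    full = []
    for comp in nx.connected_components(rest):
        # every outside neighbour of a component of G - S necessarily lies in S
        nbrs = set().union(*(set(G.neighbors(v)) for v in comp)) - set(comp)
        if nbrs == S:
            full.append(set(comp))
    return full

def is_minimal_separator(G: nx.Graph, S: set) -> bool:
    return len(full_components(G, S)) >= 2

# In the 6-cycle 0-1-2-3-4-5, the set {0, 3} has the two full components {1, 2}
# and {4, 5}, so it is a minimal separator.
C6 = nx.cycle_graph(6)
print(is_minimal_separator(C6, {0, 3}))   # True
```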
Furthermore, observe that in the statements for P_5-free and P_6-free graphs the dominating set for the separator is guaranteed to be contained in two full components of the separator. This is no longer the case in Theorem 3, and for a reason: in Section 5 we show examples of P_7-free graphs where any constant-size dominating set of a minimal separator needs to contain a vertex from the said separator. The intuition behind the framework of PMCs, particularly visible in the quasi-polynomial-time algorithm for MWIS in P_6-free graphs [16], is that potential maximal cliques can serve as balanced separators of a graph. Here, X ⊆ V(G) is a balanced separator of G if every connected component of G − X has at most |V(G)|/2 vertices. The quasi-polynomial-time algorithm of [16] tried to recursively split the graph into significantly smaller pieces by branching on and deleting as large as possible pieces of such a PMC. Motivated by this intuition, in Section 4 we generalize Theorem 3 to dominating potential maximal cliques: Theorem 4. Let G be a P_7-free graph and let Ω be a potential maximal clique in G. Then there exists a set Ω' ⊆ V(G) of size at most 68 such that Ω ⊆ N_G[Ω']. Since every minimal separator is a subset of some potential maximal clique in a graph, Theorem 4 generalizes Theorem 3. For the same reason, our examples for P_8-free graphs also prohibit extending Theorem 4 to P_8-free graphs. Preliminaries For basic graph notation, we follow [13]. We outline here only nonstandard notation that is not presented in the introduction. For a set X ⊆ V(G), by cc(G − X) we denote the family of connected components of G − X. A set A is complete to a set B if every vertex of A is adjacent to every vertex of B. Potential maximal cliques. A set Ω ⊆ V(G) is a potential maximal clique (PMC) if: (PMC1) no connected component of cc(G − Ω) is full to Ω; and (PMC2) for every pair of distinct vertices u, v ∈ Ω with uv ∉ E(G), there exists a component D ∈ cc(G − Ω) with u, v ∈ N_G(D). In the second condition, we will say that the component D covers the non-edge uv. As announced in the introduction, we have the following. We will also need the following statement. Lemma 6 (cf. Proposition 2.7 of [13], Proposition 8 of the arXiv version). For every PMC Ω of G and every D ∈ cc(G − Ω), the set N_G(D) is a minimal separator. It is well known (cf. [14, Lemma 2]) that if |V(G)| > 1, then the family of (inclusion-wise) maximal strong modules of G forms a modular partition of G whose quotient graph is either an independent set (if G is not connected), a clique (if the complement of G is not connected), or a prime graph (otherwise). For a set X ⊆ V(G), we write Quo(X) for the quotient graph of this modular partition of G[X]. Covering minimal separators in P_7-free graphs This section is devoted to the proof of Theorem 3. We need the following two results from [13]. Lemma 7 (Bi-ranking Lemma, Lemma 4.1 of [13], Lemma 17 of the arXiv version). Suppose X is a non-empty finite set and (X, ≼_1) and (X, ≼_2) are two quasi-orders. Suppose further that every pair of two different elements of X is comparable either with respect to ≼_1 or with respect to ≼_2. Then there exists an element x ∈ X such that for every y ∈ X we have either x ≼_1 y or x ≼_2 y. In particular, if Quo(D) is not a clique, then the last condition cannot hold. Let G be a P_7-free graph, let S be a minimal separator in G, and let A_1 and A_2 be the vertex sets of two full components of S. If |A_i| = 1 for some i ∈ {1, 2}, then Theorem 3 holds by setting S' := A_i, so we may assume |A_1|, |A_2| > 1. For each i ∈ {1, 2}, fix two different maximal strong modules M^p_i and M^q_i of G[A_i] that are adjacent in Quo(A_i).
Furthermore, pick arbitrary p i ∈ M p i and q i ∈ M q i . For each i ∈ {1, 2}, we apply Lemma 8 to D := A i and N (D) = S. We say that a vertex x ∈ S is of type (a) i if x ∈ N (p i ) ∪ N (q i ). We say that a vertex x ∈ S is of type (b) i if x is not of type (a) i and there is an induced P 4 in G with x being one of the endpoints and the other three vertices belonging to A i . Finally, we say that a vertex x ∈ S is of type (c) i if x is neither of type (a) i nor (b) i . Lemma 8 asserts that if there are vertices of type (c) i , then Quo(A i ) is a clique and the neighborhood in A i of every vertex of this type is the union of a collection of maximal strong modules of G[A i ]. For α, β ∈ {a, b, c}, let S αβ be the set of vertices x ∈ S that are of type (α) 1 and (β) 2 . We need the following claim. Proof. By Lemma 8, Quo(A i ) is a clique and both A i ∩(N (x)\N (y)) and A i ∩(N (y)\N (x)) are the unions of some disjoint collections of maximal strong modules of G[A i ]. The claim follows. Since G is P 7 -free, S bb = ∅. Furthermore, if we set R a := {p 1 , q 1 , p 2 , q 2 }, then In the rest of the proof, we construct sets R bc , R cb and R cc such that S αβ ⊆ N [R αβ ] for αβ ∈ {bc, cb, cc}. We will conclude that S := R a ∪ R bc ∪ R cb ∪ R cc satisfies the statement of the lemma, because |R a | = 4 and we will ensure that |R bc |, |R cb | 5 and |R cc | 8. We start with constructing the set R bc . If S bc = ∅, then we set R bc = ∅. Otherwise, let v ∈ S bc be a vertex with inclusion-wise minimal set A 2 ∩N (v). Furthermore, let w ∈ A 2 be an arbitrary neighbor of v in A 2 ; w exists since A 2 is a full component of S. Also, let v, u 1 , u 2 , u 3 be vertices of an induced P 4 with u 1 , u 2 , u 3 ∈ A 1 ; recall here that v is of type (b) 1 . We set R bc := {u 1 , u 2 , u 3 , v, w} and claim that S bc ⊆ N [R bc ]. Assume the contrary, and let v ∈ S bc \ N [R bc ]. By the choice of v and since w ∈ Hence, we constructed R bc ⊆ V (G) of size at most 5 such that S bc ⊆ N [R bc ]. A symmetric reasoning yields R cb ⊆ V (G) of size at most 5 such that S cb ⊆ N [R cb ]. We are left with constructing R cc . If S cc = ∅, then we take R cc = ∅ and conclude. In the remaining case, S cc is non-empty, so both Quo(A 1 ) and Quo(A 2 ) are cliques. For each i ∈ {1, 2}, we define a quasi-order i on S cc as follows. For x, y ∈ S cc , is a butterfly if x and y are incomparable both in 1 and in 2 , that is, if each of the following four sets is nonempty: See Figure 1 for an illustration. Lemma 7 allows us to easily dominate subsets of S cc that do not contain any butterflies: Claim 10. Let T ⊆ S cc be such that there is no butterfly xy with x, y ∈ T . Then there exist vertices a 1 ∈ A 1 and a 2 ∈ A 2 such that T ⊆ N (a 1 ) ∪ N (a 2 ). Proof. If T = ∅, the claim is trivial, so assume otherwise. Let us focus on quasi-orders 1 and 2 , restricted to T . Since there are no butterflies in T , the prerequisities of Lemma 7 are satisfied for (T, 1 ) and (T, 2 ). Hence, there exists x ∈ T with x 1 y or x 2 y for every y ∈ T . For i ∈ {1, 2}, let a i be an arbitrary neighbor of x in A i (it exists as A i is a full component of S). For every y ∈ T , there exists i ∈ {1, 2} such that x i y, hence a i y ∈ E(G). We conclude that T ⊆ N (a 1 ) ∪ N (a 2 ), as desired. If there is no butterfly at all, then we apply Claim 10 to T = S cc , obtaining vertices a 1 , a 2 and set R cc = {a 1 , a 2 }. Thus, we are left with the case where at least one butterfly exists. 
Let xy be a butterfly with inclusion-wise minimal set the electronic journal of combinatorics 28(1) (2021), #P1.29 Figure 1: A butterfly xy. The gray grid presents the partion of G[A i ] into maximal strong modules. Furthermore, pick the following four vertices . . We claim the following: Claim 11. There is no butterfly x y with x , y ∈ T . Proof. Assume the contrary, and let x y be a butterfly with x , y ∈ T . By the minimality of xy, as u x N ({x, y})). By symmetry, assume that w ∈ A 1 and x w ∈ E(G); see Figure 2. By Claim 9, wu x 1 ∈ E(G) and wu y 1 ∈ E(G). If xy ∈ E(G), then x −w−u x 1 −x−y−u y 2 − p 2 would be an induced P 7 in G. Otherwise, if xy / ∈ E(G), then x −w −u x 1 −x−u x 2 −u y 2 −y would be an induced P 7 in G. As in both cases we have obtained a contradiction, this finishes the proof. By Claim 11, we can apply Claim 10 to T and obtain vertices a 1 ∈ A 1 and a 2 ∈ A 2 with T ⊆ N (a 1 ) ∪ N (a 2 ). Hence, we can take R cc := R ∪ {a 1 , a 2 }, thus concluding the proof of Theorem 3. Figure 2: Two cases where a P 7 appears in the proof of Claim 11. Covering PMCs in P 7 -free graphs We now prove the following statement which, together with Theorem 3 and Lemma 6, immediately implies Theorem 4. In the remaining case, pick arbitrary components We claim that we can set Ω = {u, v} and D = {D, D u , D v }. That is, we claim that Assume the contrary, and let x ∈ Ω be such that xu / Let y u be an arbitrary neighbor of u in D u , let y v be an arbitrary neighbor of v in D v , let P u be a shortest path from u to x with all internal vertices in D xu , and let P v be a shortest path from v to x with all internal vertices in D xv . Then, y u −u−P u −x−P v −v−y v is an induced path with at least 7 vertices, a contradiction. This proves (1) and concludes the proof of Lemma 12. Examples In this section we discuss two examples showing tightness of the statement of Theorem 3: we show that it cannot be generalized to P 8 -free graphs and that a small dominating set of a minimal separator may need to contain elements of the said separator. The examples are modifications of a corresponding example presented in the conclusions of [16]. First example. Consider the following graph G. We create three sets of n vertices each, For the edge set of G, we turn A 1 and A 2 into cliques and add edges s j a j 1 and s j a j 2 , for all 1 j n. This concludes the description of the graph G; see Figure 4. Note that S is a minimal separator in G with A 1 and A 2 being two full components of S. First, note that for every v ∈ V (G), |N [v] ∩ S| = 1. Thus, any set dominating S has to contain at least n vertices. Second, note that G is P 8 -free. To see this, let P be an induced path in G. Since A 1 and A 2 are cliques, P contains at most two vertices from each A i , i ∈ {1, 2}, and these vertices are consecutive on P . Since S is an independent set in G, P cannot contain more than one vertex of S in a row. Hence, P contains at most three vertices of S. Consequently |V (P )| 7, as desired. Note that if n 3, then there is an induced P 7 in G, for example Second example. Here, let us modify the graph G from the first example by turning S into a clique. Still, S is a minimal separator in G with A 1 and A 2 being two full components of S. First, note that for every v ∈ A 1 ∪ A 2 , we still have |N [v] ∩ S| = 1. Thus, any set dominating S that is disjoint with S has to contain at least n vertices. Second, note that G is P 7 -free. 
To see this, observe that G can be partitioned into three cliques, A_1, A_2, and S, and any induced path in G contains at most two vertices from each of the cliques. Note that if n ≥ 3, then there is an induced P_6 in G, for example a_3^1 − a_1^1 − s_1 − s_2 − a_2^2 − a_3^2. While the two examples above refute the possibility of covering a minimal separator or a PMC by a constant number of vertex neighborhoods, they do not refute a weaker statement such as the one of Lemma 12: covering a PMC with a constant number of vertex or component neighborhoods. We were not able to construct a counterexample to such a statement. On the contrary, we conjecture the following: Conjecture 13. For every integer t ≥ 2 there exists an integer M_t such that for every P_t-free graph G and every PMC Ω in G, there exists a set X ⊆ V(G) and a family 𝒟 ⊆ cc(G − Ω), both of size at most M_t, such that Ω ⊆ N[X] ∪ ⋃_{D∈𝒟} N(D). Figure 4: The first example for n = 5. In the second example we turn S into a clique. Lemma 12 provides the proof for t = 7 with M_7 = 3. In support of the above conjecture let us recall the following technical statement of [16]: Lemma 14 ([16]). Let H be a graph on t vertices, let G be a graph, let Ω be a PMC in G, and let µ be a probability distribution on Ω. Then G contains an induced subgraph isomorphic to H, or there exists a vertex v ∈ V(G) with µ(N[v] ∩ Ω) ≥ 1/(2t²), or there exists a component D ∈ cc(G − Ω) with µ(N(D)) ≥ 1/(2t²). Observe that an iterative application of Lemma 14 with H = P_t to a P_t-free graph G and a PMC Ω yields a weaker version of Conjecture 13: Lemma 15. For every t, every P_t-free graph G, and every PMC Ω in G, there exists a set X ⊆ V(G) and a family 𝒟 ⊆ cc(G − Ω), both of size O(t² log |Ω|), such that Ω ⊆ N[X] ∪ ⋃_{D∈𝒟} N(D). Proof. We compute X and 𝒟 via the following process, initiated with X = ∅ and 𝒟 = ∅. At each step, given X ⊆ V(G) and 𝒟 ⊆ cc(G − Ω), compute A := Ω \ (N[X] ∪ ⋃_{D∈𝒟} N(D)). If A = ∅, then stop, returning X and 𝒟. Otherwise, apply Lemma 14 to H = P_t, G, Ω, and µ being the uniform distribution over A. Since G is P_t-free, Lemma 14 finds a vertex v with µ(N[v] ∩ Ω) ≥ 1/(2t²) or a component D ∈ cc(G − Ω) with µ(N(D)) ≥ 1/(2t²). In the first case, add v to X; in the second case, add D to 𝒟; and go to the next step. Observe that by the choice of µ, in the next iteration of the process the set A is smaller by at least a factor of (1 − 1/(2t²)), proving the final bound on |X| and |𝒟|. However, we remark that it is unclear whether a positive resolution of Conjecture 13 would have any application for polynomial-time algorithms in the class of P_t-free graphs.
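For small n, the two examples are easy to rebuild and sanity-check by computer. The sketch below constructs the first example, verifies the property |N[v] ∩ S| = 1 used in the argument above, and, for tiny n only, confirms by brute force that the graph contains an induced P_7 but no induced P_8. The vertex naming and the exponential-time induced-path test are ours and serve purely as an illustration.

```python
from itertools import combinations, permutations

def has_induced_path(adj, t):
    """Brute-force test for an induced P_t (exponential; tiny graphs only)."""
    for subset in combinations(list(adj), t):
        for order in permutations(subset):
            if not all(order[i + 1] in adj[order[i]] for i in range(t - 1)):
                continue                      # consecutive vertices must be adjacent
            if all(order[j] not in adj[order[i]]
                   for i in range(t) for j in range(i + 2, t)):
                return True                   # and all other pairs non-adjacent
    return False

def first_example(n, s_clique=False):
    """First example: S = {s_1,...,s_n} (independent, or a clique if s_clique=True
    as in the second example), A1 and A2 cliques on n vertices each, plus the
    edges s_j a_j^1 and s_j a_j^2."""
    S = [f"s{j}" for j in range(1, n + 1)]
    A1 = [f"a{j}^1" for j in range(1, n + 1)]
    A2 = [f"a{j}^2" for j in range(1, n + 1)]
    adj = {v: set() for v in S + A1 + A2}
    for block in [A1, A2] + ([S] if s_clique else []):
        for u in block:
            adj[u] |= set(block) - {u}
    for j in range(n):
        for u in (A1[j], A2[j]):
            adj[S[j]].add(u)
            adj[u].add(S[j])
    return adj, set(S)

adj, S = first_example(3)
print(all(len((adj[v] | {v}) & S) == 1 for v in adj))      # True: |N[v] ∩ S| = 1
print(has_induced_path(adj, 7), has_induced_path(adj, 8))  # True False
```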
5,791.8
2020-03-27T00:00:00.000
[ "Mathematics" ]
Self-assembling behavior and interface structure in vertically aligned nanocomposite (Pr0.5Ba0.5MnO3)1-x:(CeO2)x films on (001) (La,Sr)(Al,Ta)O3 substrates Heteroepitaxial oxide-based nanocomposite films possessing a variety of functional properties have attracted tremendous research interest. Here, self-assembled vertically aligned nanocomposite (Pr0.5Ba0.5MnO3)1-x:(CeO2)x (x = 0.2 and 0.5) films have been successfully grown on single-crystalline (001) (La,Sr)(Al,Ta)O3 substrates by the pulsed laser deposition technique. Self-assembling behavior of the nanocomposite films and atomic-scale interface structure between the Pr0.5Ba0.5MnO3 matrix and CeO2 nanopillars have been investigated by advanced electron microscopy techniques. Two different orientation relationships, (001)[100]Pr0.5Ba0.5MnO3//(001)[1-10]CeO2 and (001)[100]Pr0.5Ba0.5MnO3//(110)[1-10]CeO2, form between Pr0.5Ba0.5MnO3 and CeO2 in the (Pr0.5Ba0.5MnO3)0.8:(CeO2)0.2 film along the film growth direction, which is essentially different from vertically aligned nanocomposite (Pr0.5Ba0.5MnO3)0.5:(CeO2)0.5 films having only the (001)[100]Pr0.5Ba0.5MnO3//(001)[1-10]CeO2 orientation relationship. Both coherent and semi-coherent Pr0.5Ba0.5MnO3/CeO2 interfaces appear in the films. In contrast to the semi-coherent interface with a regular distribution of interfacial dislocations, interface reconstruction occurs at the coherent Pr0.5Ba0.5MnO3/CeO2 interface. Our findings indicate that the epitaxial strain imposed by the concentration of CeO2 in the nanocomposite films affects the self-assembling behavior of the vertically aligned nanocomposite (Pr0.5Ba0.5MnO3)1-x:(CeO2)x films. Results and Discussion HAADF-STEM imaging along a PBMO zone axis shows the existence of two types of OR between CeO2 and PBMO in the nanocomposite films. It is known that under HAADF imaging conditions the atomic columns appear as bright dots in a dark background, and the intensity (I) of a bright dot is roughly proportional to the square of the atomic number (Z) of the atom column 24. The CeO2 nanopillars have a bright contrast in the PBMO matrix. It is found that the CeO2/PBMO interface can be either semi-coherent or coherent along the film-growth direction, as shown by yellow dashed lines and by red dashed lines, respectively. Interfacial dislocations are visible at the semi-coherent interface, as indicated by a horizontal yellow arrow. The atomic-scale structure of the semi-coherent PBMO/CeO2 interface has been investigated by EDS element mapping 28. Fig. 2b is a typical high-resolution HAADF-STEM image of the PBMO/CeO2 interface. The corresponding EDS maps of Mn, Ba, Ce and Pr are shown in Figs. 2c−f, respectively. In the PBMO matrix, Pr and Ba cations occupy the same atomic columns, indicating that A-site disordered PBMO is obtained. According to the EDS measurements, no elemental segregation occurs at the PBMO/CeO2 interface. In the CeO2 nanopillar, Pr and Ce occupy the same atomic columns, implying that Pr3+ ions dope into CeO2 and partially replace Ce4+ ions. The substitution of Pr3+ for Ce4+ can result in the formation of (Ce,Pr)O2−δ, and oxygen vacancies are generated in the (Ce,Pr)O2−δ. Notably, interface reconstruction occurs at the PBMO/CeO2 interface, resulting in the formation of a single-unit-cell-thick layer of A-site ordered PBMO. It is worth mentioning that the distortion of the MnO6 octahedra is different between A-site ordered and disordered PBMO 34.
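As a side note, the Z-contrast rule cited above (HAADF intensity roughly proportional to Z² of the atoms in a column) already accounts for the large brightness difference between the mixed Pr/Ba columns and the Mn columns of the PBMO matrix. The back-of-the-envelope estimate below merely illustrates that scaling; the equal-occupancy averaging and the neglect of specimen thickness and channelling are simplifying assumptions, not the imaging model used in this work.

```python
# Rough Z^2-contrast estimate for atomic columns (illustration only).
Z = {"Pr": 59, "Ba": 56, "Mn": 25, "Ce": 58, "O": 8}

def column_intensity(elements):
    """Relative HAADF intensity of a column, assuming I ~ average of Z^2
    over the atoms in the column (equal occupancy, no channelling)."""
    return sum(Z[e] ** 2 for e in elements) / len(elements)

pbmo_a_site = column_intensity(["Pr", "Ba"])   # mixed Pr/Ba columns
pbmo_b_site = column_intensity(["Mn"])         # Mn columns
ceria_site  = column_intensity(["Ce"])         # Ce columns in the nanopillar
print(f"A-site ~ {pbmo_a_site:.0f}, B-site ~ {pbmo_b_site:.0f}, Ce ~ {ceria_site:.0f}")
```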
In addition, A-site ordered PBMO undergoes a ferromagnetic–paramagnetic transition at about 320 K, while A-site disordered PBMO has T_C ≈ 140 K 25. It is worth noting that in our work the VAN (PBMO)1−x:(CeO2)x (x = 0.2 and 0.5) films grow coherently on the LSAT substrates. For the CeO2 nanopillars embedded in the PBMO matrix with the OR-I, the strain of the VAN (PBMO)1−x:(CeO2)x films can be estimated as a function of the molar ratio (x) of CeO2 to PBMO; the estimate is in agreement with the experimental observation that no OR-II occurs between CeO2 and PBMO in the VAN (PBMO)0.5:(CeO2)0.5 film. Compared with A-site disordered PBMO, A-site ordered PBMO has a relatively low M_s and a high magnetoresistance at low temperatures 25. Nevertheless, considering the very small volume fraction (~20%) of A-site ordered PBMO in the (PBMO)0.8:(CeO2)0.2 film, the magnetic properties (e.g., M_s) of the (PBMO)1−x:(CeO2)x films on the LSAT substrates would be mainly affected by the epitaxial strain imposed by the CeO2 nanopillars within the films 23. In other words, the volume fraction of CeO2 and the crystallographic OR between CeO2 and PBMO in the VAN films change the strain state and the magnetic properties of the PBMO film 22,23. In addition, it was found in our previous work that the electrical resistivity of the VAN (PBMO)0.5:(CeO2)0.5 film is over 4 times larger than that of the pure PBMO film 22. It is believed that the vertical semi-coherent phase boundary can hinder charge-carrier transport and result in an increase of the resistivity of the film system 9,36. The appearance of A-site ordered PBMO at the coherent PBMO/CeO2 interface could lead to a decrease of electrical resistivity, since A-site ordered PBMO has an electrical resistivity two orders of magnitude lower than that of A-site disordered PBMO 25. However, the A-site ordered PBMO in the (PBMO)0.8:(CeO2)0.2 film has a very small volume fraction and therefore cannot strongly affect the resistivity of the VAN (PBMO)0.8:(CeO2)0.2 film 23. Importantly, our work demonstrates that epitaxial strain can lead to the formation of A-site ordered PBMO at the heterointerface, which provides a strategy to fabricate A-site ordered PBMO thin films on other substrates (e.g., CeO2/YSZ-buffered Si substrates 37,38) by using strain engineering in heterosystems. In summary, the VAN (PBMO)1−x:(CeO2)x films prepared on (001)-oriented LSAT substrates have been systematically studied by advanced electron microscopy. While only the OR-I forms between CeO2 and PBMO in the VAN (PBMO)0.5:(CeO2)0.5 film, two different ORs appear in the (PBMO)0.8:(CeO2)0.2 film. In addition, interface reconstruction occurs at the coherent PBMO/CeO2 interface, resulting in the formation of a single unit-cell-thick layer of A-site ordered PBMO at the interface. Our results demonstrate that the self-assembling behavior of the nanocomposite (PBMO)1−x:(CeO2)x films can be modulated by epitaxial strain. Material and Methods Thin film preparation. The composite targets of (PBMO)1−x:(CeO2)x (x = 0.2 and 0.5) were sintered by a standard ceramic sintering method. The (PBMO)1−x:(CeO2)x films were fabricated on (001) LSAT single-crystalline substrates by a KrF (wavelength λ = 248 nm) excimer pulsed laser deposition system with a laser energy density of 2.0 J cm−2 at 3 Hz. During the film deposition, the oxygen pressure was 250 mTorr and the substrate temperature was 800 °C. Thin film characterization.
Cross-sectional transmission and scanning transmission electron microscopy (TEM/STEM) specimens were prepared by focused ion beam (FIB) milling (FEI Helios NanoLab 600i) 39 . Diffraction contrast imaging, selected-area electron diffraction (SAED), high-angle annular dark-field (HAADF) and annular bright-field (ABF) imaging, energy dispersive X-ray spectroscopy (EDS) mapping and electron energy-loss spectroscopy (EELS) mapping were carried out on a probe aberration-corrected JEOL JEM-ARM200F equipped with an Oxford X-Max N 100TLE spectrometer and a Gatan Enfina spectrometer, operated at 200 kV. In STEM mode, a probe size of 0.1 nm at semi-convergence angle of 22 mrad was used. HAADF and ABF detectors covered angular ranges of 90-176 and 11-22 mrad, respectively. EELS collection angle was 72 mrad and energy resolution was 1.2 eV at the dispersion of 0.3 eV/pixel. Data availability All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data related to this paper may be requested from the authors.
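As a rough companion to the strain discussion above, the in-plane lattice mismatch for the OR-I configuration can be estimated from bulk lattice parameters. The sketch below uses nominal literature values assumed only for illustration (pseudo-cubic PBMO ≈ 3.90 Å, fluorite CeO2 ≈ 5.41 Å, LSAT ≈ 3.87 Å) and a simple mismatch formula; it is not the strain estimate used in this work.

```python
import math

# Nominal lattice parameters in angstroms (assumed values for illustration only).
a_pbmo = 3.90          # pseudo-cubic Pr0.5Ba0.5MnO3
a_ceo2 = 5.41          # fluorite CeO2
a_lsat = 3.87          # (La,Sr)(Al,Ta)O3 substrate

def mismatch(a_film, a_ref):
    """Relative in-plane mismatch of a_film with respect to a_ref, in percent."""
    return 100.0 * (a_film - a_ref) / a_ref

# OR-I: (001)[100]PBMO // (001)[1-10]CeO2, i.e. the CeO2 lattice is rotated 45
# degrees in plane, so the matching CeO2 spacing is a_ceo2 / sqrt(2).
print("PBMO on LSAT       :", round(mismatch(a_pbmo, a_lsat), 2), "%")
print("CeO2 (OR-I) vs PBMO:", round(mismatch(a_ceo2 / math.sqrt(2), a_pbmo), 2), "%")
```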
1,730.4
2020-02-11T00:00:00.000
[ "Materials Science" ]
Generalization Strategies in the Problem Solving of Derivative and Integral This study proposes a learning strategy of derivatives and integrals (LSDI) based on specialized forms of generalization strategies to improve undergraduate students’ problem solving of derivative and integral. The main goal of this study is to evaluate the effects of LSDI on students’ problemsolving of derivative and integral. The samples of this study were 63 undergraduate students who took Calculus at the Islamic Azad University of Gachsaran, Iran. The students were divided into two groups based on their scores in the pre-test of derivative and integral. The results indicated that there was a significant difference between the achievements of students in experimental and control groups after treatment. Thus, the findings reveal that using generalization strategies improves students’ achievements in solving problems of derivative and integral. Some methods are introduced to support the students to overcome their problemsolving difficulties in the learning of derivative and integral. Researchers endeavour to support students' problem-solving in the learning of calculus by promoting mathematical thinking with or without a computer (Tall, 2008;Rahman, 2009;Kashefi, Ismail, & Yusof, 2012). There are extensive studies in promoting mathematical thinking to help students' understanding of calculus, especially derivative and integral (Dubinsky, 1991;Schoenfeld, 1992;Tall, 1995;Watson & Mason, 1998;Yusof & Tall, 1999;Gray & Tall, 2001;Mason, 2002;Rahman, 2009;Mason, Stacey, & Burton, 2010). Mathematical thinking is an active process which improves students' understanding of highly complex activities such as specializing, conjecturing, and generalizing (Tall, 2002b;Yusof & Rahman, 2004;Stacey, 2006;Mason et al., 2010;Kashefi et al., 2012). According to Tall (2004b) and Tall (2008), mathematical thinking process occurs in three worlds of mathematics, such as the embodied world, the symbolic world, and the formal world. The embodied world of sensory meaning and action involve reflection on perception and action. The symbolic world contains computing and manipulating symbols in arithmetic and algebraic forms. The formal world is the world of axioms, formal definitions, and formal proof of theorems (Tall, 2002b). Therefore, in the teaching and learning of calculus specifically derivative and integral, the focus is on embodied and symbolic worlds (Tall, 2011). Generalization as an important element of mathematical thinking process in problem solving can be supported to overcome students' difficulties in the learning of calculus especially derivative and integral (Polya, 1988;Cruz & Martinon, 1998;Larsen, 1999;Tall, 2002b;Tall, 2004b;Tall, 2011;Sriraman, 2004;Mason et al., 2010;Kabael, 2011). Tall (2002b) asserts that generalization strategies in mathematical thinking worlds are an expansive, reconstructive, and disjunctive generalization. First, expansive generalization happens when the notion expands the applicability range of an existing schema without reconstructing it. Second, reconstructive generalization occurs when the subject reconstructs an existing schema to widen its applicability range. Last, disjunctive generalization occurs when the subject moves from a familiar context to a new one. The subject constructs a new disjoint schema to deal with the new context and adds it to the array of schemas available (Harel & Tall, 1991). Mason et al. 
(2010) propose the three steps of problem-solving framework namely entry, attack, and review using mathematical thinking activities such as specializing, conjecturing, and generalization. Mason et al. (2010) believe that specialization and generalization as the main steps of mathematical thinking processes involve three problem-solving phases. Specialization involves entry and attack, and generalization covers attack and review. The key idea of their framework is a mulling circle between the entry and attack phases. Also, the activity of conjecturing has an important role in connecting specialization to generalization . Tall believes that a majority of using generalization happens in the symbolic world as an expansive generalization and this kind of generalization is not able to make the connection between the graphical and symbolical world of mathematics for derivative and integral (Tall, 2002b;Tall, 2004b). Tall (2002b) and Tall (2011) believe that reconstructive generalization enables the making of relationship and connection between concepts of derivative and integral. It is because this kind of generalization changes the contracture of presentation, protects the meaning of concepts and transits it to the new world. Harel & Tall (1991) and Tall (2002b) state that disjunctive generalization also can protect the relationship and connection between concepts, but it has less effect as compared to expansive and reconstructive. This study adopts a learning strategy of derivatives and integrals (LSDI) which is designed based on generalization strategies and mathematical thinking process through three worlds of mathematics to improve undergraduate students' problem solving of derivative and integral. Furthermore, the effectiveness of LSDI in enhancing students' problem solving is also being evaluated. LEARNING STRATEGIES FOR DERIVATIVE AND INTEGRAL (LSDI) This study uses the theory of three worlds of mathematics (Tall, 2004a;Tall, 2004b), viewpoint of Tall about generalization (Tall, 2002a), and the mathematical thinking process framework of Mason et al. (2010) in order to design strategies for learning derivative and integral (LSDI) to improve problem solving. In the designing of LSDI, the generalization process in the framework of Mason (specialization, conjecturing and generalization) was being modified along with the three generalization strategies namely; expansive, reconstructive and disjunctive (Tall, 2002b). Then, the modified generalization strategies were merged with three worlds of mathematics to design LSDI. Figure 1 represents the theory and the framework which were established in designing LSDI. Mathematical thinking worlds were selected because it covers both graphical and symbolical aspects of derivative and integral to overcome students' difficulties in problem-solving. Through mathematical thinking worlds, both graphical and symbolic aspects can be connected by reconstructive generalization. Moreover, expansive generalization can be supported in the embodied world to foster students' problem solving by the graphical aspect. Therefore, based on this study, mathematical thinking and generalization strategies should be considered in designing learning strategies for problem-solving. Scruggs & Mastropieri (1993) believe that learning strategies are established based on the use of tasks, and they involve how students structure and apply a collection of skills to learn content or to carry out a particular task more effectively and efficiently. 
Besides, Watson & Mason (1998) assert that learning strategies contain what we think such as planning, realizing and memorizing previous knowledge through doing the problem-solving process. According to Watson & Mason (1998) and Watson (2002), prompts and questions are more appropriate tasks use by teachers as guidance for developing mathematical thinking in the classroom within the problem-solving process. The questions assist students in focussing on particular strategies and helping them to see patterns and relationships . These questions provide the establishment of a strong conceptual network. Consequently, the questions can be used as a prompt when students become 'stuck'. Teachers are often tempted to turn these questions into prompts to stimulate thinking and incorporate students in the problem-solving process activities (Watson & Mason, 1998;Watson & Mason, 2006;Mason et al., 2010). Therefore, referring to this study, the contents of designed learning Generalization (Tall, 2002b) strategies based on prompts and questions should cover generalization strategies, mathematical thinking process, and three worlds of mathematics. The learning tasks of derivative and integral were prepared based on the LSDI strategies through prompts and questions that are shown in Tables 1 and Table 2. Table 1 presents an example of the designed prompts and questions based on LSDI which was prepared for the derivative task. -What is the general rule for the derivative figure based on the original graph? -Can you show its general form graphically and algebraically? -What is the same and the difference in this example? -What information do you need to write general forms of other topics, e.g. function and limit? -Write general ideas for this example. -Classify the ideas. -Give the general form and describe it? -When the average velocity gets close to absolute velocity? -Describe this example and the general form with the graph of this example. Formal World By using and , show that the function below are derivable: In Table 1, prompts and questions were used to design the derivative and integral tasks based on LSDI. These concepts were presented through different worlds of mathematics based on specialization, conjecturing and generalization as components of the mathematical thinking process. Moreover, the three generalization types were considered in the generalization phase of the mathematical thinking process of the problem-solving framework. Table 2 shows that the prompts and questions were designed as learning strategies of integral through mathematical thinking worlds based on specialization, conjecturing and generalization (involved three strategies namely; expansive, reconstructive and disjunctive). -Find the area between the graph and x-axes from = −1 to = 1 …(i) -Find the area between the graph and x-axes from = −2 to = 2 … (ii) -Compare (i) -Please give a more general example. -Describe how you can connect this example to algebraic form. -Find the area between the graph and x-axes from = to = . -Give another example and find the area between the graph and xaxes from = to = . -Give the general form of founded original function symbolically. -Connect the general form to its figure logically. Formal World Show that if is integrable then; RESEARCH METHOD Method of the study involves discussions on research design, sample, instruments, data collection and data analysis which are described sequentially. 
Research Design This experimental study tries to highlight the impacts of using LSDI especially generalization strategies on students' problem solving of derivative and integral. After comparing the scores between two groups of students as control and experimental, the rates of using generalization strategies due to their responds to open-ended problems were compared through pre and post-test of both groups. Sample Two classes were randomly selected among 12 classes which offered Calculus I at the University of Gachsaran in Iran. One of the two classes was selected randomly as an experimental group, and it involved 33 students. Another class acted as a control group and consisted of 30 students. Based on the results of pre-test, all students in both groups were in having the same level of problem-solving of derivative and integral. In the control group, derivative and integral were taught by using traditional and common methods used in the teaching of these concepts. In contrast, derivative and integral taught in experimental class was based on designed strategies of LSDI. The duration of the teaching of derivative and integral in both groups was 8 weeks. A post-test was administered to both classes based on problem-solving of derivative and integral at the end of week 8. Instrument This study used two instruments for pre-test and post-test which were designed to understand students' problem-solving abilities of derivative and integral. Three experts who were familiar in this case verified the questions of problem-solving in this research. For pre-test, 9 problems (6 for derivative and 3 for integral) were given to the students, and 11 problems (6 for derivative and 5 for integral) were given in the posttest. Data Analysis The goal of the analysis was to see if LSDI affected students' scores and rates of using LSDI specially generalization strategies in the experimental group as compared to the control group. Students' scores were obtained based on their responses to the problems in the pre and post-tests. The students' responses were scored from 20based on Iranian scoring schema. The differences in the mean of those scores were compared within and between groups. This comparison was done with independent samples ttest because the data had the assumption of the t-test. Moreover, the Kappa test was used to see the agreements of students' scores in problem-solving between groups through a pre-test. After implementing LSDI in the experimental group and taking posttest, the two-mixed ANOVA design test and independent samples t-test were applied to highlight the improvement of students' scores in experimental group from the pre to post-test. After scoring students' answer sheets and comparing them within each group and between two groups, the rates of using LSDI especially generalization strategies were compared within and between groups. A rubric was designed based on LSDI to compare the rates of its application specifically in the application of generalization strategies within and between groups. The rubric contained 13 items and was designed based on the blending of generalization strategies of Tall (2002b) and mathematical thinking process of Mason et al. (2010). The rubric and its items are presented in Table 3. Based on the rubric, the rates of using LSDI particular generalization strategies were investigated based on the students' responses to the open-ended questions. 
Students' answers to each question were checked to see if they had considered the items of LSDI specifically on a generalization or not. For each item, the score is either 0 or 1. If a student used that item, he or she would be given 1. A Student was given 0 if there was no consideration of the item in responding to that question. To illustrate, if a student considered E1 in his (her) response to the first problem, the given score was 1 for E1. This process was repeated for all of the items given in Table 3. Therefore, students' responses were checked to obtain the scores for the items. Finally, the given scores to all of the items were tabulated to compare within and between groups. The score for each component was a summation of its related items. For example, to know the score of specialization, the given scores of S1, S2 and S3 should be calculated. Therefore, the score for each student in responding to each problem can be from 0 to 3 for specialization. The scores in all components for each student were calculated based on their items and analyzed quantitatively within and between groups. Findings of Students' Scores The results of comparing pre-test scores indicated there was no difference between the mean of student' scores in the control group and experimental group based on independent samples t-test results. For more confidence about balancing of two groups, in the beginning, the Kappa test was done based on students' scores. According to the results such as Value=0.786 and Approx. Sig.= 0.001, there were good agreements between students' scores of pre-test in problem-solving of derivative and integral in both groups. Pallant (2010) asserts that if the value of Kappa is bigger than 0.70, there is a good agreement between the data of different groups. Therefore, LSDI could be implemented in an experimental group to see its effects on this group by taking post-test at the end of the teaching process for the concepts. The data which were collected from students' scores in problem-solving of derivative and integral were normal in the post-test. Also, the Levene's test indicated that there are significant differences between the variances of two groups based on collected data in post-test. Therefore, the parametric tests such as two-mixed ANOVA design and independent samples t-test could be used for comparing the students' scores between and within the groups through pre and post-test. The results of comparing the scores are presented in Table 4. Table 4. Comparison of students' scores in pre-test and post-tests for PS of derivative and integral The output of two-mixed ANOVA test indicates that students' scores between two groups and changing the scores from pre-test to post-test within groups were statistically significant (p= 0.001). Also, the results of independent samples t-test showed that there was no significant difference for students' scores of pre-test between the control group and the experimental group. Although the mean of scores in the control group was different from the mean of scores in the experimental group in the pre-test, the difference was not significant. However, independent samples t-test showed that there was a significant difference (with p= 0.001) between the mean of students' scores of post-test in the control group and the mean of students' scores of post-test in the experimental group. 
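The group comparisons reported above, and the non-parametric tests used for the LSDI-usage rates in the next part, can be reproduced in outline with standard statistical routines. The sketch below runs on synthetic score vectors (the raw data are not reproduced here) and uses scipy.stats; it only illustrates the analysis and is not the authors' actual computation. The mixed ANOVA itself would need an additional package (e.g., statsmodels) and is omitted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic 0-20 scores: 30 control students and 33 experimental students,
# at pre-test and post-test (placeholders for the real data).
ctrl_pre, ctrl_post = rng.normal(10, 3, 30), rng.normal(11, 3, 30)
exp_pre, exp_post = rng.normal(10, 3, 33), rng.normal(15, 3, 33)

# Between-group comparisons: independent-samples t-tests and Mann-Whitney U.
print("pre  t-test :", stats.ttest_ind(ctrl_pre, exp_pre))
print("post t-test :", stats.ttest_ind(ctrl_post, exp_post))
print("post U-test :", stats.mannwhitneyu(ctrl_post, exp_post))

# Within-group change from pre- to post-test: paired t-test and Wilcoxon
# signed-rank test, the within-subject effect the mixed ANOVA also targets.
print("control  :", stats.ttest_rel(ctrl_pre, ctrl_post), stats.wilcoxon(ctrl_pre, ctrl_post))
print("treatment:", stats.ttest_rel(exp_pre, exp_post), stats.wilcoxon(exp_pre, exp_post))
```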
After analyzing students' scores, the differences in the rates of using LSDI especially generalization strategies were analyzed within and between groups through pre and post-test based on investigating students' responses to the problems. Rates of Using LSDI in Solving of Derivative and Integral Problems The components of LSDI involving the framework of Mason and the generalization strategies of Tall based on mathematical thinking such as specialization, conjecturing and generalization strategies (expansive, reconstructive and disjunctive) were investigated in the pre-test and post-test within and between groups. The data for the rates of applying LSDI in solving problems based on students' responses to the problems of derivative and integral were not normal data. Thus, non-parametric tests should be used to see the differences in utilization rate for LSDI within and between control and experimental groups through the pre and post-tests. The rates of using generalization as the main part of LSDI in pre-test (PrG) and rate of its utilization in post-test (PoG) were compared within groups. The PrG and PoG are the summations of utilization rates of three generalization strategies in the pre and post-tests. Moreover, the differences in using each component of LSDI between the pretest and post-test were demonstrated within the group. Also, the rates of applying each component of LSDI were also compared between groups for the pre-test and post-test. To indicate differences within the group, the Wilcoxon test was used for non-parametric data, and to see differences between two groups Mann-Whitney test was administered (Pallant, 2010;Kirkpatrick & Feeney, 2012). In Table 5, the results of the comparison of LSDI components specifically generalization are given within each group. It should be noted that Pr is an abbreviation of pre-test and Po is an abbreviation of post-test, the first letter of each component added to Pr and Po. For examples in Tables 5 and 6, PrS means specialization in the pre-test, PoS means specialization in post-test. The results of the Wilcoxon test demonstrate that there is no significant difference in the rate of using components of LSDI especially generalization in the control group between pre-test and post-test. The results of this test such as Wilcoxon N=30and p>0.05 show no difference in the rates of applying generalization strategies between the pre and post-tests. However, the results of the Wilcoxon test in the experimental group show different postures of using items of LSDI. In this group, Wilcoxon N=33and p<0.05 indicate that there significant differences between the pre-test and post-test in the rates of using LSDI in the problem solving of derivative and integral within the experimental group. Furthermore, the rate of using items of LSDI was measured between two groups in the pre-test and post-test by using the Mann-Whitney U test. Also, the rates of using generalization strategies were compared between control and experimental groups as presented in Table 6. According to the results which are presented in Table 6, the Mann-Whitney U-Test reveals that the rates of using LSDI especially generalization strategies from control group are not significantly different from the experimental group in the pre-test. The results such as U, Zand two-tailed p>0.05 approve this assertion through the pretest in problem-solving of derivative and integral. 
Meanwhile, the Mann-Whitney Utest reveals that the rates of all components of LSDI specifically in the experimental group are significantly higher than the rates from the control group in the post-test (U, Zand two-tailed p<0.05). Furthermore, the mean of ranks from each group in different tests verifies this result. Discussion The results of comparing scores and rates of using generalization strategies are discussed to see if LSDI affected students' scores and their utilization rates of generalization strategies based on the responses to the problems of derivative and integral. Although there is no significant difference between the mean of students' scores in control group with the mean of scores in the experimental group for the problems solving of derivative and integral in the pre-test, the scenario in responding to the posttest problems is different. The results indicate that the students' scores in the experimental group are better than the students' scores in the control group through solving post-test problems of derivative and integral. It indicates that the scores of students who learned derivative and integral based on LSDI especially generalization strategies were in the high range of qualification according to Iranian scoring rate at the undergraduate level. Also, the two-mixed ANOVA design test indicates that the improvements of students' scores within each group from the pre-test to post-test are dissimilar in different groups. Based on the results, the improvement of scores from the pre-test to post-test for experimental group students is higher than the improvement of scores in the control group students from the pre-test to post-test in solving problems of derivative and integral. It can be identified that the achievements of the experimental group students who used LSDI especially generalization strategies were better as compared to the other group. Thus, to know the role of LSDI particularly generalization strategies in improving students' scores in problem-solving, the components of LSDI were investigated based on the responses to the problems of derivative and integral. Generalization strategies as main components of LSDI in problem-solving of derivative and integral were used remarkably in solving the post-test problems among experimental group students. Although students did not display the utilizations of generalization strategies in solving the pre-test problems, they tried to use the strategies many times through solving the post-test problems. Hence, generalization strategies such as expansive, reconstructive and disjunctive were applied in answering questions of problem-solving of derivative and integral. It should be mentioned that specialization and conjecturing as two fundamental bases for generalization were better considered in the post-test than the pre-test among students in the experimental group. However, in the control group, there was no difference in the utilization of specialization, conjecturing and generalization strategies when responding to the pretest and post-test of derivative and integral. The control group students did not show applications of generalization strategies in responding to the pre and post-tests. There is no difference in the utilization rates of generalization strategies such as expansive, reconstructive and disjunctive and their bases such as specialization and conjecturing in answering the questions of problemsolving of derivative and integral between the pre-test and post-test in this group. 
However, the posture of the utilization rate for generalization strategies is different between the two groups. Although the rates of using specialization, conjecturing and generalization strategies are not meaningfully different between two groups in solving the pre-test problems of derivative and integral, there are significant differences of using these strategies between groups in solving the problem of derivative and integral in the posttest. Based on the rubric and students' responses in their answer sheets, the experimental group students used more examples which were related to the problems when solving them. They tried to find the similar properties and ideas of examples in the entry phase of the problem solving framework. Also, the experimental group students categorized the ideas according to the properties of examples through the attack phase. Moreover, the students attempted to give a general guess to solve the problems based on related examples. Besides, they formulated the ideas based on the examples within the attack phase of Mason's problem-solving framework. They also chose the best method to find the answer for problems of derivative and integral within the attack phase. The expansive generalization was used by the experimental group students extensively when responding to the post-test. The students found answers to some problems by using more related examples of those problems which belonged to the attack phase of problem-solving. The students in the experimental group tried to check the written solutions for problems in the same cases using review phase. However, control group students did not demonstrate these kinds of solutions when solving problems. Furthermore, the experimental group students solved problems by generating ideas in both symbolic and embodied worlds of mathematics using properties of reconstructive generalization in the attack phase. Subsequently, they created new IJEME, Vol. 3, No. 1, March 2019, 77-92. 88 schemas which were based on applying the properties of the review phase of the problem solving framework. Also, in a few cases, the students attempted to extend the solution idea to higher level problems. In contrast, usages of these factors were not apparent in solving problems of derivative and integral among the control group students. The utilization of disjunctive generalization was rated less as compared to other generalization strategies when solving problems by the experimental group. Some students tried to find answers by using familiar contexts. However, the control group students found the answers by using disconnected pieces of information, and also they generated the ideas of the solution in the wrong way in the same cases. Based on the framework of problem-solving, three phases namely entry, attack and review was well considered among students in the experimental group. It can be concluded that using components of LSDI specifically generalization strategies has remarkable effectiveness in improving students' scores in the experimental group as compared to the control group students. CONCLUSION LSDI was implemented based on strategies such as presenting concepts within three worlds, using mathematical thinking process, prompts and questions to teach derivative and integral in the experimental group. 
Using components of LSDI namely; specialization, conjecturing, expansive generalization, reconstructive generalization and disjunctive generalization based on mathematical thinking worlds play an important role to enhance problem-solving of derivative and integral. The results indicate that there is an improvement in the problem-solving scores of students who experienced strategies of LSDI when learning derivative and integral. The improvement of scores in the experimental group is better than the control group students' scores when comparing the pre-test and post-test. The components of LSDI are considered remarkable in solving problems among the experimental group students. Specialization and generalization as main activities of mathematical thinking process were used in solving the problems of derivative and integral in the experimental group. Also, specialization involves two phases of problem-solving framework; entry and attack, and generalization cover attack and review. The utilization of problemsolving framework (entry, attack and review) among the experimental group students were more remarkable. Thus, using LSDI especially generalization strategies based on three worlds of mathematics improves students' problem-solving achievements in the learning of derivative and integral. Moreover, adopting a problem solving framework which involves entry, attack and review help to improve students' performance. This study recommends evaluating the effectiveness of LSDI on other concepts of calculus for further research. Besides, investigating the process of using LSDI based on students' thinking can be used for future studies. Teachers should put more emphasis on the use of LSDI to teach calculus concepts, especially derivative and integral to enhance their students' performance.
6,116.6
2019-03-06T00:00:00.000
[ "Mathematics" ]
Dynamic Instability Analysis of a Double-Blade Centrifugal Pump : The flow instability of a double-blade centrifugal pump is more serious due to its special design feature with two blades and large flow passages. The dynamic instabilities and pressure pulsations can affect the pump performance and operating lifetime. In the present study, a numerical investigation of unsteady flow and time variation of pressure within a complete double-blade centrifugal pump was carried out. The time domain and frequency domain of pressure pulsations were extracted at 16 monitoring locations covering the important regions to analyze the internal flow instabilities of the pump model. The frequency spectra of pressure pulsations were decomposed into Strouhal number dependent functions. This led to the conclusion that the blade passing frequency (BPF) related vibrations are exclusively flow-induced. Large vortices were observed in the flow passages of the pump at low flow rate. It is noted that high vorticity magnitude occurred in the vicinities of the blade trailing edge and tongue of the volute, due to the rotor-stator interaction between impeller and volute. Introduction Double-blade centrifugal pumps are widely used to transport liquids containing solid particles, for instance, wastewater treatment, chemical process and liquid food transportation. The impeller with two blades can be capable of handling the liquids with large solid particles or long fibers, due to its larger flow passages [1]. However, compared with multiblade centrifugal pump, the internal flow instability of the double-blade centrifugal pump is more serious, and the corresponding slip is increased, because the flow control capability of the impeller is decreased due to its large passage. From an engineering viewpoint, to extend the operating lifetime of double-blade centrifugal pump, we should focus on lowering the vibration level of the pump. Typically, the vibration generated in a centrifugal pump consists of tonal components and broadband components. Tonal components are often dominated by the tones at blade passing frequency and its harmonics [2][3][4]. This is a consequence of the strong interaction between the impeller and tongue of volute [5,6]. Additionally, broadband vibration is generated by flow separations on the impeller blades or pump volute, turbulence boundary layers, back flow and vortices [7][8][9]. Studies on the flow instability of centrifugal pumps are numerous. In the early stage, Guelich et al. [10] pointed out that pressure pulsations in centrifugal pump are created by the unsteady flow, for instance, the wake flow from impeller blade trailing edge, the large-scale turbulence and vortices generated by flow separation and flow recirculation at part load conditions. Afterwards, Spence et al. [11] adopted the numerical methods to investigate the pressure pulsations and unsteady flow inside the pump, and a better agreement between computational fluid dynamics (CFD) data and experimental data was achieved. Pavesi et al. [12] carried out the study on flow instability of a centrifugal pump, and a correlation between the fluid dynamics and the emitted noise was investigated. It was identified that a fluid-dynamical origin is connected with the jet-wake phenomenon. Recently, the flow instability of a multistage centrifugal pump at off-design conditions was investigated by Shibata et al. [13]. Pressure pulsations at diffuser inlet were measured and rotating stall phenomenon was observed with CFD tools. 
In addition, some studies were carried out to do the deep analysis on the unsteady flow phenomena in centrifugal pump, for instance, flow separation [14][15][16], rotating stall [17][18][19] and recirculation [20,21]. Those directly cause the formation of flow instability, pressure pulsations and pump vibration. Although much work has been done on flow instability of centrifugal pump, there is sparse literature available to analyze the flow instability and vibration characteristics of a double-blade centrifugal pump. Thus, the objectives of this study are: (1) identify the unsteady flow phenomena and pressure pulsations of the double-blade centrifugal pump; (2) evaluate the vibration characteristics and its relationship with flow instability; (3) explain the flow instability generating mechanism of double-blade centrifugal pump. Pump Model A typical double-blade centrifugal pump with large flow passages is selected as a study object. Basically, the pump is constructed with an impeller with two blades, volute, gas-liquid separation chamber, suction, shaft, shaft seal and key (shown in Figure 1). The fluid domains were constructed to do the fluid simulation. Both the designed operation parameters and pump geometrical parameters were illustrated in Table 1. Simulation Setup The ANSYS CFX 19.0 was used as a simulation tool.
Simulation Setup The ANSYS CFX 19.0 was used as a simulation tool. In the simulation, the inlet boundary condition was set at total pressure (1 atm), and the outlet boundary condition was set as mass flow. The impeller is the rotating part, and the other parts are stationary. Both steady and transient simulation strategies were undertaken in the research. In the steady calculation, the interface between the rotating impeller and the volute was set as a frozen rotor, and in the transient calculation, it was set as a transient rotor-stator interface. The spatial discretization of the governing equations is based on the finite volume method, and the transient calculation time step is 1.72414 × 10−4 s, which is 1/120 of the rotation period. High resolution is selected for the advection scheme, the transient scheme is second-order backward Euler, the turbulence numerics are high resolution, the convergence target is set to 10−5, and the maximum number of iteration steps in each time step is 15. A standard wall function was used to treat the near-wall region, and the solid wall surfaces were set to no slip. In addition, the impeller, gas-liquid separation chamber, suction and volute wall surfaces were set as rough walls with a roughness of 0.04 mm, and the remaining solid walls were set as smooth walls. The quality of the grid directly affects the accuracy and time of the numerical calculation. In the CFD calculation, structured meshes were constructed using Gridpro software, as shown in Figure 2. A grid sensitivity analysis was carried out to determine the appropriate number of grids. Figure 3 shows that as the number of cells exceeds about 3 million, the head and torque basically remain constant, and their values are about 16.7 m and 19.4 N·m under the designed flow rate condition. In order to balance calculation accuracy and calculation time, the total mesh number of the final model was set at 4.4 million, in which the numbers of cells of the impeller, volute, gas-liquid separation chamber and suction channel are 680,000, 460,000, 215,000 and 170,000, respectively; the remaining parts of the computational domain consist of 940,000 cells. Table 2 shows the verification results of the independence of turbulence models under the designed flow rate condition. It can be seen that the influence of different turbulence models on the calculation results is small, and the relative error is lower than 3%, which satisfies the precision requirements of the relevant calculation. In this study, SST k-ω was selected as the turbulence model for the subsequent calculations, because it can capture the wall separation flow and shear flow under off-design conditions well [22][23][24].
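As a quick plausibility check on the transient settings above, the stated time step, the implied rotation period, and a simple grid-independence criterion can be reproduced in a few lines. The sketch below (Python) is illustrative only: the shaft speed is inferred from the stated time step, and the mesh/head/torque lists are placeholder values standing in for the data plotted in Figure 3, not the authors' raw results.

```python
# Minimal sketch: time-step arithmetic and a grid-independence check of the kind described above.
dt = 1.72414e-4            # transient time step, s (1/120 of a rotation period)
T_rev = 120 * dt           # rotation period, s  -> ~0.0207 s
n_rpm = 60.0 / T_rev       # implied shaft speed, rev/min -> ~2900 rpm
f_bpf = 2 * n_rpm / 60.0   # blade passing frequency for z = 2 blades, Hz
print(f"rotation period ~ {T_rev:.4f} s, speed ~ {n_rpm:.0f} rpm, BPF ~ {f_bpf:.1f} Hz")

# Stop refining once head and torque change by less than a tolerance between meshes.
meshes = [1.5e6, 3.0e6, 4.4e6]          # cell counts (illustrative placeholders)
head   = [16.2, 16.68, 16.70]           # m (illustrative placeholders)
torque = [18.9, 19.38, 19.40]           # N*m (illustrative placeholders)
for i in range(1, len(meshes)):
    dh = abs(head[i] - head[i - 1]) / head[i - 1]
    dm = abs(torque[i] - torque[i - 1]) / torque[i - 1]
    tag = "-> grid independent" if max(dh, dm) < 0.01 else ""
    print(f"{meshes[i]:.1e} cells: dH = {dh:.2%}, dM = {dm:.2%} {tag}")
```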
Experimental Setup The pump is directly driven by a Siemens motor and the speed is controlled by an ABB frequency converter. Figure 4 presents the pump test rig. It includes: 1-inlet tank, 2-flow meter, 3-inlet pipe with pressure detecting port, 4-outlet pipe with pressure detecting port, 5-test pump, 6-outlet tank, 7-inlet pressure transducer, and 8-outlet pressure transducer. To ensure a stable water flow at the pump inlet, the inlet tank was equipped with a flow straightener for stabilizing the flow. The error of the measuring devices is as follows: (1) the pressure error is ±0.2%; (2) the flow rate error is ±0.2%; (3) the speed error is ±1 rpm; (4) the power error is ±1.5%. In addition, the repeatability of the test fulfils the ISO 9906-2012 Grade 1 requirement. Additionally, a piezoelectric vibration acceleration sensor and a signal acquisition and analysis system were used to measure the vibration levels of the pump. According to the requirements of ISO 10816 [25], 5 vibration acceleration monitoring points were arranged in the vibration sensitive area of the pump unit to obtain sufficient vibration information, see Figure 5. The No. 1 measuring sensor is located on top of the pump housing (Z direction), No. 2 on the side wall of the pump housing (Y direction), No. 3 on the foot of the pump housing (Z direction), No. 4 on the bearing housing (Z direction), and No. 5 on the bearing housing (Y direction). The technical information of the acceleration sensors is described in Table 3. However, a special design feature of the pump volute is that it is covered by a chamber, so it is difficult to mount dynamic pressure sensors in the pump volute to measure the dynamic pressure pulsations. Therefore, the vibration data were measured as a limited experimental dataset to explain the dynamic instability of the pump.
Layout of Monitoring Points In order to study the pressure pulsation characteristics at different positions of the volute and the impeller by CFD, 10 monitoring points were arranged in the middle section of the volute, and 6 monitoring points were arranged at the mid-span of blade suction and pressure surface from inlet to outlet (shown in Figure 6). In detail, P1 is located at the tongue area, P2-P9 are located at different sections of the volute, and P10 is located at the diffuser area. X1-X3 are the three monitoring points from inlet to outlet of the blade suction surface, and Y1-Y3 are the three monitoring points from inlet to outlet of the blade pressure surface. Simulation Validation A comparison between the numerical results and the experimental results is shown in Figure 7. The disc friction loss is taken into account in the computation of the efficiency. Equations of disc friction loss can be found in [26]. It can be found that both the head and efficiency error at the designed condition are less than 2%; however, at part-load flow rates, the maximum error of head and efficiency is up to 5%. The main reason is that at part-load flow rate, the flow structures inside the pump are very complicated and unstable, which leads to large deviations in calculation and test. In general, the numerical calculation results are in good agreement with the experimental results, and the accuracy basically meets the requirements of subsequent calculation and analysis. In addition, the typical flow rates, e.g., 0.4Qd, 0.6Qd, 0.8Qd, 1.0Qd, and 1.2Qd, were selected for further analysis.
Pressure Pulsations of Volute at Different Flow Rates The dimensionless unsteady pressure coefficient is calculated in Equation (1), which is a normalization of the obtained pressure values based on the dynamic pressure at the impeller outlet (0.5ρu2²). The intensity of pressure pulsation μp is calculated in Equation (3), and the frequency of the pressure pulsations is decomposed into the Strouhal number (St) in Equation (4):

C_p = (p_i − p̄)/(0.5ρu2²) (1)
p̄ = (1/N) Σ p_i (2)
μ_p = [(1/N) Σ (p_i − p̄)²]^(1/2) / (0.5ρu2²) (3)
St = f/(z·n) (4)

where N = 120 is the number of samples, p_i is the instantaneous pressure at a monitoring point, p̄ is its time-averaged value, u2 is the circumferential velocity at the impeller outlet, f is the pulsation frequency, z = 2 is the blade number and n is the impeller rotational frequency, so that St = 1 corresponds to the blade passing frequency. Figure 8 presents the intensity of pressure pulsations at different monitoring points in the volute under different flow rates. It is noted that the pump volute is asymmetric owing to the tongue. The flow passage in the tongue area is quite narrow and the pressure changes dramatically under the interference between impeller and tongue. The vicinity of the tongue (P1-P3) is the area with the most serious pressure pulsation in the entire volute channel. For the P1-P3 monitoring points, the change of flow rate caused a change of the pressure pulsation amplitude. At 0.4Qd, the change of pressure near the tongue area (P2) was the most significant (shown in Figure 9a). As the relative gap between the impeller outlet and the volute wall surface increases, the intensity of rotor-stator interference weakens, and the influence of the flow rate on the pressure pulsation decreases. For the P4-P6 monitoring points, the pressure pulsation amplitude at different flow rates becomes smaller (shown in Figure 8). For the P7 monitoring point, the pressure pulsation amplitude reaches its minimum value (shown in Figure 8), and changes in flow rate have only a slight effect on the pressure pulsation (shown in Figure 9b). In Figure 9, the data were captured from the last two rotation cycles of the impeller, and the time required for one rotation cycle (T) is about 0.02 s. There are two main peaks and valleys in one rotation cycle, which is directly linked to the number of blades (z = 2). At the same time, this also shows that the rotor-stator interference between the pump's dynamic part (impeller) and static part (volute) was the main source of pressure pulsation at each monitoring point.
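To make the definitions in Equations (1)-(4) concrete, the following sketch shows how a monitored pressure trace could be reduced to the pressure coefficient, the pulsation intensity, and a Strouhal-number spectrum. It is a minimal illustration, not the authors' post-processing code: the density, outlet velocity, and the synthetic signal are assumed values, and in practice p(t) would come from the CFD monitoring points.

```python
# Minimal sketch: pressure coefficient, pulsation intensity, and St spectrum per Eqs. (1)-(4).
import numpy as np

rho, u2 = 998.0, 15.0           # water density (kg/m^3) and impeller outlet speed (m/s), assumed
z, n_rev = 2, 1.0 / 0.02069     # blade count and shaft speed (rev/s) from the rotation period
f_bpf = z * n_rev               # blade passing frequency, Hz

dt = 1.72414e-4
t = np.arange(0, 10 * 0.02069, dt)                           # ten revolutions of synthetic data
p = 101325 + 800 * np.sin(2 * np.pi * f_bpf * t) + 50 * np.random.randn(t.size)

q2 = 0.5 * rho * u2**2                                        # dynamic pressure at impeller outlet
p_mean = p.mean()                                             # Eq. (2)
cp = (p - p_mean) / q2                                        # Eq. (1)
mu_p = np.sqrt(np.mean((p - p_mean) ** 2)) / q2               # Eq. (3), RMS-based intensity

spec = np.abs(np.fft.rfft(p - p_mean)) * 2.0 / p.size         # single-sided amplitude spectrum
freq = np.fft.rfftfreq(p.size, dt)
St = freq / f_bpf                                             # Eq. (4): St = 1 at the BPF

print(f"pulsation intensity mu_p = {mu_p:.4f}")
print(f"dominant peak near St = {St[np.argmax(spec[1:]) + 1]:.2f}")
```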
Frequency spectra of pressure pulsations in the volute under different flow rates are shown in Figure 10. The dominant frequencies of the pressure pulsation under different flow rates are the blade passing frequency (St = 1) and its multiples (St = 2, 3, 4, etc.). At 1.2Qd, the amplitude of pressure pulsation at the blade passing frequency is larger when the interaction occurs in the vicinity of the tongue at P2. However, at P7, a high amplitude of pressure pulsation occurs at the low flow rate (0.4Qd). In addition, especially at P2, it is noted that broadband pulsation occurs at 0.4Qd, which means some unsteady flow structures could be generated in this area, e.g., vortices, backflow and flow separation. This is clearly shown in Figure 11, where the backflow phenomena are presented at the outlet of the impeller, and some vortices are produced near the suction side of the blade. The flow fields at t0 and t2 are quite similar, and the flow field at t1 is quite similar to that at t3, owing to the relative position of the blade to the tongue of the volute. Instantaneous fields of vorticity magnitude at the low flow rate are illustrated in Figure 11c. The vorticity magnitude Ω was calculated according to the definition of the new omega vortex [27]. High vorticity magnitude occurs in the vicinities of the blade trailing edge and the tongue of the volute, due to the rotor-stator interaction between impeller and volute. The vorticity magnitude layer on the suction side of the blade shows the flow separation. Figure 12 shows the flow velocity streamlines of the pump under different flow rates. It can be found that the flow separation is weakened as the flow rate increases. At 1.0Qd and 1.2Qd, there is no vortex inside the pump. This is clear evidence supporting the low pressure pulsation under the designed flow rate.
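The Ω field referred to above can be evaluated per cell from the velocity-gradient tensor. The sketch below assumes that the "new omega vortex" definition in [27] is the dimensionless Omega criterion of Liu and co-workers (the rotational part of the velocity gradient relative to the total); if the reference uses a variant, the formula would need adjusting, and the sample tensor is purely illustrative.

```python
# Minimal sketch of the Omega vortex-identification criterion, assuming the velocity-gradient
# tensor grad_u is available at each cell from the CFD solution.
import numpy as np

def omega_criterion(grad_u, eps=1e-6):
    """Omega = b / (a + b + eps), with a = ||A||_F^2 (strain part), b = ||B||_F^2 (rotation part)."""
    A = 0.5 * (grad_u + grad_u.T)      # symmetric (strain-rate) part
    B = 0.5 * (grad_u - grad_u.T)      # antisymmetric (rotation) part
    a = np.sum(A * A)
    b = np.sum(B * B)
    return b / (a + b + eps)

grad_u = np.array([[0.0, -5.0, 0.0],   # illustrative velocity gradient (1/s)
                   [4.0,  0.0, 0.0],
                   [0.0,  0.0, 0.1]])
print(f"Omega = {omega_criterion(grad_u):.3f}")   # values above ~0.52 are commonly taken as vortex regions
```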
Figure 13 shows the pressure pulsations of different monitoring points in the impeller at different flow rates. There is no rotor-stator interference in the impeller flow channel, but unsteady flow phenomena such as flow separation, backflow and stall make the pressure pulsation characteristics in the impeller flow channel more complicated. Compared with the periodic pressure pulsation in the volute, the periodicity of the pressure pulsation in the impeller is obviously weakened. For the suction side of the blade (X1-X3), the pressure pulsation amplitude is relatively stable and fluctuates in the range of ±0.5. The change of flow rate has only a slight effect on the pulsations at X1-X3. However, for the pressure side of the blade (Y1-Y3), the amplitude of the pressure pulsation changes greatly due to the jet-wake effect. In particular, near the trailing edge of the blade pressure surface (Y3), the amplitude of pressure pulsation is at its highest level. In order to reveal the spectral characteristics of the impeller at different flow rates, the typical points X3 and Y3 were selected for further spectral analysis. The spectra of pressure pulsation at X3 and Y3 are presented in Figure 14. For all the cases, the main excitation frequencies for the pressure pulsation at X3 and Y3 are the impeller rotational frequency and the blade passing frequency (BPF). Figure 15 presents the distribution of the radial forces Fx and Fy in the X and Y directions on a blade during one single rotation. The shapes of Fx and Fy resemble a dipole under different flow rates due to the effect of the two blades. It is further noted that a bigger force on the blade is generated when a blade passes the tongue of the volute. On the contrary, when the blade is away from the tongue of the volute, the radial force is smaller. In addition, comparing the radial forces at different flow rates, it can be seen that the largest radial forces were generated at 0.4Qd. This is caused by the uneven pressure distribution. However, the lowest radial forces were found at 1.0Qd.
Vibration Characteristics of Pump To evaluate the vibration levels and main excitation frequencies of the pump model under different flow rates, a typical vibration spectrum is shown in Figure 16. It indicates that the frequencies at St = 1, 2, 3, 4 and 5 are the main excitation frequencies which induce pump vibration. By comparing the vibration levels of the different measuring points, the acceleration data at the main excitation frequencies were calculated for the five vibration measuring points under different flow rates (shown in Figure 17). In Figure 17a, for the No. 1 measuring point, the vibration level at St = 1 is higher than the others. Moreover, the vibration level at 0.4Qd is the highest and its value is about 4.3 mm²/s, while the lowest vibration level can be found at 1.0Qd. From Figure 17b, the vibration level at St = 1 of the No. 2 measuring point is lower than that of the No. 1 measuring point. However, the vibration levels at St = 2, 3, 4 and 5 are higher than those of No. 1. As can be seen in Figure 17c,d, the distributions of the vibration levels of the No. 3 and No. 4 measuring points at different Strouhal numbers are similar to that of the No. 1 measuring point. Figure 17e shows that the overall vibration level of the No. 5 measuring point is lower than the others. This indicates that low vibration energy was generated at the pump feet.
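A compact way to produce the per-order comparison of Figure 17 is to take the amplitude spectrum of each acceleration channel and read off the peaks near St = 1-5. The sketch below is illustrative: the sampling rate, shaft speed, and synthetic tonal amplitudes are assumptions, and a real signal would come from the piezoelectric sensors described in the experimental setup.

```python
# Minimal sketch: extract vibration amplitudes at Strouhal orders St = 1..5 from an acceleration signal.
import numpy as np

fs = 10240.0                     # sampling rate of the acceleration channel, Hz (assumed)
n_rev, z = 48.3, 2               # shaft speed (rev/s, ~2900 rpm) and blade count
f_bpf = z * n_rev

t = np.arange(0, 2.0, 1.0 / fs)
acc = sum(a * np.sin(2 * np.pi * k * f_bpf * t)        # synthetic tonal content at St = 1..5
          for k, a in enumerate([4.3, 1.1, 0.8, 0.5, 0.3], start=1))
acc = acc + 0.2 * np.random.randn(t.size)              # broadband noise

spec = np.abs(np.fft.rfft(acc)) * 2.0 / acc.size
freq = np.fft.rfftfreq(acc.size, 1.0 / fs)

for k in range(1, 6):                                  # pick the peak in a narrow band around each order
    band = (freq > (k - 0.1) * f_bpf) & (freq < (k + 0.1) * f_bpf)
    print(f"St = {k}: amplitude ~ {spec[band].max():.2f}")
```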
Relationship between the Unsteady Flow and the Structural Vibration As discussed above, the rotating flow instability of the impeller is coupled to the unsteady flow fluctuations in the impeller flow passages. Highly intensive pressure pulsations are generated near the trailing edges of the blade surfaces, because the passage flow is destabilized by the interaction between the impeller blades and the volute, especially near the tongue of the volute. It is believed that this is a major source of fluid-induced vibration. By analyzing the pressure pulsation spectra of the volute (Figure 10) and the vibration spectra of the pump (Figure 17), it can be seen that the peak at the blade passing frequency (St = 1) generates more vibration energy than the other frequencies at St = 2, 3, 4 and 5. This demonstrates that the spectral peaks that correspond to multiples of the number of impeller blades are the main contributors to the pump vibration. Conclusions In this study, the flow instability of a double-blade centrifugal pump was investigated. The following specific conclusions can be drawn: (1) The region in the pump experiencing the largest pressure pulsation is located in the vicinity of the tongue of the volute. The amplitude of pressure pulsation decreases as the relative gap between the impeller outlet and the volute wall surface increases. The rotor-stator interference between the impeller and volute is the main source of pressure pulsation. Highly intensive pressure pulsation is found at part-load conditions due to serious flow instability. (2) Strong vorticity fields in the vicinities of the volute tongue and the suction side of the impeller blade were observed. These vortices are unstable; they influence the normal flow streamlines and destabilize the radial forces on each blade. Additionally, the interaction between blade and tongue causes dynamic fluctuation of the blade loads. (3) The main excitation frequencies were identified by measuring the vibration data of the double-blade centrifugal pump. In general, the vibration amplitude at the blade passing frequency (St = 1) is higher than the others. There is a direct link between pressure pulsation and pump vibration, and pressure pulsation is a major source of fluid-induced vibration. (4) Considering the difficulties in performing dynamic pressure measurements on the pump volute, owing to the volute being located inside the chamber, CFD was used as the main tool to analyze the dynamic instability of the pump model. In the future, it would be more comprehensive to include measurement of the rotor movement orbit to help observe the dynamic behavior of the double-blade centrifugal pump.
7,732.2
2021-09-03T00:00:00.000
[ "Engineering", "Physics" ]
How Does Digital Economy Affect Rural Revitalization? The Mediating Effect of Industrial Upgrading : Since the reform and opening up in 1978, China’s economy has grown significantly, but rural development still lags. China has implemented a rural revitalization strategy to reduce the gap between urban and rural areas. Meanwhile, the digital economy has gradually become a new economic growth engine for China. With the digitalization of rural industries, the digital economy gradually integrated into rural development and revitalization. However, how the digital economy impacts rural revitalization remains unclear. Based on the entropy method, previous studies measured rural revitalization levels from the perspectives of economy, civilization, and ecological environment. In this paper, using panel data from 11 prefecture-level cities in Zhejiang Province from 2011 to 2019, we use the entropy method to quantify the development level of the digital economy, industrial upgrading, and rural revitalization. Then, we investigate the relationship among them using fixed effect regression. The empirical results show that the digital economy obviously promotes rural revitalization. The mediation effect test shows that industrial upgrading plays a mediating mechanism between the digital economy and rural revitalization. In addition, heterogeneity analysis reveals that the promotion effect of the digital economy on rural development in southwestern Zhejiang is stronger than that of northeastern Zhejiang. The results imply that government should strengthen digital infrastructure construction in rural areas to promote rural revitalization. Moreover, rural areas with different economic development levels should implement a differentiated rural revitalization strategy. Introduction In recent years, the digital economy has attracted increasing attention from academia, institutions, and policymakers [1,2].Information processing technologies are becoming increasingly mature, strengthening human data analysis capabilities.As a result, data have become a critical factor of production.The digital economy has fostered formation of new industries, new models, and new power [3].Digital technology lowers economic costs, including search, replication, transportation, tracking, and verification costs [4].In addition, the digital economy also lowers information asymmetry and facilitates information dissemination.Digital economic activities have accelerated industrial digitization and digital industrialization of various countries [5,6].In the last twenty years, China's digital economy has developed fast.According to the Chinese Academy of Information and Communications, from 2015 to 2020, China's digital economy of GDP increased from 27.0% to 38.6%.In 2020, China's digital economy was CNY 39.2 trillion.The data-driven digital economy has emerged as a crucial driving force in China's economic future growth and recovery after the COVID-19 pandemic [7].More and more business activities have intensely depended on network connectivity and digital devices after the COVID-19 pandemic [8]. 
Since China implemented economic reform and opening up, China's GDP has grown significantly.However, China's rural development is still lagging and faces acute problems due to environmental deterioration, poor sustainability, insufficient financial supply, weak infrastructure, lacking economic opportunity, and so on [9][10][11].At the same time, aging and depopulation in the rural society is severe [11].Lacking sufficient labor force in rural industries has aggregated unbalanced economic development between urban and rural areas [12].In order to promote rural development, China has implemented the rural revitalization strategy.In 2021, the No. 1 document of the Central Committee of the Communist Party stressed that rural revitalization should be prioritized in the modernization process.Rural revitalization is a critical strategy to solve the unbalanced social development between rural and urban China.It is a meaningful way to achieve the common prosperity strategy of China.The strategy emphasizes using diversity policies and instruments to promote rural development.Rural revitalization requires various industries, because the sole dependence on the agricultural sector will result in the decline of rural communities [13].Therefore, achieving the goals of rural revitalization requires new economic development engines.It is critical to the sustainable development of China's rural economy and society. Meanwhile, digital technologies have pervaded integrally and invisibly into rural life in the digital era [14].The convergence between the digital economy and rural revitalization has been widely discussed by the government and the industry [12,13,15].Digitalization provides new driving power for rural and agricultural development.Digital technologies enhance the matching ability of supply side and demand side.Disconnected Internet will hinder the recovery of rural small and medium enterprises [8].Expanding the depth and breadth of the digital economy will benefit traditional rural industries upgrading.For example, banking armed with digital technologies has resulted in e-banking, which promotes financial inclusion and eases the financial constraint of rural small enterprises [16].Therefore, how does the digital economy promote rural revitalization?What is the promotion mechanism?What is the heterogeneous effect?Clarifying these issues can enrich the theoretical research on the digital economy and rural economy.It also provides empirical evidence for policymakers to promote rural revitalization implementation. 
In order to answer the above questions, this paper selects Zhejiang Province as a case study, in which the digital economy has been fully developed. Zhejiang is one of the most economically developed provinces in China. In 2003, Zhejiang implemented the "Beautiful Village" plan, dramatically improving the life quality of rural residents and the rural ecological environment. At the same time, digital finance developed rapidly in Zhejiang, where Alibaba is based. For a long time, the Zhejiang government has attached great importance to using digital finance to serve the real economy. Therefore, the case of Zhejiang is representative when exploring the impact of the digital economy on rural revitalization. Based on data from 2011 to 2019, this paper comprehensively measures the digital economy and rural revitalization levels of 11 cities in Zhejiang Province. Then, this paper uses panel data to empirically test the impact of the digital economy on rural revitalization. The results show that the digital economy effectively promotes rural revitalization, and the mediation mechanism is rural industrial upgrading. The heterogeneity test shows that the promotion effect exhibits regional heterogeneity. After a series of robustness tests, the conclusions still hold. The remaining sections are organized as follows. The relevant literature is presented in Section 2. The theoretical analysis and the formulation of research hypotheses are provided in Section 3. Section 4 provides the econometric design, which includes the variable construction, data sources, and econometric model. The empirical results are reported in Section 5. Section 6 provides the conclusion and policy implications. Literature Review This paper relates to prior research on the digital economy and rural revitalization. The digital economy has a variety of definitions. Tapscott (1996) related the digital economy to economic activities that can be enhanced by the Internet [17]. Similarly, Mesenbourg (2001) divided the digital economy into the construction and economic application of ICT infrastructure [18]. Since then, the boundary of the digital economy has gradually expanded [6]. According to the OECD, the digital economy refers to all economic activities that depend on, or are greatly enhanced by, digital inputs, such as digital technologies, digital infrastructure, and digital services [2]. According to Bukht and Heeks (2017), the core of the digital economy includes hardware manufacturing, software and IT consulting, information services, and telecommunications [6]. The digital economy includes a narrow scope and a broad scope. The narrow digital economy covers digital services such as the platform economy, the sharing economy, and the gig economy. E-business, e-commerce, Industry 4.0, precision agriculture, and the algorithmic economy belong to the broad digital economy [6]. Moreover, digital finance is an important part of the digital economy. Cevik et al. (2022) found that there is a two-way Granger causality between Bitcoin spot and futures [19]. Therefore, digital currencies can be used as financial instruments to hedge energy price fluctuations [20].
Research related to rural revitalization primarily focuses on the connotation, evaluation system, and promotion path.Scholars explained the connotation of rural revitalization from different perspectives, including urban-rural integrated development, rural agricultural modernization, high-quality development, and modernization processes [21,22].Based on connotation of rural revitalization, researchers built an evaluation system of rural revitalization from different perspectives, including economic, political, social, cultural, and ecological perspectives [23,24].After selecting a series of comprehensive indicators, the entropy weight method and the principal component analysis method were applied to measure rural revitalization level.Some researchers proposed strategic paths to promote rural revitalization, such as urban-rural integrated development, deepening rural reform, and rural agricultural modernization [10,25].Other researchers considered more specific implementation paths of rural revitalization, such as talent training, system construction, technical innovation, financial support, and tourism [26][27][28]. Studies related to this paper also involve integration of digital economy and rural economy.Chen (2021) found that the digital economy and rural industry can be coupled and integrated, despite insufficient methods and support [29].The digital economy helps the supply side in remote and underdeveloped rural regions access markets that were hard to reach in the past.Tapscott (1996) argued that the digital economy facilitated information transition and communication [17].The ability of the Internet has facilitated commercial transactions [18].For example, e-commerce platforms can expand the scope of the sales market as well as the channels of agricultural products.Internet platforms, as information distribution centers, can effectively match agricultural supply side and demand side.Buyers and sellers of agricultural products can trade directly in the digital trade platform, reducing the cost of agricultural products deals.Therefore, the digital economy improves the production efficiency of agriculture, and thus enhances competitiveness of agriculture.Broadband in rural areas has expanded the accumulation of rural human capital, and promotes high-quality development of the rural economy [30,31].The digital economy encourages an open, inclusive, co-governance and sharing, intelligent, and efficient model [32].This is consistent with the aim of sustainable rural development concepts, which are innovative, coordinated, green, open, and shared.Digital technologies reduce the front-end transaction costs for rural finance, provide technical support for rural financial innovation, ease rural financial constraints, and contribute to rural economic development [33].Xiao (2019) highlighted that big data could improve environmental protection and innovate the ways of interactive cultural communication, thereby promoting rural sustainable development [34].Erdogan et al. (2022) found that the important source of carbon emissions from tourism is transportation, and environmental innovations can significantly lower carbon emissions and eliminate the negative effects of international tourism on the environment [35].According to Cao et al. (2021), digital finance effectively improved energy-environmental performance by promoting green technology innovation [36].Therefore, the digital economy is expected to reduce carbon emission of the rural tourism industry. 
In summary, previous studies provide valuable insights into the digital economy and rural economy.However, direct research on the impact of the digital economy on rural revitalization is rare.Moreover, previous research mainly used the normative method, concentrating on concepts and policy paths, lacking empirical research.The quantitative impact of the digital economy on rural revitalization still needs to be further explored. Theoretical Analysis and Hypothesis Development In theory, the promotion of the digital economy on rural revitalization can be reflected in four aspects.Firstly, the digitization of agricultural trade.The combination of digital economy and trade produces digital agricultural trade, which will reduce agricultural trade costs and accelerate agricultural modernization.Secondly, digitalization of agricultural trade can decrease agriculture circulation costs, diminish logistics time, and lower logistics losses of agricultural products.Thereby, the profit margin of agricultural products and farmers' income is largely improved [37].On the one hand, the digital transformation of agriculture can precisely control and trace the whole agricultural production process, including agricultural input, production, and circulation.Thus, the digital economy can raise the quality of the agricultural supply side.On the other hand, digital devices of agriculture obtain agricultural data in real time, improving the agricultural production process and helping to form a whole-process standardized production system that will boost agricultural production and operation efficiency, reducing production costs [38].Thirdly, digital governance improves rural governance capabilities and administrative efficiency.Digital rural government can promptly obtain more information to improve the level of scientific decision-making.Internet service platforms can also promote information sharing, eliminate data gaps, and improve rural administrative efficiency.Finally, digital finance eases financial constraints for rural residents.Digital finance enhances the availability, convenience, and comprehensiveness of financial services.The marginal cost of digital financial inclusion is nearly zero, significantly reducing the threshold of financial services for rural residents [39].Poor farmers can obtain better access to financial resources thanks to digital finance, which also eases rural financial constraints.Therefore, the digital economy expands the coverage and depth of rural financial services [40].According to the research presented above, the digital economy promotes rural revitalization through rural digitization and modernization.As a result, the first hypothesis is presented. Hypothesis 1. The digital economy promotes rural revitalization. 
The digital economy has become an important driving force for industrial upgrading, because industrial structure layout can be optimized by digital technologies [41].Digital infrastructures include mobile Internet, big data, cloud computing, and artificial intelligence.These technologies can improve the traditional industrial structure, supporting the upgrading and transformation of the industrial structure.The integration of digital technologies with agriculture, manufacturing, and service industries has spawned new industries, formats, and models.Sheng (2020) pointed out that the digital economy will promote the upgrading, production efficiency, and high-quality development of traditional industries [42].At the same time, county-level industrial upgrading has become the key driving force for economic growth.The industrial structure dividend accounts for 4% of the county-level GDP and contributes 24% to the GDP growth [43].Industrial upgrading has continued extending the urban secondary and tertiary industries to rural areas.Digital technology is driving the integrated development of the entire industry in rural areas.New business models, such as sightseeing agriculture and rural tourism, are constantly emerging.Industrial upgrading also provides farmers with more nonagricultural employment opportunities and enhances the revenue sources available to people in rural areas.Therefore, the second hypothesis is presented as follows. Hypothesis 2. The digital economy affects rural revitalization through industrial upgrading. The digital economy has the characteristics of high technology, high growth, and high permeability.It compresses information dependence on geographical time and space by digital technologies, enhancing the interconnection depth and breadth among surrounding areas.However, the digital economy growth differs significantly in different regions of China.In areas without Internet, the role of the digital economy in rural development is negligible.Areas with smooth Internet speed will have a better digital economic infrastructure.At the same time, scale of the digital economy needs to reach a certain critical point before the promotion effect can be instantly amplified.As a result, promotion effect of the digital economy on rural revitalization will be different due to various digital economic infrastructure.Many studies have shown that the impacts of the digital economy on high-quality economic development, industrial upgrading, and regionally coordinated development exhibit regional heterogeneity [44].Zhao and Long (2021) found noticeable regional differences in the impact of China's digital economy on rural revitalization based on the panel data of China from 2015 to 2019 [45].The economic development of Zhejiang Province is unbalanced because of its history and geographical environment.The digital economy circle in northeastern Zhejiang developed better than that in southwestern Zhejiang [46].Therefore, we propose the second hypothesis of this paper.Hypothesis 3. The impact of the digital economy on rural revitalization is regionally heterogeneous. 
Data Source This paper chooses 11 prefecture-level cities in Zhejiang Province as the research samples. The entropy method was used to construct development indexes for the digital economy, rural revitalization, and industrial upgrading. The data span from 2011 to 2019 and come from three sources. First, the digital finance index used for constructing the digital economy index comes from the Peking University Digital Inclusive Finance Index (2011-2020), released by the Digital Finance Research Center of Peking University [47]. Second, democratic and legal village data come from the Law Popularization Office of Zhejiang Province. Third, other data are sourced from the "Zhejiang Statistical Yearbook" and the statistical yearbooks of the prefecture-level cities over the years. Missing data were filled in with interpolation. Variable Selection Explained Variable The explained variable is the rural revitalization index (Rural), which is measured following the method of Zhang et al. (2018) [23]. Combined with China's "Rural Revitalization Strategic Plan (2018-2022)", the evaluation system selects 5 first-level and 11 second-level indicators. These indicators are shown in Table 1. The first-level indicators include prosperous industry, effective governance, prosperous life, ecological livability, and rural civilization. Then, the entropy method is used to calculate the rural revitalization index. Mediating Variable The mediating variable is the industrial upgrading index (Isu). Chang et al. (2019) built an industrial upgrading evaluation based on the industrial structure evolution law of the Petty-Clark theorem and the Kuznets curve [49]. Following the research of Chang et al. (2019) [49], we construct an industrial upgrading index to measure the level of industrial upgrading. The calculation formula is Isu = Σ (i × q_i), i = 1, 2, 3, where q_i is the proportion of the output value of the i-th industry. The Isu index ranges from 1 to 3. A low Isu value reflects that the industrial structure upgrading speed is slow in the region. If Isu is close to 1, the industrial upgrading level is low. On the contrary, the higher the Isu value, the higher the industrial upgrading level of the region. Control Variables Considering that economic development, urbanization level, fiscal expenditure level, and industrial structure have a significant impact on rural revitalization, referring to previous research [47,49], these factors are selected as the control variables. Descriptive Statistics Table 3 shows the meaning of the main variables and their summary statistics. The mean value of Rural is 1.01, the maximum value is 2.56, and the minimum value is 0.14. The mean value of Digital is 1.01, the maximum value is 1.92, and the minimum value is 0.18. The digital economy development and rural revitalization levels vary widely among Zhejiang Province's cities. The mean value of Isu is 2.42.
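For readers who want to reproduce the index construction, the sketch below illustrates the entropy weight method and the industrial upgrading index described above. The data matrix, the min-max normalization variant, and the example industry shares are assumptions for illustration; the authors' exact indicator handling (for example, negative indicators) may differ.

```python
# Minimal sketch: entropy weight method for a composite index, plus Isu = sum(i * q_i).
import numpy as np

def entropy_weights(X):
    """Entropy weight method on a (n_obs, n_indicators) matrix of positive indicators."""
    Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)   # min-max normalize
    P = (Z + 1e-12) / (Z + 1e-12).sum(axis=0)                           # proportions per indicator
    e = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])               # entropy of each indicator
    d = 1.0 - e                                                         # degree of divergence
    return d / d.sum()                                                  # weights summing to 1

X = np.random.rand(99, 11)          # e.g., 11 cities x 9 years, 11 second-level indicators (made up)
w = entropy_weights(X)
Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
rural_index = Z @ w                 # weighted composite score per city-year observation

# Industrial upgrading index: q_i are output-value shares of the primary/secondary/tertiary industry
q = np.array([0.05, 0.42, 0.53])
isu = np.sum(np.arange(1, 4) * q)   # ranges from 1 (all primary) to 3 (all tertiary)
print(round(isu, 2))
```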
Model Setting The following model is constructed to examine the relationship between the digital economy and rural revitalization:

lnRural_it = α0 + α1 lnDigital_it + α2 lnX_it + λ_i + ε_it (1)

To alleviate heteroscedasticity and multicollinearity problems, all variables are treated logarithmically. Among them, i represents the i-th city; t represents the time; Digital_it is the core independent variable; the vector X_it represents the control variables; λ_i represents individual fixed effects that do not vary over time; ε_it denotes the error term. If α1 > 0, Hypothesis 1 holds. According to Baron and Kenny (1986) [50], the mediating effect model is constructed as follows:

lnIsu_it = β0 + β1 lnDigital_it + β2 lnX_it + λ_i + ε_it (2)
lnRural_it = γ0 + γ1 lnDigital_it + γ2 lnIsu_it + γ3 lnX_it + λ_i + ε_it (3)

The existence of a mediation effect of industrial upgrading requires that α1 is significant in Equation (1). The following situations are considered: (1) If β1 and γ2 are significant, but γ1 is not significant, the industrial upgrading is a complete mediating effect. (2) If β1, γ1, and γ2 are significant, the industrial upgrading is a partial mediating effect, which can be calculated by β1 × γ2. (3) If β1 or γ2 are not significant but the Sobel test is still passed, then the industrial upgrading is a partial mediating effect. The Direct Impact of the Digital Economy on Rural Revitalization Table 4 shows the baseline results of the impact of the digital economy on rural revitalization. The fixed effect model is used in order to alleviate the endogeneity problem as much as possible. The control variables are added gradually from columns (1) to (5). All the estimated coefficients of the digital economy index are positive at the 1% significance level, indicating that the digital economy plays a significant role in promoting rural revitalization, which verifies Hypothesis 1. Column (5) shows that if the digital economy index rises by 1%, the rural revitalization index will increase by 0.338%. The coefficient of economic development level (lnpGDP) is notably negative, which is consistent with Cai et al. (2019) [9]. The possible reason could be that cities, rather than rural areas, are the primary source of GDP growth in Zhejiang. The level of urbanization (lnUrban) has no significant impact on rural revival, because urbanization leads to a large-scale transfer of rural labor to cities, resulting in a shortage of rural labor. The level of fiscal expenditure (lnFiscal) has a significant role in promoting rural revitalization. The impact of industrial structure (lnIndustry) on rural revitalization is insignificant because this paper measures the industrial structure with the proportion of the secondary industry, which accounts for a relatively small proportion of rural industries. Mediating Effect Analysis In order to investigate whether the digital economy promotes rural revitalization by affecting industrial upgrading, this paper constructs the industrial upgrading level (Isu) as the mediating variable to test the mediating effect. Table 5 displays the results. Panel A represents the two-step mediation test, while Panel B represents the Sobel test. Column (1) in Panel A shows that the impact of the digital economy on the upgrading of the industrial structure is significantly positive at the 1% significance level. Column (2) shows that after adding the mediating variable (lnIsu), the regression coefficient of the digital economy on rural revitalization is still significantly positive at 1%, suggesting that the mediating effect of industrial upgrading exists. The Sobel test also showed a mediating effect, with an indirect effect coefficient of 0.053, accounting for 18.7% of the total effect.
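The estimation logic of Equations (1)-(3) and the Sobel statistic can be sketched as follows. This is not the authors' code: the dataframe `df`, its column names, and the use of a within (demeaning) transformation to absorb city fixed effects are assumptions, and the demeaned-OLS standard errors are not adjusted for the absorbed effects, so it should be read as an illustration of the workflow rather than a replication.

```python
# Minimal sketch: city fixed effects via the within transformation, plus a Baron-Kenny / Sobel check.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def within(df, cols, entity="city"):
    """Demean columns by entity to absorb city fixed effects (equivalent to LSDV)."""
    return df[cols] - df.groupby(entity)[cols].transform("mean")

controls = ["lnpGDP", "lnUrban", "lnFiscal", "lnIndustry"]     # assumed column names
yx = within(df, ["lnRural", "lnDigital", "lnIsu"] + controls)

m1 = sm.OLS(yx["lnRural"], sm.add_constant(yx[["lnDigital"] + controls])).fit()           # Eq. (1)
m2 = sm.OLS(yx["lnIsu"], sm.add_constant(yx[["lnDigital"] + controls])).fit()             # Eq. (2)
m3 = sm.OLS(yx["lnRural"], sm.add_constant(yx[["lnDigital", "lnIsu"] + controls])).fit()  # Eq. (3)

# Sobel test for the indirect effect beta1 * gamma2
b1, se_b1 = m2.params["lnDigital"], m2.bse["lnDigital"]
g2, se_g2 = m3.params["lnIsu"], m3.bse["lnIsu"]
sobel_z = (b1 * g2) / np.sqrt(b1**2 * se_g2**2 + g2**2 * se_b1**2)
print("indirect effect:", b1 * g2, " Sobel z:", sobel_z)
```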
The above conclusion is consistent with Tao et al. (2022), who found that developing e-commerce in rural areas optimizes the county industrial structure, which is beneficial to rural revitalization [51]. To sum up, the mediating effect test clearly shows that the transmission path of "digital economy→industrial upgrading→rural revitalization" exists. Hypothesis 2 holds. Heterogeneity Analysis Considering that the economic development level and geographical environment of northeastern Zhejiang and southwestern Zhejiang are quite different, this paper tests the heterogeneity of these two regions. The results are shown in Table 6. For both regions, the regression coefficients of the digital economy are significantly positive at the 1% significance level, indicating that the digital economy plays a significant role in promoting rural revitalization. It is worth noting that the coefficients in the two regions are different. The regression coefficient of the digital economy in southwestern Zhejiang is 0.318, which is greater than the 0.164 in northeastern Zhejiang. This implies that the promotion effect in southwestern Zhejiang is more significant than that in northeastern Zhejiang. Hypothesis 3 holds. This conclusion is analogous to Cao et al. (2021), who found that digital finance has a greater promotion effect on environmental performance in regions with underdeveloped capital markets than in regions with developed capital markets [36]. Robustness Test Digital economy development can promote rural revitalization, but the more developed the rural areas, the better the digital infrastructure. Thus, the relationship between the digital economy and rural revitalization is endogenous because of reverse causality. This paper executes robustness tests to confirm the above conclusions by using instrumental variable regression, replacing the core explanatory variable, and using different estimating strategies. The results are displayed in Table 7. Instrumental Variable Regression An eligible instrumental variable must be exogenous to the explained variable and correlated with the endogenous variable. Yi and Zhou (2018) constructed an instrumental variable for the digital economy by using the product of the lag term and the first-order difference of the lag term (lnDigital_{t−1} × ΔlnDigital_{t−1}) [52]. This instrumental variable satisfies the exogeneity requirement, because it cannot be influenced by the current period's rural revitalization level. At the same time, the constructed instrumental variable closely relates to the digital economy, because the lag term of the digital economy relates to the digital economy, considering that the digital economy develops continuously. Following Yi and Zhou (2018), this paper constructed the instrumental variable and used the TSLS method to estimate the results, shown in Column (1). The coefficient of the digital economy is 0.68 and is significant at the 5% significance level, implying a promotion effect of the digital economy on rural revitalization. Replacing Core Variable The core explanatory variable is replaced with the digital inclusive finance index (referred to as DIFI) released by the Digital Finance Research Center of Peking University. The results are shown in Column (2) of Table 7. The regression coefficient of digital finance is 0.337 and it is significant at the 1% significance level. The results still support the conclusion that digital finance encourages rural revitalization.
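The two-stage logic behind the instrumental-variable check can be illustrated as below. The dataframe and column names are assumptions, city fixed effects are omitted for brevity, and the second-stage standard errors reported by OLS are not the proper 2SLS standard errors; a dedicated IV estimator (or the authors' own TSLS routine) should be used for inference.

```python
# Minimal sketch of the 2SLS idea, with the instrument built as lag(lnDigital) times its first difference.
import statsmodels.api as sm

df = df.sort_values(["city", "year"]).copy()
lag1 = df.groupby("city")["lnDigital"].shift(1)
lag2 = df.groupby("city")["lnDigital"].shift(2)
df["iv"] = lag1 * (lag1 - lag2)                      # lnDigital_{t-1} * dlnDigital_{t-1}

controls = ["lnpGDP", "lnUrban", "lnFiscal", "lnIndustry"]
d = df.dropna(subset=["iv"])

# Stage 1: regress the endogenous regressor on the instrument and the controls
s1 = sm.OLS(d["lnDigital"], sm.add_constant(d[["iv"] + controls])).fit()

# Stage 2: replace lnDigital with its first-stage fitted values
d = d.assign(lnDigital_hat=s1.fittedvalues)
s2 = sm.OLS(d["lnRural"], sm.add_constant(d[["lnDigital_hat"] + controls])).fit()
print(s2.params["lnDigital_hat"])                    # OLS SEs here are not valid 2SLS SEs
```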
Changing Estimation Method We alter the estimation method by employing random effect regression and pooled regression, and the results are displayed in columns (3) and (4) of Table 7, respectively.The coefficient of the random effect estimation is 0.348, and the coefficient of pooled regression estimation is 0.64.Both the coefficients are significant at the 1% significance level.The results still support the promotion effect of the digital economy on rural revitalization. Conclusions As the digital economy develops rapidly in China, understanding the link between the digital economy and rural rejuvenation is crucial for the long-term sustainability of rural areas.Based on samples from 11 prefecture-level cities in Zhejiang Province, this paper builds evaluation systems through the entropy method to quantify the development level of the digital economy, industrial upgrading, and rural revitalization.Then, the research used panel data regression to explore relationship between the digital economy and rural revitalization.The following empirical results are drawn: First, the digital economy promotes rural revitalization.This conclusion is still valid after a series of robustness tests, which include using instrumental variables, substituting the primary explanatory variable, and modifying the panel data estimating strategies.Second, industrial upgrading plays a crucial mediating role in which the digital economy promotes rural revitalization.Third, the effect of the digital economy on rural revitalization is regionally heterogeneous.Compared with the northeastern Zhejiang Province, the promotion effect of the digital economy on rural revitalization is higher in the southwestern Zhejiang Province. Policy Implications According to the empirical results, the following policy implications are provided.Firstly, the Chinese government should increase investment in rural digital infrastructure to promote rural rejuvenation.The Internet penetration and network quality in rural areas is essential to increasing the breadth of coverage of the digital economy.Therefore, the government should construct more rural digital infrastructure and provide digital upgrades for the traditional rural infrastructure.Specifically, the deployment of commercial 5G networks, big data, and artificial intelligence should be accelerated.Secondly, the Chinese government should guide the integration between digital technologies and rural industries.The digital economy promotes rural revitalization by the mediating mechanism of industrial upgrading.Digitalization will cultivate new industries, forms, and models for traditional rural industries.Thus, the government should accelerate the digital industrialization and industrial digitalization process by cultivating the data element market and coordinating digital infrastructure layout.Thirdly, policymakers should implement distinctive digital development strategies for rural regions with different economic conditions.Developing the digital economy can promote rural revitalization in both poor and wealthy areas.Considering that regional heterogeneity exists in the effect of the digital economy on rural revitalization, it is necessary to implement differentiated strategies for rural areas with different economic conditions.Finally, the government should train people in rural areas to use the Internet and digital devices.Popularizing digital technology in rural areas is closely related to education.On the one hand, the Chinese government should encourage outstanding talents to join 
rural digitalization. On the other hand, local governments should provide training for farmers in the use of digital devices.

Limitations and Future Research

This paper has some limitations, which future work could address in the following respects. First, the data are limited to Zhejiang Province, which is a developed region of China. It is not clear whether the conclusions hold in underdeveloped regions. Thus, future research could extend the sample to more provinces in China. Second, the measurement of rural revitalization is not comprehensive enough. Rural revitalization involves both economic and cultural development. In the rural areas of Zhejiang, there are a large number of ancient cultural elites, which represent traditional culture [53]. Future research should fully take ancient cultural factors into account. Moreover, better methods should be considered to measure the level of the digital economy and rural revitalization. In addition, determining how to better overcome the endogeneity problem in the econometric model is also a direction for future research.

Table 1. The rural revitalization evaluation index system. The core explanatory variable is the digital economy development index (Digital). Following the method of Zhao et al. (2020) [48], the digital economy development index is calculated using the entropy method from the perspectives of Internet development and the digital financial inclusion index. The indicators are shown in Table 2. The Internet development indicators include the Internet penetration rate, mobile phone penetration rate, and total telecommunications services per capita. The digital financial inclusion index comes from Guo et al. (2020) [47].

Table 2. The digital economy evaluation index system.

Table 3. Variable definition and descriptive statistics.

Table 5. Mediating effect testing results.
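The entropy-weight construction behind the indices in Tables 1 and 2 can be reproduced with a short numerical sketch. The code below is only an illustration of the general entropy method, not the authors' implementation: the min-max normalization, the small smoothing constant, and the example data (11 cities by 3 indicators) are assumptions made for demonstration.

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight method: X is an (n_samples, n_indicators) matrix of
    indicators already oriented so that larger values are better."""
    # Min-max normalize each indicator to [0, 1] (assumed normalization).
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    # Proportion of each sample under each indicator (small constant avoids log(0)).
    P = (Xn + 1e-12) / (Xn + 1e-12).sum(axis=0)
    n = X.shape[0]
    # Information entropy of each indicator, then divergence -> weights.
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)
    d = 1.0 - e
    return d / d.sum()

def composite_index(X):
    """Development index as the weighted sum of normalized indicators."""
    w = entropy_weights(X)
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    return Xn @ w

# Hypothetical example: 11 cities x 3 Internet-development indicators.
rng = np.random.default_rng(0)
X = rng.uniform(0.1, 1.0, size=(11, 3))
print(composite_index(X))
```

In practice, the same routine would be applied separately to the rural revitalization, industrial upgrading, and digital economy indicator sets before the panel regressions are estimated.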
6,393.8
2022-12-18T00:00:00.000
[ "Economics", "Computer Science" ]
Phytochemical Profiling of the Leaf Extract of Ximenia americana var. caffra and Its Antioxidant, Antibacterial, and Antiaging Activities In Vitro and in Caenorhabditis elegans: A Cosmeceutical and Dermatological Approach We previously annotated the phytochemical constituents of a root extract from Ximenia americana var. caffra and highlighted its hepatoprotective and hypoglycemic properties. We here extended our study on the leaf extract and identified its phytoconstituents using HPLC-PDA-ESI-MS/MS. In addition, we explored its antioxidant, antibacterial, and antiaging activities in vitro and in an animal model, Caenorhabditis elegans. Results from HPLC-PDA-ESI-MS/MS confirmed that the leaves contain 23 secondary metabolites consisting of condensed tannins, flavonol glycosides, flavone glycosides, and flavonol diglycosides. The leaf extract demonstrated significant antioxidant activity in vitro with IC50 value of 5 μg/mL in the DPPH assay and 18.32 μg/mL in the FRAP assay. It also inhibited four enzymes (collagenase, elastase, hyaluronidase, and tyrosinase) crucially involved in skin remodeling and aging processes with comparable activities to reference drugs along with four pure secondary metabolites identified from the extract. In accordance with the in vitro result, in vivo tests using two transgenic strains of C. elegans demonstrated its ability to reverse oxidative stress. Evidence included an increased survival rate in nematodes treated with the prooxidant juglone to 68.9% compared to the 24.8% in untreated worms and a reduced accumulation of intracellular reactive oxygen species (ROS) in a dose-dependent manner to 77.8%. The leaf extract also reduced levels of the expression of HSP 16.2 in a dose-dependent manner to 86.4%. Nuclear localization of the transcription factor DAF-16 was up to 10 times higher in worms treated with the leaf extract than in the untreated worms. The extract also inhibited the biofilm formation of Pseudomonas aeruginosa (a pathogen in skin infections) and reduced the swimming and swarming mobilities in a dose-dependent fashion. In conclusion, leaves of X. americana are a promising candidate for preventing oxidative stress-induced conditions, including skin aging. Introduction The aging population is increasing fast as people worldwide are living longer. It is estimated to reach one-sixth of the global population by 2030. Many human organs change structure and function with increasing ages, including the skin. Agerelated physiological changes in skin structure, integrity, and function can cause vulnerability to many dermatology problems such as pruritus, dermatitis, and infections [1]. Skin aging is closely associated with degradation and disorganization of collagen in the dermis and at the dermal-epidermal junction, causing deterioration of epidermal cell-cell junctions. Consequently, aging skin experiences a decline in the skin barrier function, changing the skin microenvironment and cutaneous immunity system, and therefore, the skin becomes susceptible to bacterial infections [2]. Aging is a natural process triggered by the presence of intracellular and extracellular reactive oxygen species (ROS), including skin aging, which is characterized by wrinkles and changes on pigmentation [3]. Natural antioxidant can prevent several conditions related to aging [4], including dermal aging. The mechanisms include the inhibition of key enzymes of aging, namely, collagenase, elastase, hyaluronidase, and tyrosinase [4]. 
Collagenase and elastase are fundamental enzymes that degrade major components of the extracellular matrix, collagen and elastin, and thus accelerate wrinkle formation, a hallmark of skin aging [3]. Hyaluronidase is a proteolytic enzyme responsible for the degradation of hyaluronan in the extracellular matrix. Loss of hyaluronan likely causes wrinkles and sagging of the skin [3]. Tyrosinase is an enzyme that converts tyrosine to melanin [5], and thus, inhibition of tyrosinase activity plays an important role in the skin-lightening process [6]. ROS, as well as nonradical derivatives of oxygen and nitrogen, are formed as by-products of aerobic metabolism in mitochondria [7]. ROS can also be generated as a cellular response to xenobiotics, cytokines, microbial attack, radiation, and environmental pollution [8]. An overproduction of ROS can cause oxidative stress. The latter represents an imbalance of radicals or oxidants beyond the capacity of the endogenous antioxidant enzymes and the effective antioxidant response, resulting in macromolecular and tissue damage [9]. Especially important are mutations caused by ROS, in which the DNA base guanosine is converted to 8-oxoguanosine, which pairs with adenosine instead of cytosine. By causing intracellular damage and mutations, oxidative stress is directly implicated in the mechanism and progression of several diseases such as diabetes, metabolic syndrome, obesity, atherosclerosis, cancer, and neurodegeneration, as well as aging [9]. Natural phytochemicals (flavonoids, tannins, carotenoids, vitamins C and E, mustard oils, and allicin) from vegetables, fruits, and herbs are known for their antioxidant capacity and anti-inflammatory and antibacterial properties, making them excellent candidates to improve aging skin and to be developed into low-cost, high-efficacy, and well-tolerated dermocosmetic products [10,11]. Ximenia constitutes one of 25 genera of the family Olacaceae. The large sour plum, Ximenia americana var. caffra, is a deciduous tree, native to Africa, and is cultivated from Tanzania in East Africa to South Africa, crossing Namibia and Botswana. The plant is also distributed in other tropical regions such as India, South East Asia, Australia, New Zealand, the Pacific Islands, the West Indies, and Central and South America [12]. X. americana has long been prescribed as an herb in traditional medicine [13]. In this study, we extended our investigation to the chemical composition of the leaf extract from Tanzania using HPLC-PDA-ESI-MS/MS, as no study on East African material has been reported, and evaluated its antioxidant and antiaging effects in vitro using DPPH and FRAP assays and the inhibition of four enzymes involved in skin aging (collagenase, elastase, hyaluronidase, and tyrosinase), alongside four pure compounds identified from the extract (quinic acid, rutin, catechin, and epicatechin). Furthermore, we investigated the biofilm inhibition activities of the extract against the common encapsulated gram-negative bacterium Pseudomonas aeruginosa, a common pathogen in skin infections. Caenorhabditis elegans was employed as an animal model for studying the antioxidant properties of the leaf extract.

Plant Material and Extraction. Plant leaves were gathered from the Lupaga site in Shinyanga, Tanzania. DNA barcoding techniques amplifying the rbcL fragment were used to identify the plant. Upon identification, plant specimens were deposited at the Institute of Pharmacy and Molecular Biotechnology, Heidelberg University, under accession number P7344 [16].
Dried leaves (100 g) were powdered and extracted with methanol at room temperature for 3 days. The extract was filtered and evaporated under vacuum, and the obtained residue was freeze-dried at −70 °C to yield a fine dried powder (19 g).

2.2. HPLC-PDA-ESI-MS/MS. The composition of the leaf constituents was analyzed using a Thermo Finnigan system (Thermo Electron Corporation, USA) coupled in tandem with an LCQ-Duo ion trap mass spectrometer with an ESI source (ThermoQuest), as described in our previous study [17]. Separation of the constituents was attained using a Zorbax Eclipse XDB-C18 column (4.6 × 150 mm, 3.5 μm, Agilent, USA). The gradient eluent was composed of water and acetonitrile, each with 0.1% formic acid, running from 5 to 30% acetonitrile in 60 min at a flow rate of 1 mL/min, with a 1:1 split ratio entering the ESI interface. Xcalibur software (version 2.0.7, Thermo Scientific) was used to control the instrument [13].

2.3. Total Phenolic and Antioxidant Activity In Vitro. Total phenolic content was estimated using the Folin-Ciocalteu method as described in a previous study [18], while antioxidant activity was determined using two colorimetric methods, the DPPH assay [19] and the FRAP assay [20]. The analysis was performed as in the previous study [18].

Enzymatic Activities

2.4.1. Collagenase Inhibition. The assay was performed according to a previous study [21]. Collagenase was obtained from Clostridium histolyticum (ChC-EC.3.4.23.3), and the assay was performed in 50 mM Tricine buffer containing 400 mM NaCl and 10 mM CaCl2 (pH 7.5). Collagenase was dissolved in the buffer to achieve an enzyme activity of 0.8 unit/mL, and the synthetic substrate N-[3-(2-furyl)acryloyl]-Leu-Gly-Pro-Ala (FALGPA) was dissolved in the buffer to a final concentration of 2 mM. Serially diluted samples were incubated with the enzyme for 15 min. After that, the substrate was added to each dilution. Absorbance was measured at 490 nm using a microplate reader (Costar, 96-well). Each measurement was made in triplicate. The percentage of collagenase inhibition (%) was calculated according to the formula: inhibition (%) = [(Acontrol − Asample)/Acontrol] × 100, where Asample is the absorbance of the collagenase reaction with samples or the positive control (quercetin) and Acontrol is the absorbance of the collagenase reaction with the negative control (solution without sample).

Tyrosinase Inhibition. This assay was performed using mushroom tyrosinase with L-DOPA as the substrate, according to a previous study [21] with slight modifications. The reaction mixture consisted of 20 μL of mushroom tyrosinase (2500 U mL−1), 20 μL of sample, 20 μL of 5 mM L-DOPA, and 100 μL of phosphate buffer (0.05 M, pH 6.5). After adding L-DOPA, the reaction mixture was monitored at 475 nm for dopachrome formation using a microplate reader (Costar, 96-well). Each measurement was made in triplicate. The percentage of tyrosinase inhibition (%) was calculated according to the formula: inhibition (%) = [(Acontrol − Asample)/Acontrol] × 100, where Asample is the absorbance of the tyrosinase reaction with samples or the positive control (kojic acid) and Acontrol is the absorbance of the tyrosinase reaction with the negative control (solution without sample).

Elastase Inhibition. This assay was performed according to a previous study [21]. Porcine pancreatic elastase was used at a stock concentration of 3.33 mg/mL in sterile water.
N-Succinyl-Ala-Ala-Ala-p-nitroanilide (AAAPVN) was used as the substrate and dissolved in buffer to make up a 1.6 mM solution. A serial dilution of the extract (50 μL per dilution) was incubated with the enzyme for 15 min; the substrate was then added to make up a final reaction volume of 250 μL. Absorbance was measured at 400 nm using a microplate reader. Each measurement was made in triplicate. The percentage of elastase inhibition (%) was calculated according to the formula: inhibition (%) = [(Acontrol − Asample)/Acontrol] × 100, where Asample is the absorbance of the elastase reaction with samples or the positive control (kojic acid) and Acontrol is the absorbance of the elastase reaction with the negative control (solution without sample).

2.4.4. Hyaluronidase Inhibition. The hyaluronidase inhibition assay was performed using hyaluronidase (1.5 mg/mL) and the substrate hyaluronic acid (1 mg/mL in 0.1 M acetate buffer; pH 3.5), according to a previous study [21]. The reaction mixture consisted of 25 μL of CaCl2 (12.5 mM), 12.5 μL of each test sample (serial dilution), hyaluronidase (1.5 mg/mL), and 100 μL of the substrate hyaluronic acid. Afterwards, the reaction mixture was placed in a water bath at 100 °C for 3 min, and subsequently, 25 μL of 0.8 M KBO2 was added. After cooling to room temperature, 800 μL of DMAB (4 g DMAB in 40 mL acetic acid and 5 mL 10 N HCl) was added to the mixture, followed by incubation for 20 min. The absorbance was measured at 600 nm. Each measurement was made in triplicate. The percentage of hyaluronidase inhibition (%) was calculated according to the formula: inhibition (%) = [(Acontrol − Asample)/Acontrol] × 100, where Asample is the absorbance of the hyaluronidase reaction with samples or the positive control (quercetin) and Acontrol is the absorbance of the hyaluronidase reaction with the negative control (solution without sample).

2.5. Antioxidant Activities In Vivo

2.5.1. Caenorhabditis elegans Strains and Culture. C. elegans grows optimally at 20 °C on NGM medium supplemented with living E. coli OP50. Age-synchronized animals were used for each experiment. EGCG from green tea served as a positive control for an antioxidant natural product. Synchronization was attained by treating gravid adult nematodes with sodium hypochlorite and hatching the eggs in M9 buffer. The growing larvae were transferred to S-medium containing bacteria (OD600 = 1.0). Wild type (N2), TJ375 [Phsp-16.2::GFP(gpls1)], and TJ356 strains were obtained from the Caenorhabditis Genetics Center (CGC), University of Minnesota, USA, and used in the current study.

2.6. Survival Assay. This assay employed wild-type worms from an early larval stage (L1), as developed in other studies [22]. Extract concentrations from 25 to 100 μg/mL were administered to L1 worms growing in S-medium at 20 °C for 48 h. The control group received no extract. Oxidative stress was induced by adding the naphthoquinone juglone from Juglans regia at a final concentration of 80 μM. After 24 h under stress conditions, living worms were counted, and the results were presented as the percentage of living worms.

2.7. Inhibition of Intracellular ROS Production. As in the survival assay, L1 wild-type worms were maintained in S-medium at 20 °C and three different concentrations of the extract were applied for 48 h.
Afterwards, the worms from each treatment were transferred to M9 buffer containing 20 μM H2DCF-DA for further incubation at 20°C for 30 min before being observed under a fluorescence microscope (Keyence, BZ-9000, Osaka, Japan). Fluorescence intensity was quantified using ImageJ 1.48 software (National Institutes of Health, Bethesda, MD) [23]. 2.8. Expression of Hsp-16.2: GFP Reporter. This assay employed the TJ375 transgenic strain with GFP fusion to heat shock protein (hsp) as an oxidative stress reporter. Early larval stage (L1) worms were maintained at S-medium at 20°C as developed in other studies [22]. After administration of three different concentrations on separated culture plates for 48 h, worms were subjected to stress oxidation using 20 μM juglone for 24 h. Fluorescence intensity was observed using a fluorescence microscope and quantified using ImageJ 1.48 software. 2.10. Antibacterial Activity. The antibacterial activities were evaluated using the broth microdilution assay using a 96well microtiter microplate. The extract was two-fold serially diluted in MH broth (100, 50, 25, 12.5, 6.25, 3.125, and 1.562 mg/mL), filtered using 0.22 μm sterile syringe filters, and transferred to the microplate's wells in quadruplicate (200 μL per well). Next, 2 μL of a fresh overnight bacterial culture of P. aeruginosa adjusted to OD 600 nm = 1 using MH broth was inoculated into each well of the microplate and incubated at 37°C under shaking at 150 rpm for 18 h. The minimum inhibitory concentration (MIC) was defined as the lowest concentration that inhibits visible microbial growth [24]. 2.11. Bacterial Biofilm Inhibition Assay. The antibiofilm potential of the extract was assessed using a colorimetric assay [24,25]. Firstly, the extract at the doses of 3.13 mg/mL (1/8 MIC) and 6.25 mg/mL (1/4 MIC) was prepared in Mueller Hinton (MH) broth and then filtered using 0.22 μm syringe filters. Filtered extracts were transferred, in quadruplicate, into a 96-well microtiter microplate. Next, 2 μL of a fresh overnight bacterial culture of Pseudomonas aeruginosa adjusted to O D 600 nm = 1 using MH broth was inoculated into each well of the microplate and incubated at 37°C under shaking at 150 rpm. Media without bacterial inoculation were used as negative controls. After 18 h of incubation, the culture suspensions were discarded, and the wells were washed four times with a phosphate-buffered saline solution to get rid of planktonic bacteria. To reveal the biofilm production, each well was stained with a 1% crystal violet solution and incubated for 15 min at room temperature. A second vigorous washing with distilled water was performed to remove excess dye. The produced biofilm was solubilized using 95% ethanol, and the absorbance of dye in ethanol was measured at OD 600 nm using a multimode plate reader. The attached biofilms to wells' surfaces were microscopically observed before and after solubilization by ethanol. Swimming and Swarming Inhibition Tests. The extracts were studied for their effect on the swimming and swarming motility of P. aeruginosa on solid media. The swimming (1% tryptone, 0.5% sodium chloride, and 0.3% agar) and swarming (semisolid LB medium, 0.6%) media were aseptically supplemented with the doses of 3.13 mg/mL (1/8 MIC) and 6.25 mg/mL (1/4 MIC) of the extract. Afterwards, 7 μL of a fresh overnight culture of P. aeruginosa (OD 600 nm = 1) was deposited at the center of each plate and incubated at 37°C for 24 h. The swimming and swarming zone diameters were measured in cm [24]. 
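The biofilm and motility readouts described above reduce to simple percent-change calculations relative to the untreated control. The following is a minimal sketch of that arithmetic; the OD and zone-diameter values are hypothetical and are not data from this study.

```python
def percent_inhibition(control, treated):
    """Percent reduction of a readout relative to the untreated control,
    e.g., crystal-violet OD600 for biofilm or a motility zone diameter in cm."""
    return (control - treated) / control * 100.0

# Hypothetical blank-corrected crystal-violet OD600 readings.
od_control = 1.20
od_quarter_mic = 0.36   # extract at 1/4 MIC
od_eighth_mic = 0.90    # extract at 1/8 MIC
print(f"Biofilm inhibition at 1/4 MIC: {percent_inhibition(od_control, od_quarter_mic):.1f}%")
print(f"Biofilm inhibition at 1/8 MIC: {percent_inhibition(od_control, od_eighth_mic):.1f}%")

# Hypothetical swimming zone diameters (cm): control vs. treated plate.
print(f"Swimming reduction: {percent_inhibition(5.2, 3.1):.1f}%")
```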
2.13. Statistical Analysis. The half-maximal inhibitory concentration (IC50) was determined by extrapolating the dose-response curve using GraphPad Prism software. The IC50 value represents the concentration of the extract (or substance) required to inhibit 50% of the radical or enzyme activity associated with aging in each assay. Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS) version 23 (IBM SPSS software, USA) on data from at least three experiments performed in triplicate. The data were expressed as mean ± SEM. Significant differences were analyzed using one-way analysis of variance (ANOVA) followed by Duncan's test. Differences between treatments were considered significant at the 95% confidence level.

Antioxidant Activities In Vitro. The extract revealed pronounced antioxidant activities in the DPPH and FRAP assays (Table 2). Similar activities were also reported for the leaf extract from Namibia and the root extract from Tanzania [13,14]. These activities can be attributed to the high phenolic content of 451 mg gallic acid equivalents/g extract, as determined by the Folin-Ciocalteu assay. The presence of procyanidins contributes remarkably to the activity, as demonstrated in many other antioxidant plants including Schotia brachypetala Sond. [23], grape seed (Vitis vinifera L.) [26], and cocoa (Theobroma cacao) [27]. Quercetin-related compounds (rutin, isoquercetin, and avicularin) found in the leaf extract exhibit exceptional antioxidant activity [28]. In addition to quercetin, flavonoid glycosides such as those found in the leaf of X. a. var. caffra, namely kaempferol 3-O-glucoside, kaempferol 3-neohesperidoside, and kaempferol 3-O-arabinoside (Figure 1(b)), are also responsible for important antioxidant activity [21,29]. Many flavonoid glycosides (the storage form in plant vacuoles) are more water-soluble than their parent flavonoids because of the hydrophilic sugar moieties, which affects their antioxidant capacity [30,31]. For instance, the glycoside derivative of luteolin, orientin, demonstrates higher antioxidant activity than luteolin because the sugar substitution decreases the negative charge on one of the oxygen atoms [32]. Hypericum perforatum L., rich in rutin, hyperoside, isoquercitrin, avicularin, quercitrin, and quercetin, has demonstrated profound antioxidant activity including free radical scavenging, metal chelation, and reactive oxygen quenching [33]. The presence of rutin, isoquercetin, and avicularin in Malus domestica leaves is responsible for their antioxidant activity [34].

Enzymatic Activities. In the current study, the leaf methanol extract of X. a. caffra was examined against four key enzymes of dermal aging (Table 3). The extract demonstrated elastase and hyaluronidase inhibitory activity comparable to the reference kojic acid (Table 3). We also explored the antiaging activities of four compounds from the extract, which furnished considerable activities (Table 3). Plants rich in antioxidant compounds such as catechin, kaempferol, and quercetin and their related compounds have revealed antiaging potential via inhibition of the aforementioned key aging enzymes [4,35]. Catechin, such as that found in green tea, exhibits a protective effect against stress-induced skin photoaging [36].
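As a rough illustration of how an IC50 can be extrapolated from the percent-inhibition values defined in the Methods, the sketch below fits a two-parameter log-logistic dose-response curve with SciPy. This is not the GraphPad Prism workflow used by the authors; the concentrations and inhibition values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def inhibition_percent(a_control, a_sample):
    """Inhibition (%) = [(A_control - A_sample) / A_control] * 100."""
    return (a_control - a_sample) / a_control * 100.0

def log_logistic(conc, ic50, hill):
    """Two-parameter log-logistic curve: 0% inhibition at zero dose, 100% at saturation."""
    return 100.0 / (1.0 + (ic50 / conc) ** hill)

# Example of the Methods formula for a single well (hypothetical absorbances).
print(f"Single-well inhibition: {inhibition_percent(0.80, 0.35):.1f}%")

# Hypothetical serial-dilution data (µg/mL) and measured inhibition (%).
conc = np.array([1.0, 2.5, 5.0, 10.0, 20.0, 40.0])
inhib = np.array([12.0, 28.0, 49.0, 68.0, 83.0, 92.0])

params, _ = curve_fit(log_logistic, conc, inhib, p0=[5.0, 1.0])
ic50, hill = params
print(f"Estimated IC50 ≈ {ic50:.2f} µg/mL (Hill slope {hill:.2f})")
```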
Plant extracts rich in rutin, hyperoside, isoquercitrin, avicularin, quercitrin, and quercetin, such as those from the genus Hypericum, have demonstrated profound antioxidant, antiaging, and antityrosinase activity [37]. The leaf methanol extract of X. a. caffra is rich in antioxidant compounds (Table 1); they can act synergistically to inhibit the activity of skin aging enzymes and thus potentially delay the skin aging process.

Antioxidant Activities In Vivo. The X. a. caffra leaf extract was able to enhance the survival of nematodes under stress conditions induced by juglone. Increasing concentrations of the leaf extract significantly improved the number of surviving worms in a dose-dependent fashion (Figure 2(a)). Other experiments supported the antioxidant activities, one of them being the decreased intracellular ROS production after treatment with the leaf extract in a dose-dependent manner, ranging from 55.87 to 77.75% ROS reduction (Figure 2(b)). Quercetin and its related compounds have shown profound in vivo antioxidant activity, including the ability to regulate glutathione synthesis in animal and cell models [38,39]. In addition, quercetin and rutin have also shown strong antioxidant enzymatic activity in a rat brain homogenate [40], upregulating the expression of the antioxidant enzymes heme oxygenase-1 (HO-1) and superoxide dismutase-1 (SOD-1) in primary human osteoblasts exposed to stress from cigarette smoke [41]. Quercetin can improve the antioxidant defense system by upregulating superoxide dismutase and catalase and reducing the level of malondialdehyde [42]. The presence of procyanidin B1, rutin, isoquercetin, avicularin, kaempferol 3-O-glucoside, kaempferol 3-neohesperidoside, and kaempferol 3-O-arabinoside only in the leaf extract may synergistically activate the intracellular antioxidant defense mechanism [43], thus reducing the production of ROS, preventing cell damage, and prolonging worm survival [23].

Heat shock proteins (Hsps) are a large family of molecular chaperones, ubiquitous proteins responsible for correct protein folding [44]. Hsps are upregulated after stress stimuli such as xenobiotics, hyperthermia, and hypoxia [44]. Owing to this biological role, Hsps have become prominent stress biomarkers and have been exploited in model organisms, such as C. elegans, for experimental stress studies in the laboratory [45]. Our study used the transgenic C. elegans strain TJ375, in which the small heat shock protein promoter hsp-16.2p is coupled to the green fluorescent protein (GFP) reporter to detect hsp expression under stress conditions. Figure 2(c) demonstrates that increasing concentrations of the extract significantly reduced hsp-16.2 expression in juglone-treated worms, as reported by a lower intensity of GFP fluorescence. Another transgenic strain used in the current study was TJ356, in which the DAF-16 gene is coupled to GFP. About eight different genes are involved in dauer arrest and longevity. One of them encodes the transcription factor DAF-16/FOXO, which modulates signals responsible for aging and longevity when it migrates from the cytoplasm to the nucleus. Localization of DAF-16 in the nucleus indicates a response to stress and a capacity to prevent damage, arrest the cell cycle, and increase
endurance and survival under stress conditions [46]. Our current study has shown 4- to 10-times higher nuclear DAF-16 localization in worms treated with the leaf extract in a dose-dependent manner, as compared with untreated worms (Figure 2(d)). (In Figure 2, significant differences relative to the control group are indicated at **p < 0.01 and ***p < 0.001, one-way ANOVA with Tukey's post hoc test.) Cassia abbreviata, rich in epicatechin and other tannins, has demonstrated a modulation of stress resistance in C. elegans [47]. The flavonoid kaempferol induces translocation of the FOXO transcription factor DAF-16 from the cytosol to the nucleus in C. elegans, indicating that the flavonoid modulates the aging and longevity signaling cascade [48]. Similar findings of increased stress resistance in C. elegans have also been shown for rutin via distinct pathways, including redox-sensitive signaling pathways [49]. Quercetin-related compounds have also demonstrated the capacity to increase stress resistance and lifespan in C. elegans via the insulin-like signaling pathway and the p38 mitogen-activated protein kinase pathway [50].

Effects of the Extract on Biofilm Inhibition and Swimming and Swarming Motilities of P. aeruginosa. The opportunistic pathogenic bacterium P. aeruginosa, commonly found in skin infections, has shown antimicrobial resistance due to its ability to form biofilms that embed the bacteria in a self-produced extracellular matrix. The biofilm is difficult to eradicate with antibiotic treatment, as only a limited number of antibiotics can diffuse easily through the matrix [51]. The MIC determination showed that the extract completely inhibited the growth of P. aeruginosa at 25 mg/mL. The doses of 3.13 and 6.25 mg/mL, corresponding to 1/8 and 1/4 MIC, respectively, did not exert any significant growth inhibition of the tested strain; however, they inhibited the biofilm production of P. aeruginosa in a dose-dependent manner and were able to decrease biofilm amounts by 25 and 70%, respectively (Figure 3). The motility of P. aeruginosa on plates was impaired in a dose-dependent manner when the medium was amended with the extract (Figure 4). The extract reduced the swimming and swarming motility by 41.35 and 4% at 3.13 mg/mL and by 50.37 and 34.21% at 6.25 mg/mL, respectively. Interestingly, the swimming motility was significantly affected at 3.125 mg/mL but not the swarming motility, which was significantly decreased only at 6.25 mg/mL (Figure 4). Similar activities were reported for polyphenol-rich extracts such as Salix tetrasperma (bark and flowers) [24].

Conclusions

In this study, a total of 23 mostly phenolic compounds were detected in a methanol leaf extract of X. a. var. caffra. The leaf extract showed substantial antioxidant activity in vitro and in vivo using the C. elegans animal model. It also inhibited four crucial enzymes of skin aging (collagenase, elastase, hyaluronidase, and tyrosinase) and the biofilm formation of P. aeruginosa. To sum up, the leaf extract exhibits interesting dermato-cosmeceutical properties that might be attributed to its antioxidant and antibacterial activities. The current findings could provide scientific evidence for possible utilization of the X. a. var. caffra leaf extract and its compounds as an antiaging agent.

Data Availability. All data are included in the manuscript.

Conflicts of Interest. The authors declare that they have no conflicts of interest.
5,801.6
2022-03-28T00:00:00.000
[ "Medicine", "Environmental Science", "Chemistry", "Biology" ]
Ecotoxicological Assays with the Calanoid Copepod Acartia tonsa : A Comparison between Mediterranean and Baltic Strains : The use of marine invertebrates in ecotoxicology is important for an integrated approach which takes into consideration physiological responses and chemical levels in environmental matrices. Standard protocols have been developed and organisms belonging to different trophic levels are needed as model organisms to evaluate toxicant bioavailability and assess their impact on marine biota. The calanoid copepod Acartia tonsa is commonly used in ecotoxicology due to its widespread distribution and well-studied biology. However, different strains coming from various geographical areas are available, and possible variations in physiological characteristics raise concerns about the comparability of ecotoxicological results. This study compares the life cycle assessment and sensitivity of Adriatic and Baltic strains of A. tonsa exposed to nickel (Ni 2+ ) in standardized acute and semi-chronic tests. Life cycle assessments revealed differences in egg production, egg-hatching success, and naupliar viability between the strains. The acute toxicity test demonstrated the significantly higher sensitivity of Adriatic strain nauplii to Ni 2+ compared to the Baltic strain, whereas the semi-chronic test showed no significant difference in sensitivity between the strains. These findings suggest that while strain-specific differences exist in different geographical populations, responses to toxicants are not significantly different. Particularly, the semi-chronic assessments with both A. tonsa strains emphasized the robustness of this species as a model organism in ecotoxicology. Introduction The use of marine invertebrates as model organisms in ecotoxicology and the application of an integrated approach, which takes into consideration both physiological responses and chemical levels in the environment, have become routinary activities to monitor marine environment.These approaches are considered valid methods to better evaluate the bioavailability of toxicants and their potential threats to marine organisms [1] and are included in the Italian legislation [2] to regulate the management of polluted matrices in order to protect marine environments.Moreover, an integrated approach can also be a valuable tool for the management of protected marine sites to characterize the overall ecological status of these areas and improve conservation strategies in highly anthropized environmental contexts.The choice of the better organism to be used in ecotoxicology depends mainly on the relevance of the trophic level to be studied, the matrices to be assessed (sediment or water), and also the availability of organisms [3].Most of the species used as models can be reared, purchased, or collected directly from the environment.Finding a good model organism in ecotoxicology and a common method for assessing environmental risk are mandatory to compare results in different geographical areas and at different times, so in recent years, standard protocols have been proposed for species belonging to various trophic levels (e.g., bacteria [4], microalgae [5,6], mollusks [7,8]; rotifers [9]; echinoderms [10]).All of these protocols consider many different physiological end points, depending on the species and duration of the exposure.Among marine crustaceans, the marine planktonic calanoid copepod Acartia tonsa (Dana, 1849) is a cosmopolitan, eurythermal, and euryhaline marine zooplanktonic organism common in 
subtropical and temperate latitudes and abundant in coastal and estuarine waters [11,12].This is a well-known model species, with its biology and ecology having been studied since the 1980s [13,14].Moreover, this planktonic organism is distributed worldwide in marine and estuarine environments, being the most abundant species in the Atlantic and Pacific American coasts [15,16].In the Mediterranean and Baltic seas, A. tonsa was introduced by ship ballast waters in the 1980s [17,18] and has adapted well, since then, to euryhaline conditions, surviving sudden changes in salinities [19]. For these reasons, A. tonsa is widely used in ecotoxicology; bioassay protocols consider different end points, such as mortality, development, or fecundity in different life stages (eggs, nauplii, adults) after 48 h short-term exposure (acute test) [20,21] after 5-6 days in a larval development assay [22], after 7 days in a semi-chronic test [23], and after 4 days in a long-term incubation test [24].Therefore, several studies in which A. tonsa has been used for the assessment of the quality of marine-brackish sediments [24,25] and for the toxicity of emerging contaminants [26][27][28] have been published. Commonly, Acartia tonsa copepods are collected directly in the natural environment, reared in laboratory conditions, or purchased.However, different places of origin of A. tonsa (i.e., Adriatic, Baltic or Atlantic strains) may result in different physiological characteristics which can compromise the interpretation of the ecotoxicological results.For example, it has been demonstrated that Adriatic and Baltic strains of Acartia spp.showed different egg survival rates after storage at low temperatures [14].The hypothesis that the geographical origin of the strain/population selected for ecotoxicological tests may influence the results is realistic, with the risk of under-or over-estimating the ecological risk assessment. The aim of this study was to compare the ecotoxicological responses of two strains of Acartia tonsa, Adriatic and Baltic, exposed to the same reference toxicant.Both strains, widely used in standardized acute and semi-chronic bioassays, were exposed to the same reference toxicant, nickel chloride (NiCl 2 ), following the standard procedures reported in the UNICHIM [21,23] protocol.The end point of naupliar immobilization or mortality, recorded after 48 h and 7 days exposure, in acute and semi-chronic tests, respectively, was considered.Moreover, both Adriatic and Baltic A. tonsa life traits, reared with the same laboratory protocols, were followed and compared.Life cycle, daily adult survival, egg production (fecundity), percentage of egg-hatching success, and naupliar viability were recorded.The final aim was to ascertain the comparability of the ecotoxicological responses of A. tonsa strains coming from different geographical areas. Copepod Culture The Mediterranean strain of the calanoid copepod Acartia tonsa was originally collected in the northern Adriatic Sea (referred to as the Adriatic strain) and reared in a laboratory (ISPRA, Italy) for several generations.The Baltic strain was purchased from Guernsey Sea Farms (Guernsey, UK) and reared in a laboratory for several generations.Copepod cultures were reared as reported by Zhang et al. 
[30], in 20 L propylene tanks containing 0.22 µm-filtered seawater (FSW) (Millipore 90 mm filter holder YY3009000, Merck Life Science Srl, Milan, Italy) at 30 Practical Salinity Units (PSU). Natural seawater collected from an unpolluted site along the Tyrrhenian coast was filtered through a 0.22 µm mesh filter, and the salinity was adjusted to 30 PSU with distilled MilliQ water (BdW).

Copepods were fed twice a week with a mixed algal diet of Isochrysis galbana, Rhodomonas baltica, and Rhinomonas reticulata in the exponential-growth phase, cultured in f/2 medium without silicates, at a final concentration of 1500 µg C L−1. Both A. tonsa and the monoalgal cultures were maintained in a thermostatic chamber at 20 ± 1 °C under a 14:10 h L:D photoperiod.

Life Cycle Assessment

To analyze the life cycle, adults of the A. tonsa Adriatic and Baltic strains were collected from the main cultures and incubated in 800 mL beakers containing FSW and 3000 µg carbon L−1 of I. galbana and R. reticulata at a maximum density of 1 ind. mL−1 [30]. After 24 h, almost 2000 eggs were collected from the bottom of the beaker, counted under a stereomicroscope (Leica Microsystems S9i, Milan, Italy), and transferred into new 800 mL beakers containing 500 mL of FSW. Early naupliar stages (F1) were supplied with 750 µg carbon L−1 of I. galbana and 750 µg carbon L−1 of R. reticulata every 48 h. At the copepodite stages (CI), a monoalgal diet of R. reticulata was supplied at 1500 µg carbon L−1 every 48 h. When F1 reached the adult stage and a few eggs were counted on the bottom of the beaker, 4 males and 4 females were sorted and isolated pairwise in 4 crystallizers filled with 50 mL of FSW and R. reticulata at 1500 µg carbon L−1 for the first 15 days, a ration that was doubled from day 16 to the end of the experiment. Couples were transferred to a new crystallizer with FSW and R. reticulata every 24 h and observed under a stereomicroscope to check viability. Females and males that died in the first 20 days of the experiment were replaced with adults coming from the main F1 culture from which they had been isolated. In this case, the replacement adults were the same age as the dead ones (Figure 1). The main F1 culture was maintained in an 800 mL FSW beaker under the same conditions as the isolated couples, and every 48 h the eggs were removed to avoid mixing different generations.

For each couple of A. tonsa, the following end points were evaluated: daily egg production per female (EPR, eggs female−1 day−1); percentage of egg-hatching success (EHS), calculated 48 h after egg laying; and percentage of naupliar viability (NV). The flow chart in Figure 1 summarizes the method used.

Figure 1. Acartia tonsa culture method. Flow chart summarizing the method used to collect eggs and adults for the analysis of egg production rate (EPR), egg-hatching success (EHS), and naupliar viability (NV) during the life cycle assessment of both A. tonsa Adriatic and Baltic strains. Females and males that died in the first 20 days of the experiment in the crystallizers were replaced with adults coming from the main F1 culture from which they had been isolated.
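The three life-cycle end points lend themselves to a short worked example. The sketch below shows one plausible way to compute them from daily counts; the exact formulas were not reproduced in the extracted text, so the definitions assumed here (EHS relative to eggs laid, NV relative to hatched eggs) and the example numbers are illustrative only.

```python
def epr(eggs_laid, n_females=1, days=1):
    """Egg production rate: eggs per female per day."""
    return eggs_laid / (n_females * days)

def ehs(hatched, eggs_laid):
    """Egg-hatching success (%) 48 h after laying, relative to eggs laid (assumed)."""
    return hatched / eggs_laid * 100.0

def nv(viable_nauplii, hatched):
    """Naupliar viability (%), relative to hatched eggs (assumed)."""
    return viable_nauplii / hatched * 100.0

# Hypothetical daily record for one couple.
eggs, hatched_eggs, alive_nauplii = 24, 21, 20
print(f"EPR = {epr(eggs):.1f} eggs female^-1 day^-1")
print(f"EHS = {ehs(hatched_eggs, eggs):.1f}%")
print(f"NV  = {nv(alive_nauplii, hatched_eggs):.1f}%")
```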
Ecotoxicological Tests

For ecotoxicological tests, mature A. tonsa adult copepods of both Adriatic and Baltic strains were sorted from the main culture by filtration through a 300 µm mesh net filter, incubated at a density ≤40 ind. L−1 in 800 mL beakers, and fed daily with a mixture of exponentially growing algae R. reticulata and R. baltica at a concentration >500 µg carbon L−1. After 18-24 h, eggs at the bottom of the beaker were collected with a 50 µm mesh net filter, gently rinsed with FSW, and manually sorted using a Leica stereomicroscope (Milan, Italy). For the ecotoxicological tests, the eggs were collected during the first 15 days after the adult stage appeared and stored at 4 °C for a maximum of one month before being used [29].

The acute test was performed according to the methods reported by Gorbi et al. [31], with some modifications. Briefly, a total of 10 eggs were sorted from the main culture and singly incubated in 2.5 mL well plates containing FSW (negative control) or NiCl2 solutions (positive control) at final concentrations of 0.4, 0.2, 0.1, and 0.05 mg Ni2+ L−1. The plates, prepared in triplicate, were maintained for 48 h in a thermostatic chamber at 20 ± 1 °C under a 14:10 h L:D photoperiod. The end point considered was the immobilization or mortality of the nauplii after 48 h of incubation. Egg-hatching success and naupliar immobilization or mortality were evaluated after 48 h of exposure under an inverted microscope (Nikon TMS). Nauplii were considered immobilized or dead if, after 15 s of observation and physical stimulation, they did not actively swim. The percentage of naupliar immobilization/mortality (NI) was calculated as NI (%) = (IN/HE) × 100, where IN represents the number of immobilized or dead nauplii and HE represents the number of hatched eggs. The test was considered acceptable if the negative control had egg-hatching success ≥80% and naupliar immobilization or mortality ≤20%, and if the EC50 in the positive control was 0.24 ± 0.12 mg Ni2+ L−1 after 48 h of exposure [21,31]. EC50 was calculated using PROBIT Analysis version 1.5 [21].

A semi-chronic 7-day test with A. tonsa was performed according to the method reported by Gorbi et al. [31]. Briefly, groups of three eggs were transferred into each test chamber, consisting of a 100 mL glass beaker equipped with a 20 µm mesh filter tube and filled with 25 mL of test solution. Three replicates for each experimental group, each consisting of three test chambers, were prepared and placed in a climate-controlled room at 20 ± 1 °C. The control group in FSW and the dilutions of the reference toxicant (Ni2+) included aliquots of algal cultures in the exponential-growth phase at a final density of 5 × 10^4 cells mL−1 of I. galbana and 0.365 × 10^4 cells mL−1 of R. baltica. Final concentrations of 0.1, 0.063, 0.040, and 0.0025 mg Ni2+ L−1 were tested. The medium was renewed after 48 h and 5 days by transferring the 20 µm mesh net filter into a new chamber containing fresh medium prepared as indicated above. At each renewal and after 7 days, the number of alive, dead, and immobilized nauplii in each test chamber was recorded. Naupliar immobilization or death (NI, %) is expressed as the number of immobilized or dead nauplii divided by the total number of nauplii (TN), multiplied by 100. The test was considered valid if egg-hatching success in the control was ≥80% and naupliar immobilization was ≤30%.
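A compact way to see how the acceptability criteria above work in practice is sketched below. The counts and the EC50 value are invented for illustration; the thresholds are those stated in the text (control EHS ≥ 80%, control NI ≤ 20%, reference EC50 = 0.24 ± 0.12 mg Ni2+ L−1 for the acute test).

```python
def naupliar_immobilization(immobilized_or_dead, hatched):
    """Acute-test end point: NI (%) = IN / HE * 100."""
    return immobilized_or_dead / hatched * 100.0

def acute_test_valid(control_ehs, control_ni, ec50,
                     ec50_ref=0.24, ec50_tol=0.12):
    """Acceptability check for the 48 h acute test (thresholds from the text)."""
    return (control_ehs >= 80.0
            and control_ni <= 20.0
            and abs(ec50 - ec50_ref) <= ec50_tol)

# Hypothetical control plate: 10 eggs, 9 hatched, 1 nauplius immobilized.
ehs_control = 9 / 10 * 100.0
ni_control = naupliar_immobilization(1, 9)
print(acute_test_valid(ehs_control, ni_control, ec50=0.18))  # -> True
```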
Statistical Analyses

Control charts were prepared using the EC50 values obtained in the relevant test (acute or semi-chronic) with the reference toxicant (Ni2+) by different laboratories with the Adriatic and Baltic strains. EC50 was calculated using the freely available MOSAIC software (https://mosaic.univ-lyon1.fr/ (accessed on 13 July 2023)). All results were plotted, and upper and lower chart limits were calculated as the mean EC50 ± two standard deviations (2SD); means were then compared with Student's t-test and considered statistically different for p < 0.05.

Life Cycle Assessment

The life cycle of both A. tonsa strains is schematized in Figure 2; under our laboratory conditions, the eggs hatched within 48 h and the naupliar stages lasted almost 7 days. Mature males and females were observed 12 ± 2 days after hatching, when viable eggs started to be produced. The lifespan of adults, daily egg production (fecundity), egg-hatching success, and naupliar viability were followed until the death of the females (Figure 3). The total lifespan of adults was variable, depending on the sex and the strain; specifically, for the Adriatic strain, males had a higher mortality rate than females within the first 26 days. They were replaced eight times with live males from the same F1 generation (Table 1); thereafter, males died after 29 and 27 days in couples 3 and 4 and after 32 days in couples 1 and 2 and were not replaced (Table 2). Five A.
tonsa Adriatic strain females in couples 3 and 4 were replaced during the first 18 days with females from the same F1 generation; thereafter, they died after 32 and 33 days, respectively, whereas females 1 and 2 died after 35 days (Table 1).

Regarding the Baltic strain, four males and two females died within 22 days in two couples and were replaced, whereas those that died after day 23 were not replaced, as this was considered the end of their life cycle (Table 1). The maximum survival recorded was 29 and 33 days for males and females, respectively (Table 2). Daily fecundity (number of eggs per female), percentage of egg-hatching success, and naupliar viability are shown in Figure 3.

During the first week, egg production was similar in the A. tonsa Adriatic and Baltic strains, with means of 21.63 ± 12.7 and 20.63 ± 10.14 eggs per female per day, respectively. Thereafter, production rapidly declined for the Adriatic strain (mean of 7.09 ± 6.32) compared with the Baltic strain (21.53 ± 9.8), with statistically significant differences after 17-19 and 24 days. Adriatic females did not produce eggs from day 32 until their death (Figure 3a). The egg production of the Baltic strain remained stable and increased after the second week, declining after 32 days, shortly before death (Figure 3a). On average, the Baltic strain produced twice as many eggs as the Adriatic strain, with an average of 21.3 eggs per female per day (p < 0.05; n = 68) (Table 2).

The percentage of egg-hatching success was variable during the life cycle for both strains; for the Adriatic strain, it ranged from 50 to 100%, with a mean of 77.9 ± 13.1% (Figure 3b, Table 2). In contrast, the percentage of egg-hatching success for the A. tonsa Baltic strain declined with the age of the females soon after 10 days. After 19 days, its mean percentage of egg-hatching success was statistically lower than that of the Adriatic strain, and the average of 39.5% calculated for the whole period was also significantly different from that of the Adriatic strain (p < 0.05; n = 65) (Figure 3b, Table 2).
The percentage of naupliar viability remained almost constant during the entire life span; however, differences were significant between the two strains, with a higher average (98.3 ± 3.1%) recorded for the Adriatic strain than for the Baltic strain (92.2 ± 8.1%) (Figure 3c, Table 2).

Ecotoxicological Tests

3.2.1. Acute Test

Thirty-two bioassays were conducted with the Adriatic strain; the data were partially derived from Rotolo et al. [32], except for the first two (Figure 4a). The single EC50 values and the mean EC50 of 0.144 mg Ni2+ L−1 exhibited homogeneous dispersion within the two limits (2SD = ±0.081 mg Ni2+ L−1), with two outliers (0.24 and 0.04 mg Ni2+ L−1). Regarding the Baltic strain (Figure 4b), results from 36 independent tests are plotted, revealing a mean EC50 of 0.241 mg Ni2+ L−1. The collected data were homogeneously dispersed within the two limits (2SD = ±0.114 mg Ni2+ L−1), with only one outlier (0.39 mg Ni2+ L−1) (Figure 4b). The statistical comparison between EC50s in the acute exposure test indicated a significantly higher sensitivity to the reference toxicant for the Adriatic strain compared with the Baltic strain (Figure 5).
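The control-chart logic used to evaluate these EC50 series (mean ± 2SD limits, outlier flagging, and a Student's t-test between strains) can be reproduced with a few lines of NumPy/SciPy. This is only an illustrative sketch; the EC50 arrays below are invented and are not the experimental values.

```python
import numpy as np
from scipy import stats

def control_chart_limits(ec50s):
    """Mean and upper/lower control limits at mean ± 2 standard deviations."""
    mean, sd = np.mean(ec50s), np.std(ec50s, ddof=1)
    return mean, mean - 2 * sd, mean + 2 * sd

# Hypothetical EC50 series (mg Ni2+ L-1) from repeated acute tests.
adriatic = np.array([0.15, 0.13, 0.16, 0.12, 0.14, 0.17, 0.13, 0.15])
baltic   = np.array([0.25, 0.22, 0.27, 0.24, 0.21, 0.26, 0.23, 0.25])

mean_a, lo_a, hi_a = control_chart_limits(adriatic)
outliers = adriatic[(adriatic < lo_a) | (adriatic > hi_a)]
print(f"Adriatic mean EC50 = {mean_a:.3f}, limits [{lo_a:.3f}, {hi_a:.3f}], outliers: {outliers}")

# Two-sample t-test comparing the strain sensitivities.
t, p = stats.ttest_ind(adriatic, baltic)
print(f"t = {t:.2f}, p = {p:.4g}; significant at p < 0.05: {p < 0.05}")
```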
Semi-Chronic Test

Figure 6 shows Shewhart-like control charts of the EC50s calculated for the semi-chronic exposure tests of the A. tonsa Baltic and Adriatic strains to Ni2+ (a and b, respectively). For both strains, the single tests exhibited a well-dispersed pattern around their relative mean EC50, devoid of outliers, and no significant differences between the means of the Baltic (0.053 mg Ni2+ L−1) and Adriatic (0.061 mg Ni2+ L−1) strains were detected (Figure 7).

Discussion

Life Cycle Assessment

Acartia tonsa copepods from the Mediterranean and Baltic areas were cultured and exposed to the same reference toxicant to test whether they responded differently to the same culture conditions in terms of reproduction and productivity, survival, and life cycle, and whether they showed different levels of sensitivity in ecotoxicological assays.

Differences in larval developmental time within one nominal copepod species have been observed in natural populations, pointing out possible effects of geographical origin on life history traits [33]. Moreover, genetically distinct clades have been reported within the nominal, cosmopolitan species Acartia tonsa [34][35][36]. Investigations of genetic differentiation suggested that European populations of this species may have resulted from single or multiple invasions of American strains [37]. Comparative studies on different strains of A. tonsa have also been conducted in order to select the most suitable ones for aquaculture purposes, investigating both genetic traits and physiological characteristics, such as the biochemical composition of various life stages from four different strains. These studies demonstrated genetic differentiation between strains, as indicated by differences in two mitochondrial gene loci between three of the four strains, and differences in mortality, egg production, hatching success, and the biochemical composition of eggs and adults [37,38].
We did not analyze genetic differentiation between the strains, and our results showed that life cycle and adult survival were comparable, despite the higher mortality rate observed in the Adriatic strain compared to the Baltic one.In both cases, adults showed a maximum survivability longer than 30 days, in accordance with the results obtained from the Kattegat Øresund (Denmark) population by Kiørboe et al. [39] (35 ± 1.3 and 31 ± 1.2 day of life span for females and males, respectively). The productivity, indicated by the number of eggs produced, showed differences between the two strains starting from the second week of the culture.The low productivity of the Adriatic strain derived mainly from the low egg production of couple 2 during the entire monitoring period.However, the mean values calculated considering all couples were comparable with those obtained with the same Mediterranean strain by Zhang et al. [30].On average, the daily egg production of the Baltic strain increased soon after the specimens reached the adult stage.In general, the maximum daily egg production in the first week was between 24 and 40 eggs.It is interesting to note that an increase in egg production was observed for all couples in the Baltic strain when the food supplied was doubled (after day 16).In this second phase of the trial, daily egg production reached a maximum of 66 eggs.The effect due to the increase in food supply was more evident in the Adriatic strain after 2 days, but the productivity recorded during the first week was not able to be restored.This suggests that the decline in egg production was only partially due to the amount of food supplied and was most likely caused by a physiological state, probably more influenced by the age of the adults. Despite the higher number of eggs produced by the Baltic strain compared to the Adriatic strain, its quality in terms of percentage of hatching success was lower than that of the Adriatic strain.This difference was much more evident after 10 days of culture.Before that time, the percentage of successfully hatched eggs was comparable in both strains and in the range of 79.83-93.96%,which was considered suitable for ecotoxicological assay performances and in line or even higher than the values reported in the literature [38,40,41].This result may have mainly depended on the physiological state of the females.Acartia tonsa females tend to remove undesirable substances, such as oxidative stress products, through eggs, with the consequence of a high content of oxidative products in the eggs released by older females [40,42], which may have been reflected in the lower egg-hatching success at the end of their life cycle. It is interesting to note different strategies evolved between the strains; the Adriatic strain seemed to respond to aging by reducing the energy involved in the production of eggs, which in turn were of better quality (fewer eggs with a high hatching rate), while the Baltic strain spared no energy to devote to laying eggs, which, however, were of poor quality in terms of hatching success.Probably, in nature, the final effect considered as the recruitment rate of individuals to the population is the same, but in ecotoxicology, the egg-hatching success is a parameter that must be taken into consideration for the validation of the results.In fact, for the validation of the bioassays with A. 
tonsa, the egg-hatching success in the controls should be ≥80% [21,31].Considering that the differences between the strains were observed in egg-hatching success rate, we suggest taking the physiological characteristics of the strain used for culture maintenance and for setting up bioassays into consideration. Regarding the results of ecotoxicological tests, standardized tests with A. tonsa have been included in the methods used to determine the quality and management of dredged marine sediments [2].The selection of species and bioassays to set up a battery of marine model organisms follows specific criteria in order to represent different trophic levels, end points, and matrices.Within these criteria, it is possible to choose alternative, different model species for ecotoxicological bioassays.In fact, despite some differences in end points, authors demonstrated that different species can be used interchangeably for an integrated sediment quality assessment [43][44][45]. Our results of the acute exposure test with both A. tonsa strains showed EC50s in the control chart close to the mean value, especially for those tests conducted in the second half of the graph, indicating the quality and the robustness of the data during the time.The preliminary results regarding EC50 for naupliar immobilization or mortality in the semi-chronic test were also close to the mean value, suggesting the accuracy and reproducibility of the data (Figures 4 and 6).However, the overall mean of EC50 in the acute test for the Adriatic strain of A. tonsa was statistically lower by about half, compared to the Baltic strain, indicating its high sensitivity.It is known that populations coming from different zoogeographical regions can show physiological differences, for example, related to tolerance to stressful conditions, such as low temperatures [13]. However, several studies indicated that different strains of aquatic species, such as bacteria [46], microalgae [47], and invertebrates [48], can show different levels of sensitivity to contaminants or stressors. Among freshwater crustaceans, a greater number of comparative studies are available for Daphnia magna, a widely used organism in ecotoxicology which is prescribed as a model by international regulations [49,50].Barata et al. [51] recorded variations in sensitivity when different laboratory clones or a wild population of D. magna were exposed to toxic compounds. Population-specific responses are also frequently reported in marine copepods exposed to the same stressors [52,53].Although environmental factors such as diet and culture conditions remain the major cause of inter-laboratory variations [54], various genotypes can respond differently to the same substances [55].Picado et al. [56] suggested performing an analysis on gene expression and genetic stability in order to obtain well-characterized and stable standard clones to use on standard tests, reducing confounding factors among laboratories.Gene expression analyses have been conducted on co-generic Acartia exposed to the same toxicant [24], but, to our knowledge, a comparison between toxicological responses of the same species coming from different geographical areas has never been reported until now. Overall, our data regarding the sensitivity of both A. tonsa strains to Ni exposure in acute toxicity tests agree with those reported by Gorbi et al. [31], suggesting the interchangeability of both strains. 
Similarly, our results regarding semi-chronic bioassays indicated a mean EC50 in alignment with that reported by Gorbi et al. [31]. The number of semi-chronic assays performed was limited to a few tests compared to the acute tests; however, the preliminary results suggested that the pre-selection of different A. tonsa strains is not necessary for this test.

Conclusions

What emerged from this study was a difference in productivity and egg viability between the A. tonsa Adriatic and Baltic strains reared under the same laboratory conditions. It clarifies the importance of detailed knowledge of the biology and strain specificity of model organisms, in this case the marine calanoid copepod, to prevent potential confounding factors during ecotoxicological bioassays, which may invalidate tests or over-/underestimate the toxicity of chemicals and natural matrices. Shewhart-like control charts have proven to be a useful tool for evaluating EC50 variability in long-term plotted data and should be considered for all model species and strains to define the level of acceptability and sensitivity in ecotoxicological tests according to the standard protocols.

Figure 1. Acartia tonsa culture method. Flow chart synthetizing the method used to collect eggs and adults for the analysis of egg production rate (EPR), egg-hatching success (EHS), and naupliar viability (NV) during the life cycle assessment of both A. tonsa Adriatic and Baltic strains. Females and males that died in the first 20 days of the experiment in the crystallizers were replaced with adults coming from the main F1 culture from which they had been isolated.

Figure 2. Acartia tonsa life cycle timing. Schematic representation of the life cycle of A. tonsa Adriatic and Baltic strains.

Figure 3. Fecundity, percentage of egg-hatching success, and naupliar viability of Acartia tonsa Adriatic (orange line) and Baltic (blue line) strains fed a monoalgal diet of Rhinomonas reticulata. (a) Mean egg production per female, (b) percentage of egg-hatching success after 48 h, and (c) percentage of naupliar viability calculated on the number of hatched eggs. Dotted lines are linear trends (Adriatic, orange line; Baltic, blue line). Asterisks indicate significant differences (p < 0.05), and empty circles and nv indicate lack of data (0 egg production) or data with zero variance (nv).

Figure 4. Shewhart-like control charts of EC50 depicting acute toxicity tests (naupliar immobilization or mortality) with A. tonsa Adriatic (a) and Baltic (b) strains exposed to the reference toxicant.

Figure 6. Shewhart-like control chart of EC50 of naupliar immobilization or mortality with Ni in the semi-chronic test (7 days) with Acartia tonsa eggs. (a) Adriatic strain; (b) Baltic strain. Green line: mean EC50 value; red lines: upper and lower limits (mean ± 2 × standard deviation); diamonds: single EC50 values. The x-axis represents different, independent tests.

Table 1.
Acartia tonsa Adriatic and Baltic strains' daily survival of males (M) and females (F) fed with a monoalgal diet of Rhinomanas reticulata.Blue and red empty circles indicate dead males and females replaced with live copepods coming from the same culture; blue and red full circles indicate dead males and females, respectively, not replaced with live individuals. Table 2 . Acartia tonsa Adriatic and Baltic strains.Mean and standard deviation (sd) of egg number per female per day, percentage of egg-hatching success, and naupliar viability calculated over the whole period of lifespan.Different letters indicate statistically significant differences (p < 0.05).Adult survival refers to the longer survival time of females (F) and males (M).
8,933.4
2024-04-20T00:00:00.000
[ "Environmental Science", "Biology" ]
Using Knowledge-Guided Machine Learning To Assess Patterns of Areal Change in Waterbodies across the Contiguous United States Lake and reservoir surface areas are an important proxy for freshwater availability. Advancements in machine learning (ML) techniques and increased accessibility of remote sensing data products have enabled the analysis of waterbody surface area dynamics on broad spatial scales. However, interpreting the ML results remains a challenge. While ML provides important tools for identifying patterns, the resultant models do not include mechanisms. Thus, the “black-box” nature of ML techniques often lacks ecological meaning. Using ML, we characterized temporal patterns in lake and reservoir surface area change from 1984 to 2016 for 103,930 waterbodies in the contiguous United States. We then employed knowledge-guided machine learning (KGML) to classify all waterbodies into seven ecologically interpretable groups representing distinct patterns of surface area change over time. Many waterbodies were classified as having “no change” (43%), whereas the remaining 57% of waterbodies fell into other groups representing both linear and nonlinear patterns. This analysis demonstrates the potential of KGML not only for identifying ecologically relevant patterns of change across time but also for unraveling complex processes that underpin those changes. INTRODUCTION The surface area of a lake or reservoir (hereafter both referred to as a "waterbody") is an important indicator of freshwater availability and has been recognized as an "Essential Climate Variable" by the Global Climate Observation System. 1 Waterbody surface areas oscillate naturally due to seasonal precipitation and evaporation patterns, and also as a result of anthropogenic stressors including societal water use, human land management, and climate change, which threaten waterbodies and the ecosystem services that they provide. 2−9 For example, several waterbodies across the globe have experienced surface area declines, including large lakes in Central Asia (e.g., Aral Sea), Central Africa (e.g., Lake Chad), the Altiplano region (e.g., Lake Poopo), the Middle East (e.g., Lake Urmia), and the Western United States (e.g., Lake Mead and the Great Salt Lake).−13 In addition, regional lake area declines in the Mongolian Plateau have threatened the livelihood of local people, 14 while waterbody expansions in North Dakota (USA) have displaced agricultural croplands and existing wetland vegetation. 15Lake expansions due to increasing precipitation, glacier melt, and permafrost thaw at high elevations (e.g., in the Tibetan Plateau, 16 the Alps, 17 and Patagonia 18 ) serve as hydrological responses to global warming while increasing the risk for catastrophic glacial-lake outburst floods. −26 However, patterns of waterbody surface area change are not always linear and likely exhibit abrupt shifts, inconsistent oscillations, or other patterns of variability that reflect regional water balance fluctuations or human intervention. 19,26Increasing variability in lake ecosystems can be indicative of global change 27,28 and is likely to affect their resilience to the negative effects of climate change. 
29 Characterizing how and where lakes are changing on broad scales is, therefore, critical to understanding the drivers of change. −33 Recently, the Reservoir and Lake Surface Area Time series (ReaLSAT) data set, a long-term, spatially extensive waterbody data set, has become available, 34 which makes possible, using new analytic tools, a comprehensive analysis of long-term patterns of change across broad spatial scales.

Machine learning (ML) has been used in the environmental sciences as a tool for analyzing time series patterns in large data sets using pattern recognition algorithms. 35 To further interpret patterns, commonalities in patterns can be classified by using clustering algorithms. Such discretization of, for example, waterbodies can facilitate ecological understanding, and ultimately management, of these ecosystems. 36 However, ML approaches identify patterns but do not include mechanisms as a basis for the interpretability of results. 37 These patterns can therefore identify spurious relationships in training data sets that may not be indicative of ecological phenomena, making it difficult to extrapolate to out-of-sample data sets. 38 To address these limitations, researchers have begun to expand the utility of ML by engaging scientific knowledge to guide ML algorithms. −42

The introduction of KGML techniques, in combination with the recent increased accessibility of high-quality remote sensing data products, provides an opportunity to analyze and classify patterns of waterbody surface area change on broad spatial scales. The ReaLSAT data set 34 is specifically suited for this purpose, as it provides 32 years of monthly predictions (January 1984−January 2016) of surface area for waterbodies (>0.1 km 2 ) around the world. In addition, ReaLSAT provides dynamic waterbody polygons with unique lake identifiers, rather than static polygons 43,44 or pixel-based surface water area time series that are not linked to specific waterbodies. 20,21,45 In this study, we combine the ReaLSAT data set with KGML for the first time, to analyze and classify patterns in the surface area of 103,930 waterbodies across the contiguous United States. −48 Our objectives were to (1) analyze and classify the long-term patterns of change in waterbody surface areas and (2) determine how those patterns differ across the contiguous United States. In addressing these objectives, we demonstrate how KGML and an interdisciplinary team science approach work in concert to effectively inform the design of the analytical framework and guide the interpretation of the results.

2.1. Study Area and Data Set Description. This study analyzed the patterns of change of 103,930 waterbodies of the contiguous United States using the ReaLSAT data set. 34 This region was selected due to its high waterbody abundance and the large east−west gradient in climate, geology, and morphology, introducing a diversity of waterbody types that makes this region ideal for the purpose of this study.

The ReaLSAT data set used pixel-based land/water classification maps of the Global Surface Water data set (GSW) to create dynamic polygons for 681,137 lakes and reservoirs globally from 1984 to 2016 using a physics-guided machine learning algorithm.
34 To correctly identify a waterbody, this algorithm required at least 100 GSW pixels (at a resolution of 0.9 arc-seconds, or 30 m at the equator); therefore, waterbodies <0.1 km 2 were not included. Nevertheless, ReaLSAT identified relatively small waterbodies (i.e., with surface areas ∼0.1 km 2 ), including ephemeral and agricultural ponds that do not appear in other data sets because they were either not classified as a "lake" or "reservoir" or were not recognized because they are highly dynamic in their extent and existence. In addition, ReaLSAT included reservoirs on rivers and oxbow lakes (∼5% of nonlakes) in the data set, while acknowledging that they could not always be distinguished from river segments. In this study, we adopted ReaLSAT's definition of "lakes and reservoirs", which referred to all lentic (nonflowing) waters, including dynamically operated ponds. Limitations to the data set include the prevalence of data gaps, which are larger and more abundant before the year 2000 and during the winter season of most northern lakes (i.e., during ice cover). However, these gaps were filled in using the machine learning algorithm, further described below.

2.2. KGML Pattern Recognition and Clustering Methods. 37−39,41,42 We used KGML to identify ecologically interpretable groups representing different patterns of waterbody surface area change over time in five steps (Figure 1): (1) we used a randomly selected subset of 4000 waterbodies (4% of the full data set) from the ReaLSAT time series to train several long short-term memory (LSTM) models (see Section 2.2.1 for model details). (2) The trained LSTM models with the lowest validation losses (four models) were averaged and used to produce smooth and gap-filled time series (i.e., "reconstructed" time series) for these 4000 waterbodies. (3) We then used machine learning (K-means clustering) to divide the low-dimensional embeddings for each of the 4000 waterbodies into 50 clusters sharing similar patterns of surface area change. (4) These 50 clusters were further combined into seven ecologically interpretable groups using domain knowledge (as described in Section 2.2.3) based on our physical understanding of waterbody area changes, which allowed us to distinguish ecologically plausible and interpretable patterns of surface area change. (5) Finally, we used K-means clustering and a statistical distance from the low-dimensional embeddings to the cluster centroids to group the remaining 99,930 waterbodies into one of the seven ecologically interpretable groups. The following sections describe these procedures in more detail.

2.2.1. LSTM Model Training and Time Series Reconstruction.
We trained a long short-term memory (LSTM) based sequence-to-sequence autoencoder model using the time series of 4000 randomly selected waterbodies (Figure 1, steps 1 and 2).We selected a small subset of waterbodies here to enable human experts to evaluate the results and provide ecological interpretability that was subsequently used to analyze the remaining waterbodies.All time series were z-score normalized to depict relative rather than absolute waterbody area changes.LSTM is a deep learning method that, when compared to more traditional neural networks, is particularly suited for the aims of our study because it captures long-term temporal dependencies 49,50 necessary to identify waterbody dynamics.Furthermore, the autoencoder formulation provides a robust way to gap-fill missing time steps and remove outliers, addressing the issue of recurring data gaps in the ReaLSAT data set.This method returns a low-dimensional feature space that can be used for clustering.While there are some limitations to LSTM models, such as a greater amount of memory needed for training compared to other models, 51 the advantages mentioned above made this model an excellent fit for our research objectives. Because LSTMs are designed to run only forward in time and our objective was to maximize reconstruction performance irrespective of directionality, we used a bidirectional LSTMbased sequence encoder consisting of two LSTM structures: the forward LSTM and the backward LSTM.The two LSTM structures are similar, except that the time series is reversed for the backward LSTM (Text S1).The embeddings for the forward LSTM and backward LSTM were added to obtain the final embeddings.This representation was then fed through the LSTM decoder to produce a target sequence, which is the same as the input sequence in the encode-decode architecture.Specifically, we used a conditional decoder that iteratively extracted data at each time step based on the output data from the previous time steps.The autoencoder parameters were trained to maximize the likelihood of the data, which under the Gaussian assumption becomes the reconstruction loss computed as the mean-squared error between the reconstructed and the original time series (Figure S1).We selected the hyperparameters by iterative tuning (learning rate = 0.001, epochs = 1000, no. of clusters = 50, code dimensions = 64), resulting in average training and average validation losses of 0.37 and 0.39, respectively. We averaged four trained LSTM models with the lowest training and validation losses to improve model performance. 52e used this ensemble of trained models to reconstruct the 4000-waterbody area time series.As described earlier, due to the autoencoder formulation, the reconstructed time series have no gaps or outliers. 
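A minimal sketch of the autoencoder just described is given below, assuming PyTorch. The 64-dimensional code and the 0.001 learning rate follow the text; the layer sizes, variable names, and the simplified decoder (which repeats the code at every step rather than conditioning on previous outputs) are assumptions for illustration, not the authors' implementation.

```python
# Sketch of an LSTM sequence-to-sequence autoencoder: forward and backward encoders
# whose final hidden states are summed into a 64-d code, plus a decoder trained with
# a mean-squared reconstruction loss on z-scored monthly area series.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, code_dim=64):
        super().__init__()
        self.enc_fwd = nn.LSTM(input_size=1, hidden_size=code_dim, batch_first=True)
        self.enc_bwd = nn.LSTM(input_size=1, hidden_size=code_dim, batch_first=True)
        self.dec = nn.LSTM(input_size=code_dim, hidden_size=code_dim, batch_first=True)
        self.out = nn.Linear(code_dim, 1)

    def forward(self, x):                          # x: (batch, time, 1), z-scored series
        _, (h_f, _) = self.enc_fwd(x)
        _, (h_b, _) = self.enc_bwd(torch.flip(x, dims=[1]))   # reversed copy of the series
        code = h_f[-1] + h_b[-1]                   # summed forward/backward embeddings
        dec_in = code.unsqueeze(1).expand(-1, x.size(1), -1)  # repeat code at every step
        dec_out, _ = self.dec(dec_in)
        return self.out(dec_out), code

model = LSTMAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                             # reconstruction loss

series = torch.randn(32, 385, 1)                   # stand-in for 32 monthly area time series
for _ in range(5):                                 # the study trained for ~1000 epochs
    recon, _ = model(series)
    loss = loss_fn(recon, series)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```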
Machine Learning Clustering.Using the reconstructed time series from the trained LSTM model, we performed K-means clustering, an unsupervised machine learning algorithm, to group waterbodies into clusters based on their patterns of surface area change (Figure 1, step 3).We generated an elbow plot to identify the number of clusters necessary to capture most of the variation in our data set and determined that the optimal number was between five and ten (Figure S2).Then, we ran the K-means clustering algorithm for ten clusters, the maximum value identified by the elbow plot.We observed that some of these clusters depicted patterns of surface area changes that were ecologically similar.For instance, multiple clusters contained peaks in the surface area but were deemed separate because the timing of these peaks was different.We therefore determined that visual inspections and manual adjustments of the ML-derived clusters were necessary.We ran the K-means model again, but this time for 50 clusters, which we assumed to be the upper limit of ecologically possible and interpretable distinct waterbody types and which we could then use as a baseline for KGML grouping. 2.2.3.Knowledge-Guided Grouping.We visually inspected the time series patterns of a random subset of waterbodies in each of the 50 clusters (Figure 1, step 4).In addition, we inspected satellite imagery using Google Earth and assessed the spatial distribution of all waterbodies in each cluster to assess whether each cluster represented similar waterbodies (i.e., from the same region, or ecological zone, or representing similar waterbody types).Through this analysis, we found that similar patterns of change were being divided into multiple clusters and that individual clusters did not clearly depict Environmental Science & Technology ecologically unique waterbodies.We then merged clusters with similar time series patterns into seven ecologically interpretable groups, with each of the seven groups representing a distinct pattern or type of surface area change over time.We used our domain knowledge of lake ecosystems to resolve slight differences in the merging process (e.g., whether an abrupt increase in the lake area was more important than its precise timing), during which we manually placed each of the 50 clusters into one of seven ecologically interpretable groups.Finally, we looked at the Euclidean distance (a measure of similarity, further described below) between each reconstructed time series and the centroid of each cluster of similar time series to confirm that the seven groups that we identified were reasonable for each waterbody (Figure S3). 
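The clustering step can be sketched as follows, assuming scikit-learn and using a random array in place of the learned 64-dimensional embeddings; the elbow scan and the final 50-cluster fit mirror the procedure described above, while the array and variable names are illustrative.

```python
# Sketch of the clustering step: an elbow scan over candidate k values, followed by
# K-means with k = 50 on the learned embeddings (4000 waterbodies x 64 dimensions).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(4000, 64))         # stand-in for the learned 64-d codes

# Elbow-plot data: within-cluster sum of squares (inertia) versus number of clusters
inertia = {}
for k in range(2, 16):
    inertia[k] = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings).inertia_
print(inertia)                                    # plot these values to locate the "elbow"

# Final clustering with 50 clusters, later merged into 7 groups using domain knowledge
km = KMeans(n_clusters=50, n_init=10, random_state=0).fit(embeddings)
labels_50 = km.labels_                            # cluster id (0-49) per waterbody
centroids = km.cluster_centers_                   # reused for distance-based assignment
```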
Scaling up to 103,930 Waterbodies.Once the randomly selected 4000 waterbodies were grouped into seven clusters, we used the trained LSTM model to create lowdimensional embeddings for the remaining 99,930 waterbodies (Figure 1, step 5).After pattern recognition, each waterbody was grouped into one of the 50 clusters based on the Euclidean distance between the waterbody's reconstructed time series and the centroid of each of the 50 clusters, where the centroid of a cluster is defined as the multivariate mean of all waterbodies in that group.This resulted in 50 distances per waterbody, from which the smallest distance determined the group it was assigned to.The waterbodies were then sorted into the seven ecologically interpretable groups based on the criteria created on the subset of 4000 waterbodies (see Section 2.2.3).For instance, all waterbodies that were assigned to clusters 36 and 47 were grouped into the fifth ecologically interpretable group.We looked at the Euclidean distance between each reconstructed time series and the centroid of each cluster, across the seven groups, to confirm that there were no groups with obvious outliers (Figure S3).The group classifications for each waterbody can be linked via waterbody ID with the ReaLSAT data set and are available in the Zenodo repository. 53.3.Comparing KGML versus ML Output.To compare KGML and ML approaches, we repeated step 3 (Figure 1, machine learning clustering) to generate seven clusters instead of 50.We then visually compared these seven clusters derived from machine learning to the seven ecologically interpretable groups produced using KGML (i.e., including step 4, the "knowledge-guided grouping").The goal of this visual comparison was to determine how many of the clusters produced by using machine learning matched the ecologically interpretable groups produced by using KGML. Spatial Analysis. To identify patterns in the spatial distribution of the clusters, we first binned the contiguous United States into 100 km 2 hexagons.We then tested a null hypothesis, which assumes clusters were evenly spatially distributed, by using the binned hexagons.The null hypothesis (even spatial distribution) states that the percentage of waterbodies in each hexagon from a cluster (hex %wb/clust ) is equal to the percent of waterbodies classified into that cluster from the entire data set of 103,930 waterbodies (total %wb/clust ) We tested the null hypothesis by subtracting total %wb/clust from hex %wb/clust , resulting in either a positive or negative number (% difference) A positive number indicated that there were more waterbodies in the hexagon from a cluster than would be expected with an even spatial distribution.A negative number indicated that there were fewer waterbodies in the hexagon from that cluster than expected with an even spatial distribution.The resulting percent differences were z-score normalized to better compare the spatial distribution across clusters.All code and data files used to run these analyses are archived and available in the Zenodo repository. 53,54 RESULTS Patterns of Change. 
We identified seven patterns of temporal surface area change from the 103,930 ReaLSAT waterbodies within the contiguous United States using KGML (Figure 1).Patterns were described using domain knowledge, as (1) no change over time, (2) substantial increase and then maintain, (3) steady increase over time, (4) steady decrease over time, (5) peaks, (6) troughs, and (7) outliers or patterns for which there was no apparent ecological mechanism (Table 1).Many waterbodies (43% of the total) were classified as having no surface area change over time (group 1), while the remaining 57% of waterbodies fell into one of the other six ecologically interpretable groups (Table 1). Comparison of KGML vs ML Patterns of Surface Area Change. Two of the seven groups produced via KGML, groups 4 (steady decrease over time) and 7 (outliers), were also identified as unique clusters using only ML (clusters e and 2).The pattern of group 1 (no change over time) was detected by ML as well, however, based on differences in the scale of surface area fluctuations, ML subdivided the allocated time series into two separate clusters (a and b).Similarly, the pattern of group 2 (substantial increase then maintain) was detected by ML but allocated to two different clusters based on the timing and rate of the increase (c and d).One cluster identified by ML was not clearly defined using KGML (cluster f), as waterbodies that fell into this cluster either had patterns of "peaks" (group 5) or "troughs" (group 6).Finally, group 3 (a steady increase over time) was not identified by the ML method at all. Spatial Distribution of KGML Ecologically Interpretable Groups. Different patterns in the spatial distribution of the seven ecologically interpretable groups were identified (Figure 3).Waterbodies in group 4 (steady decrease over time) were more abundant in the lower latitudes, whereas waterbodies in group 6 (troughs) were more abundant in the higher latitudes.Most groups had the highest densities in the midlongitude region, with small increases in density in western and eastern longitudes for groups 1 (no change over time) and 4 (steady decrease over time) (Figure S4). We compared the spatial distribution of waterbodies in each of the seven groups to the null assumption that waterbodies from each cluster would be evenly distributed spatially across the full data set (103,930 waterbodies representing the contiguous United States; Figure 4).If a group had differences in spatial distribution beyond the null assumption, then there may be relationships between the waterbodies in each group and their geographic location.The occurrence of waterbodies with no change in surface area (group 1) was relatively high in most of the Mississippi River Valley and relatively low in most other regions of the contiguous United States.Additionally, there were more than average waterbodies with peaks and troughs in their surface area time series (groups 5 and 6) in western regions.Waterbodies exhibiting increases in surface area (groups 2 and 3) showed moderate deviations from the null assumption, with slightly higher abundance in the Environmental Science & Technology southwestern United States and localized high abundances in Florida and Michigan.Additionally, in the Southwest, there were more waterbodies with decreasing surface areas (group 4) but fewer in most other regions of the United States.Waterbodies with outliers (group 7) were more common in the Northeast and Southwest United States. 
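Two of the bookkeeping steps behind these results, assigning each remaining waterbody to its nearest cluster centroid (Section 2.2.4) and measuring each hexagon's deviation from an even spatial distribution (Section 2.3), can be sketched as below. The cluster-to-group mapping, array names, and placeholder data are assumptions for illustration only.

```python
# (1) Assign new embeddings to the nearest of the 50 centroids by Euclidean distance and
#     map the 50 clusters onto the 7 interpretable groups.
# (2) Compute the z-scored percent difference of each hexagon's group share from the
#     share expected under an even spatial distribution.
import numpy as np

def assign_groups(new_embeddings, centroids, cluster_to_group):
    # Euclidean distance from every embedding to every centroid: shape (n, 50)
    d = np.linalg.norm(new_embeddings[:, None, :] - centroids[None, :, :], axis=2)
    nearest_cluster = d.argmin(axis=1)
    return np.array([cluster_to_group[c] for c in nearest_cluster])

def hexagon_deviation(group_per_hex, groups_all, group_id):
    """Percent difference from an even spatial distribution, z-scored across hexagons."""
    total_pct = np.mean(groups_all == group_id) * 100           # total %wb/clust
    hex_pct = np.array([np.mean(g == group_id) * 100 for g in group_per_hex])
    diff = hex_pct - total_pct                                   # % difference per hexagon
    return (diff - diff.mean()) / diff.std(ddof=0)               # z-score normalization

# Example usage with placeholder data
rng = np.random.default_rng(1)
centroids = rng.normal(size=(50, 64))
cluster_to_group = {c: c % 7 + 1 for c in range(50)}             # hypothetical 50 -> 7 mapping
groups = assign_groups(rng.normal(size=(1000, 64)), centroids, cluster_to_group)
hex_groups = [groups[i::10] for i in range(10)]                  # pretend 10 spatial bins
print(hexagon_deviation(hex_groups, groups, group_id=1))
```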
DISCUSSION 4.1.Importance of a KGML and an Interdisciplinary, Team Science Approach.−48 KGML instantiates both the technical and social underpinnings of those ideas, as it requires the integration of tools and knowledge from diverse domains. 37,39,40Our team consisted of ecologists, engineers, hydrologists, and computer scientists at different career stages who worked collaboratively and interactively throughout the entirety of this project, each learning and utilizing tools from others' expertise.The emphasis on best practices of team science from idea development to interpretation and communication of results was an integral component of this work, as it allowed us to effectively leverage expertise from different skill sets within the group.Asking a question such as "What are the long-term patterns of change in U.S. waterbodies?"might appear simple, but it belies the challenges of extracting meaningful insights from a very large and complex data set using both ML tools and domain knowledge.The main findings of our work, identifying seven patterns of change and how those patterns are distributed across the U.S., reveal the importance of scale by showing that some decadal changes in waterbodies have regional-specific tendencies.Our findings also show the importance of the local scale, where neighboring waterbodies can exhibit strikingly different temporal patterns.While identifying the drivers of these patterns is the logical next step, fully addressing this question is beyond the scope of the present work.However, our results lay the foundation by providing a valuable data set and by showcasing the efficacy of team science in integrating a diverse set of human and technical resources to answer what initially seems like a straightforward inquiry. The KGML approach explicitly incorporates domain knowledge into the analytical framework.We invoked expertise as part of the classification processes, resulting in groupings that differed in some cases from pure ML classification but were consistent with overall waterbody behavior.For example, we regrouped some of the clusters that ML allocated based on the timing of surface area changes and the scale of seasonal fluctuations.We observed that ML cluster c closely resembled ML cluster d, both matching KGML group 2 for waterbodies with surface area patterns that substantially increase and then maintain (Figure 2).The only difference between clusters c and d was the timing and intensity of the increase, while the overall patterns of surface area change were ecologically similar; i.e., they both likely represent a reservoir, filled at different times and rates after initial construction.We also found that ML did not capture surface areas that steadily increased over time, likely explained by the subtlety of this change compared to waterbodies experiencing no change over time or waterbodies with substantially increasing surface areas that then maintain.ML also produced a cluster that included multiple patterns identified using KGML, where the waterbodies had surface area patterns that resembled both peaks and troughs (groups 5 and 6), which could be an indication of either human management or heavy precipitation in different regions.By visually observing patterns in the ML reconstructed data, we identified waterbodies with steady increases in surface area over time, a subtle but important pattern compared to others with stronger signals.As a result, team science and KGML allowed us to effectively group waterbodies based on ecologically meaningful 
patterns of change. 4.2.Importance of Waterbody-Specific Data.A strong feature of our work, which is an extension of the approach used in the ReaLSAT data set, is a focus on individual waterbodies, rather than generalized pixel-based water surface area changes that can only address the more general notion of water on the landscape.Although pixel-based studies provide important information for the assessment of regional water storage, they do not tell us whether these changes in surface area are occurring across most waterbodies or are driven by relatively few large waterbodies.For example, Zou et al. 25 and Pekel et al. 21determined that between 1984 and 2016, waterbodies in the western United States experienced strong surface area declines.Our analyses show that the relative number of waterbodies with decreasing surface area patterns was indeed above the national average in states like Utah and Nevada, but not in California, Oregon, or Washington (Figure 4).Knowing the ecologically interpretable group for each individual waterbody in these states could be useful for identifying the specific drivers that are causing different patterns of surface area change, which can then be used to implement water management strategies for specific waterbodies.Further, because patterns of surface area change are likely influenced by waterbody-and watershed-specific drivers, such as hydrology, morphology, climate, and anthropogenic factors, classifying waterbodies into ecologically interpretable groups is the first step toward fully understanding the mechanism driving these changes in surface area over time. Potential Drivers of Long-Term Patterns of Waterbody Surface Area Change.Explaining waterbody surface area change is challenging due to the interaction of multiple climatological, hydrological, morphological, and anthropogenic variables across spatial and temporal scales.While identifying causal mechanisms for waterbody area change was beyond the scope of this study, we offer a few plausible interpretations meant to provide ecological context and process for our KGML model.In addition, we performed a preliminary driver analysis meant to inspire further investigation into waterbody area changes. 
4.3.1.Plausible Interpretations of Patterns of Surface Area Change.Group 1 can be ecologically defined as waterbodies that experience only minor oscillations in the surface area with no long-term or abrupt change.This can occur both naturally, such as in waterbodies located in regions with consistent hydrologic inputs, and in human-made systems, where water levels are manually controlled with infrastructure, such as pumps or dams.Group 2 can be ecologically explained as marking the creation of a waterbody, either by a large natural flooding event or by the intentional creation of a human-made reservoir.Groups 3 and 4 represent waterbodies in a long-term unidirectional trend, where they may experience a regular increase (group 3) or decrease (group 4) in surface area.For example, increased water inputs could be due to a steady increase in precipitation over multiple years, while decreasing water inputs could be due to long-term water extraction or drought.Groups 5 and 6 represent Environmental Science & Technology waterbodies that experience either a sudden increase (group 5) or decrease (group 6) in surface area over a short period (e.g., 1−5 years) but then return to a stable baseline.These periodic extreme water level fluctuations could reflect short-term anthropogenic changes in water use, such as temporary flooding and then draining of agricultural fields, or large water extractions from reservoirs.Short-term water fluctuations could also reflect natural phenomena, such as beaver damming, flash flood events, or isolated drought periods.Group 7 represents a waterbody surface area time series that experienced various extreme fluctuations over time.These are classified as "outlier" waterbodies that likely have casespecific explanations for their changes in surface area that cannot be generalized on this scale.In many cases, these ecologically uninterpretable fluctuations are caused by small data inconsistencies, which we observed via visual inspection of a subset of waterbodies within this group.Indeed, the ReaLSAT documentation describes potential sources of error causing false water or land detections, including the impact of surface algae and floating aquatic plants, the spurious water level fluctuations of agricultural ponds distorting the deep learning model used to fill in missing pixels, and missing data or low-confidence land-water classifications by the underlying Global Surface Water data set. 
21.3.2.Preliminary Driver Analysis.Temporal change in the waterbody area, represented by the ecologically interpretable groups in our study, is unevenly distributed across the United States, suggesting that geographical drivers (latitude, longitude, and elevation) or climatic drivers (air temperature and precipitation) are important factors.A vast collection of potential drivers could be important.As a preliminary exploration, we focused on air temperature, precipitation, and elevation for an initial assessment because of their direct connection to the water cycle and because the data sets were readily available.We used gridded monthly air temperature and precipitation data from the National Oceanic and Atmospheric Administration Physical Sciences Lab in Boulder, Colorado, USA 55 and elevation data from the United States Geological Survey 56 to extract monthly time series for each waterbody (see Text S2), which we explored with a principal component analysis (Figures S5−9).We found that air temperature and precipitation likely both play a role in waterbodies with long-term decreasing trends in surface area (group 4), as these waterbodies were most prevalent in the hot climates of the southwest United States and Florida (Figure 4).These findings are in line with previous reports where air temperature was found to enhance evaporation and thus decrease waterbody surface area, particularly in the southern United States. 57In contrast, the northeast has waterbodies that exhibit substantial increases in surface area, especially along the Mississippi River, which might be best explained by the aboveaverage precipitation of those regions, also reported by Zhang et al. 16 Areas of high precipitation are associated with greater hydrologic connectivity in surface waters 58 and lake-specific characteristics such as hydrologic connectivity are known drivers of lake surface area change. 59We also found that elevation best-explained waterbodies with peaks and troughs in their surface area (groups 5 and 6), which could occur if waterbodies at high elevations are exposed to greater precipitation and lower evaporation. 60revious work on the spatial distribution of lake water quality 61,62 has shown that, in many cases, there is no good explanation for some spatial patterns, and that in some cases, neighboring lakes that experience the same land use and climate can have very different water quality. 61It is possible that the same is true for waterbody area change over time as factors specific to a waterbody, such as its morphology, hydrology, or the existence of control structures, such as dams, override the influence of regional-scale drivers.Other factors are likely to explain surface area changes in waterbodies and will be important for future exploration.Examples include evaporation dynamics, known to be the main water loss for most waterbodies, 20,63 hydrological connections, runoff, lake morphology, 64 and anthropogenic influences (e.g., land-use change, urbanization, population density, agriculture, industry, mining). 
65,66Ultimately, our KGML approach proved important for generating temporal patterns of waterbody change, whereby driving mechanisms can be explored.The resultant spatial distribution of surface area change across the contiguous United States was not random, suggesting that many of the long-term changes in the waterbody surface area are likely to have explanatory drivers, including broad-scale patterns in climate variables.Understanding patterns of change through the application of KGML is a crucial step in addressing the complex processes driving these changes, and we must understand both pattern and process to manage, protect, and build resilience for waterbodies and the critical ecosystem services they provide, especially in an era of profound change. Data Availability Statement All data 63 and code 64 Figure 1 . Figure 1.Methods for pattern recognition and clustering of surface area changes in 103,930 waterbodies of the contiguous United States from 1984−2016, using knowledge-guided machine learning (KGML).Black circles depict either ML clusters or KGML ecologically interpretable groups. Figure 2 . Figure 2. Waterbody surface area time series(1984−2016) in the contiguous United States for ecologically interpretable groups 1−7 that were derived by knowledge-guided machine learning (KGML) and clusters a−g that were derived by Machine Learning (ML).Each panel shows the original as well as reconstructed time series of a waterbody that is representative of its cluster, based on its proximity to the cluster centroid.Illustrations of the generalized KGML group patterns are indicated next to each panel, where the illustration associated with each ML panel indicates the ecologically interpretable group that KGML would have assigned each of the ML clusters. Figure 3 . Figure 3. Map of 103,930 waterbodies within the contiguous United States.The associated temporal ecologically interpretable group classification of each waterbody is depicted by colored points on the map and colored lines in the side panels.Side panels represent the kernel density estimate of the latitudinal (right) and longitudinal (top) distributions of each waterbody group. Figure 4 . Figure 4. Spatial distribution of KGML ecologically interpretable groups for 103,930 waterbodies in the contiguous United States.Z-score values represent the normalized percent difference from the null hypothesis (even spatial distribution).Orange versus purple hexagons represent fewer versus more waterbodies than would be expected from an even distribution across the United States.Hexagons with no identified waterbodies are white.Spatial distribution within the contiguous United States, group description, number, and percentage of water bodies are depicted for each group. Table 1 . 
Overview of the Seven Ecologically Interpretable Groups Generated via Knowledge-Guided Machine Learning (KGML), Which Describes the Temporal Patterns in the Surface Area Change between 1984 and 2016 for 103,930 Waterbodies across the Contiguous United States sı Supporting Information The necessary to recreate analysis in this manuscript are publicly available on Zenodo (https://zenodo.org/records/10207055; https://zenodo.org/records/10214420) Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acs.est.3c05784.LSTM equations and architecture, Euclidean distances for KGML clusters, spatial distributions of clusters, elbow plot for determining the optimal number of clusters, alternate PCs to those shown in the main text, and results from the Tukey-HSD posthoc test (PDF) *
7,372.4
2024-03-06T00:00:00.000
[ "Environmental Science", "Computer Science" ]
A Proteome Reference Map of the Causative Agent of Melioidosis Burkholderia pseudomallei Burkholderia pseudomallei is the etiologic agent of melioidosis. Using 2DE and MALDI-TOF MS, we report here a proteome reference map constructed from early stationary phase, a bacterial adaptation process. We identified 282 protein spots representing 220 ORFs; many of them have been implicated in bacterial pathogenesis. Up to 20% of identified ORFs belong to post-translational modification and stress responses. The proteome reference map will support future analysis of the bacterial gene and environmental regulation and facilitate comparative proteomics with its sibling species. Introduction Burkholderia pseudomallei is a pathogenic bacterium causing melioidosis disease, which is endemic predominantly in south Eastern Thailand and Northern Australia.It has also been recognized as a B-type biological warfare agent possessing an ability for bioterrorism [1].Its sibling species B. mallei is the parasitic bacterial pathogen of Glanders disease and was recognized for its destructive role during the World War I [2].In contrast, another closely related species previously characterized as a B. pseudomallei strain with an ability of arabinose-positive assimilation, that is, B. thailandensis, is unable to invoke pathogenesis in human.However, these bacterial species harbor genomic DNA with high similarity in nucleotide as well as in amino acid sequences, suggesting that gain and/or loss of genes during evolution is a reason for differences in their behavior and pathogenicity [3][4][5][6].For example, an arabinose catabolic operon belonging to B. thailandensis, which is absent in both B. pseudomallei and B. mallei genomes, has a suppressive role in B. pseudomallei virulence identified by the operon overexpression analysis [5].In addition, using a comparative proteomics of proteins expressed at prolonged stationary phase, we could identify protein-encoding genes that are lost in B. thailandensis genome compared with that of B. pseudomallei [7].Therefore, an establishment of a 2DE dataset proteome reference map from one of these three species will be a valuable resource for comparing protein expression profile in order to study their pathogenesis differentially caused by B. pseudomallei and B. mallei comparing with B. thailandensis.To this end, given the fact that B. pseudomallei has the most biomedical impact between these three bacterial species, the proteome reference map constructed from B. pseudomallei will eventually lead to the most effective effort, not only for interspecies comparative proteomic analysis [7] but also for studying gene and environmental regulation in this bacterium [8][9][10]. We previously reported an initiation of a partial proteome reference map of B. pseudomallei grown at prolonged stationary phase to apply for a proteomic comparison between this virulent species and the nonvirulent species B. thailandensis [7].However, only 88 protein spots were identified at this growth phase limiting the application on further analysis of other growth conditions due to the spatial and temporal control of protein expression in this bacterium.To further expand the characterization of protein expression profile in B. 
pseudomallei, we performed an extended proteome reference map of this bacterium grown at early stationary phase.The early stationary stage of growth has been shown to be relevant to an adaptation phase in bacteria reflected by the gene regulatory pattern of the adaptive stress response RpoS [11].We have successfully identified 282 protein spots representing 220 ORFs.This proteome reference map will be an invaluable data for further analysis of pathogenesis and virulence factors of B. pseudomallei and its sibling species. Materials and Methods 2.1.Bacterial Culture and Protein Extraction.The clinical isolated B. pseudomallei strain 844 [12] was grown in 100 mL Luria Bertani (LB) medium at 37 • C and 200 rpm [9].Bacterial cells were pelleted using the culture at the 12th h of growth (OD 600 of about 9.0) corresponding to the early stationary phase by centrifugation at 10,000 ×g and 4 • C. Protein was extracted by resuspending the cell pellet in 500 µL lysis buffer (8 M urea, 4% w/v CHAPS, 2 mM TBP, 1% v/v IPG buffer pH 4-7 (Amersham Biosciences, Uppsala, Sweden) and 1% v/v protease inhibitor cocktail set II (Calbiochem, La Jolla, CA)).Cell lysis was performed by sonication on ice.Cell debris was then removed by centrifugation at 13,000 ×g for 30 min at 20 • C. Protein concentrations were determined using RC DC protein assay kit (BioRad, Hercules, Calif, USA) as previously described [7]. Two-Dimensional Gel Electrophoresis.2DE was performed as previously described [9].2DE gels were fixed in fixing solution (50% v/v ethanol and 2% v/v phosphoric acid) for 1 h.The gels were washed by soaking in distilled water for 10 min.The proteins were then stained in the staining solution (20% v/v methanol, 10% v/v phosphoric acid, 10% w/v ammonium sulfate, and 0.1% CBBG-250) for an overnight staining.The 2DE gels were scanned with the ImageScanner (Amersham Biosciences).Image analysis was performed using the PDQuest software version 7.1.1(BioRad).A master gel used for spot-matching process was created.The master gel was then used to match corresponding protein spots between 2DE gels from three different cultures.Approximately 700 protein spots were detected using the software.Approximately 300 protein spots were picked manually, and they were subjected to tryptic ingel digestion. 
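As background for the peptide mass fingerprinting used in the identification step below, the following sketch performs an in-silico tryptic digestion and computes monoisotopic [M+H]+ masses under the assumptions stated later (fixed carbamidomethylation of cysteine, up to one missed cleavage, 800-3,500 Da window); variable methionine oxidation is omitted for brevity. The protein sequence and all names are hypothetical, and this is only an illustration of the arithmetic, not the MASCOT implementation.

```python
# In-silico trypsin digestion (cleave after K/R, not before P) and monoisotopic
# [M+H]+ peptide masses, as used conceptually in peptide mass fingerprinting.
RESIDUE_MASS = {  # monoisotopic residue masses (Da)
    'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276, 'V': 99.06841,
    'T': 101.04768, 'C': 103.00919, 'L': 113.08406, 'I': 113.08406, 'N': 114.04293,
    'D': 115.02694, 'Q': 128.05858, 'K': 128.09496, 'E': 129.04259, 'M': 131.04049,
    'H': 137.05891, 'F': 147.06841, 'R': 156.10111, 'Y': 163.06333, 'W': 186.07931,
}
WATER, PROTON, CARBAMIDOMETHYL = 18.010565, 1.007276, 57.02146

def tryptic_peptides(seq, missed=1):
    """Cleave after K/R (not before P); yield peptides with up to `missed` missed cleavages."""
    sites = [i + 1 for i, aa in enumerate(seq[:-1]) if aa in 'KR' and seq[i + 1] != 'P']
    bounds = [0] + sites + [len(seq)]
    pieces = [seq[bounds[i]:bounds[i + 1]] for i in range(len(bounds) - 1)]
    for i in range(len(pieces)):
        for j in range(i, min(i + missed, len(pieces) - 1) + 1):
            yield ''.join(pieces[i:j + 1])

def mh_plus(peptide):
    """Monoisotopic [M+H]+ with fixed carbamidomethylation on every cysteine."""
    mass = sum(RESIDUE_MASS[aa] for aa in peptide) + WATER + PROTON
    return mass + peptide.count('C') * CARBAMIDOMETHYL

protein = "MSKEKFERTKPHVNVGTIGHVDHGKTTLTAAITR"   # hypothetical sequence fragment
for pep in tryptic_peptides(protein):
    m = mh_plus(pep)
    if 800 <= m <= 3500:                          # PMF mass window used in the study
        print(f"{pep:30s} {m:10.4f}")
```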
Tryptic In-Gel Digestion.Protein spots were excised from a 2DE gel and transferred into 1.5 mL microcentrifuge tubes.The in-gel digestion was carried out in 96-well microtiter plates by the Ettan Spot Handling workstation (Amersham Biosciences).The gel pieces were washed in 200 µL washing solution (50% v/v methanol and 5% acetic acid) for 2 h at room temperature and were dehydrated by the addition of 200 µL acetonitrile.An incubation was performed for 5 min, and the gels were completely dried in a vacuum centrifuge.Protein disulfide bonds were reduced by adding 30 µL of 10 mM DTT and incubating for 1 h at 56 • C.They were then alkylated by adding 30 µL 100 mM iodoacetamide and incubating for 1 h at room temperature.The gel pieces were dehydrated in 200 µL acetonitrile for 5 min to exchange the buffer.Two hundred µL of 100 mM ammonium bicarbonate was added to rehydrate the gel pieces.The gel pieces were dehydrated again in 200 µL acetonitrile for 5 min and completely dried in a vacuum centrifuge for 3 min.The tryptic digestion was carried out for overnight at 37 • C with 30 µL of 20 ng/µL ice-cold, sequencing-grade trypsin (Promega, Madison, Wis, USA).Peptides produced after digestion were collected by extraction with 30 µL of 20 mM ammonium bicarbonate and 30 µL of 50% acetonitrile with 5% formic acid.The peptide extracts were then collected into 96-well microtiter plates for further analysis. Protein Identification. For protein identification using the matrix-assisted laser desorption ionization time-offlight mass spectrometry (MALDI-TOF MS), peptide mass fingerprinting (PMF) was analyzed using the Reflex IV MALDI-TOF mass spectrometer (Bruker Daltonik, Bremen, Germany).The matrix solution (10% w/v 2,5-dihydroxy benzoic acid (Sigma, St. Louis, Mo, USA) in 50% v/v acetonitrile with 0.05 v/v trifluoroacetic acid (TFA)) was prepared prior to use.Peptide extracts were solubilized in 96-well microtiter plates by adding 2 µL of 0.1% TFA.One µL of the solubilized peptides were mixed with an equal volume of the matrix solution on a 384-well AnchorChip target plate (Bruker Daltonik, Bremen, Germany).The spots were dried at room temperature until crystal occurred, and the target plate was then subjected to the PMF analysis.The mixture was irradiated with a 337 nm N2 laser and accelerated with 20 kV accelerating voltage in a two-stage gridless pulsed ion extraction source.The mass spectrometer was operated in the positive reflectron mode.The covered peptide masses ranged between 800 to 3,500 Dalton.The resulting PMF of each protein spot was visualized by the flexAnalysis software version 2.2 (Bruker Daltonik).Protein identification from the tryptic fragment sizes was performed by using PMF searching against the bacterial subgroup of "other proteobacteria" using the entire NCBI'sNR protein database from the MASCOT search engine (http://www.matrixscience.com/).We set the assumption as monoisotopic peptides, a fixed modification at cysteine residues and a variable oxidation at methionine residues.Up to one missed tryptic cleavage was also allowed [7]. Results and Discussion By using 2DE coupled with MALDI-TOF MS, we have established an extended proteome reference map of B. pseudomallei.We picked approximately 300 protein spots identified by the PDQuest software.However, 282 protein spots were successfully identified by the PMF searching.PMFs of protein spots, which failed to be identified, showed only few peptide masses.In addition, those low-quality S1. 
PMFs also resulted in matching with proteins from bacterial genus other than Burkholderia spp.rejecting the positive identification.The failure to identify those protein spots might be due to a very low protein amount of the excised spots.Figure 1 shows the identified 282 protein spots, representing 220 open reading frames, within the analysis window of 10-75 kDa, and pI 4-7.The identified protein spots were categorized according to their biological processes and were listed (see Table S1 in Supplementary material available online at doi: 10.1155/2011/530926).Based on a published genome of B. pseudomallei [4], we have thus compared our identified proteins with predicted ORFs from the genome.The result shows that proteins involved in posttranslational modification and lipid metabolism represent more than 50% of predicted ORFs.In contrast, only 1 and 2% have been identified as cell envelope biogenesis and outer membrane proteins and hypothetical proteins, respectively (Figure S1).Within our proteome reference map, over 97 spots equivalent to 35% of the identified ORFs are found to be involved in central metabolisms of the bacterium, of which energy, lipid, and carbohydrate metabolisms are the major pathways contributing to 28, 18, and 16%, respectively (Figure S2).This result suggests an essential role of those gene products for survival of B. pseudomallei at this growth phase.Approximately 20% of identified proteins presented in the reference map represent proteins functioning in posttranslational modification and stress responses.Within these categories, chaperonin GroEL, heat shock Hsp20-related protein, and phasin are highly expressed at both early and late stationary phase when compared with our previously partial B. pseudomallei proteome map [7].Interestingly, up to 13% of the identified genes matched to hypothetical proteins highlighting the need for further functional characterization of these open reading frames, which might be important for the adaptation process upon entry into stationary phase. We could also assign a new biological category of proteins identified in this extended proteome reference map, which we were unable to detect in the previous map.This group is genes associated with cell motility and intracellular trafficking and secretion and comprises 6% of the identified ORFs.Given the fact that the early stationary phase-associated adaptation involves expression of genes related to various processes of chemotaxis [11] and that expression of genes involved in cell trafficking and motility is down-regulated in bacterial prolonged culture [13], this might be the reason why we could observe this gene category at the early but not late stationary phase.Within this group, expression of genes involved in type III secretion systems BsaR, BsaU, BipB, and BipD (Table S1) has been reported to be induced by osmotic stress through a transcriptome analysis [14].Similar to osmotic stress, a transcriptomic study has found that iron stress results in modulation of expression of ATP synthase subunit epsilon and succinate dehydrogenases [15], two proteins identified in our proteome reference map.These data show that our proteome reference map will be useful for studying comparative proteomics of B. pseudomallei grown under environmental stresses. The B. 
pseudomallei genome consists of two chromosomes, in which the large chromosome (chromosome 1) contains genes necessary for central metabolism whereas the small chromosome (chromosome 2) harbors accessory genes important for bacterial response to the environment [4].To elucidate which biological processes of the gene ontology classification are dominant to which chromosome, we grouped all identified genes according to their encoding chromosome (Figure S3).We show that expressed proteins belonging to nucleotide metabolism, coenzyme metabolism, translation, and posttranslational modification categories are dominant for chromosome 1.On the other hand, chromosome 2 favors proteins with secondary metabolism, cell motility, intracellular trafficking and secretion, and uncharacterized and hypothetical proteins.Since proteins are the functional output of genes, our result suggests a partitioning preference in functional expression of different cellular processes by the two chromosomes in B. pseudomallei.Because proteins involved in central metabolism represent the most abundant identified proteins of our proteome reference map, we then took the advantage of the Kyoto Encyclopedia of Genes and Genomes (KEGG) database (http://www.genome.jp/kegg/) to map metabolic pathways relating to these identified proteins.Pathways including glycolysis, Krebs cycle, oxidative phosphorylation, pentose phosphate pathway, and fatty acid metabolism are summarized in Figure S4. Our extended proteome reference map also includes 34 genes whose expression is associated with protein isoforms (Table S2) in comparison with only 6 genes reported in our previous finding [7].The most abundant protein isoforms are encoded by phasin, comprising 9 protein spots.This finding is correlated with our previous work, which shows phasin as the major protein isoforms expressed in the late stationary phase B. pseudomallei proteome.The abundance of phasin isoforms in the early and prolonged stationary phase cultures of B. pseudomallei suggests that the storage of polyhydroxy alkanoate, which is a form of carbon source associated with phasin, is a common feature between these growth phases.However, in contrast to the previous reported, we find the majority of protein isoforms falling into the category of central metabolism.For example, two enzyme isoforms encoded by glyceraldehyde 3-phosphate dehydrogenase (GapA) gene show a similar molecular weight, but their pIs are 4.47 and 6.70.This roughly 3-unit charge difference might be due to an intermediate transition, glyceraldehyde 3phosphate-bound form of the enzyme localized at the acidic pI [16,17].Thus, this result suggests that enzyme isoforms with charge difference might represent an intermediate of its catalytic step. We have identified a number of genes, whose function has been reported to be implicated in bacterial pathogenesis, in the current proteome reference map of B. pseudomallei (Table S1).For instance, MurF is an enzyme required for bacterial cell wall synthesis and antibiotic resistance.Ppk2, a gene belonging to polyphosphate kinase family, functions in polyphosphate and ATP/GTP biogenesis and signaling during stress response.Similarly, adenylate kinase and adenylate cyclase are involved in ATP metabolism and signaling.Type III secretion-related proteins are required for invasion of pathogenic bacteria into host cells.Expression of many of these proteins has been shown to be controlled by stress response sigma factors RpoS and RpoE in B. 
pseudomallei [8,9]. Thus, translating our proteome reference map to other comparative proteome studies of B. pseudomallei grown at early stationary phase should benefit the study of pathogenesis caused by B. pseudomallei, since expression of these pathogenesis-related proteins might be regulated by bacterial intrinsic factors and growth conditions. Given the close similarity in nucleotide and amino acid sequences between the B. pseudomallei, B. mallei, and B. thailandensis genomes [3,4], we propose that our proteome reference map constructed from B. pseudomallei can also be applied to the other two species and will be useful for comparing their proteome profiles. To provide an example, we previously reported a comparative proteomic analysis of B. pseudomallei and its nonvirulent sibling B. thailandensis and identified a number of potential virulence factors and biomarkers for differentiating these two bacterial species [7]. This proteome reference map of proteins expressed at early stationary phase in B. pseudomallei will facilitate further studies on "melioidomics," that is, the study of melioidosis-related global expression profiles. Conclusion The proteome reference map containing 282 protein spots, which represent 220 protein-encoding genes, has been established for the melioidosis pathogen B. pseudomallei. Many of the identified proteins have been implicated in the virulence of pathogenic bacteria and have been shown to be controlled by the stationary-phase sigma factor RpoS and the biofilm-associated sigma factor RpoE. Figure 1: 2DE gel of the B. pseudomallei proteome reference map grown under early stationary phase. The identified protein spots are listed in Table S1.
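For readers who want to reproduce the peptide mass fingerprinting step described in the Methods (monoisotopic masses, carbamidomethylation of cysteine as a fixed modification, variable methionine oxidation, up to one missed tryptic cleavage, and an 800-3,500 Da mass window), the following minimal Python sketch shows how theoretical tryptic peptide masses can be generated for a protein sequence. The residue masses are standard monoisotopic values; the demo sequence and all identifiers are hypothetical and are not taken from this study.

```python
# Minimal in-silico tryptic digest for peptide mass fingerprinting (PMF).
# Assumptions (not from the paper): the illustrative sequence below and the
# convention of reporting neutral monoisotopic peptide masses.

# Monoisotopic residue masses (Da)
RESIDUE = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276, "V": 99.06841,
    "T": 101.04768, "C": 103.00919, "L": 113.08406, "I": 113.08406,
    "N": 114.04293, "D": 115.02694, "Q": 128.05858, "K": 128.09496,
    "E": 129.04259, "M": 131.04049, "H": 137.05891, "F": 147.06841,
    "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER = 18.010565            # added once per peptide (terminal H and OH)
CARBAMIDOMETHYL = 57.02146   # fixed modification on Cys (from iodoacetamide)
OXIDATION = 15.994915        # variable modification on Met

def tryptic_peptides(sequence, missed_cleavages=1):
    """Cut C-terminal to K/R (not before P); allow up to one missed cleavage."""
    cuts = [0]
    for i, aa in enumerate(sequence[:-1]):
        if aa in "KR" and sequence[i + 1] != "P":
            cuts.append(i + 1)
    cuts.append(len(sequence))
    fragments = [sequence[cuts[i]:cuts[i + 1]] for i in range(len(cuts) - 1)]
    peptides = set()
    for i in range(len(fragments)):
        for j in range(i, min(i + missed_cleavages + 1, len(fragments))):
            peptides.add("".join(fragments[i:j + 1]))
    return peptides

def monoisotopic_mass(peptide, oxidized_met=0):
    mass = WATER + sum(RESIDUE[aa] for aa in peptide)
    mass += CARBAMIDOMETHYL * peptide.count("C")   # fixed Cys modification
    mass += OXIDATION * oxidized_met               # variable Met oxidation
    return mass

if __name__ == "__main__":
    demo = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEK"  # hypothetical
    for pep in sorted(tryptic_peptides(demo)):
        m = monoisotopic_mass(pep)
        if 800.0 <= m <= 3500.0:    # mass window used for the PMF search
            print(f"{pep:30s} {m:10.4f} Da")
```

In a real search, these theoretical masses would be matched against the experimentally observed peptide masses, as done here with MASCOT against the NCBI NR database.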
3,577
2011-09-22T00:00:00.000
[ "Biology", "Medicine" ]
First record of the sea slug Stylocheilus striatus ( Quoy & Gaimard , 1825 ) ( Anaspidea , Aplysiidae ) and swarming behavior for Bazaruto Archipelago , Mozambique with the first record of Pleurobranchus forskalii Rüppel & Leuckart , 1828 ( Nudipleura , Pleurobranchidae ) for Bazaruto Island ( Gastr Two heterobranch species, Stylocheilus striatus (Quoy & Gaimard, 1825) (Anaspidea, Aplysiidae) and Pleurobranchus forskalii Rüppel & Leuckart, 1828 (Nudipleura, Pleurobranchidae) are reported for the first time for Bazaruto Island, Mozambique. Swarming behavior of Stylocheilus striatus, which was previously described for other localities, was also observed for the first time in the Bazaruto Archipelago. The sea slugs were photographed in situ and identified in sync with their species descriptions, photographic databases and the current literature. Introduction Marine heterobranch gastropods from the West Indian Ocean (WIO) have been the focus of recent systematic investigations, with increasing tendency towards species discoveries (Gosliner et al. 2008, Yonow 2012, Tibiriçá 2013, Goodheart et al. 2015, Tibiriçá and Malaquias 2016. This is not surprising considering that the WIO region, frequently defined with the Eastern African Marine Ecoregion (EAME) (WWF 2004), borders 4,600 km of coastline and comprises diverse ecosystems from southern Somalia to the Cape of South Africa (Everett et al. 2008, Pereira et al. 2014, van der Elst and Everett 2015. The coastline of Mozambique alone covers 2,400 km, encompassing major island and coastal habitats, including sandy and rocky shorelines, mangroves, swamps, seagrass beds, rocky reefs and coral reefs teaming with rich marine diversity (Pereira et al. 2014, Tibiriçá andMalaquias 2016). Recently, two new species records were found on NOTES ON GEOGRAPHIC DISTRIBUTION Bazaruto Island, Mozambique ( Fig. 1): Stylocheilus striatus (Quoy & Gaimard, 1825) and Pleurobranchus forskalii Rüppel & Leuckart, 1828. Bazaruto is one of 5 barrier islands of the Bazaruto Archipelago, located 20 km off the Mozambique coast (Fig. 2). The aplysiid Stylocheilus striatus (Figs 3−11) was abundantly swarming, covering the beach in a spread of gelatinous squishy slugs. This observation constitutes the first record of S. striatus swarming behavior from Bazaruto Archipelago. In addition, a single nudipleurid Pleurobranchus forskalii Rüppel & Leuckart, 1828 (Fig. 12) was encountered at approximately the same time on a sand flat 3−4 km further south of the S. striatus site. This sighting represents the first record of P. forskalii on Bazaruto Island and the second record from Bazaruto Archipelago. The first finding was a turned over beach specimen conspicuously exposing its large gill on Benguerra Island in August 2004 (Rutherford 2005 (Bebbington 1974, Yonow 2012. Its South African distribution extends from Mngazana in the Eastern Cape to southern Mozambique (Gosliner 1987, Perissinotto et al. 2014. All these sites are further than 575 km from Bazaruto Island, with the closest sighting reported from Inhaca Island, Mozambique (Macnae and Kalk 1962). The biggest distance between Bazaruto Island and these locations is ca 2829 km to the Seychelles. According to the IUCN Red List Categories and Criteria (2012), this species is not threatened and can be considered of Least Concern (LC). Pleurobranchus forskalii, known as Forskal's Sidegill Slug, inhabits temperate shallow subtidal areas throughout the tropical Indo-West Pacific, the Mediterranean Sea and the Red Sea (Gosliner et al. 
2008, Wakimoto andAbe 2013). East African records include Tanzania (Rudman 1999a), Mauritius, Reunion, Mayotte Island, Rodrigues Island, Madagascar (Bidgrain 2010) and South Africa (Rudman 1999b). According to the IUCN Red List Categories and Criteria (2012), this species is not threatened and can be considered of Least Concern (LC). Heterobranchs are scarcely represented in the natural history literature of Mozambique and no specific account was available until recently (Tibiriçá and Malaquias 2016). Accounts from Mozambique were either reported in general faunistic reports (Bergh 1900, Kalk 1958, 1962), phylogenetic studies of certain genera (Malaquias and Reid 2008, Price et al. 2011, Carmona et al. 2014 or in field guides presenting broader geographic realms (Branch et al 2008, Gosliner et al. 2008, King and Fraser 2014. Accounts for Bazaruto Archipelago amount to three: "Aplysia" Linnaeus, 1767 on Bazaruto Island (Helgason 2015), Bulla ampulla Linnaeus, 1758 on Bazaruto Island (Malaquias and Reid 2008) and Pleurobranchus forskalii on Benguerra Island (Rutherford 2005). The nearest locality of a joint record, albeit sketchy, for S. striatus and P. forskalii was from Inhaca Island, Mozambique (Macnae and Kalk 1962). This site is located ca 575 km southwest of Bazaruto Island. Macnae and Kalk (1962) reported seventeen species of heterobranchs on Inhaca Island, which they described as "usually present in small numbers but which may on occasion be common". Rudman (1999b) reports these instances as occurrences "depending on the vagaries of the currents and water temperature." Stylocheilus longicaudus (Quoy & Gaimard, 1825), Pleurobranchus sp. nov. and Pleurobranchus sp. were identified amongst these seventeen species (Macnae and Kalk 1962). Macnae and Kalk (1962) may have described S. striatus instead of S. longicaudus when they referred to seeing "the little purple spotted sea hare crawl around, actively feeding, copulating and laying eggs." In context to these records, the aim of this paper is to provide this data for faunal inventories (i.e. IUCN, WoRMS) as well as to express a voice of confidence for continued support of conservation initiatives for the Bazaruto Archipelago and for Mozambique in general. Methods In reference to the National Park status of Bazaruto Island, live material was not collected. The material examined was photographed and filmed in situ CAT (20 October at 18:24-18:40 h; 22 October at 10:34-10:36 h and 25 October at 12:29 h) using a Sony DSC−T100 digital pocket camera. Maps were created using Worldclim data (Hijmans et al. 2005) and political borders were retrieved from Esri Data and Maps (2002). Stylocheilus striatus (Quoy & Gaimard, 1825) aggregations were imaged as they washed in the littoral and intertidal zones of shoreline located on the west side of Bazaruto Island, adjacent to the Anantara Bazaruto Island Resort and Spa (21.7067° S, 035.4468° W; Fig. 1). The individual Pleurobranchus forskalii Rüppel & Leuckart, 1828 ( Fig. 12) was imaged on a sand flat in the shallow intertidal zone ca 3-4 km south of the Anantara Bazaruto Resort and Spa (25 October at 12:59 h). Identification For the first record regarding S. striatus here, dense aggregations of S. striatus littered the beach and intertidal zone as far as the eye could see (Fig. 2). Countless individuals of different sizes of S. striatus drifted in a mass swarming and mortality event on the western side of Bazaruto Island during 20-25 October 2014. 
These temporarily gregarious Blue-Ring Sea Hares were occasionally seen with an irregular tangle of amber-coloured egg strands and dense conglomerations of slime at low tide (Figs 5, 6). The former resident activities coordinator, Nicole Helgason (pers. comm.) of the Anantara Resort also observed these same aggregations, during mid-October 2014 and subsequently entered these findings as "Aplysia" on iNaturalist.org (7 November 2015). Though not very clear, N. Helgason's original image and description regarding this sighting still documents the same morphs of S. striatus bearing fine longitudinal brown lines and ocellar spots for the same day. Our observations were corroborated at the time on 20 October 2014 (pers. comm.). Identification to the genus level was confirmed by Dr. Heike Wägele (pers. comm.) on 4 November 2014. Subsequent comparisons with photographic databases and the literature enabled further identification to the species level (Rudman 1999c(Rudman , d, 2001Sachithananadam et al. 2011, Yonow 2012. Quoy and Gaimard first described Stylocheilus longicauda (Quoy & Gaimard, 1825) and a second species, Stylocheilus striatus (Quoy & Gaimard, 1832), remarking that these two morphospecies were probably the same species. Quoy and Gaimard's (1825) description for S. longicauda has this species showing a uniform yellow or greenish colour and sparse, generally unbranched papillae. In sync with its name, the "tail" is long and reaches half its body length while that of S. striatus is short (Rudman 1999d(Rudman , 2001. This morphospecies is usually referred to as S. longicauda or S. longicaudus in earlier reports (Marcus and Marcus 1970, Bebbington 1974, Rudman 1999c, d, 2001. However, the current consensus is that the name, S. longicauda, was used incorrectly for the common species, S. striatus (Rudman 1999c(Rudman , d, 2001 and that earlier reports of S. striatus are questionable (Marcus and Marcus 1970, Bebbington 1974, Rudman 1999c, d, e, 2001. Nomenclaturally, this case has opened a "can of worms" (Willan 2000, Yonow 2012) beyond the scope of this paper and one that future phylogenetic research should definitely take on. Recent works consider two forms based on ecology and external morphological appearance (Rudman 1999d, 2001, Sachithananadam et al. 2011, Yonow 2012. Stylocheilus longicauda is the yellow pelagic form and drifts on floating seaweeds and other floating material (Rudman 1999d, 2001, Yonow 2012. Its long tail is likely used to grasp onto flotsam in the open ocean (Rudman 2005). Stylocheilus striatus, known also as the Blue-Ring Sea Hare, is the shorter-"tailed" and brown-lined, shallow water, benthic form (Figs 3−11). Stylocheilus striatus has dark longitudinal lines, a translucent body, compound to several branched papillae and shows a mottled color pattern with ocelli consisting of a blue or pink center encircled by an orangebrown rim (Rudman 1999d, 2001, Sachithananadam et al. 2011. Like many aplysiid sea hares, purple ink is exuded when disturbed (Fig. 11). Adult Pleurobranchus forskalii is usually dark plum red although it can be found in shades of peach, to dark orange to dark purple (Bidgrain 2010). Color ranges known for P. forskalii include dark brown to a lighter brown variation with dark brown rings (Fig. 12), pale brown with black rings, brownish with white rings, red with white rings, and dark brown with white ornaments to complete red (Köhler 2016). The conspicuous opaque white arches or "semicircles" outlining clusters of pustules characterize these slugs. 
The individual encountered on Bazaruto Island (Fig. 12) was not crawling and thus, did not reveal the characteristic tubular siphon at the posterior end of the mantle, which is used for channeling water and excreting feces (Bidgrain 2010). Pleurobranchus forskalii is similar to P. albiguttatus, Figure 12. Pleurobranchus forskalii Rüppel & Leuckart, 1828, dorsal view. P. grandis, P. mamillatus and P. peroni, but is clearly distinguishable by the dark brown rings known for the lighter form and the white semicircles on the mantle (Bidgrain 2010). Discussion The observed mass-swarming phenomenon is not uncommon for S. striatus within its circumtropical range (Apte 2009, Bidgrain 2005, de Vargas Ribeiro et al. 2017, and has been reported for other tropical heterobranch species as well (Rudman 2001, Perissinotto et al. 2014. It is known to be locally common in sea grass beds and shallow water, particularly during mating season (Yonow 2012, Bidgrain 2005. It is likely that this event coincided with a period of very hot weather. These swarming occurrences likely reflect very favorable conditions, when large numbers of juveniles settle out of the plankton and grow rapidly to maturity (Rudman 2001). Subsequently, they synchronously die together. Yonow (2012) conducted an extensive study of 70 species of Heterobranchia from the Western Indian Ocean, including the aforementioned known distributions of S. striatus and P. forskalii in this region. The map (Fig. 1, Yonow 2012) shows the western half of the Indian Ocean, whereby the coast of Mozambique was still devoid of Heterobranchia. Stylocheilus striatus from Zanzibar was included in this study, while Pleurobranchus forskalii was not. Yonow (2012) emphasized the need for a revision of the genus Stylocheilus encompassing a comparison of specimens from different regions and habitats as well as a comprehensive review of the literature. We couldn't agree more. The World Register of Marine Species (WoRMS 2016) reports 431 species of molluscs for Mozambique. Forty-seven of these belong to the Heterobranchia. This number reflects a great underestimation for Mozambique especially since recent studies presenting the diversity of nudipleuran sea slugs collected in this country revealed that 170 species were recorded within a radius of 20 km (Quoy & Gaimard, 1825) and Pleurobranchus forskalii Rüppel & Leuckart, 1828 in the West Indian Ocean. Marine ecosystems are still incompletely inventoried with new species continuing to be described at a pace of ca 1800 species per year (Bouchet et al. 2002). The tropics harbor approximately 75% of newly described marine molluscs for which 43% alone were recorded for the Indo Pacific two decades ago (Bouchet 1997). For heterobranchs, 30% of 3400 Indo-Pacific species were undescribed twenty years ago (Gosliner and Draheim 1996). Meanwhile, a plethora of new species and distributions have been described since then, for which the Indo-Pacific coastal realm continues to lead in terms of global numbers (Bouchet et al. 2002, Tibiriçá andMalaquias 2016). Currently Mozambique, like most of Africa, is not yet available on the IUCN Red List radar; it is, however, considered "A focal point for national red lists and species action plans" (IUCN 2012). These two reports from Bazaruto Island add new species data for East African occurrences and extend the range of S. striatus and P. 
forskalii from southern Mozambique (Inhaca Island) 575 km northwards in the Mozambique Channel as well as 1,406 km south from Mayotte Island, their northernmost distribution in the Mozambique Channel (Fig. 13). This report corroborates as well as augments the knowledge of individual forms, color patterns and the context dependent behavior of these two heterobranch species in the Bazaruto Archipelago during mid-October 2014.
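The range extensions quoted above (ca. 575 km from Inhaca Island and ca. 1,406 km from Mayotte) are great-circle distances. A minimal sketch of how such distances can be cross-checked with the haversine formula is given below; the coordinates are approximate values chosen for illustration and are not the exact study sites.

```python
# Great-circle (haversine) distance between approximate localities.
# Coordinates are rough illustrative values, not the exact study sites.
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points given in decimal degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

sites = {
    "Bazaruto Island": (-21.71, 35.45),   # approximate
    "Inhaca Island":   (-26.00, 32.93),   # approximate
    "Mayotte":         (-12.83, 45.17),   # approximate
}

base = sites["Bazaruto Island"]
for name in ("Inhaca Island", "Mayotte"):
    d = haversine_km(*base, *sites[name])
    print(f"Bazaruto Island - {name}: {d:.0f} km")
```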
3,081.4
2017-09-15T00:00:00.000
[ "Biology" ]
Ammonium glycyrrhizin counteracts liver injury caused by lipopolysaccharide / amoxicillin-clavulanate potassium We treated isolated chicken primary hepatocytes with lipopolysaccharide/ amoxicillin clavulanate potassium (LPS/AC) to model liver injury and investigate its underlying mechanisms. We also used this model to assess the cytoprotective effects of compound ammonium glycyrrhizin (CAG) in vitro. LPS/AC-induced injury decreased cell viability and increased the activity of serum aspartate transaminase and alanine transaminase. Levels of superoxide dismutase, glutathione, and glutathione peroxidase were lower than control, while levels of the oxidative product malondialdehyde and reactive oxygen species were higher. Treatment with CAG for 24 h ameliorated these changes. Caspase-3 activity assays and flow cytometry revealed increased apoptosis in the model group. However, apoptosis decreased after CAG treatment, as confirmed by Hoechst 33342 staining. We also observed changes in mitochondrial ultrastructure. Real-time PCR and western blot analyses showed that CAG treatment downregulated LPS/AC-induced RNA expression of caspase-3, caspase-9, bax, cytochrome c, and fas, and upregulated the expression of bcl-2. Mitochondrial cytochrome c was released into the cytosol and the inner mitochondrial membrane potential (ΔΨm) was decreased. Our results highlight CAG as a potential therapeutic agent to counteract LPS/AC-induced INTRODUCTION Lipopolysaccharide (LPS), a cell wall component of gram-negative bacteria [1], is capable of eliciting inflammatory responses that involve the release of numerous proinflammatory cytokines, thereby leading to hepatic necrosis and a decrease in the levels of antioxidant enzymes and free radical scavengers [2,3]. In humans, the injection of nanograms of LPS into the bloodstream can result in septic shock [4], whereas its administration to animals induces symptoms of liver injury. From 1995 to 2005, 77 out of 1,164 cases (6.6% incidence) in an outpatient hepatology clinic involved liver injury, which is commonly associated with the use of drugs [5]. Serious drug-induced liver injury may lead to hospitalization, and it is the most common identifiable cause of acute liver failure in the US [6,7]. Antibacterial agents and other drugs are another frequent cause of liver failure after transplantation, autoimmune hepatitis, and drug-induced liver injury (DILI) [8][9][10]. It is reported that from 2004 to 2007, antibacterial agents including amoxicillin/ clavulanic acid, third-generation cephalosporins, and fluoroquinolones, accounted for 45.5% of DILI in the US [11][12][13]. DILI arises via complex multistep mechanisms, which are initiated by chemical insults to liver cells. Formation of chemically reactive metabolites, impairment of mitochondrial function, inhibition of the activity of the bile salt export pump Amoxicillin/clavulanic acid is one of the most frequently used antibiotic combinations in both human and veterinary clinical practice [15]. Nonetheless, when amoxicillin is used alongside a β-lactamase inhibitor such as clavulanic acid, the risk of hepatotoxicity and liver injury is increased [16][17][18]. It is reported that amoxicillin clavulanate (AC) is a type of penicillin strongly associated with hepatotoxicity and is the most frequent cause of DILI-related hospitalizations in clinical medicine [19]. Gram-negative bacteria are common pathogens that attack chickens. 
Multiple antibiotics, particularly bactericidal drugs, have been used in high doses and frequencies in veterinary clinics [20]. LPS and AC can each lead to hepatic injury, and studies have shown that AC potassium induces the release of LPS. Glycyr1rhizic acid (GA) or glycyrrhizin is commonly used in Asia to treat patients with chronic hepatitis [21][22][23]. Compound ammonium glycyrrhizin (CAG), which is mainly composed of glycyrrhizin, glycine, and methionine, is an effective anti-inflammatory, anti-cancer, anti-hepatotoxic, and antioxidant drug [24][25][26][27][28]. Here, we used cultured chicken liver primary cells to investigate whether the release of LPS due to AC potassium administration aggravates hepatocyte injury. We created a chicken primary hepatocyte model to explore the mechanisms underlying liver damage by LPS/AC and to determine whether CAG imparts a protective effect. Using this model, we measured various anti-oxidative indicators such as superoxide dismutase (SOD), reduced glutathione (GSH), glutathione peroxidase (GSH-Px), reactive oxygen species (ROS), and malondialdehyde (MDA). We also measured alanine transaminase (ALT) and aspartate transaminase (AST) activity in liver cells, the percentage of apoptotic cells, and the expression levels of various mRNAs and proteins related to the apoptosis and p38 pathways. Isolation and culture of chicken primary hepatocytes Inverted phase contrast microscopy indicates that the hepatocytes that were isolated at an early stage were elliptical and circular in shape. Most of the hepatocytes adhered to the bottom of the culture plate at 6 h after plating. At 24 h of culture, the cells had fused and differentiated to form islands on the plate. At 9 days of culture, roughly half of the cells had undergone apoptosis, and the remaining cells showed cytoplasmic granulation ( Figure 1). Effect of LPS/AC treatment on hepatocytes LPS/AC treatment of hepatocytes ( Figure 2) resulted in variations in cell viability and morphology. Treatment of cells with 30 + 60 μg/mL and 30 + 80 μg/mL LPS/AC did not induce changes in cellular morphology, although their relative cell viability was 81.52 ± 2.35% and 75.54 ± 2.79%, respectively. At a concentration of 30 + 100 μg/mL, the observed cell viability of the hepatocytes was 53.56 ± 6.17% and their shape was irregular, with disruption of the cell membrane. Exposure of the cells to 30 μg/mL of LPS + 140 μg/mL of AC led to a great reduction in the total number of hepatocytes. The application of 30 μg/mL of LPS + 100 μg/mL of AC caused the death of 50% of the cells; therefore, this was deemed the most appropriate concentration in the model groups. CAG attenuates LPS/AC-induced acute liver injury in hepatocytes The cells were divided into five groups: The control group neither received CAG nor LPS/AC; the model group was exposed to 30 μg/mL of LPS + 100 μg/mL of AC for 24 h; and the combination group was treated with 30 μg/mL of LPS + 100 μg/mL of AC for 24 h after the addition of 1, 10, or 100 μg/mL of CAG. Figure 3A shows that the activity of the cells was almost half of that of the control when exposed to 30 μg/mL of LPS + 100 μg/mL of AC for 24 h (51.23 ± 2.46%). However, cell viability reached 85.88 ± 3.03% and 93.37 ± 1.80%, respectively, when treated with 10 μg/mL and 100 μg/mL of CAG. The activity of both ALT and AST was significantly lower compared to that in the LPS/AC group (P < 0.01, Figure 3B-3C). 
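To make the dose-response reasoning above concrete, the following sketch computes relative viability from raw MTT optical densities (using the viability formula given later in the Methods) and linearly interpolates the AC dose, at a fixed 30 μg/mL LPS, at which viability crosses 50%. All optical densities are invented for illustration and only approximately mirror the viability percentages reported above.

```python
# Relative viability from MTT optical densities and a simple linear
# interpolation of the dose giving 50% viability (illustrative values only).

def viability_percent(od_treated, od_control, od_blank):
    """Cell viability (%) = (treated - blank) / (control - blank) * 100."""
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)

def interpolate_ic50(doses, viabilities, target=50.0):
    """Linear interpolation of the dose at which viability crosses `target`."""
    pairs = list(zip(doses, viabilities))
    for (d1, v1), (d2, v2) in zip(pairs, pairs[1:]):
        if (v1 - target) * (v2 - target) <= 0:      # crossing between d1 and d2
            return d1 + (d2 - d1) * (v1 - target) / (v1 - v2)
    raise ValueError("viability never crosses the target within the tested range")

od_blank, od_control = 0.05, 0.95            # hypothetical plate readings
ac_doses = [60, 80, 100, 120]                # μg/mL AC (LPS fixed at 30 μg/mL)
od_treated = [0.78, 0.73, 0.53, 0.38]        # hypothetical ODs

viab = [viability_percent(od, od_control, od_blank) for od in od_treated]
for d, v in zip(ac_doses, viab):
    print(f"30 + {d} μg/mL LPS/AC: {v:.1f}% viability")
print(f"Estimated AC dose at 50% viability: ~{interpolate_ic50(ac_doses, viab):.0f} μg/mL")
```

With these illustrative readings the interpolated dose falls close to 100 μg/mL AC, consistent with the concentration chosen for the model group.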
Figure 4 shows that the administration of LPS/AC for 24 h led to a decrease (P < 0.01) in GSH levels and in GSH-Px and SOD activities, to 26.18 ± 3.08 nmol/mg protein, 16.11 ± 0.30 U/mg protein, and 40.63 ± 2.05 U/mg protein, respectively, compared to the control group (GSH: 59.64 ± 4.45 nmol/mg protein; GSH-Px: 46.22 ± 0.08 U/mg protein; SOD: 79.05 ± 9.72 U/mg protein). Furthermore, treatment with LPS/AC resulted in an increase in MDA and ROS levels. Treatment with CAG (10, 100 μg/mL) for 24 h resulted in an increase in SOD, GSH, and GSH-Px activity (P < 0.05) and a decrease in MDA and ROS levels (P < 0.01). Together, these findings suggest that CAG treatment protects hepatocytes from LPS/AC-induced acute liver injury in vitro. Measurement of caspase-3 activity by using a colorimetric assay The exposure of hepatocytes to LPS/AC for 24 h resulted in an increase in caspase-3 activity (OD: 0.344) compared to that in the control group (OD: 0.179) (P < 0.01). Moreover, the caspase-3 activity of hepatocytes exposed to CAG (1 and 10 μg/mL; OD: 0.291 and 0.287) decreased significantly compared with that in the model group (P < 0.05), with an even stronger decrease at the high CAG concentration (100 μg/mL; P < 0.01, Figure 4D). Apoptosis rates of the LPS/AC and CAG groups as assessed by FCM The percentage of apoptotic cells was higher in the LPS/AC group than in the control group (P < 0.05). On the other hand, the percentage of apoptotic cells in the groups exposed to 1, 10, or 100 μg/mL of CAG was lower than that in the model group (Figure 5). Assessment of apoptotic cells by Hoechst 33342 staining Compared to the control group, a more intense blue fluorescence due to Hoechst 33342 staining, highlighting apoptotic cells, was observed in the model group. However, after exposure of the hepatocytes to 100 µg/mL CAG for 24 h, the rate of apoptosis decreased (Figure 6). CAG modifies the level of mRNA expression in hepatocytes At the transcriptional level, the application of 30 μg/mL of LPS + 100 μg/mL of AC triggered an increase in the expression of caspase-3, caspase-9, bax, fas, and cyt c by 1.94 ± 0.13-fold, 4.88 ± 1.8-fold, 1.97 ± 0.24-fold, 2.32 ± 0.21-fold, and 5.01 ± 0.56-fold, respectively, and a decrease in bcl-2 expression to 0.60 ± 0.05-fold in the hepatocytes, compared to levels in the control group. We did not observe differences in the levels of bcl-2 and fas mRNA expression between the model group and the groups treated with CAG at concentrations of 1 and 10 μg/mL. However, treatment with 100 μg/mL of CAG did result in decreased mRNA levels of bcl-2 and fas in hepatocytes compared to those in the model group (P < 0.05, Figure 7). Impact of CAG on LPS/AC-induced protein expression of caspase-3, caspase-9, bax, cyt c, and bcl-2 The level of bax, caspase-3, and caspase-9 protein expression increased in the LPS/AC group compared to that in the control group and decreased in the presence of CAG relative to that in the model group (Figure 8). [Figure legend: The hepatocytes were treated with 30 μg/mL of LPS + 100 μg/mL of AC after exposure to CAG for 24 h. For the LPS/AC-treated groups, "−" and "+" represent cells in culture medium and cells treated with 30 μg/mL of LPS + 100 μg/mL of AC, respectively. For the CAG-treated groups, "−" represents cells treated without CAG. Values are expressed as the mean ± SD (n = 3). # < 0.05, ## < 0.01 compared to the control; * < 0.05, ** < 0.01 compared to the model.]
However, bcl-2 levels decreased in the LPS/AC group compared to those in the control group and increased in the CAG groups compared to those in the model group. The expression of cyt c was increased in the cytoplasm but decreased in mitochondria when cells were treated with LPS/AC. However, cyt c expression was decreased in the cytoplasm and increased in mitochondria compared with the model group when the concentration of CAG was 100 μg/mL ( Figure 10). Ultrastructural changes in hepatocytes The cells of the control group showed normal ultrastructural features, including a smooth round nucleus with the nuclear membrane intact, and mitochondria with normal cristae (Figure 9). The mitochondria of liver cells from the LPS/AC treatment group were swollen, vacuolated, and exhibited disintegration or loss of cristae. Effect of LPS/AC on mitochondrial membrane potential The ΔΨm (mitochondrial membrane potential) of chicken liver cells was analyzed using JC-1. Normal mitochondria initially fluoresced red, and then green as ΔΨm become lower. The treatment group showed increased green fluorescence compared with the control group. Furthermore, green fluorescence decreased upon treatment with CAG at 100 μg/mL ( Figure 11). DISCUSSION The objective of the present study was to explore the effect of LPS/AC in hepatocytes and to determine whether CAG could reverse such effect. We found that LPS/AC causes hepatocyte injury and that CAG can reverse it. In our experiments, cells treated with LPS/AC showed decreased viability, decreased ALT and AST activity, increase in the levels of oxidative stress indicators, increased apoptosis, and alterations in the expression levels of apoptosis-related mRNAs and proteins. We used a modified two-step IV collagenase ex-situ perfusion method that was based on an in situ method to isolate primary hepatocytes as a predictor of [29,30]. Our hepatocytes showed a high level of dispersion, purity, and viability (90%) in culture for approximately nine days (Figure 1). This allowed us to assess enzyme induction and inhibition and perform medium-throughput screening of compounds. LPS and AC have each been shown to induce liver injury [18,31]. A composite liver injury model using LPS and Bacillus Calmette-Guerin or D-galactosamine has been previously established [32][33][34]; however, no reports employing LPS in combination with AC to induce liver injury are currently available. In our previous study, we determined the optimal LPS and AC doses that induce liver injury in vitro, as well as their respectively IC 50 , which was 60 μg/mL and 180 μg/mL, respectively. On this basis, treatment of hepatocytes with LPS/AC resulted in a concentration-dependent increase in cell death, with an IC 50 of 30 + 100 μg/mL, which indicates that the dose of LPS required to generate a combined AC induced liver injury was much lower than that required when these are modeled separately (Figure 2). Reports have shown that LPS can induce hepatocyte damage in rat primary hepatocytes at a dose of 40 μg/mL [35]. The dose we found to induce liver injury in chicken hepatocytes was thus higher than that in rats. This suggests that rat liver cells are more sensitive to LPS than chicken cells. However, when we exposed chicken hepatocytes to a combination of AC potassium and LPS, the concentration of LPS necessary to induce injury was lower than that in rats. The present study reveals that the activity of ALT and AST in the cell culture medium of the model groups was higher than in controls (Figure 3). 
This suggests that LPS/AC causes hepatic structural damage, since ALT and AST are normally localized in the cytoplasm and are released into the circulation only after cellular damage [36]. In our investigation, we treated hepatocytes with different doses of CAG after exposure to 30 μg/mL of LPS + 100 μg/mL of AC (Figure 3A). Results from our cell viability assays indicate that viability increased in a dose-dependent manner and was accompanied by corresponding changes in the activity of ALT and AST in the supernatant (Figure 3). These results indicate that LPS/AC can induce liver injury in vitro and that CAG can reverse this effect. Oxidative stress can also activate signaling pathways, thereby causing cell damage [37,38]. MDA, which is produced by free radical-mediated lipid peroxidation, is frequently used as a marker of oxidative stress. In contrast, SOD, GSH, and GSH-Px protect host cells from oxidative damage by scavenging free radicals. Figure 4 shows an increase in MDA levels and a decrease in SOD activity, as well as a decrease in the concentrations of GSH and GSH-Px, in cell lysates collected from the model group. This suggests that LPS/AC triggers oxidative damage to the cell membranes of hepatocytes, in agreement with previous studies [39]. Furthermore, glycyrrhizin effectively inhibits lipid peroxidation and enhances the capacity to eliminate free radicals [28]. In addition, CAG inhibited LPS/AC-induced liver injury by increasing SOD activity and GSH levels and decreasing MDA levels (Figure 4), which may be attributable to its free-radical scavenging power. ROS can induce cell death and activate various signaling pathways [40]. Here we showed that CAG could decrease the ROS induced by LPS/AC in hepatocytes. Apoptosis in hepatocytes can occur in virus- or non-virus-induced acute liver injury [41,42] and is a main mechanistic hallmark of liver failure [43]. To determine whether LPS/AC induces apoptosis in hepatocytes and whether CAG reverses this event, we measured the rate of apoptosis in cultured hepatocytes by FCM and Hoechst 33342 staining (Figures 5 and 6). Both experiments showed an increase in the number of apoptotic hepatocytes after LPS/AC treatment. [Figure legend fragment: ... ImageJ. The data are expressed as the mean ± SD (n = 3). ## < 0.01 compared to the control group; ** < 0.01 compared to the model group.] However, when CAG was added to the medium, cellular repair was observed in a dose-dependent manner. The intermembrane space of mitochondria contains various pro-apoptotic proteins, including cytochrome c, AIF, and endonuclease G (EndoG). Mitochondrial swelling and the release of cytochrome c occur during permeabilization of the outer mitochondrial membrane [44][45][46]. Cytochrome c associates with the adaptor protein Apaf-1 and the prodomain protein caspase-9 to form the apoptosome complex, which in turn initiates apoptosis [47,48]. Caspase-3, which is a key executor of apoptosis, is then activated. Our results indicated that the mRNA and protein levels of caspase-3 increased after LPS/AC treatment, whereas this increase was markedly attenuated by CAG (Figures 7 and 8). In our experiments, LPS/AC treatment also induced an increase in the mRNA and protein levels of caspase-9, cyt c, and bax, which in turn may be responsible for inducing apoptosis in primary hepatocytes. Furthermore, mitochondria became swollen and ΔΨm decreased in the model group.
Similarly, western blot analyses showed that cyt c was released from mitochondria into the cytoplasm (Figures 9, 10 and 11). The bcl-2 family includes pro-apoptotic (bax and bid) or anti-apoptotic (bcl-2 and bcl-xl) proteins that regulate the mitochondrial apoptotic pathway. In addition, these proteins also regulate cell survival and death by blocking both the death receptor and mitochondrial apoptosis pathways [49]. Bcl-2 and bax have opposite effects on cell death: Bcl-2 inhibits or delays cell death, whereas bax accelerates apoptosis [50]. Our results here suggest that the increase in bax mRNA and protein levels and the decrease in Bcl-2 levels correlate with LPS/AC- Oncotarget 96847 www.impactjournals.com/oncotarget induced apoptosis (Figures 7-8), in agreement with studies reporting that the bax/bcl-2 ratio increases with apoptosis [51]. The administration of 100 μg/mL of CAG attenuated bax mRNA and protein expression, suggesting that CAG has a protective effect against LPS/AC-induced liver injury. The p38 MAPK pathway transduces a variety of extracellular signals that transmit cellular responses to stress, and is implicated in cell proliferation, differentiation, and apoptosis [52,53]. In the present study, we observed an increase in the rate of phosphorylation of p38 kinases after exposure to LPS/AC, an effect that was reversed after CAG treatment (Figure 12). The hepatoprotective activity of CAG may thus be due to the inhibition of LPS/ACinduced phosphorylation of p38 MAPKs. In conclusion, this is the first model that depicts LPS/ AC-induced chicken liver injury in vitro and the beneficial effects of CAG treatment. The mechanism of action of CAG as a protective agent involves the maintenance of the level of cellular antioxidants, inhibition of apoptosis, and regulation of the p38 MAPK signaling cascade in liver cells. Therefore, our study shows that AC potassium enhances the toxicity of LPS in chicken hepatocytes whereas CAG administration may be used an alternative therapy to treat or prevent acute hepatic damage. Reagents and materials CAG was prepared in our laboratory. Collagenase (type IV), LPS (E. coli L-2880) and HPEPS were purchased from Sigma Chemical Co. Dulbecco's modified eagle's medium (DMEM) was obtained from Hyclone. Diagnostic kits used to determine the presence or absence of AST, ALT, SOD, GSH, and MDA were obtained from Nanjing Jiancheng Institute of Biotechnology (Nanjing, China). Preparation and incubation of isolated hepatocytes Hepatocytes were isolated from male Hailan chickens (average weight: 1.0-1.5 kg) by using a twostep collagenase perfusion method. The chickens were anesthetized by using a cocktail of xylazine (2 mg/mL) and ketamine (20 mg/mL), and their livers were excised after ligating the hepatic blood vessels such as the pancreaticoduodenal veins, the mesenteric vein, and the inferior caval vein. We then cut a ventage in the portal vein and inserted a tubule to allow perfusion of saline solution A (33 mM/L HEPES, 127.8 mM/L NaCl, 3.15 mM/L KCl, 0.7 mM/L Na 2 HPO 4 ·12H 2 O, 0.6 mM/L EGTA, pH 7.4) that was warmed to 37°C to wash out the blood. Then, to remove the EGTA, perfusion was performed using saline solution B (solution A added 3 mM/L CaCl 2 , pH 7.4) at room temperature and at a flow rate of 10-20 mL/min for 15 min. Finally, the liver was perfused in a beaker with 0.5% collagenase IV at a flow of 20 mL/min for 20-25 min at 37°C. 
Hepatocytes were separated from other cellular components by centrifugation at 50g for 3 min and the precipitates were resuspended in DMEM containing 10% fetal bovine serum (FBS, Gibco), 0.5 mg/L bovine insulin, 100 U/mL penicillin, and 0.25 mg/mL streptomycin (pH 7.4). Hepatocytes were counted using a hemocytometer, and cell viability was determined using Figure 12: Effects of CAG treatment on p38 MAPK phosphorylation. Three groups of cells were tested; namely, the control, the model and the 100 μg/mL CAG treatment groups. # < 0.05, ## < 0.01 compared to the control group; * < 0.05, ** < 0.01 compared to the model group. Oncotarget 96848 www.impactjournals.com/oncotarget Trypan blue. Cells were resuspended in DMEM and diluted to a final concentration of 5 × 10 5 cells/mL. The hepatocytes were seeded onto plates and then incubated at 37°C in a humidified incubator with an atmosphere of 5% CO 2 . Establishment of a LPS/AC-induced liver injury model Cells were seeded onto 96-well plates at a density of 5 × 10 5 cells/mL in 200 μL of incubation culture medium for 24 h. Then, the hepatocytes were incubated with LPS/ AC at concentrations of 30 + 60, 30 + 80, 30 + 100, 30 + 120, and 30 + 140 μg/mL for 24 h. MTT stock solution (5 mg/mL) was then applied to each of the wells, and the cells were incubated in a humidified atmosphere for 4 h. The absorbance of the samples was measured using a microtiter plate reader at a dual wavelength mode of 490 nm and 655 nm. Cell viability was calculated using the following equation: Cell viability = (Average OD of the treated wells -Average OD of the blank wells)/ (Average OD of the control wells -Average OD of the blank wells) × 100%, where OD is the optical density. Evaluation of the protective effect of CAG on the LPS/AC-induced hepatocyte injury Hepatocytes were seeded onto 96-well plates at the same density for 24 h. Different batches of cells were then incubated with CAG at concentrations of 1, 10, and 100 μg/mL for 24 h. After incubation, the supernatant was discarded, and the cells were exposed to LPS/AC for 24 h at a concentration that induces death of 50% of the hepatocytes. Cell viability was measured as described above (Methods 2.3). The levels of AST and ALT in the cell culture supernatant were measured using commercial kits according to the manufacturer's protocols. Determination of MDA levels and SOD, GSH, and GSH-Px The cells were washed twice with 300 μL of phosphate buffered saline (PBS, pH 7.4). The cells were detached using a sterilized scraper and lysed in 25 mmol/L Tris-HCl lysis buffer. The homogenate was then sonicated on ice (10-s pulses) and finally centrifuged at 13,000 g for 15 min. The supernatant was collected and analyzed according to the procedures recommended by the manufacturers of the respective assay kits. ROS detection ROS levels were assessed using DCFH-DA. Briefly, after the cell medium was discarded, the cells were incubated with DCFH-DA for 20 min at 37°C, washed three times with PBS, and then visualized under a fluorescence microplate. Measurement of caspase-3 activity The cells were isolated using trypsin, centrifuged (4°C, 2,000 rpm, 5 min), and resuspended in 150 μL of a lysis buffer and 1.5 μL DTT from a caspase-3 colorimetric assay kit (KGA202, KeyGEN, China), placed on ice for 1 h, and vortexed four times for 10 s each time. The cell suspension was centrifuged (10,000 rpm, 1 min), and the supernatant was transferred to a 1.5-mL centrifuge tube, which was then placed on ice. 
Approximately 50 μL of a 2× reaction buffer and 5 μL of the caspase-3 substrate were added to each 50-μL sample, incubated at 37°C in the dark for 4 h, and then analyzed using a spectrophotometer at a wavelength of 405 nm. Determination of apoptosis in primary chicken liver cells by flow cytometry (FCM) The detection kit (eBioscience, USA) employed in this study utilizes FITC-conjugated annexin V/PI. After treating with LPS/AC and CAG, the hepatocytes were dissociated by using 0.25% trypsin without EDTA (Solarbio, China) and collected as a suspension. A 100-µL aliquot of the cell suspension was transferred to a 5-mL culture tube, which was then mixed with 5 μL of Annexin V-FITC and 5 μL of propidium iodide. The cells were gently vortexed and incubated for 15 min at room temperature (25°C) in the dark. Then, 400 µL of 1× binding buffer were added to each tube and analyzed by FCM within 1 h. Detection of apoptosis via Hoechst 33342 staining Primary chicken liver cells were cultured on a glass slide in 24-well plates. The cells were washed with PBS and then stained with Hoechst 33342 (1 mg/mL, Nanjing Jiancheng, China) for 10 min. After two washes with PBS, the cells were examined using fluorescence microscopy. Relative quantification of target gene expression by real-time PCR (RT-PCR) Total RNA was extracted using TRIzol TM (9108/9109 Takara, Japan) according to the manufacturer's instructions. cDNA synthesis was performed using 1 μg of total RNA using a cDNA synthesis kit (RR047A Takara, Japan). The mRNA expression levels of caspase-3, bax, bcl-2, Fas, and actin were quantified by RT-PCR using a SYBR green premix according to the manufacturer's instructions (RR820A Takara, Japan). The thermal cycling conditions of the PCR assays were as follows: denaturation at 95°C for 3 min, followed by 40 cycles of denaturation at 95°C for 5 s, primer annealing at 58°C for 30 s, and primer extension at 72°C for 1 min, with a final extension at 72°C for 6 min. For each PCR product, a single narrow peak was obtained by www.impactjournals.com/oncotarget melting curve analysis at a specific temperature. The relative expressions of the target genes were normalized to that of actin. The data were calculated using the 2 -ΔΔCt method as described by the manufacturer and were expressed as a foldincrease over the indicated control groups. Transmission electron microscopy (TEM) For electron microscopy, the cells of every group earlier described were rapidly fixed in 2.5% glutaraldehyde in 0.1 M sodium phosphate buffer (pH 7.2) for 3 h at 4°C, and then later used in preparing TEM sections. Ultrathin sections were stained with 2% uranyl acetate and observed under a Hitachi transmission electron microscope. Mitochondrial membrane potential assay JC-1 kit was used to measure the mitochondrial membrane potential of the liver cells. After treatment with LPS/AC and CAG, cells were trypsinized, collected, centrifuged, resuspended with 0.5 mL DMEM and 0.5mL JC-1 stain liquid, and incubated for 20 min. Cells were washed two times with JC-1 stain buffer solution, then analyzed with FCM. Statistical analysis All data were expressed as the mean ± standard deviation. One-way ANOVA was used for statistical comparisons. A p value of < 0.05 was deemed statistically significant. Graphs were plotted using Graphpad Prism 4 (GraphPAD Software, San Diego, CA, USA). CONFLICTS OF INTEREST None to declare. FUNDING The National Natural Science Foundation of China (grant no. 31572569) supported this study.
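The fold changes reported in the Results follow the 2^-ΔΔCt convention described in this section. A minimal sketch of that calculation is given below; the Ct values are invented for illustration, and actin is used as the reference gene, as in the study.

```python
# Relative mRNA expression by the 2^(-ΔΔCt) method (illustrative Ct values).

def fold_change(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """2^-ΔΔCt relative to the control group, normalised to the reference gene."""
    delta_ct_sample = ct_target - ct_reference              # ΔCt of the treated sample
    delta_ct_control = ct_target_ctrl - ct_reference_ctrl   # ΔCt of the control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Hypothetical Ct values: (target gene Ct, actin Ct) for model and control wells.
model = {"caspase-3": (24.8, 17.0), "bax": (26.1, 17.0), "bcl-2": (25.7, 17.0)}
control = {"caspase-3": (25.8, 17.1), "bax": (27.2, 17.1), "bcl-2": (25.1, 17.1)}

for gene in model:
    fc = fold_change(*model[gene], *control[gene])
    print(f"{gene}: {fc:.2f}-fold vs. control")
```

With these invented Ct values the up- and down-regulation pattern (caspase-3 and bax up, bcl-2 down) matches the direction of the changes reported in the Results, although the exact fold values there come from the measured data.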
6,232.8
2017-05-30T00:00:00.000
[ "Biology" ]
Ablation Resolution in Laser Corneal Refractive Surgery : The Dual Fluence Concept of the AMARIS Platform Purpose. To evaluate to which extent individual Zernike terms can be corrected. Methods. Ablation time and fidelity was analysed using different fluence levels (range 90–2000 mJ/cm2) and aspheric ablation profiles. With optimal parameters, the extent to which individual Zernike modes can be corrected was evaluated. Results. The range 188–565 mJ/cm2 resulted as optimum fluence level with an optimum proportion range 50%–90% for high fluence. With optimal parameters, it corresponds to 2.4 s/D at 6 mm OZ, with fidelity variance of 53 μm RMS, and average ablation error of 0.5 μm for each location. Ablation simulation of coma Z[3,±1] showed 98,4% accuracy and 98% fit quality; trefoil Z[3,±3], 99,9% accuracy and 98% fit quality; spherical aberration Z[4,0], 96,6% accuracy and 97% fit quality; secondary astigmatism Z[4,±2], 97,9% accuracy and 98% fit quality. Real ablation on a flat plate of PMMA of coma Z[3,±1] showed 96,7% accuracy and 96% fit quality; trefoil Z[3,±3], 97,1% accuracy and 96% fit quality; spherical aberration Z[4,0], with 93,9% accuracy and 90% fit quality; secondary astigmatism Z[4,±2], with 96,0% accuracy and 96% fit quality. Conclusions. Ablation of aspherical and customised shapes based upon Zernike polynomials up to the the 8th order seems accurate using the dual fluence concept implemented at the AMARIS platform. Introduction With the introduction of the laser technologies for refractive surgery, the change of the corneal curvature to compensate in a controlled manner for refractive errors of the eye [1] is more accurate than ever.The procedure is nowadays a successful technique, due to its submicrometric precision and the high predictability and repeatability of corneal ablation accompanied by minimal side effects.Standard ablation profiles based on the removal of convex-concave tissue lenticules with spherocylindrical surfaces proved to be effective in the compensation of primary refractive errors.However, the quality of vision deteriorated significantly, especially under mesopic and low-contrast conditions [2]. With the introduction of wavefront analysis, it was proved that the conventional refractive LASER techniques were far from ideal, by measuring the aberrations induced by conventional algorithms and the aberrations induced by the LASIK flap cut itself. With the LASIK (Laser in Situ Keratomileusis [3]) treatment, we have an accepted method to correct refractive errors such myopia [4], hyperopia [5], and astigmatism [6].One of the most significant side effects in myopic LASIK, is the induction of spherical aberration [7], which causes halos and reduced contrast sensitivity [2].However, the different laser platforms are always introducing new concepts and optimising their ablation profiles. 
Since Laser refractive surgery was introduced, the technology rapidly improved.With the beginning of photoablation, the goal was to achieve predictable and stable results for myopic, hyperopic, and astigmatic corrections.Today's technology is far more advanced since sophisticated diagnostic instruments, such as aberrometers and topography systems, offer the challenge of improving the postoperative results in terms of visual acuity and night vision.At the same time, the better knowledge and understanding on refractive surgery by potential patients upgrades the required standard outcomes.Making more challenge finding new approaches towards the close-to-zero aberrations target results in several senses: (a) finding the sources of the induced aberrations due to laser refractive surgery, (b) developing "free-ofaberrations" ablation profiles, (c) developing ablation profiles to compensate the natural aberrations of any single eye in order to get a close-to-zero aberrations result. Advances in Optical Technologies To eliminate already existing aberrations, the so-called "customized" treatments were developed.Customisation of the ablation is possible either using wavefront measurements of the whole eye [8] (obtained, e.g., by Hartman-Shack wavefront sensors) or by using corneal topographyderived wavefront analyses [9,10].Topographic-guided [11], Wavefront-driven [12], Wavefront-optimized [13], Asphericity preserving, and Q-factor profiles [14] have all been put forward as solutions.Nevertheless, considerations such as treatment duration, tissue removal [15], tissue remodelling, and overall postoperative outcomes have made it difficult to establish a universal optimal profile.Therefore, the topic "ablation resolution in laser corneal refractive surgery" is still worth to be analysed and considered, because its clinical implications are not yet deeply explored. The real impact of ablation resolution in laser corneal refractive surgery is still discussed in a controversial way.The Advances in Optical Technologies Materials To evaluate the technical capabilities to correct individual Zernike modes, and the extent to which individual Zernike modes can be corrected, the CAM software was used to plan the ablations, which were first simulated and then ablated onto flat PMMA plates with an AMARIS excimer laser (SCHWIND eye-tech-solutions GmbH, Kleinhostheim, Germany). The AMARIS laser system works at a repetition rate of 500 Hz, produces a spot size of 0.54 mm (full width at half maximum (FWHM)) with a superGaussian ablative spot profile [16,17].High-speed eye-tracking with 1050 Hz acquisition rate is accomplished with 3-ms latency period [18].The system delivers aspheric wavefront-customised profiles and including some optimisations: The aspheric profiles go beyond the Munnerlyn proposed profiles, and add some aspheric characteristics to balance the induction of spherical aberration (prolateness optimisation).These particular case of aspheric profiles compensate for the aberrations induction observed with other types of profile definitions [19], some of those sources of aberrations are those ones related to the loss of efficiency of the laser ablation for nonnormal incidence [20][21][22]. 
Optimisation consisted of taking into account the loss of ablation efficiency at the periphery of the cornea relative to the centre, caused by the oblique incidence of the spot on the curved cornea (characterised by the keratometry, or K-reading). The software provides K-reading compensation, which accounts for the change in spot geometry and the reflection losses of ablation efficiency. An optical zone of 6.50 mm in diameter was planned; a variable transition zone, which smooths the ablated area towards the untreated cornea, was added automatically by the software according to the refraction to be corrected. The real ablative spot shape (volume) is considered through a self-constructing algorithm. In addition, a randomised flying-spot ablation pattern is used, and the local repetition rates are controlled to minimise the thermal load of the treatment [23] (smooth ablation without risk of thermal damage). The ablated surface left by these aspheric, wavefront-customised profiles is therefore very smooth, which benefits higher-order aberrations. Method Laser corneal refractive surgery uses a laser (typically an excimer laser) to change the corneal curvature and compensate for refractive errors of the eye. It has become the most successful technique, mainly because of the submicron precision and high repeatability of corneal ablation, accompanied by minimal side effects. Laser refractive surgery is based on the sequential delivery of a multiplicity of laser pulses, each one removing (ablating) a small amount of corneal tissue. From the blow-off model (derived from the Beer-Lambert law), the energy density actually absorbed at a point determines the ablation depth as d_ij = (1/α_Cornea) ln[I_ij (1 − R_ij) / I_Th], where d_ij is the actual depth per pulse at location i, j; I_ij is the radiant exposure of the pulse at location i, j; R_ij is the reflectivity at location i, j; I_Th is the corneal ablation threshold; and α_Cornea is the corneal absorption coefficient. In general, different beam profiles yield different spot profiles, as depicted in Figures 1 and 2, and different radiant exposures with the same beam profile yield different spot profiles, as displayed in Figure 3. The difficulty with the spot profile and the radiant exposure of the system arises from the sequential delivery of a multiplicity of laser pulses, each ablating a small amount of corneal tissue locally, the global process being an integral effect. The larger the spot profile, the larger the ablated volume per pulse, which limits the resolution of the treatment. There are several ways to mitigate this problem: (i) reducing the radiant exposure, which improves the vertical resolution of the treatment; (ii) reducing the spot diameter, which improves the horizontal resolution of the treatment. The drawback of both alternatives is that they require extra time for the ablation procedure, which may introduce other inconveniences. The planned ablation volume has to be applied to the cornea by thousands of single laser shots at different, but partly repeated, corneal positions, because the volume ablated by a single spot is much smaller than the total ablation volume. We have also introduced some innovations concerning the generation of the ablation shot file (the sequence of pulses needed to carry out a refractive procedure), in order to optimally remove the tissue corresponding to these state-of-the-art treatments: the sequence of laser shot coordinates is generated in a way that (i) guarantees a high-fidelity reproduction of the given ablation volume shape and (ii) avoids vacancies and roughness of the cornea.
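To illustrate the blow-off relation above numerically, the following sketch evaluates the local depth per pulse for a super-Gaussian spot at a few radial positions. The absorption coefficient, ablation threshold, reflectivity, and super-Gaussian order are assumed, order-of-magnitude values for excimer ablation of the cornea, not parameters of the AMARIS system; only the 0.54 mm FWHM is taken from the text.

```python
# Blow-off model: local ablation depth per pulse for a super-Gaussian spot.
# All numerical parameters below are assumed, illustrative values.
import math

ALPHA_CORNEA = 3.33        # 1/µm, assumed effective absorption coefficient
I_TH = 50.0                # mJ/cm², assumed corneal ablation threshold
REFLECTIVITY = 0.0         # assumed negligible here
FWHM_MM = 0.54             # spot full width at half maximum (from the text)
SUPERGAUSS_ORDER = 4       # assumed super-Gaussian exponent

def radiant_exposure(peak, r_mm):
    """Super-Gaussian radial fluence profile with the given FWHM (mJ/cm²)."""
    w = FWHM_MM / (2.0 * math.log(2.0) ** (1.0 / (2 * SUPERGAUSS_ORDER)))
    return peak * math.exp(-((r_mm / w) ** (2 * SUPERGAUSS_ORDER)))

def depth_per_pulse(exposure):
    """Blow-off model: d = (1/alpha) * ln(I*(1-R)/I_Th), clipped at zero."""
    effective = exposure * (1.0 - REFLECTIVITY)
    if effective <= I_TH:
        return 0.0
    return math.log(effective / I_TH) / ALPHA_CORNEA   # µm

for peak in (188.0, 565.0):    # end points of the optimum fluence range in the Results
    centre, mid, edge = (depth_per_pulse(radiant_exposure(peak, r))
                         for r in (0.0, 0.15, 0.3))
    print(f"peak {peak:5.0f} mJ/cm²: depth at r = 0/0.15/0.3 mm = "
          f"{centre:.3f}/{mid:.3f}/{edge:.3f} µm")
```

The sketch shows the two effects discussed in the text: below the threshold no tissue is removed, and a higher radiant exposure increases the volume removed per pulse at the expense of resolution.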
In this context, two opposed requirements define the fluence level: (i) a short ablation time (favouring high fluence levels); (ii) a smooth and accurate ablation (favouring low fluence levels). We have analysed ablation times and fidelity using fixed fluence levels ranging from 37 pl spot volume to 440 pl spot volume. With the optimal parameters we have evaluated the extent to which individual Zernike modes can be corrected, first simulated and then ablated onto flat PMMA plates. Results We have analysed ablation times and fidelity using fixed fluence levels ranging from 37 pl spot volume to 440 pl spot volume (corresponding to 90 mJ/cm² to 2000 mJ/cm² energy density), using as ablation volume the one corresponding to an aberration-free correction of −5.00 D −3.50 D × 15° at 6.50 mm OZ, 8.21 mm TAZ (Figure 4). We have obtained as optimum fluence level a variable range from 105 pl spot volume to 240 pl spot volume, corresponding to 188 mJ/cm² to 565 mJ/cm² energy density, with an equivalent ablation time range from 60 seconds to 26 seconds, corresponding to 4.0 s/D to 1.7 s/D at 6 mm OZ, and equivalent fidelity variance from 50 μm RMS to 60 μm RMS and average ablation error from 0.475 to 0.575 μm for each location (Table 1). A second simulation was prepared on the basis of a dual fluence level concept, using a variable rate from only low-fluence spots to only high-fluence spots, using the same ablation volume for the correction of −5.00 D −3.50 D × 15° at 6.50 mm OZ, 8.21 mm TAZ (Figure 5). We have obtained an optimum proportion range variable from 50% high fluence to 90% high fluence, with an equivalent ablation time range from 45 seconds to 33 seconds, corresponding to 3.0 s/D to 2.2 s/D at 6 mm OZ, with equivalent fidelity variance from 52 μm RMS to 55 μm RMS, and average ablation error from 0.490 to 0.525 μm for each location (Table 2). With the optimal parameters we have simulated the extent to which individual Zernike modes can be corrected (Figures 6, 7, 8, and 9, Table 3). With the same parameters, we have evaluated the extent to which individual Zernike modes can be corrected by ablating onto flat PMMA plates (Figures 10, 11, 12, and 13). Discussion We have evaluated the extent to which individual Zernike terms can be corrected, by analysing ablation times and fidelity using different fluence levels (range 90-2000 mJ/cm²) and aspheric ablation profiles, as well as using a dual fluence level concept (variable rate from only low-fluence to only high-fluence pulses). With the optimal parameters, the extent to which individual Zernike modes can be corrected was simulated and ablated onto PMMA with a laser. Huang and Arif [16] investigated the effect of laser spot size on the outcome of aberration correction with scanning laser corneal ablation, using numerical simulation of the ablation outcome of correction of wavefront aberrations of Zernike modes from second to eighth order. They modeled Gaussian and top-hat beams from 0.6 to 2.0 mm full-width half-maximum diameters, evaluated the fractional correction and secondary aberration (distortion), and used a distortion/correction ratio of less than 0.5 as a cutoff for adequate performance. They found that a 2 mm or smaller beam is adequate for spherocylindrical correction (Zernike second order), a 1 mm or smaller beam is adequate for correction of up to fourth order Zernike modes, and a 0.6 mm or smaller beam is adequate for correction of up to sixth order Zernike modes. Guirao et al.
[17] calculated that the success of a customized laser surgery attempting to correct higher order aberrations depends on using a laser beam that is small enough to produce the fine ablation profiles needed to correct higher order aberrations. Simulating more than 100 theoretical customized ablations performed with beams of 0.5, 1.0, 1.5, and 2.0 mm in diameter, they calculated the residual aberrations remaining in the eye and estimated the modulation transfer function (MTF) from the residual aberrations. They found that the laser beam acts like a spatial filter, smoothing the finest features in the ablation profile, and that the quality of the correction declines steadily as the beam size increases. A beam of 2 mm was capable of correcting defocus and astigmatism. Beam diameters of 1 mm or less may effectively correct aberrations up to fifth order. Pettit [24] claimed that the LADARVision system, using a small fixed-diameter excimer laser beam providing a consistent ablation per pulse, is able to ablate complex (higher order) corneal shapes accurately. As demonstrated by Pedder et al. [25] and Jiménez et al. [26], the incorporation of models taking into account the angular dependence of laser-ablation rates as well as the effect of plume absorption can be important in efforts to improve the ablation algorithms used in refractive surgery. Differences in corneal power and corneal asphericity encountered when using this model can significantly affect the visual function of patients after LASIK. The high accuracy of determination of stromal plume absorption coefficients and the incorporation of this information in laser-ablation equations can improve the prediction of postsurgical corneal shape. More accurate values for postsurgical radius and asphericity could be achieved and thereby enhance emmetropization and correction of eye aberrations in refractive surgery. However, in the case of AMARIS, the effect of the ablation plume may not be so significant, since a debris removal system is incorporated. In our study, the range 188-565 mJ/cm² resulted as the optimum fluence level for the first simulation, and the optimum proportion range was 50%-90% high fluence for the second one. With optimal parameters, this corresponds to 2.4 s/D at 6 mm OZ, with a fidelity variance of 53 μm RMS and an average ablation error of 0.5 μm for each location. The proposed dual fluence concept uses a high-fluence "HF" and a low-fluence "LF" level: HF to speed up the treatment (minimised ablation time), LF to ensure the highest accuracy (maximised ablation smoothness). In our simulation results, HF removes 220 pl of volume per pulse (725 nm peak depth per pulse) whereas LF removes 110 pl of volume per pulse (450 nm peak depth per pulse). The amount of treatment that will receive HF pulses is optimised to keep the overall quality of the ablation as good as possible (dynamically adjusted to speed up the treatment while maintaining the highest accuracy). That means that a 1 or 2 D correction will be performed only with LF, but a higher-diopter treatment will be performed up to 95% with HF. Typically, about 80% of the ablation will be performed at HF.
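To make the time versus accuracy trade-off of the dual-fluence concept concrete, the following is a minimal sketch, not the shot-file generator of the platform itself; the per-pulse spot volumes (220 pl for HF, 110 pl for LF) are the figures quoted above, whereas the repetition rate and the example ablation volume are assumed placeholder values.

```python
def dual_fluence_pulse_budget(total_volume_pl, hf_fraction,
                              hf_pulse_pl=220.0, lf_pulse_pl=110.0,
                              repetition_rate_hz=500.0):
    """Estimate pulse counts and ablation time for a dual-fluence treatment.

    total_volume_pl    : total tissue volume to ablate (picolitres)
    hf_fraction        : fraction of the volume removed with high-fluence pulses
    hf_pulse_pl        : volume removed per HF pulse (pl, from the text above)
    lf_pulse_pl        : volume removed per LF pulse (pl, from the text above)
    repetition_rate_hz : laser repetition rate (assumed placeholder value)
    """
    hf_pulses = (total_volume_pl * hf_fraction) / hf_pulse_pl
    lf_pulses = (total_volume_pl * (1.0 - hf_fraction)) / lf_pulse_pl
    time_s = (hf_pulses + lf_pulses) / repetition_rate_hz
    return round(hf_pulses), round(lf_pulses), time_s

# Example: a hypothetical 2,500,000 pl ablation volume, 80% performed at HF.
hf, lf, t = dual_fluence_pulse_budget(2.5e6, 0.80)
print(hf, lf, f"{t:.1f} s")   # fewer HF pulses shorten the treatment
```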
The system uses the "automatic fluence level adjustment" procedure for optimal ablation control. Depending on the planned refractive correction, about 80 percent of the corneal ablation is performed with a high fluence level, speeding up the treatment. Fine correction is performed with a low fluence level, improving the resolution. The advantage is that the laser treatment is significantly shortened, especially when higher refractive corrections are involved, without compromising on precision and safety. The analysis of clinical results specifically addressing customized treatments will show whether there are corneal discrepancies between real and expected shapes. Clinical outcomes published up to now show consistent results [27][28][29][30]. Ablation of aspherical and customised shapes based upon Zernike polynomials up to the 8th order seems accurate using the dual fluence concept implemented in the AMARIS platform. In summary, this study demonstrated that it is possible to develop new algorithms and ablation strategies for efficiently performing laser corneal refractive surgery in a customised form. The availability of such profiles, potentially maximising visual performance without increasing the factors of risk, would be of great value for the refractive surgery community and ultimately for the health and safety of the patients. Further clinical evaluations on human eyes are needed to confirm the preliminary simulated results presented herein. Figure 1: Beam profiles for different beam geometries. Gaussian profile in blue, supergaussian profile (N = 2) in pink, Flat-Top profile in yellow. Figure 2: Spot profiles for different beam geometries. Parabolic spot profile (from Gaussian beams) in blue, quartic spot profile (from supergaussian (N = 2) beams) in pink, Flat-Top spot profile (from Flat-Top beams) in yellow. Figure 3: Spot profiles for different radiant exposures. Quartic spot profiles (from supergaussian (N = 2) beams) for a peak radiant exposure of 150 mJ/cm² in blue and for a peak radiant exposure of 300 mJ/cm² in pink.
3,915.8
2010-09-15T00:00:00.000
[ "Physics", "Medicine" ]
The enone motif of (+)-grandifloracin is not essential for 'anti-austerity' antiproliferative activity. We report the synthesis and biological evaluation of three analogues of the natural product (+)-grandifloracin (+)-1. All three analogues exhibit enhanced antiproliferative activity against PANC-1 and HT-29 cells compared to the natural product. The retention of activity in an analogue lacking the enone functional group, 9, implies this structural element is not an essential part of the (+)-grandifloracin pharmacophore. Pancreatic cancer has one of the lowest 5-year survival rates of any cancer, at just 3-5%; in contrast to many other cancers, this figure has not improved in the last 40 years. 1 The reasons for this poor survival rate are numerous. It is one of the most aggressive of all human malignancies and as it is often asymptomatic in its early stages, the vast majority of patients already have metastatic tumours upon presentation. 1 Therefore, surgical resection (the only effective treatment modality) is not appropriate in most cases; those patients who do undergo surgery have a 5-year survival rate which is improved only to 20%. 1 Furthermore, there is a lack of effective chemotherapies for pancreatic cancer, with anti-cancer agents that are effective against other tumour types having negligible effect. 2 Gemcitabine is typically administered in palliative chemotherapy for pancreatic cancer, but the survival benefit it imparts is marginal. 2,3 On the basis of the above, it can be stated that an effective pancreatic cancer therapy constitutes a pressing unmet medical need. Accordingly, much effort is currently being directed towards the development of second-line therapies, adjuvant therapies and combination therapies. Recently, the combination of gemcitabine and erlotinib has been shown to lead to an enhanced 1-year survival rate of 23% as opposed to 17% for gemcitabine alone. 4 Nevertheless, it is clear that a major breakthrough in the treatment of pancreatic cancer will likely require the use of wholly new therapeutic strategies exploiting emerging targets. 5 In this context, the 'anti-austerity' strategy first described by Esumi and co-workers 6 in 2000 is particularly promising. Pancreatic tumours are generally hypovascular, with the consequence that the tumour microenvironment is hypoxic and comparatively nutrient-deprived in comparison with normal tissues. Despite this, pancreatic cancer cells are nevertheless able to proliferate rapidly under these austere conditions. Their ability to tolerate nutrient deprivation is far greater than that of normal tissues and indeed of other cancer cell lines. 6 An anti-austerity agent is defined as a drug that is able to remove the ability of cancer cells to survive under conditions of nutrient starvation, whilst cells with adequate nutrition remain unaffected. 6 Such a drug would represent a novel means of selectively targeting pancreatic tumours in vivo.
It should be noted that such behaviour would be in contrast to that of most chemotherapeutic agents, whose efficacy is typically reduced under conditions of nutrient deprivation. 7 Esumi's initial study identified two agents, troglitazone and LY294002, that possess such anti-austerity activity in PANC-1 cells. In the subsequent period, many more anti-austerity agents have been identified and the field has recently been reviewed. 8 Most of the anti-austerity agents identified to date are natural products, due to screening campaigns on plant extracts enabled by the assay reported by Esumi. To date, three agents identified by this method have been evaluated in vivo as well as in vitro: the known anthelminthic pyrvinium pamoate 9 and the natural products kigamicin D 7,10 and (−)-arctigenin; 11 all were found to suppress tumour growth in mouse models. The full details of the mode(s) of action of these anti-austerity agents have not yet been elucidated, but it has recently been disclosed that pyrvinium pamoate inhibits the NADH-fumarate reductase system. 12 This is a mitochondrial energy-generating system that shows increased activity in PANC-1 cells cultured under austere conditions and is also employed by parasitic helminths for survival in the hypoxic conditions of their hosts' intestines. 13 The therapeutic promise of anti-austerity agents has attracted the attention of synthetic chemists, with several reports of syntheses of these agents and analogues. Several total syntheses of the anti-austerity agent (+)-angelmarin 14 have been reported (both in enantiopure form 15,16 and as the racemate 17,18 ) and Coster has reported the synthesis of a library of angelmarin analogues, one of which exhibited enhanced potency with respect to the natural product. 19 Additionally, Carrico-Moniz has reported a novel geranylgeranylated hydroxycoumarin having some structural homology with angelmarin, which displays anti-austerity activity against PANC-1 cells also. 20 No total syntheses of the kigamicins have been disclosed to date, but Whatmore, Shipman and co-workers have synthesised and evaluated truncated analogues in an attempt to establish the kigamicin pharmacophore; they report that 7-phenyltetrahydroxanthone displays anti-austerity activity, being 100-fold less potent than kigamicin C. 21 Elsewhere, several total syntheses have been reported of (−)-arctigenin [22][23][24][25] (and also of (±)-arctigenin 26 and (+)-arctigenin, 27 which is also a natural product, 28 although it has only very weak anti-austerity activity 29,30 ). It should be noted that in fact (−)-arctigenin is readily accessible from natural sources. 31,32 Significantly, a recent report from Toyooka, Tezuka and co-workers 29 describes two analogues with enhanced potency compared to (−)-arctigenin. There has been a high and sustained level of interest in anti-austerity agents in recent years; a key report from Awale et al. 33 disclosed that (+)-grandifloracin, (+)-1, isolated from Uvaria dac, is a potent anti-austerity agent in four pancreatic cancer cell lines: PANC-1 (PC50, 14.5 μM), PSN-1 (PC50, 32.6 μM), MIA PaCa-2 (PC50, 17.5 μM), and KLM-1 (PC50, 32.7 μM). Awale's report also represented the first time that the (+)-enantiomer of 1 had been isolated from nature (the antipodal (−)-1 had been isolated previously from several other species of the same genus, 34-37 but its antiproliferative activity has not been evaluated).
Most recently, a 2014 report from Awale details studies on the mode of action of (+)-grandifloracin, which reveal that it induces the autophagic cell death of PANC-1 cells under nutrient deprivation. 38 Furthermore, (+)-grandifloracin was found to inhibit strongly both the phosphorylation of Akt at Ser473 and the phosphorylation of mTOR at Ser2448. 38 In most pancreatic cancer cell lines the serine/threonine kinase Akt/mTOR pathway is constitutively activated and, under conditions of nutrient deprivation, Akt has been shown to be overexpressed. In the first instance we targeted the ester motifs of (+)-grandifloracin as sites for diversification. In our original synthesis of (+)-1, 39 the benzoate group was introduced selectively on the primary alcohol in triol 4, which was accessed in three steps from 2 (Scheme 2). In the present case, treatment of 4 with other aroyl chlorides gave para-substituted benzoates 5a and 5b in yields comparable to that obtained with benzoyl chloride. It was found that pre-mixing the aroyl chloride and 2,4,6-collidine minimised competing overacylation at the secondary alcohol. Esters 5a and 5b were characterised by X-ray crystallography (Figs. 1 and 2). The secondary alcohols in esters 5a and 5b were oxidised to ketones 6a and 6b with manganese dioxide. Cyclohexadienones are known to undergo spontaneous Diels-Alder dimerisation, but when complexed as η4 ligands to iron(0), this process is suppressed and 6a/6b were found to be stable and characterisable. Upon treatment with cerium ammonium nitrate, 6a and 6b underwent smooth decomplexation to free cyclohexadienones 7a and 7b. These were not isolated, but instead underwent spontaneous dimerisation to give grandifloracin analogues 8a and 8b (Scheme 3). 51 A second approach to analogue preparation was undertaken, this time effecting semisynthetic functional group deletion by means of reduction of (+)-1 to give tetrahydrograndifloracin 9 (Scheme 4). We envisaged that this analogue would provide a means of assessing the importance or otherwise of the (nonaromatic) unsaturation in (+)-1 in terms of its anti-austerity activity. With three novel analogues of (+)-grandifloracin in hand, we undertook the evaluation of their anti-austerity properties in a variety of cell lines. In the first instance, we determined the antiproliferative effects of (+)-1, 8a, 8b and 9 in the PANC-1 pancreatic cancer cell line. Each agent was evaluated both under comparatively nutrient-rich (10% fetal bovine serum) and nutrient-deprived (0.5% fetal bovine serum) culture conditions. Cells were exposed to the test agent for 72 h and gemcitabine and 5-fluorouracil were employed as positive controls (see Supplementary information). The cell culture conditions were not identical to those employed by Awale et al. 33 and as such the data presented here are not directly comparable with those in the previous Letter. Rather, we adopted conditions of less extreme nutrient deprivation (see Supplementary information) analogous to those that are likely present in the actual tumour microenvironment, fully expecting that prospective anti-austerity agents might be less potent under such conditions. The calculated IC50 values are shown in Table 1. The data are noteworthy in several respects. Contrary to our expectations, (+)-1 proved inactive up to 500 μM in 0.5% serum, yet exhibited an IC50 of 123 μM in 10% serum.
The anomalous nat- ure of this result is underlined by the fact that analogues varying only in the nature of the ester side chains, 8a and 8b, are active. Indeed, they are more potent than (+)-1 under both cell culture conditions. It is possible, however, that (+)-1, 8a and 8b may in fact be prodrugs, all affording the same tetraol upon the action of intracellular esterases; the enhanced activity of 8a and 8b with respect to (+)-1 may be due to differences in lipophilicity and cell permeability. Arguably the most significant result presented in Table 1 is the increase in antiproliferative activity upon hydrogenation of (+)-1 to 9. Inspection of the structure of (+)-1 had initially led us to speculate that the enone motif might serve as a Michael acceptor and that (+)-1 (or its metabolite) might exert its effects by covalent modification of its target(s) (Scheme 5). 52 That tetrahydrograndifloracin 9 retains its potency in the absence of the enone leads us to conclude that in fact neither the enone motif, nor the electron-rich alkene, are required for antiproliferative activity in this cell line. Both (+)-1 and 9 were modelled using Maestro 53 and it was determined that hydrogenation of (+)-1 to 9 induces only very subtle conformational changes. The rigid nature of the bicyclo[2.2.2]octene skeleton in the western hemisphere of (+)-1 means it has very limited scope to undergo conformational change upon reduction to 9. The eastern hemisphere of (+)-1 is less constrained, but even here the only appreciable change upon reduction to 9 is in the relative positions of the two carbons that have rehybridised from sp 2 to sp 3 ; more broadly other functional groups do not move significantly. Thus, all functionality which might constitute part of the grandifloracin pharmacophore is highly conserved between the two structures. For example, in (+)-1, the distance between two representative hydrogen bond acceptors, the two ketone oxygens (labelled as A7 and A8 in Fig. 3b), is 7.12 Å, and this distance is unchanged in 9. Similarly, the distance between the two hydrogen bond donors, the tertiary hydroxyl oxygens (labelled D9 and D10 in Fig. 3b) is 4.50 Å in (+)-1 and 4.54 Å in 9. We next sought to evaluate the effects of (+)-1, 8a, 8b and 9 on other (non-pancreatic) tumour cell lines. IC 50 values obtained for these agents in HT-29 human colon cancer cells are shown in Table 2. Three trends are evident in the data: (a) all three novel analogues are more active than (+)-1, under both cell culture conditions (as was the case for PANC-1 also). (b) All three novel analogues are more active than in PANC-1 cells. (c) All three novel analogues exhibited lower IC 50 values under the comparatively nutrient deprived conditions (0.5% serum), that is, an anti-austerity effect. To our knowledge, this is the first time such an antiausterity effect has been demonstrated for HT-29 cells. In summary, we have prepared three novel analogues of (+)-grandifloracin 1, all of which show enhanced antiproliferative activity towards PANC-1 and HT-29 cells with respect to the parent compound. We have also determined antiproliferative activities in other cell lines. Current work in our laboratory is focused on the preparation and evaluation of further analogues of (+)-1; results will be reported in due course.
2,928.8
2014-07-01T00:00:00.000
[ "Biology", "Chemistry" ]
Discriminating between Similar Languages using Weighted Subword Features The present contribution revolves around a contrastive subword n-gram model which has been tested in the Discriminating between Similar Languages shared task. I present and discuss the method used in this 14-way language identification task comprising varieties of 6 main language groups. It features the following characteristics: (1) the preprocessing and conversion of a collection of documents to sparse features; (2) weighted character n-gram profiles; (3) a multinomial Bayesian classifier. Meaningful bag-of-n-grams features can be used as a system in a straightforward way; my approach outperforms most of the systems used in the DSL shared task (3rd rank). Introduction Language identification is the task of predicting the language(s) that a given document is written in. It can be seen as a text categorization task in which documents are assigned to pre-existing categories. This research field found renewed interest in the 1990s due to advances in statistical approaches, and it has been active ever since, particularly since the methods developed have also been deemed relevant for text categorization, native language identification, authorship attribution, text-based geolocation, and dialectal studies (Lui and Cook, 2013). As of 2014 and the first Discriminating between Similar Languages (DSL) shared task, a unified dataset comprising news texts of closely related language varieties has been used to test and benchmark systems. The documents to be classified are quite short and may even be difficult to distinguish for human annotators, thus adding to the difficulty and the interest of the task. A second shared task took place in 2015 (Zampieri et al., 2015). An analysis of recent developments can be found in Goutte et al. (2016) as well as in the report on the third shared task. Not all varieties are to be considered equally since differences may stem from extra-linguistic factors. It is for instance assumed that Malay and Indonesian derive from a millennium-old lingua franca, so that shorter texts have been considered to be a problem for language identification (Bali, 2006). Besides, the Bosnian/Serbian language pair seems to be difficult to tell apart, whereas Croatian distinguishes itself from the two other varieties mostly because of political motives (Ljubešić et al., 2007; Tiedemann and Ljubešić, 2012). The remainder of this paper is organized as follows: in section 2 the method is presented; it is then evaluated and discussed in section 3. Preprocessing Preliminary tests have shown that adding a custom linguistic preprocessing step could slightly improve the results. As such, instances are tokenized using the SoMaJo tokenizer (Proisl and Uhrig, 2016), which achieves state-of-the-art accuracies on both web and CMC data for German. As it is rule-based, it is deemed efficient enough for the languages of the shared task. No stop words are used since relevant cues are expected to be found automatically, as explained below. Additionally, the text is converted to lowercase as it led to better results during the development phase on 2016 data. Bag of n-grams approach Statistical indicators such as character- and token-based language models have proven to be efficient on short text samples, especially character n-gram frequency profiles from length 1 to 5, whose interest is (inter alia) to perform indirect word stemming (Cavnar and Trenkle, 1994).
In the context of the shared task, a simple approach using n-gram features and discriminative classification achieved competitive results (Purver, 2014). Although features relying on the output of instruments may yield useful information such as POS features (Zampieri et al., 2013), the diversity of the languages to classify as well as the prevalence of statistical methods call for low-resource methods that can be trained and applied easily. In view of this, I document work on a refined version of the Bayesline which has been referenced in the last shared task (Barbaresi, 2016a) and which has now been used in official competition. After looking for linguistically relevant subword methods to overcome data sparsity (Barbaresi, 2016b), it became clear that taking frequency effects into consideration is paramount. As a consequence, the present method is grounded in a bag-of-n-grams approach. It first proceeds by constructing a dictionary representation which is used to map words to indices. After turning the language samples into numerical feature vectors (a process also known as vectorization), the documents can be treated as a sparse matrix (one row per document, one column per n-gram). Higher-order n-grams mentioned in the development tests below use feature hashing, also known as the "hashing trick" (Weinberger et al., 2009), where words are directly mapped to indices with a hashing function, thus sparing memory. The upper bound on the number of features has been fixed to 2^24 in the experiments below. Term-weighting The next step resides in counting and normalizing, which implies weighting with diminishing importance tokens that occur in the majority of samples. The concept of term-weighting originates from the field of information retrieval (Luhn, 1957; Sparck Jones, 1972). The whole operation is performed using existing implementations provided by the scikit-learn toolkit (Pedregosa et al., 2011), which features an adapted version of the tfidf (term-frequency/inverse document-frequency) term-weighting formula. 1 Smooth idf weights are obtained by systematically adding one to document frequencies, as if an extra document was seen containing every term in the collection exactly once, which prevents zero divisions. Naive Bayes classifier The classifier used entails a conditional probability model where events represent the occurrence of an n-gram in a single document. In this context, a multinomial Bayesian classifier assigns a probability to each target language during the test phase. It has been shown that Naive Bayes classifiers are not merely baselines for text classification tasks. They can compete with state-of-the-art classification algorithms such as support vector machines, especially when using appropriate preprocessing concerning the distribution of event frequencies (Rennie et al., 2003); additionally they are robust enough for the task at hand, as their decisions may be correct even if their probability estimates are inaccurate (Rish, 2001). "Bayesline" formula The Bayesline formula used in the shared task builds on existing code 2 and takes advantage of a comparable feature extraction technique and of a similar Bayesian classifier. The improvements described here concern the preprocessing phase, the vector representation, and the parameters of classification. Character n-grams from length 2 to 7 are taken into account.
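The following is a minimal sketch of such a pipeline with scikit-learn, assembled from the parameters reported in this paper (character n-grams of length 2 to 7, smoothed tf-idf weighting, multinomial Naive Bayes); the toy training sentences and any unspecified hyperparameters are placeholders, not part of the original system.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy training samples (placeholders): (text, language-variety label).
train_texts = ["ovo je primjer rečenice", "ovo je primer rečenice"]
train_labels = ["hr", "sr"]

# Weighted character n-gram profiles (lengths 2 to 7) fed to a multinomial NB.
bayesline = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 7),
                    strip_accents=None, lowercase=True),
    MultinomialNB(),
)
bayesline.fit(train_texts, train_labels)
print(bayesline.predict(["ovo je još jedan primer"]))
```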
Footnotes: 1 http://scikit-learn.org/stable/modules/feature_extraction.html; 2 https://github.com/alvations/bayesline; 3 TfidfVectorizer(analyzer='char', ngram_range=(2,7), strip_accents=None, lowercase=True), followed by MultinomialNB(…). Evaluation Data from the third edition In order to justify the choice of the formula, experiments have been conducted on data from the third edition of the DSL shared task; training and development sets have been combined as training data, and gold data used for evaluation. The method described above has been tested with several n-gram ranges; the results are summarized in Table 1. The best combinations were found with a minimum n-gram length of 1 to 3 and a maximum n-gram length of 6 to 8. Accordingly, an aurea mediocritas from 2 to 7 has been chosen. Table 2 shows the extraction, training, and testing times for n-gram lengths with a minimum of 2. One can conclude that the method is computationally efficient on the shared task data. Execution with feature hashing is necessary for higher-order n-grams due to memory constraints; it effectively improves scalability, but it also seems to be a trade-off between computational efficiency and accuracy, probably due to the upper bound on used features and/or hash collisions. Table 3 documents the efficiency and accuracy of several algorithms on the classification task, without extensive parameter selection. The Ridge (Rifkin and Lippert, 2007) and Naive Bayes classifiers would have outperformed the best submission of the 2016 competition (0.894) with scores of 0.895 and 0.902, respectively, while the Passive-Aggressive (Crammer et al., 2006) and Linear Support Vector (Fan et al., 2008) classifiers would have been ranked second with a score of 0.892. It is noteworthy that the Naive Bayes classifier would still have performed best without taking the development data into consideration (accuracy of 0.898). Data from the fourth edition As expected, the method performed well on the fourth shared task, as it reached 3rd place out of 11 teams (with an accuracy of 0.925 and a weighted F1 of 0.925). In terms of statistical significance, it was ranked first (among others) by the organizers. The official baseline/Bayesline used a comparable algorithm with lower results (accuracy and weighted F1 of 0.889). The confusion matrix in Figure 1 details the results. Three-way classifications between the variants of Spanish and within the Bosnian-Croatian-Serbian complex still leave room for improvement, although Peruvian Spanish does not seem to be as noisy as the Mexican Spanish data from the last edition. The F-score on variants of Persian is fairly high (0.960), which proves that the method can be applied to a wide range of alphabets. The same method has been tested without preprocessing on new data consisting of the identification of Swiss German dialects (GDI shared task). The low result (second to last, with an accuracy of 0.627 and a weighted F1 of 0.606) can be explained by the lack of adaptation, most notably to the presence of much shorter instances. The classification of the Lucerne variant is particularly problematic; it calls for tailored solutions. Conclusion The present contribution revolves around a contrastive subword n-gram model which has been tested in the Discriminating between Similar Languages shared task. It features the following characteristics: (1) the preprocessing and conversion of a collection of documents to sparse features; (2) weighted character n-gram profiles; (3) a multinomial Bayesian classifier, hence the name "Bayesline".
Meaningful bag-of-n-grams features can be used as a system in a straightforward way. In fact my method outperforms most of the systems used in the DSL shared task. Thus, I propose a new baseline and make the necessary components available under an open source licence. 4 The Bayesline efficiency, as well as the difficulty of reaching higher scores in open training, could be explained by artificial regularities in the test data. For instance, the results for the Dari/Iranian Persian and Malay/Indonesian pairs are striking; these clear distinctions do not reflect the known commonalities between these language varieties. This could be an artifact of the data, which feature standard language of a different nature than the continuum "on the field", that is, between two countries as well as within a single country. The conflict between in-vitro and real-world language identification has already been emphasized in the past (Baldwin and Lui, 2010); it calls for the inclusion of web texts (Barbaresi, 2016c) into the existing task reference. Footnote: 4 https://github.com/adbar/vardial-experiments.
2,430.4
2017-01-01T00:00:00.000
[ "Computer Science", "Linguistics" ]
An IoT-Based Life Cycle Assessment Platform of Wind Turbines Life cycle assessment (LCA) is conducive to the change in the wind power industry management model and is beneficial to the green design of products. Nowadays, none of the existing LCA systems is specific to wind turbines, and the concept of the Internet of Things (IoT) in LCA is quite a new idea. In this paper, a four-layer LCA platform of wind turbines based on IoT architecture is designed and discussed. In the data acquisition layer, intelligent sensing of wind turbines can be achieved and their status and location can be monitored. In the data transmission layer, the LCA platform can be effectively integrated with enterprise information systems through the object name service (ONS) and directory service (DS). In the platform layer, a model based on IMPACT 2002+ is developed, and four management modules are designed. In the application layer, different from other systems, energy payback time (EPBT) is selected as an important evaluation index for wind turbines. Compared with the existing LCA systems, the proposed system is specifically for wind turbines and can collect data in real time, leading to improved accuracy and response time. Introduction Wind energy is an important type of clean and renewable energy with abundant reserves. As a green, environmentally friendly power generation technology, wind turbines can effectively reduce environmental pollution. In recent years, due to the gradual maturity of wind power technology and the continuous decline in the cost of wind power generation, wind turbines have been widely used around the world. According to the "BP Statistical Review of World Energy June 2019", the annual growth rate of global wind farms in 2018 was 12.6%, and the power generation capacity was 1170.0 TWh. Additionally, the rated power of wind turbines has developed from dozens of kilowatts to megawatts [1]. Large-scale wind turbines are considered to be the most affordable among all renewable energy sources [2]. Nowadays, most of the research focuses on the financial aspects of wind farms or the optimization of the wind power generation process [3]. Upstream of wind power generation, high energy consumption and large environmental emissions are produced due to the usage of fossil fuels and building materials and the manufacturing of other power generation equipment. Therefore, it is important to analyze the environmental benefits of wind turbines by comparing the energy they produce with their energy consumption and pollution to the environment, to ensure that they provide a net environmental benefit. As a technology to evaluate the environmental factors related to products and their potential impacts, life cycle assessment (LCA) has been developed as an important tool to evaluate the energy and environmental performance of products in their life cycle. The LCA of wind turbines can not only help enterprises carry out green design or improvement of the key energy-consuming parts but also help the wind power industry change from the terminal-oriented management mode to the full life cycle management mode. In order to minimize energy consumption and environmental emissions, it is necessary to optimize energy management and carry out green design through accurate LCA of wind turbines. However, existing LCA practice faces the following problems: 1. The existing life cycle assessment process mainly relies on software system databases, which are built from industry-average data, or on manual collection of data from the literature or on site.
The timeliness, dynamics, and accuracy of the evaluation data are far from satisfactory. Therefore, the results of LCA are not accurate enough. 2. The existing LCA software packages are commercial, and they target many kinds of products. As a result, the process is not specific to wind turbines. 3. The existing LCA methods mainly include problem-oriented midpoint methods and damage-oriented endpoint methods, each having its own limitations. 4. The existing LCA systems and enterprise information systems (EIS) are independent of each other and cannot be effectively integrated, and the data of enterprise information systems cannot be effectively utilized. As a new and rapidly developing technology, the Internet of Things (IoT) holds great promise in handling the abovementioned issues. IoT is a pervasive presence around us of a variety of things or objects (such as radio-frequency identification (RFID) tags, sensors, actuators, mobile phones, etc.), which can interact with each other and cooperate with their neighbors through unique addressing schemes [4,5]. IoT is recognized as one of the most promising networking paradigms because it bridges the gap between the cyber and physical worlds [6]. Its characteristics include comprehensive perception, reliable transmission, and intelligent data processing. Because it provides interoperability, compatibility, and reliability worldwide, standardization is also one of the important factors for the widespread application of the Internet of Things [7]. Although IoT has been successfully applied in many fields, its application in LCA is still in its infancy. Therefore, it is necessary to study systematically and thoroughly the potential applications of IoT in the LCA of wind turbines, which is of great significance. In order to solve the above problems, this paper proposes a new LCA platform for wind turbines based on IoT. The main innovations are as follows: 1. IoT technology is applied to achieve the real-time, intelligent collection of energy consumption and environmental impact data over the entire life cycle of wind turbines. Therefore, the databases of the platform can be updated and expanded, which ensures the objectivity and accuracy of the assessment results. Additionally, the status and location of wind turbines and their components can be monitored with the support of IoT technology. 2. A novel LCA architecture based on IoT is proposed, which achieves the energy consumption assessment and environmental impact assessment of wind turbines from parts to products, including the stages of design, raw material acquisition, production and manufacturing, transportation and installation, operation and maintenance, and recovery and disposal. 3. An LCA model for wind turbines is proposed based on IMPACT 2002+ [8], which combines the midpoint method with the endpoint method. Compared with other methods, the proposed model reduces the complexity and uncertainty of the evaluation process. 4. Based on the ONS and DS of IoT technology, an effective integration method between the proposed platform and existing EIS is devised, thereby constructing an open and extensible platform for life cycle assessment. The remainder of the paper is organized as follows. Section 2 introduces the background of LCA and presents related work about the LCA of wind turbines. In Section 3, the architecture of the proposed platform based on IoT is described. A prototype system is developed in Section 4.
Additionally, the characteristics of the system are discussed in Section 5. The conclusion and future work are presented in Section 6. Background Life Cycle Assessment (LCA) Development of LCA Life cycle assessment (LCA) has made remarkable progress since the first life-cycle-oriented methods were proposed in the 1960s. Currently, LCA is defined as a tool that assesses the potential environmental impacts and resource consumption in a product's life cycle, i.e., from raw material acquisition, via production and use stages, to waste management [9]. The general methodological framework and standards of LCA are defined by ISO 14040 and ISO 14044 [10,11]. It includes the following four steps: goal definition and scoping, inventory analysis, impact assessment, and improvement analysis, as shown in Figure 1. Currently, the research on LCA mainly focuses on the evaluation of the new energy system, product resource consumption, and environmental emission. Additionally, LCA has been widely used in various fields, including materials, energy sources, transportation and construction, etc. Boyden et al. compared the environmental impact of three management strategies for lithium-ion batteries in Australia (hydro-metallurgy, pyro-metallurgy, and landfill) through LCA [12]. Geng et al. used content analysis to investigate various patterns in existing LCA studies in the building industry [13]. Wulf et al. studied the assessment of the life cycle of hydrogen production from biomass for transportation purposes concerning greenhouse gas emissions, emissions with an acidification potential, and the fossil energy demand [14]. Foelster et al. explored the environmental impacts of a refrigerator recycling system in Brazil and quantified its ecological advantages over primary resource production through LCA [15]. The methods of LCA mainly include problem-oriented midpoint methods and damage-oriented endpoint methods [16].
Additionally, midpoint methods focus on the intuitive interpretation of the impact of products on the environment, while endpoint methods mainly focus on the damage caused by the consequences to human health, the environment, and resources. With the application of LCA in various industries, its methodology system has also developed. At present, the midpoint methods include CML-IA, TRACI 1.0, EDIP 2003, TRACI 2, and so on; the endpoint methods include EPS 2000, Eco-Indicator99, LIME 2.0 (2008), LC-Impact, and so on; and the methods combining both include IMPACT 2002+, ReCiPe 2008, ILCD/PEF/OEF, and IMPACT World+ [17]. The above methods have been widely used, but each has its own limitations. In recent years, in view of the continuous expansion and complexity of evaluation objects, LCA methods have also developed in new forms. Additionally, the combination of LCA methodology with other methods has become a common practice. Suhariyanto et al. proposed a multi-life cycle assessment perspective for assessing the environmental impacts of the multiple life cycle product system [18]. Bai et al. proposed an integrated LCA-based decision-support platform named the HIT.WATER scheme, linking the currently available LCA system with a water quality model, Plackett-Burman design, and conjoint analysis [19]. Yao et al. analyzed waste mobile phone management and recycling by using an integrated method of LCA and system dynamics prediction [20]. Morbidoni et al. developed CAD-integrated LCA tools to support the simplified LCA method, which could be used as eco-design tools [21]. Software of LCA LCA needs a large amount of data as its basis and a variety of models as tools. In order to improve research efficiency, the software developed includes SimaPro, GaBi, LCAiT, eBalance, PEMS, and so on. • SimaPro is developed by the University of Leiden, the Netherlands. It provides abundant databases and a variety of evaluation methods. The database for the manufacturing phase is the most detailed. It is also the most popular LCA software in the world. Based on the system boundary, the main input of the above software includes the consumption of various resources (metal, minerals, water, etc.) and different types of energy (such as electricity, coal, natural gas, gasoline, diesel, etc.). Based on the establishment of the process flow and the database in the software, the energy consumption and environmental emissions (such as wastewater, solid waste, CO2, CO, SO2, NOx, COD, etc.) in each stage can be calculated. Quantification of LCA According to the framework of the ISO 14040 standard, the process of LCA quantification includes the calculation of environmental impact potential, standardization, and weight assessment. The environmental impact potential refers to the sum of environmental emissions or resource consumption in the whole life cycle of a product, which can be expressed as follows: EP(j) = Σ_i EP(j)_i = Σ_i Q(j)_i × EF(j)_i, where EP(j) is the environmental impact potential j, j = {global warming, acidification, eutrophication, ozone depletion, solid waste, hazardous waste, etc.}, EP(j)_i is the contribution of substance i to the environmental impact potential j, Q(j)_i is the emissions of substance i, and EF(j)_i is the equivalence factor of the substance i for the environmental impact potential j. The equivalence factor is usually calculated based on one kind of substance, such as CO2 for global warming and SO2 for acidification. In order to compare and analyze different types of environmental impact, it is necessary to carry out standardization.
The expression is as follows: NEP(j) = EP(j) / ER(j), where NEP(j) is the potential environmental impact and resource consumption after standardization, and ER(j) is the benchmark of environmental impact potential j, which varies with time and region. In order to analyze and evaluate different environmental impact types more reasonably, it is necessary to carry out weight evaluation, which can be expressed as follows: WP(j) = WF(j) × NEP(j), where WP(j) is the potential environmental impact after weighting, and WF(j) is the weight coefficient of environmental impact potential j. For a product composed of multiple components, the environmental impact potential of the product is the sum of the environmental impact potentials of all the components, which is calculated by EP(j) = Σ_x EP_x(j), with EP_x(j) = EP_x^D(j) + EP_x^RA(j) + EP_x^PM(j) + EP_x^TI(j) + EP_x^OM(j) + EP_x^RD(j), where EP(j) refers to the environmental impact potential j of the product, EP_x(j) indicates the environmental impact potential j of the component x, and EP_x^D(j), EP_x^RA(j), EP_x^PM(j), EP_x^TI(j), EP_x^OM(j), and EP_x^RD(j) indicate the environmental impact potential j of the component x in the stages of design, raw material acquisition, production and manufacturing, transportation and installation, operation and maintenance, and recovery and disposal, respectively. LCA of Wind Turbines In recent years, a number of papers related to the life cycle assessment of wind energy have been published. Some of them are as follows. Demir et al. [22] used the LCA method to compare and analyze wind turbines with 50, 80 and 100 m hub heights in Turkey. The results show that high-power wind turbines with high hub heights have a low environmental load and a high energy return rate. Uddin et al. [23] studied wind turbines from the three aspects of energy utilization, emission reduction, and environmental impact through LCA. Yang et al. [24] constructed a hybrid LCA model to facilitate the accounting of the energy consumption and greenhouse gas emissions of the first offshore wind farm in China. Lloberas-Valls et al. [25] illustrated a detailed "cradle-to-gate" life cycle assessment of 15 MW wind turbines by using GaBi 6 commercial software and Ecoinvent 2.2 databases. Gomaa et al. [26] investigated the environmental impact and energy performance of wind farms in the southern region of Jordan using an LCA method. Jesuina et al. [27] attempted to quantify the relative contribution of individual stages toward life cycle impacts by conducting a life cycle assessment with SimaPro and the IMPACT 2002+ method. Moghadam et al. [28] developed an in-depth LCA evaluation of three different drive train choices based on permanent-magnet synchronous generator technology for 10 MW offshore wind turbines. Mendecka et al. [29] proposed simplified LCA models that predict the final results with acceptable uncertainty. Additionally, the obtained simplified LCA models were generalized for different site-specific wind conditions. Martinez et al. [30] created an LCA model of the repowering process of an old wind farm with low-power wind turbines and promoted an analysis from the point of view of the potential environmental impact and benefit of a wind farm repowering process. The organizational life cycle assessment of a service provider for photovoltaic and wind energy projects was carried out in the United Kingdom [31]. The environmental impacts of hydropower generation, nuclear, and wind were analyzed, assessed, and compared in China through a comprehensive life cycle assessment approach [32].
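Returning to the quantification steps described above (characterization, standardization, and weighting), the following is a minimal sketch of how they chain together; the emission inventory, equivalence factors, benchmark, weight coefficient, and stage values in the example are placeholder numbers, not data from this study.

```python
def impact_potential(emissions, equivalence_factors):
    """EP(j) = sum_i Q(j)_i * EF(j)_i for one impact category j."""
    return sum(q * equivalence_factors[s] for s, q in emissions.items())

def normalize(ep, benchmark):
    """NEP(j) = EP(j) / ER(j)."""
    return ep / benchmark

def weight(nep, weight_factor):
    """WP(j) = WF(j) * NEP(j)."""
    return weight_factor * nep

# Placeholder example: global warming potential of one component (kg CO2-eq).
emissions = {"CO2": 1200.0, "CH4": 3.0}     # kg emitted over one stage
ef_gwp = {"CO2": 1.0, "CH4": 28.0}          # illustrative equivalence factors
ep = impact_potential(emissions, ef_gwp)     # 1284.0 kg CO2-eq
wp = weight(normalize(ep, benchmark=8000.0), weight_factor=0.3)
print(ep, wp)

# For a product, EP(j) is summed over components and life-cycle stages.
stage_eps = {"design": 10.0, "raw_material": 800.0, "manufacturing": 300.0,
             "transport_install": 90.0, "operation_maintenance": 60.0,
             "recovery_disposal": 24.0}
print(sum(stage_eps.values()))   # one component's total, summed over stages
```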
Crawford [33] presented the results of a life cycle energy and greenhouse emissions analysis of two wind turbines and considers the effect of wind turbine size on energy yield. IoT in LCA With the rapid development of IoT technology, it has become a promising technology that can make product management more flexible and efficient [34]. According to the survey, it is noticed that the application of IoT has not caught the researchers' attention enough in the field of LCA. Additionally, only a few studies about life cycle assessment based on IoT have been developed. IoT has been employed for collecting real-time data at each stage of the product to achieve the multi-structure as well as multi-stage carbon emission evaluation [35]. By applying LCA and IoT, Du et al. achieved efficient and intelligent management of renewable resources and discussed the environmental effects of various types of renewable resources [36]. Tao et al. [37] investigated the specific application of the Internet of things in each stage of product energy management. An LCA method for energy-saving and emission-reduction based on IoT and bill of material was designed and presented [38]. System Design Before designing the LCA system of wind turbines, it is necessary to carry out goal definition and scoping, because it is directly related to the complexity of LCA evaluation of wind turbine and the accuracy of evaluation results. Goal Definition For different users, the goals of LCA in this platform are divided into the following three types. (1) For enterprises, the improvement schemes are proposed for the stages or parts with large energy consumption/environmental impact, and improvement measures of green design of wind turbine will be provided. (2) For the government, the reference standard of the research object will be established through the LCA results of each stage, and the energy consumption and environmental emission standards of each stage will be established. (3) For the third-party testing and certification department, it is to carry out green certification for the wind turbine by comparing the evaluation results with the defined green index value of the wind turbine. Research Objects The related research objects are mainly divided into three types: wind farm, wind turbine, and its main components. Wind farm refers to the place where wind energy is captured, converted into electric energy, and sent to the power grid through transmission lines, including wind turbines, power collection lines, substations, and roads. A wind turbine is mainly composed of a rotor, an engine room, a tower, and a foundation. For the main components, the rotor is mainly composed of blade, hub, nose cone, and bolt; the engine room is usually integrated by engine room frame, which covers generator, gearbox, transformer, electrical, and mechanical equipment; tower supports engine room and rotor, mainly made of steel, aluminum, copper, and plastic; foundation provides ground support for all the above parts, which is composed of concrete, steel, and iron. Usually, for enterprises, the research object is mainly wind turbine or its main parts; for the third-party testing and certification department, the research object is wind farm or wind turbine or its main components; for the government, the research object is usually wind farm or wind turbine. The complexity of LCA decreases in the order of wind farm, wind turbine, and its main components. System Boundary The system boundary is determined according to the research objectives. 
The wider the research scope is, the greater the relative workload will be. Nowadays, the system boundary of LCA pays more attention to evaluation efficiency instead of breadth and depth; that is, after comprehensively considering the accuracy of the evaluation results and the complexity of the evaluation work, the stages not related to the research objectives are excluded from the system boundary, and the most efficient system boundary is determined according to the needs of the research objectives. For example, when enterprises intend to improve the green design of wind turbines, they only need to study the stages of raw material acquisition, manufacturing, recycling, and scrapping, while the transportation, on-site installation, operation, and maintenance stages, which have little impact on the research objectives, can be ignored. For different users, the objectives and scope of LCA of wind turbines are shown in Table 1. The evaluation of the wind turbine is the most common in practical applications. Therefore, this system is mainly designed for wind turbines. Taking the wind turbine as an example, the system boundary of LCA for wind turbines is shown in Figure 2, which includes the design stage, raw material acquisition stage, production and manufacturing stage, transportation and installation stage, operation and maintenance stage, and recovery and disposal stage. Figure 2. The boundary of the LCA system for the wind turbine. Architecture of the LCA Platform Through the analysis of the objectives and scope of LCA of wind turbines, the architecture of the LCA platform based on IoT technology is constructed. The data acquisition layer, data transmission layer, platform layer, and application layer are included. The architecture of the LCA platform for wind turbines is shown in Figure 3. Figure 3. The architecture of the IoT-based LCA platform of wind turbines. Each layer of the platform is introduced as follows. Data Acquisition Layer In the data acquisition layer, intelligent identification, positioning, tracking, monitoring, and management can be achieved with the help of RFID devices, barcode identification equipment, global positioning systems, laser scanners, and other information sensing equipment. Consequently, all the input and output data involved in the life cycle of wind turbines will be obtained and preprocessed. There are three ways of collecting data: (1) By using RFID, wireless, mobile, and sensor equipment, real-time and intelligent collection of energy consumption and environmental emission data can be achieved from different material units and manufacturing processes of wind turbines.
(2) The Bill of Materials can be acquired from the information management systems of enterprises. (3) Data that cannot be collected through sensors or obtained from EIS, such as fundamental enterprise information and some parameters and specifications of wind turbines, are entered manually. In the process of manual entry, it is necessary to ensure the authenticity and validity of the data and to make sure that the data can be traced in later stages. The data acquisition methods in each stage of the wind turbine life cycle are as follows.

Design Stage

In the design stage, the wind turbine and its main components are designed, their three-dimensional models are established, and the structure is tested and analyzed. Because this stage mainly uses design tools such as computers and other electronic equipment, energy consumption and environmental emission data are not generated directly. Therefore, the energy consumption is measured by smart meters, and other data can be obtained through EIS. The design schemes have a great impact on the final results.

Raw Material Acquisition Stage

Raw materials include parts, components, semi-finished products, etc. Raw materials can be obtained in two ways: produced in the factory or purchased from other factories. For the raw materials produced in the factory, which usually include the internal and external blades and baffles of wind turbines, relief valves, brakes, transmission shafts, rods, bearings, clutches, flying pendulums, covers, and so on, the relevant data can be obtained by integrating with EIS. For the purchased raw materials, which usually include tire couplings, permanent magnet synchronous generators, standard parts, etc., the relevant data are collected by reading the barcodes or RFID tags attached to them. The elementary raw materials in the wind turbine include concrete, steel, iron, copper, glass fiber, epoxy resin, etc., as shown in Table 2. The relevant data can be collected from the related databases.

Production and Manufacturing Stage

In the production and manufacturing stage, the energy consumption data for water, electricity, and gas are collected by embedded systems and sensing equipment installed on the manufacturing equipment, and the environmental emission data can be obtained from intelligent sensors and environmental monitoring systems. At the same time, in order to improve the reliability of the data, the measurement calibration certificates of the relevant monitoring equipment should be uploaded simultaneously to the enterprise information database in the platform layer. According to the material composition of each component and the energy consumption and environmental emissions of the main raw materials, the total energy consumption and environmental emissions of the wind turbine in this stage can be calculated.

Transportation and Installation Stage

In the transportation stage, the fuel consumption, the weight of the products transported, and the transportation distance are collected by sensors on the vehicles, and the total energy consumption and pollutant emissions of the transportation process can be calculated from the fuel consumption per kilometre. In the installation stage, which includes foundation construction, tower hoisting, and engine room installation, the data related to mechanical equipment can be obtained by sensors, and other data can be entered manually.
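As a minimal sketch of the transportation-stage calculation described above, the Python snippet below totals energy and CO2 from per-trip sensor readings. The record fields, the diesel heating value, and the CO2 emission factor are illustrative assumptions, not the platform's actual data schema or factors.

# Minimal sketch of the transportation-stage energy/emission calculation.
# Field names and conversion factors are illustrative assumptions.

DIESEL_CO2_KG_PER_L = 2.68   # assumed illustrative CO2 factor per litre of diesel
DIESEL_MJ_PER_L = 35.8       # assumed lower heating value of diesel

def transport_stage_totals(trips):
    """trips: list of dicts with 'distance_km' and 'fuel_l_per_km'
    as reported by on-vehicle sensors (hypothetical field names)."""
    fuel_l = sum(t["distance_km"] * t["fuel_l_per_km"] for t in trips)
    return {
        "fuel_l": fuel_l,
        "energy_MJ": fuel_l * DIESEL_MJ_PER_L,
        "co2_kg": fuel_l * DIESEL_CO2_KG_PER_L,
    }

if __name__ == "__main__":
    trips = [{"distance_km": 320.0, "fuel_l_per_km": 0.38},
             {"distance_km": 75.0, "fuel_l_per_km": 0.41}]
    print(transport_stage_totals(trips))

In the platform, the same aggregation would simply be fed by the sensor streams instead of hard-coded trip records.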
Operation and Maintenance Stage

In the operation stage, environmental information (such as air temperature, air pressure, and wind speed) and operation data (such as rotor speed and power) can be collected through sensors. In addition, equipment calibration records (for the power transmitter, current transformer, anemometer, temperature and humidity meters, etc.) need to be uploaded in time. Based on IoT technology, the wind turbine status can be continuously monitored and detection can be automated. In the maintenance stage, energy consumption is mainly generated by the use of various maintenance tools or by component replacement, and the corresponding data can be collected by reading RFID tags or by manual entry.

Recovery and Disposal Stage

This stage covers wind turbine disassembly, renovation, recycling, reuse, reassembly, or disposal. Recovery methods include direct recovery and recovery after solvent extraction. Scrapping methods for the wind turbine include open stacking, landfill, and incineration. Eighty-five to ninety percent of the foundation, tower, and engine room components of the wind turbine can be recycled. However, it is difficult to recover the blades, because the thermosetting materials cannot be degraded and the cost of recovering the fibre materials is very high [39]. Nowadays, there are three methods for blade recycling: (1) the blades are disassembled and the materials are reused for municipal construction and other fields; (2) the blades are broken up and added to construction materials after recycling; (3) the chemical materials are recycled and reused after decomposition of the blades. The data in this stage are mainly collected by environmental monitoring sensors, and can also be obtained from LCA basic databases, such as an energy database and a material database.

Data Transmission Layer

The data transmission layer, which can also be called the network layer, transfers the data captured in the data acquisition layer to the servers in the platform layer through networking technologies including WSN, WiFi, Ethernet, and so on. In addition, this paper proposes to achieve effective integration with EIS through the object name service (ONS) and directory service (DS). As infrastructure of the network layer, the ONS and DS provide external services. The ONS is defined by GS1 EPCglobal and is a core technology of the IoT [40]. In 2013, GS1 published the ONS 2.0 standard, which enables the use of GS1 Identification Keys. Furthermore, ONS 2.0 introduces the Federated Model to solve the problem of cooperating name services [41]. The ONS resolves naming requests for all kinds of IoT identifiers and can map a code identifier to its resource entry. The ONS can be called by applications that need the name service. The DS is mainly deployed to offer index access to EIS, which sit at the data acquisition layer. Each enterprise automatically submits event indexes to its representative DS server. Before adopting this integration scheme, enterprises need to register and maintain their domain names in the enterprise information database of the Database Management Module in advance. The process of integration with enterprise information systems is shown in Figure 4.
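As a rough illustration of how the ONS-to-DS-to-EIS chain described above could be exercised in code, the sketch below stubs out the two lookups. The function names, code format, domain names, and endpoint URLs are purely hypothetical placeholders and do not follow the GS1 ONS 2.0 specification in detail.

# Schematic sketch of the ONS -> DS -> EIS resolution chain.
# ons_resolve, ds_lookup and all endpoints are hypothetical stubs,
# not real GS1 ONS/DS interfaces.

def ons_resolve(product_code: str) -> str:
    """Map a product code to its resource entry (here, the enterprise DS host)."""
    registry = {"8612.24100.00123": "ds.example-turbine-maker.com"}
    return registry[product_code]

def ds_lookup(ds_host: str, product_code: str) -> dict:
    """Return the EIS service interfaces indexed by the enterprise DS."""
    return {
        "PDM": f"https://{ds_host}/pdm/items/{product_code}",
        "ERP": f"https://{ds_host}/erp/bom/{product_code}",
        "SCM": f"https://{ds_host}/scm/orders/{product_code}",
    }

def query_product(product_code: str) -> dict:
    ds_host = ons_resolve(product_code)      # resolve the code to a resource entry
    return ds_lookup(ds_host, product_code)  # obtain the EIS interfaces via the DS

if __name__ == "__main__":
    print(query_product("8612.24100.00123"))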
As shown in Figure 4, the user first queries the product code. Secondly, the system analyzes the product identification, and the ONS feeds back the resource entry to the user. Then, the user queries the directory, and the service interface of the EIS can be accessed through the DS. Next, the user queries the required information through the interface. Finally, effective integration with EIS, including PDM, ERP, SCM, and so on, is achieved.

Platform Layer

The platform layer mainly carries out the inventory analysis of the life cycle of wind turbines. Inventory analysis is the process of collecting and quantifying the basic data of all relevant processes of the wind turbine; it mainly includes the analysis of raw material consumption, production and manufacturing energy consumption, material and product transportation energy consumption, operation and maintenance energy consumption, and environmental emissions from scrapping treatment. The integrity and accuracy of the data obtained by inventory analysis directly affect the accuracy of the LCA results. The platform layer includes four modules: the Unit Process Management Module, Flow Management Module, Database Management Module, and Inventory Data Processing Module.

Unit Process Management Module

The unit process is the most basic unit in life cycle assessment. In this module, the decomposition of the research object and the identification of the process flow are completed. In the meantime, in order to quantitatively determine the corresponding input and output data, users need to enter the unit process attribute data of the research object in this module and manage all the unit processes on this basis. The structure of a wind turbine is complex, and there are many kinds of components and materials. Decomposing the wind turbine and classifying its materials and components play a crucial role in the inventory analysis and affect the accuracy of the evaluation results. In our system, the analytic hierarchy process (AHP) [42] is used to determine the optimal decomposition path of the research object.
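As a sketch of how AHP could produce the priorities used to choose a decomposition path, the snippet below applies the standard eigenvector method to a small, made-up pairwise comparison matrix; the criteria and judgments are illustrative and not taken from the paper.

import numpy as np

# Standard AHP priority calculation on an illustrative 3x3 pairwise
# comparison matrix (e.g., comparing three candidate decomposition paths).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # normalized priority vector

n = A.shape[0]
lam_max = eigvals.real[k]
CI = (lam_max - n) / (n - 1)                # consistency index
RI = 0.58                                   # Saaty random index for n = 3
CR = CI / RI                                # consistency ratio (< 0.1 is usually acceptable)

print("weights:", np.round(weights, 3), "CR:", round(CR, 3))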
An example of wind turbine decomposition is shown in Figure 5. LCA focuses on the full life cycle and goes deep into each process to analyze the input and output data. For this reason, it is also necessary to identify the key influencing factors of each stage of the process flow. On this basis, the data of each basic unit are entered as inputs and outputs, and the data of each stage are classified and summarized. The main LCA influencing factors of wind turbines in each stage are shown in Table 3.

Flow Management Module

Material flow management and energy flow management are included in this module. Material flow and energy flow data are a vital link and are used to record the material/energy interactions between different unit processes. In this module, the material flow and energy flow information of each unit can be added, modified, deleted, and viewed by users, which supports the inventory data processing of the wind turbine.

Database Management Module

The Database Management Module is an important module supporting the completion of the life cycle assessment of wind turbines. It mainly provides the functions of data addition, data storage, and data retrieval, so that users can obtain the required data conveniently, promptly, and accurately. Four databases are included in this module, as follows:

• Enterprise information database: stores the enterprise information involved in the life cycle of the wind turbine, including the enterprise name, organization code, enterprise identification, identification rules, enterprise address, legal person information, responsible person information, enterprise website, and other fundamental data of the enterprise. This information is obtained mainly through enterprise registration and filing.

• Product information database: stores the fundamental data, operation and maintenance data, and environmental emission and energy consumption data in the life cycle of wind turbines.
The fundamental data contain the structural information of all kinds of wind turbines and define the structural relationships between the wind turbine and its components and among the components. In this database, the fundamental data are mainly obtained through integration with EIS, while the operation and maintenance data and the environmental emission and energy consumption data are obtained by real-time monitoring based on IoT technology. With this database, the usability and real-time performance of LCA can be significantly improved.

Inventory Data Processing Module

The Inventory Data Processing Module is the core module of the LCA platform. Supported by the data from the material flow, energy flow, unit process, and database modules, this module quantifies and analyzes the resource and environmental inputs and outputs of each stage through online modeling and the corresponding algorithms. The module includes life cycle impact assessment (LCIA) index management and online modeling, as follows:

• LCIA index management: mainly stores the types of evaluation indicators, the conversion coefficients between different types of indicators, and the environmental impact assessment standards of relevant industries or regions. It supports the calculation of LCIA characterization indicators, normalized indicators, and weighted comprehensive indicators. Consequently, it is an important component for data normalization and the unification of environmental impact factors.

• Online modeling: the IMPACT 2002+ methodology [8] provides a feasible approach that combines midpoint- and damage-oriented assessment by linking all types of life cycle inventory results to four damage categories via 14 midpoint categories. Based on IMPACT 2002+, the LCA model for wind turbines is established, as shown in Figure 6. Compared with other methods, the proposed model reduces the complexity and uncertainty of the evaluation process and guides subsequent optimization.

As shown in Figure 6, nine indicators, including GWP, WS, EP, AP, COD, RI, IWU, ODP, and PED, are selected for analysis in the intermediate (midpoint) factor layer.
Additionally, climate change, ecosystem quality, resource consumption, and human health are the final indices in the damage factor layer. Since the preceding data collection is performed for each unit process, in order to ensure the accuracy and integrity of the evaluation results it is necessary to determine a measurement reference for the data before data processing, namely the functional unit. In the raw material acquisition stage, the functional unit for the weight of the various raw materials is kg. In the manufacturing stage, the functional unit for electrical energy consumption is kWh. In the transportation stage, the functional units for transportation distance and energy consumption are km and L (litres), respectively. In the recovery and disposal stage, the functional unit for the weight of recycled or scrapped materials is kg. The main influencing factors and functional units of each resource and environment index are shown in Table 4. After the inventory analysis, in order to evaluate the accuracy of the inventory data, the relative error can be obtained by comparing the total mass of the wind turbine with the sum of the material masses of all components. To sum up, the data processing flow of the proposed platform is shown in Figure 7.

LCIA is a process of quantitative or qualitative evaluation that, after the inventory analysis, expresses the performance in terms of energy consumption and environmental emissions; the inventory information is translated into environmental impact scores. In terms of the resource consumption of wind turbines, it mainly includes the evaluation of mineral energy consumption in the raw material acquisition stage, electric energy consumption in the production and manufacturing stage, and oil consumption in the transportation stage. For wind turbines, the energy payback time (EPBT) is one of the key evaluation factors, because it is one of the important indexes in the evaluation system of wind turbine generating capacity [43].
EPBT is the ratio of the total energy input of the wind turbine over its entire life cycle to its annual power generation during operation [44], i.e.,

EPBT = E_i / E_0,

where E_i is the total energy input of the wind turbine in its entire life cycle and E_0 is the annual power generation of the wind turbine, calculated as

E_0 = P̄ × T,

where T is the number of hours in a year (8760 h) and P̄ is the average power of the wind turbine over the range of wind speeds, given by

P̄ = ∫ p(v) f(v) dv,

where p(v) is the output power characteristic curve of the wind turbine and f(v) is the wind speed frequency distribution at the installation site. In this study, the real-time wind speed and the actual output power of the wind turbine are collected through IoT technology. After obtaining the power characteristic curve of the wind turbine and the wind speed frequency distribution of the installation site, E_0 can be calculated. At the same time, E_i is obtained from the Inventory Data Processing Module. Thus, the EPBT of the wind turbine can be calculated. On this basis, according to the design life of the wind turbine, the emission-free power generation over its life cycle can be calculated. Moreover, when the wind turbine parameters (starting wind speed, rated wind speed, maximum safe wind speed, rated power, maximum power, rated speed, working wind speed range, etc.) and the installation address are known, the EPBT can be estimated in advance. On this basis, it can be predicted whether the installation site of the wind turbine is appropriate, so as to avoid losses caused by an improper installation scheme.

Improvement Analysis

Improvement analysis examines the results of the impact assessment, forms a report, and provides practical improvement plans for different users. Enterprise users, government users, and third-party users are included in this system. For enterprise users, according to the LCA results, the stages or key components with large resource consumption, energy consumption, and environmental emissions are identified. Consequently, better raw materials can be selected, and design optimization, production process optimization, transportation route and vehicle optimization, operation and maintenance optimization, and recycling optimization can be carried out. For example, energy consumption in peak hours or idle time can be reduced. For government users, the results of the impact assessment allow the entire recycling process to be monitored, the recovery rate of product components or materials to be improved, and the potential harm to the environment during recycling to be reduced. Additionally, the suppliers of the materials and components with the highest proportion of environmental emissions can be listed as major concerns. Based on the evaluation results, relevant standards can be established, the corresponding evaluation mechanisms can be improved, and the industrial structure can finally be optimized. For third-party users, green certification is accomplished by comparing the evaluation results with the defined green index values of the wind turbine.

Prototype System Implementation

Based on the proposed architecture, an LCA prototype system for the wind turbine has been developed. In this system, different users have different permissions to manage data. The main process of life cycle assessment using this system is as follows:
1. The enterprise registers in the system and enters its name, identification, address, website, responsible person information, relevant qualifications, etc.
2. The decomposition of the studied wind turbine is determined and the process flow is identified through the Unit Process Management Module. Then, the relevant parameters and the main information of the key components for life cycle assessment are entered, and the material flow and energy flow information of each unit can be added, modified, deleted, and viewed through the Flow Management Module.
3. Based on the installed sensors, the relevant energy consumption and environmental emission data are collected. Meanwhile, the operation data are monitored, including wind speed, vibration, noise, engine speed, pitch angle, etc. From these data, the output power characteristic curve is formed, which can be used to calculate the annual power generation of the wind turbine. Moreover, real-time fault data are collected and the fault rate curve of the wind turbine is formed; on this basis, the residual life prediction of the main components of the wind turbine is carried out.
4. In addition to the real-time monitoring data, the Bill of Materials is acquired by integrating with the information systems of enterprises, and other data are obtained from the Database Management Module in the system.
5. Based on online modeling and LCIA index management, the inventory data are processed through the Inventory Data Processing Module. From the inventory data, the energy consumption and environmental emission data at all stages of the wind turbine life cycle are calculated systematically, and the evaluation results are analyzed visually. On this basis, the life cycle assessment report of the wind turbine is formed.

The workload of using LCA software includes goal definition and scoping, data preprocessing, inventory analysis, and impact assessment. Compared with the existing software, the workload of data collection is greatly reduced, and most of the scoping for wind turbines has already been established in detail in the proposed system. To compare the LCA workload of the proposed system with that of existing software, we suppose that the workloads of data preprocessing, goal definition and scoping, inventory analysis, and impact assessment in the existing software are all 100%, with weights of 30, 20, 40, and 10% of the total LCA workload, respectively. The comparison results are shown in Table 5. As shown in Table 5, the corresponding workloads of data preprocessing, goal definition and scoping, inventory analysis, and impact assessment in the proposed system are 48, 30, 80, and 100%, respectively. By weighting, the LCA workload of the proposed system is 0.3 × 48% + 0.2 × 30% + 0.4 × 80% + 0.1 × 100% = 62.4%, which means the proposed system effectively reduces the LCA workload of wind turbines.

Estimation of the Accuracy

At present, the data collected by existing systems are often intertwined and need to be apportioned among multiple parts, which is greatly affected by subjective factors; therefore, the accuracy of the inventory data is not high. The proposed system increases the data accuracy through IoT technology and improves the integrity of the data processing model through integration with EIS. As previously mentioned, the system boundary of LCA for wind turbines mainly includes six stages. In order to estimate the accuracy of the results, we assume that the accuracy of the inventory analysis in each stage is increased by 5% on average.
Consequently, the accuracy of the results is improved by a factor of (1 + 5%)^6 ≈ 1.34, that is, to about 134% of the original level.

Characteristics of the Platform

The characteristics of the IoT-based life cycle assessment platform for wind turbines are summarized as follows:

Oriented to the LCA of Wind Turbines

The platform is designed specifically for wind turbines. In the design of the platform, the corresponding system modules are established according to the life cycle process of the wind turbine. At the same time, the LCA model based on IMPACT 2002+ is established according to the characteristics of the wind turbine, and the EPBT of wind turbines can be calculated accurately.

Flexible Expansibility

To address the lack of information in the process of life cycle assessment, the platform can effectively integrate with existing enterprise information systems through the IoT-based ONS and DS. Furthermore, the platform provides a standard application program interface and supports all kinds of users in developing and integrating various business applications on the platform. Therefore, it has good extensibility.

Real-Time Data Updating

Based on IoT technology, real-time and dynamic data over the life cycle of wind turbines can be collected by the platform. Therefore, the databases of the platform can be updated and expanded, which ensures the objectivity and accuracy of the assessment results.

Effective Utilization of Data

Based on big data technology, the platform can analyze the real-time data collected by sensors and obtained from EIS, and it provides intelligent monitoring, analysis, and prediction functions. It can thereby serve the design and manufacturing enterprises of wind turbines, wind farm construction enterprises, and operation and maintenance enterprises with modeling, evaluation, simulation, analysis, and optimization services, among others. In the meantime, an open platform with controllable permissions can be provided through the construction of application components, so that third parties can participate in application development and create business value.

Ease of Use

The platform supports centralized user management, unified authentication and authorization, and collaborative business processing. Additionally, it is designed on a B/S architecture, which makes it more convenient to operate and easier to maintain. To sum up, the comparison between the proposed platform and several existing LCA systems is shown in Table 6.

Conclusions

In this paper, an LCA platform architecture for wind turbines based on IoT technology is proposed. The object name service and directory service are used to provide external services in the network layer, and an LCA model of wind turbines is established based on IMPACT 2002+. Therefore, the platform is oriented to wind turbines and has good scalability and credibility. The application of this platform can achieve effective integration and sharing with existing enterprise information systems and accomplish the real-time, intelligent collection of the energy consumption and environmental emission data generated in several stages. Given that the inventory data can be updated, the objectivity and accuracy of the assessment results can be ensured. This is of great significance for the sustainable development of the wind power industry. In our future work, it will be important to deploy the LCA platform in more application scenarios.
Additionally, combining the platform with blockchain technology to ensure the authenticity and credibility of the data is also a key research direction. Furthermore, further research and exploration are needed to make the analysis results comparable with those of existing systems.
11,879
2021-02-01T00:00:00.000
[ "Computer Science", "Engineering" ]
An Automated System for Skeletal Maturity Assessment by Extreme Learning Machines

Assessing skeletal age is a subjective and tedious examination process. Hence, automated assessment methods have been developed to replace manual evaluation in medical applications. In this study, a new fully automated method based on content-based image retrieval and extreme learning machines (ELM) is designed and adapted to assess skeletal maturity. The main novelty of this approach is that it overcomes the segmentation problem suffered by existing systems. The estimation results of the ELM models are compared with those of genetic programming (GP) and artificial neural network (ANN) models. The experimental results show an improvement in assessment accuracy over GP and ANN, while good generalization capability is achieved with the ELM approach. Moreover, the results indicate that the developed ELM model can be used confidently in further work on formulating novel skeletal age assessment strategies. According to the experimental results, the newly presented method has the capacity to learn many hundreds of times faster than traditional learning methods, and it shows sufficient overall performance in many respects. It was conclusively found that applying ELM is particularly promising as an alternative method for evaluating skeletal age.

Introduction

Skeletal maturity assessment, or bone age assessment (BAA), is a radiological process that examines ossification development in the left hand and wrist and estimates the bone age by comparison with an atlas comprising hundreds of standard images [1]. Many diseases in children, such as growth disorders, chromosomal disorders, endocrine disorders, and other endocrinological problems, can be discovered from the discrepancy between bone age and chronological age. Bone age assessment is an important process in clinical routine; however, it has not improved much over the last 35 years [2,3]. There are two well-known methods applied for BAA: the Greulich-Pyle (GP) [4] and Tanner-Whitehouse (TW2) methods [5]. In the GP system, radiologists compare hand bone radiographs with standardized radiographs from the atlas and make evaluations, while the TW2 system is based on a scoring method [6]. The results of both assessment types are subject to human observer variability, since a radiologist performing a bone age assessment to evaluate a child's maturation cannot be certain about the estimation accuracy [7,8]. This has therefore been the greatest motivation for presenting an automated method of estimating skeletal maturity (bone age) [9]. However, computerized BAA systems are still at an experimental stage because of their inadequate performance [10]. Some proposed methods are discussed in the literature below.

State of the art

The first attempt to design an automated system for bone age assessment was reported by Nelson and Michael in 1989 [11]. Their system converted the images to binary format and normalized the images before processing. This system was never evaluated on a large scale because of drawbacks related to the overlapping pixel intensities of bones in the image processing technique. Manos and his team addressed segmentation and presented a method for merging region-based and edge-based detection at the pre-processing level [12]. However, the output of the edge detection was not reliable, and the results depended on the chosen threshold. Pietka et al. [13] designed a method based on the analysis of the carpal bones in the hand and wrist.
The system used a dilation method to extract the carpal bones. Their research team later improved the system with a windowing technique to calculate statistical features; however, the new version of the system still did not solve the segmentation problem. Another system, reported by Mahmoodi, applied binary thresholding and location searching using concave-convex analysis, followed by segmentation based on the active shape technique [14]. Sebastian et al. [15] conducted a study on image segmentation based on a deformable method; the pre-processing included region growing and local competition between regions. The output of this system was acceptable, but it involved heavy computation and complicated calculations. In the system presented by Gertych et al. [16], an adaptive segmentation technique based on Gibbs random fields was applied in the pre-processing stage. Zhang et al. [17] worked on carpal segmentation using anisotropic diffusion and adaptive image thresholding in the pre-processing stage; the proposed method included Canny edge detection, which is not a robust technique for image segmentation. Han et al. [18] presented gradient vector flow (GVF) for segmentation, although this technique involves a heavy processing load for edge detection. Liu et al. [19] suggested a primitive image processing method, similar to edge detection combined with matching, at the pre-processing level of segmentation. Most of these methods present a model for segmentation of the hand; however, the estimation of bone age based on them was never assessed accurately. Hence, these methods cannot be regarded as fully automated systems for bone age assessment. Bone age analysis requires high assessment accuracy. The aim of this study is to introduce a new model for determining bone age based on the content-based image retrieval (CBIR) technique as part of a novel age assessment method, using a soft computing approach, namely extreme learning machines (ELM), for evaluation, which yields a fully automated method for BAA.

Nowadays, applying modern computational approaches to solve real problems and determine optimal values and functions has been receiving enormous attention from researchers in diverse scientific disciplines [20]. Neural networks (NNs), a vital computational approach, have recently been introduced and applied in various engineering areas such as medical diagnosis [21,22]. This method facilitates solving complex nonlinear problems that are otherwise difficult to solve with classic parametric methods. There are numerous algorithms for training neural networks, such as the hidden Markov model (HMM), backpropagation, and support vector machines (SVM). A shortcoming of NNs is the long learning time required in applications. Huang et al. [23] introduced an approach for single-hidden-layer feed-forward NNs known as extreme learning machines (ELM). This technique is capable of avoiding the problems created by gradient-descent-based algorithms such as backpropagation in ANNs. ELM can decrease the time required for training a neural network; in fact, it has been proven that by using ELM the learning process becomes very fast and yields robust performance [24]. Accordingly, a number of investigations have been carried out on the successful application of the ELM algorithm to problems in various scientific fields [25-30]. In general, ELM is a powerful algorithm with a faster learning speed than traditional algorithms such as backpropagation (BP) and superior performance.
ELM attempts to attain the smallest norm of the output weights together with the smallest training error. In this study, a new automated bone age assessment approach is developed and evaluated with ELM, eliminating the need for image segmentation. The results indicate that the proposed model can adequately estimate skeletal age. The ELM results are also compared with the results of genetic programming (GP) and artificial neural networks (ANNs). An attempt is made to retrieve the correlation between chronological age and bone age.

Methodology

The content-based image retrieval (CBIR) approach has become popular in medical imaging, as well as in crime prevention, in recent years [31]. The CBIR system was developed in the 1990s to solve problems encountered in text-based image retrieval. The CBIR method is based on querying by image [32]. Content-based image retrieval is a robust method for determining age independently of bone measurements. The CBIR methodology for skeletal age assessment involves comparing the image content of a new input with earlier samples. Most BAA systems are applied to regions of interest (ROIs) in the hand bones, which leads to low accuracy in bone age assessment [17,33,34]. The new method utilized in our study overcomes this limitation of the literature by using complete images for an individual query instead of applying the query to regions of interest (ROIs) [35]. The CBIR assessment methodology is founded on comparing the image content of a new sample with that of earlier samples. Fig 1 shows the CBIR layout applied in our BAA system. In our system, not only are whole images considered, but so is visual content information such as ethnicity and gender, since these features allow the system to correctly perform extractions from the available data. Feature extraction consists of obtaining the relevant features from the images. The features are extracted from the hand radiographs, and an optimal subset of the selected features is picked. Feature extraction is based on weighted PCA, as it is one of the best pattern recognition methods in computer vision applications [36]. It is certainly the most suitable feature extraction method for this study, as it is a linear feature extraction technique [37] that is both efficient and fast. The retrieval images in this research included third-party data provided from the database of the Medical Image Research Centre (IRMA), available at "https://ganymed.imib.rwth-aachen.de/irma/institute_irmadaten.php". There are 1100 X-rays classified as female and male, and four ethnicities: Asian, Caucasian, African/American and Hispanic [38].

Age assessment

The main step in implementing our BAA system is the process of estimating bone age with the automated technique (Fig 2). Bone age is assessed by comparing a radiograph with samples from a repository that contains various ages for both genders and four different ethnicities. A temporary repository is needed to rank the retrieved radiographs. The tagged age values of the retrieved images are utilized as part of the BAA process, and the final estimated age is calculated as the mean of the retrieved values:

estimated age = (Σ x_i) / n,

where x_i is the age of the i-th highest-ranked retrieved image and n is the total number of highest-ranked retrieved images. Therefore, bone age assessment is computed in the following steps: 1. The related features are extracted and stored in the database for a temporary period. 2. An individual query is submitted to the system's search engine for each feature.
3. The best matching output is retrieved from the feature repository according to the similarity score of the query.

Validation Experiments

The image data used for the evaluation consist of images collected from normal samples. The age range of the images is 1-18 years for both genders, male and female. The radiographs are classified and scanned in X-ray format with a size of 256 × 260 pixels. Tables 1 and 2 list the input variables and output results used to validate our system, in terms of their definitions and obtained values.

Extreme Learning Machines (ELM)

Huang et al. [23] introduced the extreme learning machine (ELM), based on the single-hidden-layer feed-forward neural network (SLFN) structure, as a learning algorithm [39,40]. ELM avoids problems such as an improper learning rate, local minima, and overfitting, which are common in iterative learning approaches [41]. ELM selects the input weights randomly and determines the output weights of the SLFN analytically. ELM offers favourable generalization capability together with a fast learning speed. This algorithm does not require much human intervention and can execute much faster than other customary algorithms; it is able to specify all network parameters analytically, which avoids human intervention. ELM is an effective technique with numerous advantages, including high performance, ease of use, rapid learning speed, support for kernel functions, and suitability for nonlinear activation functions.

Single-hidden-layer feed-forward neural network (SLFN). An SLFN with L hidden nodes is usually described by a unified mathematical formulation that covers both additive and RBF hidden nodes in an integrated way [42,43]:

f_L(x) = Σ_{i=1}^{L} β_i G(a_i, b_i, x),  (1)

where a_i and b_i are the learning parameters of the i-th hidden node, β_i is the weight connecting the i-th hidden node to the output node, and G(a_i, b_i, x) is the output value of the i-th hidden node with respect to the input x. For an additive hidden node with activation function g(x): R → R (e.g., sigmoid or threshold), G(a_i, b_i, x) is given by

G(a_i, b_i, x) = g(a_i · x + b_i),  (2)

where a_i is the weight vector connecting the input layer to the i-th hidden node, b_i is the bias of the i-th hidden node, and a_i · x denotes the inner product of a_i and x in R^n. For an RBF hidden node with activation function g(x): R → R (e.g., Gaussian), G(a_i, b_i, x) is given by [39]

G(a_i, b_i, x) = g(b_i ||x − a_i||),  (3)

where a_i and b_i denote the centre and impact factor of the i-th RBF node, and R^+ denotes the set of all positive real values. The RBF network is thus considered a particular case of an SLFN with RBF nodes in its hidden layer. For N arbitrary distinct samples (x_i, t_i) ∈ R^n × R^m, where x_i is the n × 1 input vector and t_i is the m × 1 target vector, if an SLFN with L hidden nodes can approximate these N samples with zero error, then there exist β_i, a_i, and b_i such that [39]

Σ_{i=1}^{L} β_i G(a_i, b_i, x_j) = t_j,  j = 1, ..., N.  (4)

Eq (4) can be written compactly as

Hβ = T,  (5)

where H is the hidden-layer output matrix of the SLFN, whose i-th column is the output of the i-th hidden node with respect to the inputs x_1, ..., x_N, β is the matrix of output weights, and T is the matrix of targets.

Principles of ELM. Recently, applications of ELM have been extensively studied in different research domains, especially in biomedical engineering. From the point of view of learning efficiency, ELM has three notable features: high learning accuracy, fast learning speed, and minimal human intervention.
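To make the SLFN/ELM formulation above concrete, here is a minimal NumPy sketch of ELM training with additive sigmoid hidden nodes: the input weights and biases are drawn at random, and the output weights are obtained from the Moore-Penrose pseudo-inverse of the hidden-layer output matrix H. The data are synthetic; this is an illustration of the algorithm, not the authors' implementation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_train(X, T, L, rng=np.random.default_rng(0)):
    """X: (N, n) inputs, T: (N, m) targets, L: number of hidden nodes."""
    n = X.shape[1]
    W = rng.uniform(-1, 1, size=(n, L))   # random input weights a_i
    b = rng.uniform(-1, 1, size=L)        # random biases b_i
    H = sigmoid(X @ W + b)                # hidden-layer output matrix H
    beta = np.linalg.pinv(H) @ T          # beta = H^+ T (Moore-Penrose pseudo-inverse)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return sigmoid(X @ W + b) @ beta

if __name__ == "__main__":
    # Synthetic regression example: learn y = sin(x) on [0, 3].
    X = np.linspace(0, 3, 200).reshape(-1, 1)
    T = np.sin(X)
    W, b, beta = elm_train(X, T, L=20)
    Y = elm_predict(X, W, b, beta)
    print("training RMSE:", float(np.sqrt(np.mean((Y - T) ** 2))))

Because no iterative weight tuning is involved, the only training cost is building H and solving one linear least-squares problem, which is the source of ELM's speed advantage discussed below.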
The benefit of ELM in generalization over traditional algorithms has been demonstrated for problems from various areas [44]. Conventional neural network training algorithms do not guarantee good generalization efficiency when applied directly, whereas ELM achieves better generalization by reaching the smallest training error; it was for this reason that we used ELM, as it had the best chance of providing improved results. An SLFN with L hidden neurons can learn L distinct samples with zero error [27]. Even when the number of hidden neurons L is smaller than the number of distinct cases N, ELM can still assign random parameters to the hidden nodes and compute the output weights from the pseudo-inverse of H, with only a small error ε > 0. The hidden node parameters a_i and b_i of ELM can simply be assigned random values and need not be tuned during training. These notions are formalized in the following theorems.

Theorem 1: Let there be an SLFN with L additive or RBF hidden nodes and an activation function g(x) that is infinitely differentiable in any interval of R. Then, for L arbitrary distinct input vectors {x_i | x_i ∈ R^n, i = 1, ..., L} and {(a_i, b_i)}_{i=1}^{L} randomly generated from any continuous probability distribution, the hidden-layer output matrix H of the SLFN is invertible with probability one and ||Hβ − T|| = 0.

Theorem 2 (Liang et al. [34]): Given any small positive value ε > 0 and an activation function g(x): R → R that is infinitely differentiable in any interval, there exists L ≤ N such that, for N arbitrary distinct input vectors {x_i | x_i ∈ R^n, i = 1, ..., N} and any {(a_i, b_i)}_{i=1}^{L} randomly generated from any continuous probability distribution, ||H_{N×L} β_{L×m} − T_{N×m}|| < ε with probability one.

Because the hidden node parameters of ELM are assigned random values and are not adjusted during training, Eq (5) becomes a linear system and the output weights are estimated as [39]

β̂ = H⁺ T,  (6)

where H⁺ denotes the Moore-Penrose generalized inverse [45] of the hidden-layer output matrix H, which can be computed by several approaches, including orthogonal projection, orthogonalization, iteration, and singular value decomposition (SVD) [45]. The orthogonal projection technique can be used only when H^T H is non-singular, in which case H⁺ = (H^T H)^(−1) H^T. Owing to the searching and iterations involved, the orthogonalization and iteration methods have limitations. Implementations of ELM are based on SVD for computing the Moore-Penrose generalized inverse of H, because it can be used in all situations. Hence, ELM is considered a batch learning method.

Artificial neural networks

The multilayer feed-forward network trained with the backpropagation learning algorithm is one of the best-known neural network structures [46] and is widely used in different scientific fields [47]. Ordinarily, a neural network consists of three layers: (i) an input layer; (ii) a middle or hidden layer; and (iii) an output layer. The inputs are D = (X_1, X_2, ..., X_n)^T with D ∈ R^n; the outputs of the q neurons in the hidden layer are denoted by Z = (Z_1, Z_2, ..., Z_q)^T; and the outputs of the output layer are Y = (Y_1, Y_2, ..., Y_m)^T with Y ∈ R^m.
Denoting the weights and thresholds between the input and hidden layers by w_ij and y_j, respectively, and those between the hidden and output layers by w_jk and y_k, respectively, the outputs of a neuron in the hidden layer and in the output layer are represented as

Z_j = f(Σ_i w_ij X_i + y_j),  Y_k = f(Σ_j w_jk Z_j + y_k),

where f is the transfer function, i.e., the rule that maps a neuron's summed input to its output and provides a suitable means of introducing non-linearity into the network. The sigmoid function is a commonly used transfer function; it is monotonically increasing and ranges from zero to one.

Genetic programming

Genetic programming, or GP, is an evolutionary method that applies Darwinian principles of natural selection and survival to derive predictive statements in symbolic form. GP defines how the outputs relate to the input variables. This technique uses an initial population of randomly generated programs (equations) built from a random mix of input values, functions, and random numbers, including arithmetic operators, comparison/logical functions, and mathematical functions, which must be selected with a proper understanding of the process. These candidate solutions are subjected to the evolutionary procedure, and the 'fitness' of the developed programs is examined. The programs with the best data fit are then selected from the initial population. The best-matching structures exchange some of their content with each other to create better structures through 'mutation' and 'crossover', which imitate the reproduction process in the natural world. In the genetic algorithm, mutation means randomly altering programs to create new structures, and crossover refers to exchanging sections of the best programs with one another. This evolutionary routine is repeated over successive generations and drives the search towards symbolic expressions that can be scientifically interpreted to derive process information. As a metaheuristic (search heuristic) technique, GP has brought substantial advances in computer science, chemistry, bioinformatics, engineering, and mathematics [48-50].

Proposed model accuracy evaluation

The performance of the proposed models is reported in terms of the root-mean-square error (RMSE), the coefficient of determination (R^2), and the Pearson correlation coefficient (r). These statistics are defined in their standard forms as follows:
1. root-mean-square error: RMSE = sqrt((1/n) Σ_{i=1}^{n} (O_i − P_i)^2);
2. Pearson correlation coefficient r between O_i and P_i;
3. coefficient of determination R^2,
where O_i and P_i are the assessed and the experimentally observed bone age values, respectively, and n is the total number of test data.

Performance evaluation of the proposed ELM model

This section reports the results of the ELM bone age assessment models. Fig 3A shows the accuracy of the presented ELM BAA model, and Fig 3B and 3C present the accuracy of the GP and ANN BAA models, respectively. It can be seen that most of the points fall along the diagonal line for the ELM assessment model; consequently, the estimation results are in close agreement with the reference ages.

Architecture of soft computing models

The parameters of the ELM, ANN, and GP modelling frameworks employed in this study are presented in Table 3.

Performance comparison of ELM, ANN and GP

To demonstrate the merits of the presented ELM approach on a more definite and tangible basis, the accuracy of the ELM model estimation was compared with that of the GP and ANN methods, which served as benchmarks.
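For reference, a small NumPy sketch of the three comparison metrics in their standard forms is given below; the arrays are placeholders, since the actual validation data (Tables 1 and 2) are not reproduced here.

import numpy as np

def rmse(o, p):
    return float(np.sqrt(np.mean((o - p) ** 2)))

def pearson_r(o, p):
    return float(np.corrcoef(o, p)[0, 1])

def r_squared(o, p):
    # Coefficient of determination relative to the mean of the observed values.
    ss_res = np.sum((o - p) ** 2)
    ss_tot = np.sum((o - np.mean(o)) ** 2)
    return float(1.0 - ss_res / ss_tot)

if __name__ == "__main__":
    observed = np.array([6.0, 8.5, 10.0, 12.5, 15.0])   # placeholder ages (years)
    predicted = np.array([6.4, 8.1, 10.3, 12.0, 14.6])  # placeholder model output
    print(rmse(observed, predicted),
          pearson_r(observed, predicted),
          r_squared(observed, predicted))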
Conventional statistical error indicators, i.e., RMSE, r, and R^2, were used for the comparison. Table 4 summarizes the estimation accuracy for the test datasets, since the training error is not a credible indicator of the prediction potential of a particular model. According to the results in Table 4, the ELM model outperformed the GP and ANN models and provided significantly better results than these benchmark models; in particular, the RMSE analysis confirms that the proposed ELM outperformed both benchmarks. As ELM is a data-driven algorithm, the primary limitation of our method is that it is heavily reliant on the data selection process.

Conclusion

In this study, a systematic approach was carried out to create a new, fully automated method of assessing bone age using an ELM model, independent of image segmentation. The ELM estimates were compared with those of GP and ANN in order to evaluate the models' accuracy. The results, calculated in terms of RMSE, r, and R^2, indicate that the ELM approach is superior to GP and ANN; furthermore, the results revealed the robustness of the method. The proposed system has many appealing, remarkable features that distinguish it from conventional, well-known gradient-based learning approaches for feedforward neural networks. The ELM approach has a much faster learning speed than traditional feedforward network learning algorithms such as backpropagation (BP). Moreover, unlike traditional learning algorithms, ELM is able to attain the smallest norm of weights as well as the smallest training error. Future work will involve further improving the skeletal age assessment accuracy by expanding the database of images.
5,166
2015-09-24T00:00:00.000
[ "Computer Science" ]
Realization of a thermal cloak–concentrator using a metamaterial transformer By combining rotating squares with auxetic properties, we developed a metamaterial transformer capable of realizing metamaterials with tunable functionalities. We investigated the use of a metamaterial transformer-based thermal cloak–concentrator that can change from a cloak to a concentrator when the device configuration is transformed. We established that the proposed dual-functional metamaterial can either thermally protect a region (cloak) or focus heat flux in a small region (concentrator). The dual functionality was verified by finite element simulations and validated by experiments with a specimen composed of copper, epoxy, and rotating squares. This work provides an effective and efficient method for controlling the gradient of heat, in addition to providing a reference for other thermal metamaterials to possess such controllable functionalities by adapting the concept of a metamaterial transformer. Recently, thermoelectric components 25 have offered a method for actively controlling heat flux. Other alternatives such as the shape memory alloy 26 have also been demonstrated, theoretically and experimentally, for use as a thermal diode 26 , a thermal cloak-concentrator (TCC) 27 , and a temperature-trapping device 28 . Analogous to the thermal cloak and concentrator, a practical idea for adapting the concept of manipulating the heat flow in electronic components has been applied 29,30 . In this method, the temperature within a shield area can be reduced and the concentrator can be employed to collect the low-grade waste heat of electronic components. Consequently, thermal metamaterials have been considered an important subject for future technology and applications. However, fabrication of such materials is extremely challenging owing to the continuous change in their thermal properties. The notion of discretizing thermal metamaterials into unit-cell thermal shifters, representing heat flux lines in local spots, was therefore recently proposed 31 . The method not only simplifies the manufacture of thermal metamaterials but also maintains their functionalities, including the cloak, concentrator, diffuser, and rotator. Recent research on thermal metamaterials has thus focused on the controllability of functionalities. In this study, we propose a new class of thermal metamaterials, namely a metamaterial transformer (MMT), by combining unit-thermal shifters and rotating squares 32,33 , a form of auxetic metamaterials 34 . Rotating squares become thicker as the rotation angle increases to 45°. They gradually close and shrink until the rotation angle is 90°. The rotating squares remain in the same configuration, whereas the unit cells of the rotating squares rotate 90° relative to each other. The proposed thermal metamaterials can possess tunable functionalities by transforming the configuration of the device. The designed MMT acts as a type of TCC that initially works as a cloak and then as a concentrator after the rotation of all squares. In contrast to previous approaches that are based on shape memory alloys, our MMT-based TCC can realize dual functions that can be freely controlled without out-of-plane deformation. We then developed a theoretical model of the MMT-based TCC for predicting the property of the TCC after the rotating squares have rotated 90°. In the following section, we demonstrate the cloaking and concentrating effect of the proposed device in both simulations and experiments. 
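As a rough geometric check of the statement that the rotating squares widen towards 45° and close again at 90°, the following sketch computes the bounding width of a single rigid square of side l rotated by an angle θ. This is an idealized rigid-square model that ignores the hinge joints and the coupling between neighbouring cells; the side length is arbitrary.

import numpy as np

def bounding_width(l, theta_deg):
    """Axis-aligned bounding width of a square of side l rotated by theta."""
    t = np.radians(theta_deg)
    return l * (np.abs(np.cos(t)) + np.abs(np.sin(t)))

if __name__ == "__main__":
    l = 10.0  # arbitrary side length
    for theta in (0, 15, 30, 45, 60, 75, 90):
        print(theta, round(bounding_width(l, theta), 2))
    # The width grows from l at 0 deg, peaks at sqrt(2)*l at 45 deg,
    # and returns to l at 90 deg, matching the qualitative behaviour described above.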
Results Numerical and experimental setup. The commercial finite element software COMSOL Multiphysics 5.3 was used for numerical simulations in this study. We performed simulations with a two-dimensional heat conduction module under a steady-state condition. The MMT-based TCC was analyzed in the following three cases. First, a theoretical model was constructed to simulate the functionality of an ideal TCC and was then rotated 90°. Second, an effective model composed of unit cell thermal shifters with a contact interface was constructed for verification. Finally, a revised effective model of the MMT-based TCC was constructed to take radiation heat loss into consideration. For the theoretical model, we set the top and bottom surfaces as the insulators, whereas the left and right boundaries were prescribed temperatures in accordance with the experimental measurements. The model comprised a background material with thermal conductivity k = 1W/(mK) in both the interior and exterior of the circle, and the circular material was given theoretical anisotropic thermal conductivity (see Methods) with R 2 = 70√2 mm, R 1 = 35√2/2 mm, and L x = L y = 210 mm, as shown in Fig. 1(a). The effective TCC model was constructed based on Supplementary Tables S1 and S2. The thermal conductivity of copper, epoxy and background materials were defined as k 1 , k 2 , and k 3 , respectively. The boundary conditions were set identical to those in the theoretical model, and the configuration of the proposed TCC is shown in Fig. 1(b), with k 1 = 400W/(mK), k 2 = 0.3W/(mK), k 3 = 90W/(mK), L x = L y = 210 mm and l x = l y = 140 mm. Moreover, we further considered mimicking the experimental setup by adding a thin layer of thermal compound in between adjacent unit cells, and set the thin-layer (0.1 mm) thermal compound with a low thermal conductivity k 4 . The choice of k 4 is described in Supplementary Section 1 and discussed later. The experimental setup consisted of a thermal infrared (IR) camera (Fluke Ti-450), heat baths (Aron WB-500D), a fixing device, and a specimen, as shown in Fig. 1(c). The heat bath on the left side served as the source applied to the specimen, whereas the heat bath on the right side, filled with an ice-water mixture, was used as the heat sink. The specimen was assembled into 4 × 4 unit-cell thermal shifters composed of copper and epoxy, with 4 × 4 rotating squares serving as the base connected with joints, as shown in Fig. 1(d1),(d2), and (d3). The design of joints connecting the rotating squares in the present study is illustrated in Supplementary Figure S2. In addition, the assembled specimen was connected to the thin copper plate with 0.3 mm thickness at each end. We polished the interfaces of the unit-cell thermal shifters and added the thermal compound (Cooler Master RG-ICF-CWR2-GP) (Thermal conductivity k = 1W/(mK), from the product sheet provided by the manufacturer) in between adjacent unit cells to reduce the thermal resistance. As illustrated in Fig. 1(c), the temperature profile on the topside of the specimen was obtained through a calibrated IR camera, with a thin coating of high-emissivity (ε = 0.94) black acrylic paint, ART-ANDREA-S-72075801, applied on the topside for accurate thermal imaging. Figure 2(a1,a2) illustrate the simulated temperature profile and isothermal lines of the theoretical model, respectively, with the red arrows indicating the directions of the heat flux.
The simulation revealed the properties of the proposed TCC, where no external distortion existed and no internal gradient was observed (no heat flux through the inner region). After the model was rotated 90°, no external distortion existed, but a much greater internal gradient was observed (with greater heat flux through the inner region), as shown in Fig. 2(a1,a2). Figure 2(b1,b2) present the temperature profile and isothermal lines of the effective TCC model, respectively. An annular ring of the theoretical model was plotted to clarify the difference between the theoretical and effective models. As illustrated in Fig. 2(b1), the temperature inside the annular ring was almost constant, with the isothermal lines outside the annular ring exhibiting minimal disturbance, similar to the properties displayed by a thermal cloak. As shown in Fig. 2(b2), the model rotated 90° demonstrated the ability to guide the heat flux into the inner region of the annular ring, thus causing more temperature variations inside the annular ring and a slight disturbance outside the ring. The preceding results show that the proposed MMT-based TCC can control the gradient within a particular region. However, it is reasonable to question whether such an ability can still exist when the radiation heat loss as well as the thermal resistance in the contact interfaces are considered. To explore this question numerically, in the final case we applied a surface-to-ambient radiation boundary condition in COMSOL, that is, Q = ε u σ(T 4 − T amb 4 ), where Q is the thermal energy leaving the surface, ε u = 0.94 is the emissivity of the surface, σ is the Stefan-Boltzmann constant, and T amb = 28.9 °C is the ambient temperature. To compare with the experimental results, the revised effective model and the experimental specimen were constructed to be as similar as possible. For the revised model, we simultaneously simulated the thermal radiation with the thermal resistance due to the contact interface (see Supplementary Section 1) and concluded that the temperature profile along y = 0 when setting k 4 = 0.5 W/(mK) is closest to that of the experimental data. Figure 2(c1,c2) show the simulated temperature profile and isothermal lines of the revised effective model, respectively. The isothermal lines detoured around the inner region in the model rotated 0°, and were compressed into the inner region in the model rotated 90°. Clearly, when the radiation heat loss was considered, the prominent control of the gradient within the inner region varied when rotated either 0° or 90°. Experimental validation of MMT-based TCC. The steady-state temperature profile of the specimen was then measured using the IR camera, and the results are shown in Fig. 2(d1,d2). The temperature of the inner region was almost constant, as shown in Fig. 2(d1), but it changed drastically, as shown in Fig. 2(d2). However, owing to heat loss to the surrounding area, the overall temperature was lower near the right side of the specimen, which was determined to be consistent with the revised model in Fig. 2. Comparisons of the theoretical, effective, revised model, and experimental results of the TCC are discussed further. Discussion First, the theoretical and effective models of the TCC, along with the revised model and the experimental results, are discussed. From Fig. 2, the temperature along y = 0 in all model types and experimental results can be plotted, as shown in Fig. 3, in which the broken lines indicate the inner region of the TCC.
We noted that the effective model was consistent with the theoretical model, as shown in both Fig. 3(a1,b1). This means that the proposed MMT-based TCC is equivalent to the theoretical model deduced from transformation thermodynamics. However, the overall temperature in the experimental results was lower than that observed in the effective model, as shown in both Fig. 3(a1,b1). This implies a loss of heat to the ambient. Furthermore, the thermal interfacial resistance at the contact interfaces also contributed to this deviation. For further comparison, the ability to change the temperature gradient inside the proposed TCC is presented in Fig. 3(a2,b2), where m is the gradient of the temperature within line segment bc over the applied gradient. We noted that the performance of cloaking improved when the device had a small value of m; moreover, a large value of m indicated better performance for the concentrator. As illustrated in Fig. 3(a2), when the device was rotated 0°, the m value was 0 for the theoretical model, 0.26 for the effective model, and 0.24 for the revised model, whereas the measured result was 0.29. Because the overall temperature was lower in the revised model and the experiment, m increased slightly. Furthermore, as shown in Fig. 3(b2), when the device was rotated 90°, the m value was 1.94 for the theoretical model, 2.06 for the effective model, and 1.76 for the revised model, whereas the measured result was 1.51. Notably, the gradient of the effective model was even greater than that of the theoretical model, a result caused by the fact that the inner region of the theoretical model was composed of background material which had no ability to concentrate the heat. Moreover, the thermal resistance of the unit-cell thermal shifters resulted in a lower m value, compared with the revised model. The effect of the modulation of the temperature range and the surrounding temperature due to radiation was numerically examined. Specifically, the 0° rotated TCC was investigated. First, the surrounding temperature in the model was fixed at 25 °C, and the modulation of the temperature range was varied. The simulation results, as shown in Fig. 4(a), reveal that the overall normalized temperature in the cloaking zone decreases when the surrounding temperature is close to the lower-bound temperature (see the case 10 °C ~ 25 °C ~ 50 °C), and vice versa. The normalized temperature profile along y = 0, however, behaves similarly when the surrounding temperature and the modulation of the temperature range have the same deviation (Fig. 4(b)). For application considerations, the unit cells of the inner region of the effective model in the simulation were removed, and the number of unit-cell thermal shifters is discussed in this section. The temperature along y = 0 is shown in Fig. 5, in which the broken line indicates the inner region of the TCC. In the device rotated 0°, we noted that when the number of unit cells increased, the gradient within the inner region decreased, as shown in Fig. 5(a1); by contrast, in the device rotated 90°, the gradient increased as the number of unit cells increased, as shown in Fig. 5(b2). For ease of comprehension, Table 1 shows the gradient within line segment bc over the applied gradient, m. We observed that the ability to change the gradient internally was below the expected level in the 3 × 3 unit cell-based TCC, because the number of unit cells in the model was insufficient for the anisotropic thermal conductivity to be deduced from transformation thermodynamics.
However, once the number of unit cells was sufficient, for example 6 × 6, an ability to change the gradient was then demonstrated. Moreover, the isothermal distortion of the exterior of the device can be observed in Fig. 5(a2,b2) and Fig. 6(a1,a2,b1,b2,c1,c2). The isothermal distortion was observed to decrease when the number of unit-cell thermal shifters increased in both 0° and 90° rotated TCCs. Conclusion By combining rotating squares with auxetic properties, we proposed the use of thermal metamaterials with tunable functionalities. An MMT-based TCC, which can change from a cloak to a concentrator when the device configuration is transformed, was then investigated. The MMT-based TCC can either thermally protect a region or focus heat flux into a small region, and this was verified in both simulations and experiments. In summary, the proposed MMT-based TCC could control the gradient within the inner region in both the theoretical and the effective models. However, this ability was slightly suppressed when the radiation heat loss was considered, which is reasonable given that the transformation thermodynamics is based on the theory of heat conduction. However, the contact interface of the thermal shifters should be sufficiently smooth to minimize thermal interfacial resistance. Additionally, for application considerations, we removed the unit cells of the region within the device and found that the greater the number of unit cells, the better the ability to control the gradient within the device (and vice versa). Furthermore, when the number of thermal shifters was sufficient, we observed only a small amount of isothermal distortion outside the device. With such functionality to control the gradient internally, the proposed TCC can be used for thermal protection of a heat-sensitive device or as a thermal concentration application within a given region. Methods In this study, we adapted effective medium theory to construct the unit-cell thermal shifters. The theoretical and effective models of a unit-cell thermal shifter are illustrated in Supplementary Figure S4. The composite is fabricated by alternately stacking two sheets with different thermal conductivities, and its effective thermal conductivity can be obtained following Bandaru 35 from the in-parallel and in-series thermal conductivities of the layers (equation (1)), as shown in Fig. 7(b). The effective thermal conductivity of the composite rotated in the x-y plane by θ can be given as k θ = J k Trans(J) (equation (2)), where J is the Jacobian for the rotation, and Trans(J) denotes the transpose of J. Using equation (2), we can obtain the effective thermal conductivity of a thermal shifter composed of two alternately stacked sheets rotated by θ, as shown in Fig. 7(c). However, for the fabrication of thermal metamaterials using the assembly concept, the contact interface between thermal shifters will change the effective thermal conductivity of the thermal shifters. Therefore, the effective thermal conductivity of the thermal shifters with the contact interface must be derived. Using equation (2), we can obtain the effective thermal conductivity of the thermal shifters with the contact interface. We then use the same technique as in deriving equation (1). First, we consider the thermal shifter surrounded by the contact interface as a unit cell, and then divide the contact interface into four segments, as shown in Fig. 7(d). Hence, we can obtain the effective thermal conductivity of the thermal shifter combined with Segment No. 1 using equation (1), as shown in Fig. 7.
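The two building blocks of this derivation can be illustrated numerically. The sketch below (Python, not part of the original work) forms the in-parallel and in-series effective conductivities of a two-layer copper/epoxy laminate and rotates the resulting tensor in the x-y plane as in equation (2); only k 1 = 400 W/(mK) and k 2 = 0.3 W/(mK) are taken from the text, while the 50/50 layer fraction and the layer orientation are illustrative assumptions rather than the optimized L 1 , L 2 , and θ of the actual shifters. Rotating the tensor by 90° simply exchanges the two principal conductivities, which is the coordinate-transformation idea used later to switch the device between cloak and concentrator behaviour.

```python
import numpy as np

def layered_tensor(k1: float, k2: float, f1: float) -> np.ndarray:
    """Effective conductivity tensor of a two-layer laminate (layer normals along y).

    In-plane (parallel to the layers): volume-weighted arithmetic mean.
    Through-plane (in series):         volume-weighted harmonic mean.
    """
    f2 = 1.0 - f1
    k_parallel = f1 * k1 + f2 * k2
    k_series = 1.0 / (f1 / k1 + f2 / k2)
    return np.diag([k_parallel, k_series])

def rotate(k: np.ndarray, theta_deg: float) -> np.ndarray:
    """Rotate a 2x2 conductivity tensor in the x-y plane: k' = J k J^T (equation (2))."""
    t = np.radians(theta_deg)
    j = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return j @ k @ j.T

if __name__ == "__main__":
    # Copper/epoxy laminate; the 50/50 fill fraction is an illustrative assumption.
    k = layered_tensor(k1=400.0, k2=0.3, f1=0.5)
    print("principal tensor:\n", k)
    print("rotated 30 deg:\n", rotate(k, 30.0))
    # A 90 deg rotation swaps the principal values: the high- and low-conductivity
    # directions exchange, analogous to switching the device configuration.
    print("rotated 90 deg:\n", rotate(k, 90.0))
```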
Once the thermal conductivity of a unit-cell thermal shifter with contact interface is obtained, the discretized thermal cloak can be designed. According to Guenneau 4 , the anisotropic thermal conductivity of a thermal cloak is given as follows (equation (6)), where r′ = R 1 + r(R 2 − R 1 )/R 2 , and R 1 and R 2 are the inner radius and outer radius of the cloak, respectively. First, we let R 2 /R 1 = 4 and R 2 = 70√2 mm; thus, the length of the thermal shifter is 35 mm. Substituting these values into equation (6), we obtain equation (7). We then discretize the thermal cloak into 4 × 4 unit-cell thermal shifters, as shown in Fig. 8(a1-a3). Subsequently, equation (7) can be expressed in Cartesian coordinates (equation (8)) through the standard tensor rotation, with components k xx = k rr cos 2 φ + k θθ sin 2 φ, k xy = k yx = (k rr − k θθ ) sin φ cos φ, and k yy = k rr sin 2 φ + k θθ cos 2 φ, where φ is the azimuthal angle. By substituting each center coordinate of the thermal shifters into the preceding equation, we obtain the desired thermal conductivity of each thermal shifter (see Supplementary Table S1). Accordingly, we could obtain the required arrangement of each unit-cell thermal shifter by substituting the thermal conductivity in Supplementary Table S1 into equation (5). The preceding task should be considered an optimization problem, which involves determining all the parameters in equation (5) to obtain an optimum solution. In this instance, the unit cell thermal shifters in the thermal cloak are composed of layered, two-dimensional structures comprising copper as the high-conductivity material and epoxy as the low-conductivity material. Hence, we let k 1 = 400W/mK, k 2 = 0.3W/mK, d = 1 mm, and L = 35 mm in equation (5) for manufacturing consideration, leaving only L 1 , L 2 , and θ to be determined. For the chosen parameters, the current optimization problem is subject to box constraints on L 1 , L 2 , and θ (equations (9)-(11)). By solving the preceding optimization problem, we can obtain the exact parameters (see Supplementary Table S2). As a consequence of the optimal parameters shown in Supplementary Table S2, the effective thermal conductivity varies inside the different unit-cell thermal shifters, as shown in Fig. 8(b1-b3), and is generally anisotropic. To obtain the anisotropic thermal conductivity of the MMT-based TCC, we rotated the thermal conductivity in equation (6) by 90°, identical to the angle rotated by the unit cells of the rotating squares relative to each other (the tunable mechanism of the MMT-based TCC is displayed in the Supplementary Information). Thus, we obtain equations (12) and (13). In equations (12) and (13), when the rotating angle, ψ, is 0°, k ψ is the anisotropic thermal conductivity of the thermal cloak. However, when the rotating angle is 90°, k ψ is the anisotropic thermal conductivity of the thermal concentrator. Hence, for other MMT-based thermal metamaterials with tunable functionalities, the anisotropic thermal conductivity can be obtained using the coordinate transformation technique, which involves rotating the original thermal conductivity by 90°. Data availability statement. The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
4,603.2
2018-02-06T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
A Remote Raman System and Its Applications for Planetary Material Studies A remote Raman prototype with an adjustable excitation energy, intended to obtain a Raman signal with a good signal-to-noise ratio (SNR), to save power, and possibly to avoid destroying a target with high-energy pulses, which may have applications for Chinese planetary explorations, has been set up and demonstrated for detecting different minerals. The system consists of a spectrograph equipped with a thermoelectrically cooled charge-coupled device (CCD) detector, a telescope with 150 mm diameter and 1500 mm focal length, and a compact 1064 nm Nd:YAG Q-switched laser with an electrically adjusted pulse energy from 0 to 200 mJ/pulse. A KTP crystal was used for second harmonic generation of the 1064 nm laser to generate a 532 nm laser, which is the source of Raman scattering. Different laser pulse energies and integration times were used to obtain distinguishable remote Raman spectra of various samples. Results show that observed remote Raman spectra at a distance of 4 m enable us to identify silicates, carbonates, sulfates, perchlorates, water/water ice, and organics that have been found or may exist on extraterrestrial planets. Detailed Raman spectral assignments of the measured planetary materials and the feasible applications of the remote Raman system for planetary explorations are discussed. Introduction As a powerful spectroscopic analysis technique, Raman spectroscopy, which has been applied in many geoscientific areas including mineralogy, gemology, planetary analyses and space exploration, astrobiology and biomineralization, cultural heritage and archaeometry, can provide accurate and detailed molecular and structural information of Earth and planetary materials. Owing to its advantages of requiring no sample preparation, quick and non-destructive analyses, unambiguous phase identification, and low-mass, robust instrumentation for studying the mineralogy and mineral chemistry of rock and soil samples, many Raman spectroscopic studies on returned samples [1], meteorites [2] and planetary analogues [3] have been reported. However, almost all of these studies used traditional micro-Raman with a working distance of several centimeters or less. During planetary exploration, tools with a long detection distance are desirable. In the early 1960s, remote Raman was developed and employed in the detection of gases. Kobayasi and Inaba successfully observed Raman features of SO 2 and CO 2 from an oil smoke plume using a remote Raman system at a distance of 20 m [4]. In 1992, Angel employed a remote Raman system to identify solid inorganics and organics such as NaNO 3 , NaNO 2 , and acetaminophen at a distance of ten meters [5]. To date, several Raman systems have been proposed for either in situ or remote detection. Experimental Setup and Samples In this work, we built a remote Raman system at Shandong University using a compact Nd:YAG Q-switched laser source (Beamtech Optronics Co., Ltd., Beijing, China; Dawa-200 laser, 1064 nm, 0~200 mJ/pulse, 0~20 Hz, pulse width 9 ns, central laser spot divergence ~1 mrad, diameter ~6 mm), a spectrograph equipped with a CCD (Andor, DU416A-LDC-DD, Oxford Instruments Company, Oxford, UK) detector, and a telescope (CELESTRON NexStar 6SE, 150 mm diameter, 1500 mm focal length) shown in Figure 1. Commonly, shorter wavelength lasers will excite stronger Raman scattering because of the 1/λ 4 increase in Raman scattering cross-sections; however, sample degradation or fluorescence may appear.
Thus, 532 nm laser, which is generated by the second harmonic generation of 1064 nm laser using a frequency doubling crystal KTP (8 × 8 × 8 mm) with an exchange efficiency of 45-50%, was selected as the light source. The 532 nm green laser used in oblique geometry to excite Raman scattering was reflected onto samples by two 532 nm mirrors (45°). The oblique geometry delivers all the laser pulse power to the target and creates less near-field scattering at a short distance [10]. Laser spot diameter on samples, which is straight ahead of the telescope, is about 1.5 cm at a distance of 4 m. The laser energy on sample is about 0-65 mJ/pulse due to the energy loss caused by oblique geometry, which causes a larger angle of incidence (θ > 45°) shown in Figure 1. When the laser hits the samples, Raman signal will be generated and collected by the telescope and then focused onto a fiber probe (FP) by the convex lens (L). The reflected 532 nm laser is removed by a 532 nm Notch filter (NF) fixed between the telescope and L. An optical fiber with 600 µm diameter core transferred the Raman signal into the spectrograph, in which two volume holographic gratings are employed. The spectral resolutions of the two volume holographic gratings with a 50 µm slit were 5.48 cm −1 in the wavelength region of 531-614 nm and 4.8 cm −1 in the wavelength region of 605-699 nm, respectively. Raman scattering was first focused to pass through a 50 µm slit by lens combination and then transmitted into a collimated beam by lens, finally irradiated on the volume holographic grating, which will disperse Raman scattering onto CCD. While conducting Raman scattering measurement, CCD worked in CW mode, which means the detector is "on" during the integration time. All of the Raman measurements were conducted in our lab at 4 m and 20 Hz with different pulse energies as well as different integration time in a dark environment for obtaining a better SNR. With the purpose of testing the performance of this remote Raman system, we conducted a series of remote Raman measurements on minerals, analytical grade chemicals, deionized water, and water ice. Mineral samples (calcite, quartz, olivine, albite, K-feldspar) were collected by the petrographic laboratory in Shandong University. No cutting or polishing was performed with these mineral samples.
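For orientation, the spectral coverage quoted above can be converted into Raman shift with the standard wavelength-to-wavenumber relation; the short sketch below (Python, illustrative only and not from the original paper) shows that the two grating windows together span roughly 0-4500 cm −1 for 532 nm excitation, consistent with the 100-4000 cm −1 spectra reported later, and also quantifies the 1/λ 4 advantage of 532 nm over the 1064 nm fundamental.

```python
# Raman shift (cm^-1) for a scattered wavelength, given the excitation wavelength:
# shift = 1e7 * (1/lambda_exc - 1/lambda_scat), wavelengths in nm.
# This is the standard conversion, not a formula specific to this instrument.

def raman_shift_cm1(lambda_exc_nm: float, lambda_scat_nm: float) -> float:
    """Stokes Raman shift in cm^-1 for excitation/scattered wavelengths in nm."""
    return 1.0e7 * (1.0 / lambda_exc_nm - 1.0 / lambda_scat_nm)

if __name__ == "__main__":
    exc = 532.0  # nm, the frequency-doubled Nd:YAG line used here

    # Spectral windows of the two volume holographic gratings quoted in the text.
    for lo, hi in [(531.0, 614.0), (605.0, 699.0)]:
        print(f"{lo:.0f}-{hi:.0f} nm  ->  "
              f"{raman_shift_cm1(exc, lo):7.1f} to {raman_shift_cm1(exc, hi):7.1f} cm^-1")

    # Rough 1/lambda^4 scaling of the Raman cross-section: 532 nm vs. 1064 nm.
    print("532 nm vs 1064 nm cross-section ratio ~", (1064.0 / 532.0) ** 4)
```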
Analytical grade chemicals (e.g., epsomite (MgSO4·7H2O), KClO4, melanterite (FeSO4·7H2O), gypsum (CaSO4·2H2O), alunogen (Al2(SO4)3·18H2O), potassium nitrate (KNO3), potassium carbonate (K2CO3), NaClO4·H2O, Mg(ClO4)2·6H2O, L-Alanine, L-Phenylalanine, L-Glutamine and ethanol (C2H6O)) were purchased from the Sinopharm Chemical Reagent Beijing Co., Ltd. Rhomboclase (FeHSO4·4H2O) was synthesized using the method mentioned by Ling et al. [25]. We regarded these chemicals as pure samples. In order to obtain pure water ice, 70 mL deionized water held in a wide-mouth bottle of size 4.8 cm in diameter and 10.5 cm in height was placed into a freezer remaining at a temperature around −20 °C for 2 h. No bubbles were found in water ice. All liquids and powder samples were measured through glass vials of size 2 cm in diameter and 5 cm in height with caps. Results and Discussions Figure 2 displays the remote Raman spectra of CaCO 3 , K 2 CO 3 , and KNO 3 with 10 s integration time and 30 mJ/pulse laser (532 nm) incident to sample.
According to previous works [26,27], Raman peaks of KNO 3 in the remote Raman spectrum can be identified as follows: the strongest Raman peak at 1052 cm −1 is assigned to the ν 1 (A g ) mode of NO 3 −1 ; another A g mode observed at 1362 cm −1 is assigned to the ν 3 mode of NO 3 −1 ; the peak at 1346 cm −1 belongs to the ν 3 (B 1g ) mode of NO 3 −1 ; the peak at 716 cm −1 is attributed to the ν 4 (A g + B 1g ) mode of NO 3 −1 . Raman peaks in the wavenumber regions of 190-150 cm −1 and 310-285 cm −1 are attributed to relative translations between the cations and anionic groups [28]. Peaks at 690 and 712 cm −1 are the asymmetric bending mode ν 4 of K 2 CO 3 and CaCO 3 , respectively. The symmetric stretching mode ν 1 is observed at 1063 cm −1 for K 2 CO 3 and at 1085 cm −1 for CaCO 3 . The asymmetric stretching mode ν 3 is identified by peaks near 1407 and 1755 cm −1 in the remote Raman spectra of K 2 CO 3 and CaCO 3 , respectively. The peaks at 1786 cm −1 in K 2 CO 3 and 1755 cm −1 in CaCO 3 are ν 1 + ν 4 modes. Detailed assignments of the Raman spectra of KNO 3 , CaCO 3 and K 2 CO 3 are shown in Table 1. Based on these remote Raman spectra, we can distinguish carbonates with different cations with ease. For the same cation (e.g., K), nitrates and carbonates share similarities in the number of Raman peaks and their positions, although the stretching modes (ν 1 , ν 3 ) appear at lower wavenumbers for NO 3 (1052 cm −1 and 1362 cm −1 ) than for CO 3 (1063 cm −1 and 1407 cm −1 ). The remote Raman spectra of three perchlorates, NaClO 4 ·H 2 O, Mg(ClO 4 ) 2 ·6H 2 O, and KClO 4 , are shown in Figure 3.
All major Raman peaks of these three perchlorates can be identified according to their remote Raman spectra obtained with 10 s integration time and 40 mJ/pulse laser (532 nm) incident to sample. Usually, perchlorates would show three main Raman bands, the symmetric Cl-O stretching mode (ν 1 (A 1 )) in the wavenumber region of 950-930 cm −1 , Raman active deformation (ν 4 (T 2 )) of Cl-O between 635 and 625 cm −1 , and a theoretical Raman inactive deformation (ν 2 (E)) belonging to Cl-O within range of 470-445 cm −1 . A theoretical inactive vibrational mode, which displays a negligible intensity as the fourth Raman band of perchlorates caused by the anti-symmetric stretching vibration (ν 3 (T 2 )), was observed between 1150-1040 cm −1 . Raman peak positions of KClO 4 , NaClO 4 ·H 2 O, and Mg(ClO 4 ) 2 ·6H 2 O agree well with previous works [31] and are shown in Table 2 with vibrational modes. Perchlorates have been identified on Mars at the Phoenix landing site [32], Gale Crater [33], Viking sites [34,35], and in Martian meteorite EETA79001 [32] and ClO 4 −1 also has been found in Apollo lunar samples and carbonaceous chondrite meteorites [35]. The presence and distribution of perchlorate on Mars would have implications for Martian Cl cycles and the preservation of biosignatures [36]. Remote Raman with the ability of identifying perchlorates will be a desirable tool to reveal more perchlorates in our solar system. As a significant secondary mineral, sulfates may have important implications for past and present environmental evolutions of Mars. Different types of hydrous/anhydrous sulfates (e.g., Mg-, Ca-, Fe-, and Al-sulfates) have been found on Mars by orbital remote sensing and roving missions [37]. We conducted remote Raman measurements on five types of hydrous sulfates with 65 mJ/pulse laser (532 nm) incident to sample and 20 s integration time. As shown in Figure 4, the remote Raman system is able to clearly obtain the fingerprint Raman bands of FeSO4·7H2O, MgSO4·7H2O, Al2(SO4)3·18H2O, CaSO4·2H2O, and FeHSO4·4H2O from 100 to 4000 cm −1 .
The strongest peaks of the five hydrous sulfates at 977, 985, 992, 1008, and 1101 cm −1 are caused by the symmetrical stretching mode (ν 1 (SO 4 2− )) of SO 4 2− ions in FeSO4·7H2O, MgSO4·7H2O, Al2(SO4)3·18H2O, CaSO4·2H2O, and FeHSO4·4H2O, respectively. Detailed assignments of these hydrous sulfates are listed in Table 3. The ability of exact identification of hydrous salts would make it possible to acquire a more refined knowledge of the past and/or present aqueous environment of extraterrestrial planets using remote Raman spectrometers. Water displays strong Raman signals in the longer wavenumber region due to intramolecular stretching such as the symmetric and antisymmetric stretching vibrational modes. In deionized water Raman spectra obtained at room temperature, two strong broad Raman bands were observed around 3223 and 3448 cm −1 , which are attributed to the symmetric (ν 1 ) and antisymmetric stretching (ν 3 ) vibrational modes of the water molecule, respectively [40]. The increase of order in the ice structure will sharpen and shift the H-O-H symmetric stretching mode of water toward the shorter wavenumber region. A sharper Raman band near 3147 cm −1 [41], which is clear enough to distinguish ice and water, was observed in the remote Raman spectrum of water ice shown in Figure 5. Water is thought to be essential for supporting life; thus, the presence of water and/or water ice on a planet could make things easier for astronauts' drinking and living, as well as in situ resource utilization for the creation of oxygen and hydrogen-oxygen rocket fuel. Water ice on the surface of the Moon, Ceres, Mercury, Mars and other planets as well as moons has been reported mostly by visible and near infrared spectral remote sensing [42][43][44]. A remote Raman system carried on a rover would provide positive evidence for the presence and distributions of water ice, which is conducive to further exploration and utilization of water and/or water ice on these celestial bodies. Organics, which might be indicative of life, are always attractive targets in planetary explorations. Definite detection of organics on extraterrestrial planets and moons might suggest the presence of life. We conducted remote Raman measurements on liquid ethanol (C 2 H 6 O), solid L-alanine (C 3 H 7 NO 2 ), L-phenylalanine (C 9 H 11 NO 2 ), and L-glutamine (C 5 H 10 N 2 O 3 ), shown in Figures 6 and 7. The weak peak at 431 cm −1 in Figure 6 is caused by the bending vibration of CCO. The strongest peak at 884 cm −1 is attributed to the stretching vibrational mode of CCO, and peaks at 1053 and 1097 cm −1 are assigned to the deformation mode of CCO. The peak around 1456 cm −1 is produced by bending vibrational modes of CH 3 and CH 2 . The shoulder at 1486 cm −1 is the bending vibrational mode of CH 3 . The symmetric stretching mode of CH 3 is observed at 2873 cm −1 .
The symmetric and asymmetric CH 3 stretching modes produce peaks at 2924 and 2972 cm −1 , respectively [45]. We did not observe water peaks because of the 99% volume ratio of the ethanol. L-alanine (C 3 H 7 NO 2 ), L-phenylalanine (C 9 H 11 NO 2 ), and L-glutamine (C 5 H 10 N 2 O 3 ), serving as basic elements for the various proteins, would display many Raman peaks due to many different kinds of vibrational modes. Their Raman spectra are shown in Figure 7. For L-alanine, the strongest Raman peak at 846 cm −1 is attributed to the (CCH 3 ) symmetric stretching vibration, and the 532 cm −1 peak is due to the rocking motion of the CO 2 − group. The rocking motion of the CH 3 group appeared at 1010 and 1024 cm −1 . A Raman peak at 1112 cm −1 is attributed to the CN stretching vibration, and 1304 cm −1 is caused by CH deformation. The CH 3 symmetric deformation band (1357 cm −1 ) and asymmetric deformation bands (1457 and 1478 cm −1 ) were also observed. In the Raman spectra of L-Phenylalanine, the intense Raman peak at 1004 cm −1 is caused by C-C symmetric stretching in the ring. The peak at 621 cm −1 exhibits the presence of O-C=O in-plane bending modes. The wagging vibration of CH 2 is observed near 832 cm −1 , and 854 cm −1 is assigned to NH 2 deformation. 1035 cm −1 is attributed to C-H in-plane deformation in the ring. The Raman band corresponding to C-N stretching is observed at 1163 cm −1 . Raman peaks at 1216, 1309, 1414, and 1450 cm −1 are assigned, respectively, to ring breathing, CH 2 wagging vibration, symmetric stretching vibration of COO−, and CH 2 scissor vibration. C-C stretching in the ring displays two Raman peaks around 1589 and 1605 cm −1 . From the Raman spectra of L-Glutamine, we can find that the ν(CN) vibrational band can be found at 843 cm −1 and the ν(CC) vibrational bands are at 995 and 1047 cm −1 . The δ(CH 2 ) rocking vibrations (891 cm −1 ) and δ(CH 2 ) twist vibrations were also obtained. Raman peaks at 1093 and 1328 cm −1 are attributed to the symmetric ν(CN) stretching vibrational band and the CH deformation vibration. Two δ(NH 3 + ) rocking vibrations were observed at 1129 and 1162 cm −1 . The detailed assignments of these three organics are listed in [46]. The ability for distinguishing organics of our remote Raman system has been demonstrated by experiments with L-alanine, L-phenylalanine, and L-glutamine.
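The identifications above all rest on matching measured band positions against known reference values. The sketch below (Python) illustrates this in the simplest possible way, using only the strongest ν 1 -type bands quoted in this article as a toy reference table; the tolerance value and the single-peak matching strategy are assumptions for illustration, whereas a practical pipeline would compare full peak lists against a spectral database.

```python
# Illustrative phase identification from the strongest (nu_1-type) Raman band only.
# Reference positions (cm^-1) are the values quoted in this article; a real tool
# would match full peak lists against a much larger spectral database.

NU1_REFERENCE = {
    "KNO3 (nitrate)":             1052,
    "K2CO3 (carbonate)":          1063,
    "CaCO3 (calcite)":            1085,
    "KClO4 (perchlorate, Cl-O stretch)": 942,   # nu_1 region 950-930 cm^-1
    "FeSO4.7H2O (melanterite)":    977,
    "MgSO4.7H2O (epsomite)":       985,
    "Al2(SO4)3.18H2O (alunogen)":  992,
    "CaSO4.2H2O (gypsum)":        1008,
    "FeHSO4.4H2O (rhomboclase)":  1101,
}

def identify(peak_cm1: float, tolerance: float = 5.0):
    """Return candidate phases whose reference band lies within `tolerance` cm^-1."""
    hits = [(abs(peak_cm1 - ref), name) for name, ref in NU1_REFERENCE.items()
            if abs(peak_cm1 - ref) <= tolerance]
    return [name for _, name in sorted(hits)]

if __name__ == "__main__":
    for measured in (985.3, 1051.0, 1085.8):
        print(measured, "->", identify(measured) or ["no match within tolerance"])
```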
Figure 8 shows remote Raman spectra of olivine, quartz, albite, and K-feldspar with 30 s integration time and 65 mJ/pulse laser (532 nm) incident to sample. The dominant double Raman peaks occurring near 824 and 856 cm −1 are characteristic Raman features of olivine arising from coupled symmetric and asymmetric stretching vibrational modes of SiO 4 tetrahedra. Raman spectra related to quartz have been investigated in detail under different pressures and temperatures in previous studies [47,48]. In our remote Raman spectra of quartz obtained at room temperature, Raman peaks from quartz agree well with that from α-quartz and are identified as the fundamental frequencies of A (206, 357, 465 and 1084 cm −1 ) and E (128, 264, 394, 697, 795-806, 1065 and 1162 cm −1 ) modes. Feldspars, which include different members, are the most abundant among all minerals and are thought to be a primary tool for classifying igneous rock. Raman spectra of feldspars should be similar with each other due to closely related members. Alkali feldspar series are between Na(Si 3 Al)O 8 and K(Si 3 Al)O 8 , and the plagioclase feldspar series are between Na(Si 3 Al)O 8 and Ca(Si 2 Al 2 )O 8 .
It is a significant index whether a remote Raman system is sufficiently sensitive to distinguish the small changes in structure and chemistry occurring between members of feldspars. K-feldspar and albite were selected as two feldspar members to conduct remote Raman measurement. In the region of 600 to 450 cm −1 , the double Raman bands caused by a mixed Si-O-Si (or Si-O-Al) bending/stretching mode are observed at 481 and 508 cm −1 in albite, but triple characters near 457, 476 and 512 cm −1 are found in K-feldspar [49]. Comparing the Raman spectra of K-feldspar and albite collected by our system, we concluded that our remote Raman system is capable of identifying different feldspar members. Raman features of olivine, albite, and K-feldspar are useable for determining mineralogy, providing mineral chemistries as well as aiding lithologic distinction [49,50]. The shift of the most intense Raman peak of quartz near 465 cm −1 has been used to estimate impact pressure [51], which would cause a distortion of the SiO 2 structural framework. Applications for estimating impact pressure using this Raman peak shift have been demonstrated in planetary materials, e.g., lunar soils [52] and lunar meteorites [53]. An instrument that is individually capable of providing mineralogic and compositional information as well as impact pressure is strongly and scientifically desired in planetary explorations. The Raman spectra of four samples (in Figure 9) were selected to demonstrate the experimental repeatability, which is crucial for instrument development. Figure 9 indicates that the repeatability of our remote Raman system is reliable. In Figure 9a, the SNR increases when increasing the laser energy from 25 to 50 mJ/pulse with 5 s integration time, and the Raman signal got saturated with 10 and 15 s integration time when the laser energy is 50 mJ/pulse.
Similar results could be found in Figure 9b. We can also find that the SNR increases while prolonging the integration time with the same laser energy in Figure 9c and increases with the increase of laser energy with the same integration time. In conclusion, stronger Raman peaks and higher SNR could be obtained by either prolonging the integration time (Figure 9a-c) or increasing the pulse energy (Figure 9d). However, some saturated Raman signals were observed in Figure 9a,b, e.g., 1052 cm −1 in KNO 3 obtained with 50 mJ/pulse laser (532 nm) and 10-15 s integration time, and 942 cm −1 in KClO 4 obtained with 55-65 mJ/pulse laser (532 nm) and 10 s integration time. Thus, long integration time and/or high pulse energy would cause higher power consumption and unnecessary saturated Raman signals. Distinguishable Raman spectra can be obtained using 25 mJ/pulse laser (532 nm) and 5 s for KNO 3 , 40 mJ/pulse laser (532 nm) and 5 s for KClO 4 , 28 mJ/pulse laser (532 nm) and 15 s for CaSO 4 ·2H 2 O, and 30 mJ/pulse laser (532 nm) and 30 s for albite in our experiment. It is not our goal to find out the minimum pulse energy/integration time for excitation of the Raman signal for each sample, but to propose a remote Raman system with an adjustable excitation energy for reducing power consumption, obtaining Raman spectra with good SNR, and possibly avoiding damage of targets. The root mean square (RMS) SNR values shown in Figure 9 were calculated using the formula SNR = (I peak − I background )/N rms, background [54]. The strongest peaks were selected as I peak , and I background was the strongest background intensity in a region where no Raman signal was detected. N rms, background is the RMS value of the region mentioned before. The RMS SNR values of some Raman peaks were not calculated due to the saturation that led to flat-top peaks with no accurate intensity value.
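The sketch below (Python) shows one way to evaluate this figure of merit on a spectrum, assuming the standard form SNR = (I peak − I background )/N rms,background implied by the variable definitions above; the synthetic spectrum, the choice of background window, and the mean-removal step before taking the RMS are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

def rms_snr(intensity: np.ndarray, peak_idx: int, bg_slice: slice) -> float:
    """SNR = (I_peak - I_background) / N_rms,background.

    I_background is the strongest intensity in a signal-free region; N_rms,background
    is taken here as the RMS of that region after removing its mean (i.e., the RMS
    noise). The article's exact convention for the RMS term may differ slightly.
    """
    i_peak = float(intensity[peak_idx])
    background = intensity[bg_slice]
    i_background = float(background.max())
    n_rms = float(np.sqrt(np.mean((background - background.mean()) ** 2)))
    return (i_peak - i_background) / n_rms

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shift = np.arange(200.0, 1400.0, 1.0)                              # Raman shift axis, cm^-1
    spectrum = 50.0 + rng.normal(0.0, 5.0, shift.size)                 # baseline + detector noise
    spectrum += 400.0 * np.exp(-0.5 * ((shift - 1052.0) / 4.0) ** 2)   # mock nu_1 band near 1052 cm^-1

    peak_idx = int(np.argmax(spectrum))
    background_region = slice(0, 400)   # 200-600 cm^-1: peak-free in this mock spectrum
    print(f"RMS SNR ~ {rms_snr(spectrum, peak_idx, background_region):.1f}")
```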
Conclusions A remote Raman prototype working in oblique geometry was developed at Shandong University, aiming at providing suggestions on planetary exploration analysis techniques for Chinese planetary exploration. The Raman spectra acquired by our remote Raman system at a distance of 4 m demonstrate the ability of remote Raman for detecting silicates, carbonates, anhydrous/hydrous minerals, water/water ice, and organics. Because Raman spectra are sensitive to the molecular structure of a sample, they enable us to identify the mineralogy and deduce the mineral chemical composition. For planetary mineralogy detection and survey, a remote Raman system has the advantages of requiring no sample preparation, quick and remote analyses, and unambiguous phase identification, which can help us identify minerals, organics, water/water ice, and other volatiles (e.g., CO 2 and H 2 S) on the Moon, Mars, Venus, asteroids, and icy satellites, etc. Author Contributions: Z.L. designed the entire set of experiments. H.Q. and X.Q. built the remote system and conducted the experiments. H.Q. analyzed the obtained spectra and drafted this manuscript with the help from Z.L., Y.X., C.L. and H.C. All co-authors contributed to discussions, interpretations, and final writing. All authors have read and agreed to the published version of the manuscript.
8,453.2
2021-10-21T00:00:00.000
[ "Physics", "Chemistry" ]
Epigenetics Approaches toward Precision Medicine for Idiopathic Pulmonary Fibrosis: Focus on DNA Methylation Genetic information is not transmitted solely by DNA but by the epigenetics process. Epigenetics describes molecular missing link pathways that could bridge the gap between the genetic background and environmental risk factors that contribute to the pathogenesis of pulmonary fibrosis. Specific epigenetic patterns, especially DNA methylation, histone modifications, long non-coding, and microRNA (miRNAs), affect the endophenotypes underlying the development of idiopathic pulmonary fibrosis (IPF). Among all the epigenetic marks, DNA methylation modifications have been the most widely studied in IPF. This review summarizes the current knowledge concerning DNA methylation changes in pulmonary fibrosis and demonstrates a promising novel epigenetics-based precision medicine. Introduction IPF is a chronic, devastating, and irreversible lung disease that is characterized by microinjury-induced alveolar epithelial cell stress, progressive pathogenic myofibroblast differentiation, imbalanced macrophage polarization, and the extensive deposition of the extracellular matrix (ECM) [1][2][3]. The progression of patients with IPF is associated with lung function decline, progressive respiratory failure, high mortality, recurrent acute exacerbations, and an overall poor prognosis [4][5][6]. The morphological hallmark of IPF on histopathological and/or radiological is usual interstitial pneumonia (UIP), composed of heterogeneous areas of normal-appearing lung intermixed with collagenized fibrosis in sub-pleural and paraseptal, a honeycombing pattern, and ECM-producing myofibroblasts termed fibroblast foci (FF) [7,8]. A recent hypothesis stated that recurrent injuries drive the aberrant activation of epithelial cells to transdifferentiate into fibroblast epithelial-mesenchymal transition (EMT), which might induce fibrosis independently of inflammatory events [9,10]. Even though there is no implicit mechanism, several shreds of evidence emphasize that alveolar epithelial injury induced by environmental triggers results in lung fibrosis. Recurrent microenvironment injury on senescent epithelial cells in genetically susceptible individuals leads to the aberrant activation of fibroblasts, accumulating ECM, and fibrosis [11]. The pathogenic mechanisms involved in the initiation, development, and progression of IPF are unclear. However, many studies have demonstrated that dynamic interactions of genetic susceptibility, environmental factors, and host risk factors in older individuals contribute to epigenetic pro-fibrotic reprogramming, resulting in the development of IPF [12]. Hey et al. found a strong association between the microenvironment-driven epigenetic changes that could induce macrophage inflammation and polarization [13]. Omics-based approaches, including high-throughput technologies that provide snapshots of a holistic view of the molecules that make up a cell, tissue, or organism, consist of (1) genomics, measuring deoxyribonucleic acid (DNA) sequence variation; (2) epigenomics, focusing on the genome-wide characterization of reversible modifications of DNA or DNA alterations; (3) transcriptomics, evaluating the standard of ribonucleic acid (RNA) expression; (4) proteomics, determining protein expression or its chemical changes; (5) metabolomics, assessing metabolite/small molecule levels; and (6) microbiomics, investigating all the microorganisms of a given community [14,15]. 
Pulmonary disease omics studies mainly focus on tissue-and cell-specific omics data and have identified several fundamental mechanisms that underlie pulmonary biological processes, disease endotypes, and appropriate novel therapeutics for selected individuals [16]. Epigenomics is a technique for analyzing gene expression through epigenetic mechanisms, including DNA methylation, RNA, and histone modification [17]. These components interact and stabilize each other; therefore, the disruption of epigenetic nucleosomes can lead to their inappropriate expression, resulting in epigenetic disorders [18]. Epigenetics and epigenomics help explain how our environment affects our phenotype. Epigenomics studies commonly use methods such as Hi-C, a comprehensive technique developed to capture chromosome conformation, and another tool for whole genome methylation profiling: MBD-isolated genome sequencing (MiGS) [19]. Essential technical and experimental parameters that should be considered when designing epigenomic experiments are reviewed in detail by these authors [20]. Precision or personalized medicine is designed for patients who do not respond to conventional therapy due to genetic heterogeneity and epigenetic alterations [21]. The research-based precision medicine approach identifies the complex genetic, molecular, environmental, and behavioral variables that could provide greater efficacy and tolerability in IPF patients [22]. This review provides a brief outlook on identifying the epigenetic marks of IPF via epigenome-wide association studies that might also inform precision medicine approaches. Epigenetics Genetics focuses on heritable changes in gene activity or function due to the direct alteration of the DNA sequence. In contrast, epigenetic mechanisms, such as DNA methylation, histone modifications, or miRNA expression, regulate heritable gene activity or function changes that can be transferred without modifying the DNA sequence itself [23,24]. Epigenetics provides beneficial information concerning the adaption of genes to environmental changes or other stresses [25]. Furthermore, epigenetic modifications can be transferred from generation to generation, providing an alternative mechanism for disease inheritance and risk [26,27]. Whereas epigenetics refer to how and when single genes or sets are turned on and turned off, epigenomics analyzes epigenetic changes associated with genome-wide profiles and effects [28]. Several factors and processes affect the imbalance of epigenetic mechanisms, including development in utero and during a lifetime, environmental factors, aging, and lifestyle [29,30]. In comparison with genetic changes, epigenetic changes are acquired in a gradual rather than a short process [31]. Epigenetics is responsible for numerous cellular functions, such as the regulation of gene expression and transcription, cell growth and differentiation, and chromosome remodeling and inactivation [32][33][34]. Many studies demonstrated an epigenetic link between environmental stimuli and gene expression as an adaptation of genes in response to ecological changes without modifying the DNA sequence [35]. However, those processes become unbalanced in many diseases, such as cancer and fibrosis. Aberrant epigenetic regulations have been reportedly associated with the progression of chronic respiratory diseases, such as chronic obstructive pulmonary disease (COPD) and lung fibrosis [36][37][38]. 
Many epigenomics studies in IPF patients have reported aberrant epigenetic influences, including DNA methylation, miRNA, and histone modifications, which mediate alterations to the DNA without impacting the genomic sequence [38][39][40]. Our review systematically provides current knowledge concerning various epigenetic aberrations in IPF, particularly DNA methylation. Notably, DNA methylation interacts with histone modifications and miRNAs to activate or silence gene expression, contributing to disease development and progression. DNA Methylation The mammalian DNA methylome pattern is a dynamic process of inheritable pre-transcriptional modifications that is balanced by two antagonistic processes: methylation and demethylation [41]. DNA methylation is typically reconfigured after fertilization during zygote formation and gametogenesis and then re-established during embryogenesis [42]. DNA methylation is essential for normal development and several vital processes, including genomic imprinting, X-chromosome inactivation, the suppression of repetitive element transcription and transposition, and the preservation of chromosome stability [43][44][45]. Therefore, DNA methylation is the most intensively investigated epigenetic element that influences gene activities. The environment can affect physiological or pathological changes in DNA methylation, resulting in modified global and gene-specific DNA methylation [46]. DNA methylation regulates chromosomal and extrachromosomal DNA functions in response to environmental exposures [47]. DNA methylation is a reversible process that refers to the transfer of a methyl (CH3) group from S-adenosyl methionine (SAM) to the fifth carbon of a cytosine of a CpG dinucleotide, forming 5-methylcytosine (5mC) [48] (Figure 1). DNA methylation can occur at cytosines in any context of the genome; however, most of the genome is depleted of CpG sites. Cytosine methylation in mammalian cells occurs predominantly in CpG dinucleotide clusters, called CpG islands, regions enriched in CpGs that coincide with gene promoters [49]. DNA methylation is also found at non-CpG sites (non-CpG methylation), including CpA, CpT, and CpC [50]. DNA methylation is catalyzed by a family of DNA methyltransferases (DNMTs), including DNMT1, DNMT2, DNMT3a, DNMT3b, and DNMT3L, which are responsible for setting up and maintaining methylation patterns at specific genome regions [51]. Furthermore, DNA methylation is also controlled by methyl-binding proteins (MBPs), such as the methyl-CpG-binding domain (MBD) protein family, Kaiso and Kaiso-like proteins, and the SRA domain proteins that interact with MBD proteins [52,53]. DNA demethylation can occur actively or passively in mammals. Active DNA demethylation means the direct removal of a methyl group from cytosines through the oxidation of 5mC by ten-eleven translocation enzymes (TETs) to form 5-hydroxymethylcytosine (5hmC), 5-formylcytosine (5fC), and 5-carboxylcytosine (5caC), followed by the excision of 5fC and 5caC by thymine DNA glycosylase (TDG) coupled with base excision repair [54,55] (Figure 1). This alteration can also take place passively through the reduced or inhibited activity of DNMTs during DNA replication [56,57]. Demethylation is linked to transcription factor binding and proper gene expression [58]. According to their functions, the enzymes that establish, recognize, and remove DNA methylation, such as DNMTs, MBPs, and TETs, can be referred to as writers, readers, and erasers, respectively [23,59]. 
These epigenetic writers catalyze the process and introduce various chemical modifications to the DNA and histones; readers identify, recognize, and bind to methyl groups to influence gene expression; and erasers modify and remove the methyl group [23,60]. DNA methylation regulates pro- or anti-fibrotic gene expression in organ fibrosis [59]. Hypomethylated gene promoters correspond to increased gene expression, while hypermethylated promoters correspond to decreased gene expression [61]. (Figure 1 legend: (2) active demethylation is the direct removal of a methyl group from cytosines through the oxidation of 5mC by ten-eleven translocation enzymes (TETs) to form 5-hydroxymethylcytosine (5hmC), 5-formylcytosine (5fC), and 5-carboxylcytosine (5caC), followed by the excision of 5fC and 5caC by thymine DNA glycosylase (TDG); (3) passive demethylation corresponds to the reduced or inhibited activity of DNMTs during DNA replication.) DNMTs DNA methylation patterns are established through de novo methylation by DNMT3a and DNMT3b and maintained at methylated cytosines by DNMT1 [62]. However, many partners of DNMTs are involved in both the regulation of DNA methylation and DNMT recruitment, such as the DNMT3L/DNMT3a complex, which is related to de novo DNA methylation, while global DNA methylation is maintained by the DNMT1/PCNA/UHRF1 complex [63]. 
DNMT1 preferentially methylates hemimethylated DNA and is responsible for copying DNA methylation patterns from the mother strand to the daughter strand during the S phase of DNA replication [64]. The role of DNMT2 in DNA methylation remains unclear, though it appears to be important for tRNA methylation [65]. In contrast to DNMT1, DNMT3a and DNMT3b prefer unmethylated CpG dinucleotides, perform de novo methylation during embryogenesis, and set up genomic imprints during germ cell development [66]. DNMT3L acts as a cofactor and stimulates de novo methylation by DNMT3a or DNMT3b [67]. It should be noted that the maintenance vs. de novo function of these enzymes is not absolute; maintenance and de novo methylation usually cooperate to maintain a stable methylation pattern [68]. DNMTs have exhibited essential roles in fibrosis of multiple organs. The DNMT1-mediated suppression of suppressor of cytokine signaling 3 (SOCS3) leads to the deregulation of STAT3, promoting cardiac fibroblast activation and collagen deposition in diabetic cardiac fibrosis [69]. The down-regulation of SOCS3 promotes the activation of STAT3-related fibroblast-to-myofibroblast transition. The DNMT3A-induced silencing of SOCS3 expression stimulates TGF-β-dependent fibroblast activation in experimental systemic sclerosis (SSc) murine models [70]. A recent study found that peroxisome proliferator-activated receptor-γ (PPARγ) expression was low while DNMT1/DNMT3a expression and PPARγ promoter hypermethylation were increased in IPF patients; accordingly, the inhibition of DNA methylation restored PPARγ and attenuated pulmonary fibrosis [71]. The aberrant expression patterns of DNMTs are associated with pulmonary fibrosis, although the exact mechanisms underlying this pathogenesis remain elusive. MBPs DNA methylation has an essential regulatory function in stably suppressing non-transcribed genes in differentiated adult somatic cells. The detailed mechanism by which DNA methylation represses gene expression remains unknown. The repressive effects of DNA methylation on gene expression involve two mechanisms: directly, by DNA methylation preventing the binding of transcriptional regulators, and indirectly, via MBPs binding methylated DNA and recruiting co-repressors, such as histone deacetylases (HDACs), to silence the genes [59]. MBPs recognize DNA methylation and translate the DNA methylation signal into appropriate functional states, such as chromatin organization and transcriptional repression [53]. Three families of MBPs are involved in gene repression or in inhibiting the binding of transcription factors to DNA [23]. MBPs not only bind specifically to methylated CpG dinucleotides but also participate in linking DNA methylation with histone modifications [72]. The most widely studied MBPs are the MBD protein family, including MeCP2, MBD1-MBD6, SETDB1, SETDB2, BAZ2A, and BAZ2B [73]. MeCP2 is a well-characterized MBD protein whose biochemical properties have been well defined. MeCP2 controls myofibroblast transdifferentiation and can inhibit fibrosis. MeCP2 inhibits α-tubulin acetylation-related fibroblast proliferation in cardiac fibroblasts through HDAC6 [74]. MeCP2 expression was increased in macrophages, and inhibiting MeCP2 expression with siRNA suppressed M2 macrophage polarization [75]. MeCP2 appears to mediate both pro-fibrotic and anti-fibrotic effects, although more studies are needed to define its role in the pathogenesis of pulmonary fibrosis. 
TETs The TET enzymes TET1, TET2, and TET3 mediate active DNA demethylation. Although the role of TETs in organ fibrosis is not clear, many studies have reported associations between DNA demethylation, TET enzyme activity, and fibrosis. TET3 and TGF-β are pro-fibrotic factors, and the inhibition of TET3 signaling attenuates hepatic fibrosis by abrogating the TET3/TGF-β1 positive feedback loop that promotes pro-fibrotic gene expression [76]. In contrast, other studies have demonstrated an anti-fibrotic role of TETs in kidney fibrosis. Hypoxia triggers renal fibrosis through the reduced expression of TETs and the induction of Klotho methylation; accordingly, the administration of low-dose sodium hydrosulfide (NaHS) restores TET activity [77]. DNA Methylation in Fibrosis Extensive alterations in DNA methylation profiles are known to be involved in the pathogenesis of pulmonary fibrosis. A DNA methylation microarray study demonstrated higher DNA methyltransferase expression in lung tissue samples of IPF patients [81]. IPF fibroblasts exhibited significant heterogeneity in global DNA methylation patterns compared to non-fibrotic control cells [82]. Additionally, a study identified extensive alterations in gene expression-associated DNA methylation that were involved in fibroproliferation [83]. Recently, a genome-wide DNA methylation study found that CpG methylation affected genes in the cell adhesion, molecule binding, chemical homeostasis, surfactant homeostasis, and receptor binding categories in the pathogenesis of IPF [84]. Global methylation patterns in IPF are very different from those of control samples and significantly overlap with methylation changes observed in lung adenocarcinoma samples [85]. DNA methylation biomarkers also overlap between IPF, other ILDs, cancer, and COPD; hence, it is unlikely that molecular discrimination between these diseases can be achieved using a single marker [86]. Interestingly, low-methylation lung squamous cell carcinoma (SCC) is significantly correlated with IPF and a poor outcome [87]. Cigarette smoking-associated aberrant methylation also contributes to pulmonary fibrosis [88]. Many cell types are well known to be involved in the process of pulmonary fibrosis. The essential cells driving the development and progression of pulmonary fibrosis are epithelial cells, fibroblasts and myofibroblasts, and alveolar macrophages. Epigenetic regulation operates in a variety of ways; this review focuses on DNA methylation in fibroblasts, epithelial cells, and macrophages. DNA Methylation and EMT EMT is a cellular and molecular process in which epithelial cells lose their epithelial identity (apical-basal polarity and adhesion), characterized by down-regulated epithelial markers, including E-cadherin, occludin, and claudin-1. In contrast, fibroblast-specific genes, such as α-SMA, N-cadherin, fibroblast-specific protein 1 (FSP-1), and type I collagen, are up-regulated [10,89]. A hallmark of EMT is the repression of E-cadherin, a transmembrane glycoprotein encoded by the epithelial marker gene CDH1. DNA methylation might be involved in regulating EMT. During EMT induction, EMT transcription factors (EMT-TFs), including ZEB1, SNAIL, and TWIST, interact with DNMTs to drive the hypermethylation of CpG islands in the CDH1 promoter [90,91]. DNA hypermethylation of CDH1 by DNMTs is correlated with tumor progression [90,92]. 
In addition, the knockdown of DNMTs by transfection of siRNA against DNMT1, DNMT3a, or DNMT3b reversed the TGF-β1-induced suppression of E-cadherin expression and the induction of α-SMA, vimentin, and fibronectin expression [93]. In normal lung epithelial wound healing, this process ends with the apoptosis of myofibroblasts and the reduction of inflammation. Yet aberrant responses to tissue injury may progress to lung fibrosis. Bidirectional EMT cross-talk assists the pro-fibrogenic positive feedback loop, whereby epithelial cells become "vulnerable and sensitive to apoptosis" and myofibroblasts become "apoptosis-resistant and immortal", resulting in fibrosis progression instead of wound resolution [94]. DNA methylation also mediates the down-regulation of genes involved in apoptosis. The hypermethylation of the Caspase 8 promoter by DNMT1 and DNMT3b was associated with apoptosis resistance in cancer cells [95,96]. Cisneros et al. showed that the expression of pro-apoptotic p14ARF is silenced by hypermethylation in fibroblasts of patients with IPF [97]. Along with these changes, DNA methylation is linked with the dysregulation of apoptosis and EMT, leading to the development of lung fibrosis (Figure 2). DNA Methylation and Myofibroblast Differentiation Under normal physiology, in response to epithelial injury, cells release various pro-inflammatory and fibrotic cytokines that are responsible for local inflammation and the activation of fibroblasts. Fibroblasts are mesenchymal cells that are recruited to and accumulate at the injured site and undergo a change in phenotype to highly proliferative and contractile myofibroblasts. Lung myofibroblasts are heterogeneous in terms of their origins. The predominant sources of myofibroblasts are resident fibroblasts (lipofibroblasts, matrix fibroblasts, and alveolar niche cells) and pericytes (residing within basal membranes or perivascular linings), along with minor sources such as hematopoietic CXCR4+ fibrocytes, alveolar epithelial cells (AECs), endothelial cells (ECs), and mesenchymal stem cells (MSCs) [98]. Under the influence of environment-specific factors and cytokines, such as TGF-β, fibroblast-to-myofibroblast transition/transdifferentiation (FMT) changes the behavior/phenotype of lung resident fibroblasts into another type of differentiated cell, with downstream effects at the tissue level [99]. Myofibroblasts, characterized by the expression of contractile α-SMA, regulate connective tissue remodeling by producing and modifying ECM components, such as fibronectin and collagen [100]. Furthermore, one of the mechanisms that may account for FMT is DNA methylation. DNA methylation modulates myofibroblast differentiation (Figure 3). Several genes and signaling pathways, such as those involving ErbB, focal adhesion, and MAPK, underlie mechanisms for DNA methylation alteration in myofibroblast differentiation and influence the synthesis of ECM [101]. 
DNA methylation regulates gene expression to facilitate the formation of fibroblastic foci and lung fibrosis [47]. DNMT is an essential regulator of α-SMA gene expression during myofibroblast differentiation, although the specific effect of DNA methylation in regulating the α-SMA gene is unknown. During the development of cardiac fibrosis, He et al. found that TGF-β could inhibit DNMT1-mediated DNA methylation, leading to the overexpression of α-SMA [102]. TGF-β1 also regulates lung fibroblast differentiation through the hypermethylation of the "fibrosis suppressor" gene thymocyte differentiation antigen-1 (THY-1); DNMT1-mediated silencing of THY-1 under TGF-β1 stimulation permits α-SMA fiber formation [103]. In addition, the expression of α-SMA was significantly increased by adding IL-6 [104]. A recent study revealed that DNMT1 knockdown suppressed the DNA methylation-related myofibroblast phenotype by inhibiting an IL-6/STAT3/NF-κB positive feedback loop [105]. MBD protein families have also been suggested to play an essential role in myofibroblast differentiation. The hypermethylation of the transcriptional regulator c8orf4 decreased the capacity of fibrotic lung fibroblasts to up-regulate COX-2 expression and COX-2-derived PGE2 synthesis [106]. Evidence shows that MeCP2 binds to the IκBα promoter and induces the α-SMA promoter [107,108]. 
The inhibition of MeCP2-associated TGF-β1 signaling significantly blocks the expression of α-SMA and the production of fibronectin during myofibroblast differentiation [109]. In addition, suppressing MeCP2 or MBD2 might reduce pro-fibrotic factors. Wang et al. found higher MBD2 levels in different types of pulmonary fibrosis patients, including COVID-19, systemic sclerosis-associated interstitial lung disease, and IPF; accordingly, deleting the MBD2 gene attenuated bleomycin-induced lung injury and fibrosis [110]. MBD2 promoted fibroblast differentiation by repressing the expression of erythroid differentiation regulator (Erdr) 1 [111]. Interestingly, other studies showed anti-fibrotic effects of MBD protein family members. An in vitro study of diffuse cutaneous scleroderma (SSc) showed that the overexpression of MeCP2, via modulation of the genes PLAU, NID2, and ADA, suppressed fibroblast proliferation, migration, and myofibroblast differentiation [112]. Perhaps the final effect of DNA methylation on pulmonary fibrosis depends on the hypomethylation of pro-fibrotic genes and the hypermethylation of anti-fibrotic genes [111]. This evidence reveals that the role of epigenetic regulation of gene transcription in myofibroblast differentiation is complex. Hypoxia can also alter DNA methylation patterns and contribute to the fibrotic process. Increased global DNA methylation was detected in hypoxic fibroblasts and was associated with increased DNMT1 and DNMT3A expression and decreased THY-1 expression [113,114]. Hypoxia-altered DNA methylation is characterized by the increased expression of TGF-β1 and DNMT1 and significantly decreased RASAL1 expression [115]. Chronic hypoxia also induces oxidative stress. Recently, the administration of the antioxidant enzyme extracellular superoxide dismutase (EC-SOD) has been shown to alleviate the hypoxia-induced DNA methylation of Ras association domain family 1 isoform A (RASSF1A) through the Ras/ERK pathway [116]. 
DNA Methylation and Macrophage Polarization Pulmonary macrophages consist of monocyte-derived alveolar macrophages (AMs) and lung tissue-resident alveolar or interstitial macrophages (IMs). Both lung tissue-resident macrophages and monocyte-derived macrophages can be polarized into classically activated macrophages (M1) or alternatively activated macrophages (M2), according to their capacity to induce inflammatory or anti-inflammatory immune responses, respectively [117]. The heterogeneity of macrophages regulates the development of pulmonary fibrosis from the early injury phases through the fibrotic phase. The M2 macrophage phenotype, rather than the M1 phenotype, is involved in the progression of lung fibrosis [118]. After the resolution of lung injury, monocyte-derived AMs persist and express higher pro-inflammatory and pro-fibrotic functions; accordingly, the depletion of monocyte-derived AMs ameliorates lung fibrosis [119]. DNA methylation alterations in IPF lungs have been well documented, but the role of DNA methylation in macrophages needs to be better defined. Many studies have shown that DNA methylation plays a fundamental role in macrophages, including the regulation of injury-induced inflammation and the control of fibroblast activation in the wound-healing response. Chen et al. examined DNA methylation changes in lung macrophages and found that changes in DNA methylation patterns drove the dysregulation of innate immune cells in cystic fibrosis [120]. Profiling DNA methylation from healthy subjects and IPF patients showed that aberrant macrophage polarization plays a crucial role in developing IPF, but the epigenetic alterations were not associated with accelerated aging [38]. DNMTs are associated with the activation of gene expression during macrophage phenotypic changes. During the development of IPF, DNMTs are associated with inappropriate gene expression activation, resulting in opposite effects on M1/M2 polarization. DNMTs can act as both pro- and anti-fibrotic factors by regulating M2 polarization. Yang et al. revealed that DNMT3a and DNMT3b suppress proline-serine-threonine phosphatase-interacting protein 2 (PSTPIP2) by hypermethylation, which causes a mixed induction of hepatic macrophage M1 and M2 polarization [121]. In obesity, DNMT3b regulates M2 macrophage polarization, and the deletion of DNMT3b induces M2 macrophage polarization [122]. On the contrary, another study provided different results: Qin et al. found that DNMT3b ameliorates the development of bleomycin-induced pulmonary fibrosis by limiting M2 macrophage polarization [123]. In this case, DNMT3b acts as a negative regulator of M2 macrophage polarization. DNMT3-associated macrophage polarization might thus have different functions in different organ fibroses: DNMT3 limits M2 macrophage polarization and alleviates the progression of lung fibrosis, yet it induces M2 polarization and enhances liver fibrogenesis. Consistent with the involvement of DNMTs in macrophage phenotype change, MBPs interact with methylated DNA to regulate the expression of multiple genes. MeCP2 can accelerate both M1 and M2 polarization, and it may favor M1 polarization in the early phase of inflammation. The deletion of MeCP2 down-regulated M1 differentiation and inhibited iNOS expression and the secretion of pro-inflammatory cytokines in acute lung injury (ALI) [124]. MeCP2 regulates the gene expression of macrophages; thus, MeCP2 deficiency leads to the dysregulation of macrophage polarization [125]. 
MeCP2 accelerates M2 polarization by inhibiting Ship expression and enhancing the phosphatidylinositol 3-kinase/protein kinase B (PI3K/PKB) signaling pathway in bleomycin-induced pulmonary fibrosis [110]. Furthermore, silencing MeCP2 using siRNA blunted M2 macrophage polarization and the associated elevation of IRF4 expression [75]. Conversely, MeCP2 directly or indirectly promoted the differentiation of M0 macrophages into M1 or M2 macrophages and the polarization of M2 to M1 macrophages [126]. Therefore, we might suggest that MeCP2 stimulates M2 macrophages to promote the progression of lung fibrosis but protects against renal fibrosis via M1 polarization. Little is known about the impact of TET proteins on macrophage polarization. TET regulates macrophage differentiation and M1 polarization [127]; accordingly, the inhibition of TET proteins using dimethyloxalylglycine (DMOG) promotes M2 polarization by up-regulating Arg1, Fizz1, and Ym1 [128]. Targeted Genes and Personalized Medicine IPF is an epigenetic disease in which environment-associated epigenomic alterations may lead to aberrant regulation in fibroblasts, epithelial cells, and macrophages during lung wound healing. The growing knowledge of epigenetic mechanisms should bring precision medicine closer than ever to patients with IPF. Precision medicine means personalized medicine: the concept emphasizes the customization of medical practice, including prevention and treatment strategies, focusing on the individual with regard to omics-based biomarkers and individual and environmental factors [129,130]. There have been attempts to apply personalized medicine to the management of pulmonary diseases, such as asthma, COPD, lung cancer, cystic fibrosis, and ILD [131][132][133]. Personalized medicine revolutionizes patient care, as targeted genes can be associated with a particular phenotype or disease. Precision medicine focuses on the use of the 'omics' approach to direct more personalized diagnosis through the use of biomarkers and cost-effective strategies based on personal clinical and molecular information [134,135]. From the perspective of targeting epigenetics in precision medicine, the strategy for epigenetic therapy encompasses enhancing, silencing, and repressing gene expression [136]. DNA methylation is one of the most crucial epigenetic mechanisms, since it controls gene expression. The pathogenesis of IPF is complex and interconnected; therefore, targeting the genes involved in the DNA methylation-associated pathogenic mechanisms of IPF, including inflammation, epithelial apoptosis, myofibroblast differentiation, macrophage polarization, and extracellular matrix metabolism, is crucial [137,138]. Several specific genes exhibit DNA hypermethylation alterations involved in the development of IPF and represent potential targets for future therapeutics. Manipulating the epigenetic control of gene expression related to fibrosis could therefore be a promising therapeutic strategy for IPF. THY-1 is a suppressor gene that maintains cell-stromal balance in normal lung fibroblasts. Fibroblasts from normal lungs are THY-1 positive, while THY-1 expression in IPF fibroblasts is negative or minimal. THY-1 overexpression reduced pulmonary fibrosis and down-regulated fibroblast markers such as MMP-2, occludin, α-SMA, and vimentin [139]. DNA hypermethylation down-regulates THY-1 expression. 
TGF-β1 is a potential modifier of the DNA methylome; stimulation with TGF-β1 modifies the DNA methylation of normal and IPF fibroblasts, characterized by significantly decreased expression of THY-1 and up-regulation of DNMT1, DNMT3a, and DNMT3b [140]. Hypermethylation of the THY-1 promoter suppresses the THY-1 gene and enhances the anti-apoptotic behavior of lung fibroblasts, resulting in ECM deposition and lung scar formation [141]. In addition, hypoxia-induced global DNA hypermethylation decreased the expression of THY-1 [114]. PGE2 normally inhibits fibroblast activation; however, IPF fibroblasts are resistant to PGE2. The diminished capacity of IPF and SSc lung fibroblasts to up-regulate COX-2-derived PGE2 synthesis results from the hypermethylation and silencing of the transcriptional regulator c8orf4 [106]. PGE2 increases global DNA methylation via DNA methyltransferases; therefore, the combined inhibition of COX-2-derived PGE2 and DNMTs inhibits cell growth [142]. Various studies have shown that alterations of DNA methylation are associated with fibrogenic gene expression. Aberrant DNA methylation patterns, both hyper- and hypomethylation, are involved in the pathogenesis of pulmonary fibrosis: while DNA hypermethylation contributes to repressing anti-fibrotic genes, hypomethylation results in the induction of pro-fibrotic genes. Therefore, epigenetic-modifying drugs that reverse DNA methylation are attractive therapeutic agents in IPF. 5-aza-2'-deoxycytidine (5-aza), a DNA methylation inhibitor, has displayed beneficial effects in organ fibrosis and cancers. 5-aza inhibited the pro-fibrotic, TGF-β1-induced hypermethylation and repression of erythropoietin in pericytes in kidney fibrosis [143]. In addition, 5-aza reduced global DNA methylation in chemo-sensitive cancer cells [143]. Targeting the DNMT1/DNMT3a and peroxisome proliferator-activated receptor-γ (PPAR-γ) axis with 5-aza can demethylate the PPAR-γ promoter, restore PPAR-γ expression, and alleviate lung fibrosis [71]. 5-aza is a potent inducer of DNA demethylation and has been approved by the FDA as an epigenetic therapeutic in some cancers; however, its potential to inhibit fibrosis in IPF still needs to be fully understood. The newest evidence suggests that hypermethylation is not always associated with transcriptional repression but can also be related to high transcriptional activity [144]. Conclusions Environmental circumstances and genetics influence epigenetic mechanisms. Epigenetic changes via DNA methylation, miRNAs, and histone modifications induce gene-phenotype alterations. Extensive alterations in DNA methylation profiles are known to be involved in the pathogenesis of IPF. Aberrant DNA methylation is associated with various cells and mechanisms that underlie the pathogenesis of IPF. Hypermethylation mediated by DNMTs and MBPs results in the silencing of anti-fibrotic gene expression. These alterations lead to the induction of myofibroblast differentiation, apoptosis resistance, EMT, and the secretion of macrophage-associated pro-fibrotic factors, promoting the progression of lung fibrosis. Activation of a pro-fibrotic phenotype provides a positive feedback loop. In addition, hypoxia-related epigenetic alteration plays an essential role in shaping fibrosis. 
Indeed, the epigenetic regulation of gene transcription in myofibroblast differentiation is complex; the final effects of DNA methylation on pulmonary fibrosis therefore depend on the hypomethylation of pro-fibrotic genes and the hypermethylation of anti-fibrotic genes. DNA methylation is reversible; thus, finding mechanisms or drugs with which to control the methylation pattern is crucial. DNMT inhibitors and other modifiers that modulate DNA methylation are currently available only at the preclinical stage. Further studies to understand the methylation alterations and patterns involved in the pathogenesis of IPF will be beneficial in developing novel targeted/personalized treatments focused on DNMTs, MBPs, or TETs to prevent or even reverse the fibrotic process. Conflicts of Interest: The authors declare no conflict of interest.
7,964.6
2023-03-28T00:00:00.000
[ "Medicine", "Biology" ]
Rapid Driving Style Recognition in Car-Following Using Machine Learning and Vehicle Trajectory Data Introduction Driving style refers to the ways that drivers choose to habitually drive and the driver states that represent the common parts of varied driving behavior [1]. The driving style of drivers plays an important role in driving safety as well as vehicle energy consumption. Different driving styles may lead to different probabilities of traffic incidents. Recognizing a driver's driving style based on rear-end collision risk is therefore of great significance for improving driving safety. With the development of connected autonomous vehicles and Advanced Driver Assistance Systems (ADAS), there is an urgent demand for enhanced recognition of driving style. It is not only important for guaranteeing the safety and adequate performance of drivers, but also essential for meeting drivers' needs, adapting to their preferences, and ultimately improving the safety of the driving environment. Driving style recognition also has potential value in helping traffic agencies design control strategies effectively [2,3]. The availability of high-definition surveillance cameras makes it possible to collect numerous vehicle motions from real-world traffic flow. Advanced video extraction software can extract vehicle trajectory data accurately and efficiently from surveillance video. These technologies provide a good opportunity to recognize driving style using video-extracted vehicle trajectory data. Moreover, machine learning techniques are playing a crucial role in driving behavior recognition, and a growing number of studies on machine learning algorithms have been conducted in recent years [4][5][6][7]. This paper builds a driving style recognition model based on vehicle trajectory data. Four supervised machine learning algorithms, including Support Vector Machine (SVM), Random Forest (RF), K-Nearest Neighbor (KNN), and Multi-Layer Perceptron (MLP), are used in model training. A new method based on rear-end collision risk is proposed to label the driving style of each driver in the sample data. Three feature extraction methods, including the Discrete Fourier Transform (DFT), Discrete Wavelet Transform (DWT), and a statistical method, are also adopted to extract the most effective features for driving style recognition. To the best of the authors' knowledge, there are three main contributions in this paper: (1) This paper is organized as follows. Section 2 presents the related work on driving behavior data analysis and machine learning algorithms. Section 3 introduces the data analyzed in this paper. Section 4 details the driving style recognition method implemented in this paper. Section 5 presents the results and discussion. Section 6 concludes this paper and discusses possible future work. Literature Review In recent years, machine learning algorithms applied to driving behavior recognition have been studied in many previous works. Different types of neural network (NN) algorithms have been used. Molchanov et al. 
[8] proposed a convolutional deep neural network (CDNN) to recognize risky driving. Other types, such as the artificial neural network (ANN) [9] and the pulse-coupled neural network (PCNN) [10], were adopted to classify driving behaviors. In the study by Srinivasan [11], the effectiveness of three types of NN methods was compared; the results show that the Multi-Layer Perceptron (MLP) model can achieve excellent classification results. However, the learning rate of an NN is difficult to determine, resulting in a higher possibility of being trapped in local minima, and a larger network size can lead to a long training time [12]. Tree-like structures, including the decision tree algorithm [13] and the Random Forest algorithm [14], have also been adopted to detect driving behaviors according to the extracted features. Some researchers proposed the Hidden Markov Model (HMM) to effectively detect dangerous driving behaviors. Berndt et al. [15] established an HMM to identify lane-change, steering, and follow-up intention; the recognition accuracies of left and right lane changes are 76% and 74%, respectively. Meng et al. [16] trained an HMM on drivers' operation data from the acceleration pedal, brake pedal, and steering wheel to recognize driver profiles online. Some researchers also combined the HMM with dynamic Bayesian networks or ANNs to predict driving behavior by learning from driving data [17,18]. However, the HMM requires a long training time, especially for a high number of states, and the recognition time also increases with the number of states [19]. Therefore, a more suitable and effective method is needed to identify driving style. SVM has been widely applied to various kinds of pattern recognition problems, including voice identification, text categorization, and face detection [6,20,21]. In addition, SVM performs well with a limited number of training samples and has fewer parameters to be determined [22,23]. Therefore, many studies have employed SVM to build driving style recognition models [24][25][26][27][28]. Along with machine learning algorithms, driving behavior data collection is crucial to the success of driving style recognition. Table 1 summarizes the advantages and disadvantages of different driving data collection approaches. Researchers have used instrumented vehicles to conduct naturalistic driving experiments to identify behaviors [29][30][31]. Some instrumented vehicles were equipped with in-vehicle mounted cameras to capture video images of drivers [32,33], while others relied on specialized hardware and sensors to acquire throttle opening, brake pedal position, steering wheel angle, vehicle speed, acceleration rate, and yaw rate [10,24]. Although driver control data and vehicle kinematic data can be collected with instrumented vehicles, the requirement for expensive devices and sensors is a major obstacle to large-scale naturalistic driving experiments. In addition, extreme driving conditions, such as extreme weather and driving under the influence, can be unobservable in naturalistic driving studies. Some research adopted driving simulators to collect driving behavior data [24][25][26] in designed and controlled driving environments. However, the results rely heavily on the fidelity and validity of the driving simulator used, because the driving behavior observed in the simulator may not always correspond to real-world driving. 
Besides Naturalistic Driving Studies (NDS) and driving simulators, another important data source is traffic video, because surveillance cameras deployed on the roadside can provide a large amount of traffic environment data and vehicle trajectory data [34]. Traffic video contains the trajectory data of all vehicles on the road and can offer a full view of vehicles' interactions with each other during car-following, lane changes, and other maneuvers. However, extracting vehicle trajectories from video can be challenging, depending on the video quality and the algorithms used [35][36][37]. Except for unsupervised machine learning algorithms such as clustering, machine learning algorithms require labelled or partially labelled driving behavior data. In the field of driving style recognition, the method used to label the driving style of each driver in the sample is of great importance to the reliability of the recognition model. There are several methods to label driving style. One is the behavior-based or accident-based method, in which a driver's driving style is determined by risky behaviors or accidents that occurred during observation. Chen et al. [20] defined dangerous driving behaviors according to criteria such as frequent lane changes, abrupt double lane changes, and illegal lane occupation. Accident data have also been adopted to determine the risk level of driving behavior [38]. However, risky behaviors or accidents are rarely observable in daily traffic. Therefore, driver self-reported questionnaires [39] and expert scoring [13] are also adopted to evaluate driving style. However, these two methods rely on the subjective judgments of drivers or experts and can be very time-consuming when the number of drivers in the sample is in the hundreds or even thousands. Some research used facial movements or driving duration to label drivers' drowsiness or fatigue [9,10]. Unsupervised clustering methods, including K-means [40] and fuzzy clustering [41], are also used to label drivers in each clustering group. This paper proposes a new driving data labelling method based on collision surrogates. There are many effective surrogates to evaluate collision risk [42,43]. Mahmud et al. [44] compared the advantages and disadvantages of temporal proximity indicators, i.e., Time to Collision (TTC), Time to Accident (TA), and Time-Headway (THW), and distance-based proximity indicators, i.e., Margin to Collision (MTC) and Proportion of Stopping Distance (PSD). Many automobile collision avoidance systems and driver assistance systems use TTC as an important warning criterion because it is theoretically sound and reliable [45][46][47]. Since TTC cannot handle zero relative speed in car-following, the Inversed TTC (ITTC) has been adopted to measure collision risk [41]. THW is another surrogate used to estimate the criticality of a car-following situation and is applicable in all traffic environments [44]. MTC captures the possibility of a conflict when the preceding and following vehicles decelerate abruptly at the same time [48]. The Modified MTC (MMTC) additionally considers the driver's reaction time when the preceding vehicle abruptly decelerates. These three surrogates can be adopted to effectively label the driving style corresponding to different rear-end collision risks. 
In this paper, the vehicle trajectory data extracted from traffic video are analyzed to study driving style. Three surrogates, i.e., ITTC, THW, and MMTC, are used to effectively measure the rear-end collision risk and label the driving style. This labeling method is more efficient and objective compared with questionnaires [10] and expert scoring [20]. Then SVM is applied to build a driving style recognition model. The vehicle trajectory features are extracted using the Discrete Fourier Transform (DFT), Discrete Wavelet Transform (DWT), and statistical methods. The performance of SVM is also compared with RF, KNN, and MLP. This paper thus provides an efficient method to identify driving style based on trajectory data. Data A high-fidelity vehicle trajectory dataset, Next Generation Simulation (NGSIM), was collected by the U.S. Federal Highway Administration (FHWA) in 2005. This dataset is still widely used in transportation research, especially in traffic flow analysis and modelling, traffic-related estimation and prediction, and vehicular ad hoc network-related studies [49], but it has rarely been applied to driving style recognition. Since this dataset was collected more than a decade ago, the accuracy of the NGSIM dataset has been questioned in recent years [50]. The measurement errors in the NGSIM dataset were found to be far from negligible, partially due to low-resolution cameras and mis-tracking of vehicles from video images. Montanino et al. [51] removed outliers and noise and reconstructed the I-80 dataset 1 (from 4:00 p.m. to 4:15 p.m.), which showed significant improvement over the original NGSIM dataset. In this paper, the I-80 trajectory dataset is adopted to study driving style. The trajectory data were collected on a segment of the I-80 freeway in Emeryville, California. The segment contains 6 lanes, where lane 1 is a high-occupancy vehicle (HOV) lane. The frequency of data collection is 10 Hz, and each leader-follower pair in the dataset contains detailed information including the vehicle ID, position, length and width of the vehicle, velocity, acceleration, lane ID, and following and preceding vehicles. About 206,000 records of vehicle trajectory for 370 Leader-follower Vehicle Pairs (LVP) on the HOV lane are chosen to study driving style in this paper, since there are fewer interrupting vehicles from other lanes. Methodology The flow of driving style recognition in this paper is depicted in Figure 1. Three collision risk surrogates are used to determine the risk level of every moment in the car-following process for each LVP. The K-means algorithm is applied to group the drivers into normal or aggressive driving styles based on their trajectory risk levels. Given the labeled driving data, a driving style recognition model is built using machine learning algorithms. The input features of the machine learning algorithms are extracted by DFT, DWT, and statistical methods from the trajectory features, without using the surrogates and risk levels. The recognition results of SVM are compared with those of other machine learning algorithms. 
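Before the surrogates defined in the next subsection can be computed, each follower's trajectory has to be paired frame-by-frame with its leader's. The sketch below shows one way this could be done for an NGSIM-style trajectory table; it is only illustrative, and the column names ('vehicle_id', 'frame', 'lane', 'pos', 'length', 'speed', 'accel', 'preceding_id') are assumptions rather than the exact NGSIM field names.

```python
import pandas as pd

def extract_lvps(df: pd.DataFrame, lane_id: int = 1) -> dict:
    """Return {follower_id: DataFrame} of per-frame leader-follower pairs for one lane.

    Assumed columns: 'vehicle_id', 'frame', 'lane', 'pos' (front-bumper position along
    the road), 'length', 'speed', 'accel', 'preceding_id' (0 when there is no leader).
    """
    lane = df[df["lane"] == lane_id].copy()
    # Per-frame state of every vehicle in the lane, used to look up the leader's state.
    state = lane.set_index(["vehicle_id", "frame"])[["pos", "length", "speed"]]

    pairs = {}
    for fid, fol in lane.groupby("vehicle_id"):
        fol = fol[fol["preceding_id"] > 0].sort_values("frame")
        if fol.empty:
            continue
        idx = pd.MultiIndex.from_arrays([fol["preceding_id"], fol["frame"]])
        lead = state.reindex(idx).reset_index(drop=True)
        fol = fol.reset_index(drop=True)
        pairs[fid] = pd.DataFrame({
            "frame": fol["frame"],
            "x_f": fol["pos"],                    # front position of the following vehicle
            "v_f": fol["speed"],
            "a_f": fol["accel"],
            "x_p": lead["pos"] - lead["length"],  # rear position of the preceding vehicle
            "v_p": lead["speed"],
        }).dropna()                               # drop frames where the leader is missing
    return pairs
```

The resulting per-pair arrays (x_f, v_f, a_f, x_p, v_p) are the quantities used by the collision risk surrogates defined next.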
Collision Risk Surrogates. For each driver, it is essential to find the most effective surrogates to describe the collision risk when driving on the road. Vehicle trajectory data such as the velocity and acceleration of the vehicle are usually not sufficient to estimate the rear-end collision risk. Three collision surrogates are considered to measure the collision risk: Time to Collision (TTC), Time-Headway (THW), and Margin to Collision (MTC). These three collision risk surrogates are defined and modified as follows. Inversed Time to Collision (ITTC). TTC is the predicted time to collision between the preceding vehicle (PV) and the following vehicle (FV) when the two vehicles maintain their current relative velocity: TTC = x_r / v_r = (x_p − x_f) / (v_f − v_p), where x_r and v_r denote the relative distance and relative velocity between the two vehicles, x_f and x_p denote the front position of the FV and the rear position of the PV, and v_f and v_p denote the velocities of the FV and the PV, respectively. However, TTC can become very large when the relative velocity between the two vehicles is low, which happens often in the real driving environment. To reduce the range of TTC, the inversed TTC, ITTC = 1 / TTC = (v_f − v_p) / (x_p − x_f), is adopted to measure the collision risk in this paper. The risk of rear-end collision is higher with a larger ITTC value. Time-Headway (THW). THW indicates the time for the FV to reach the present position of the PV at its current velocity: THW = x_r / v_f. The potential collision risk of drivers is determined by THW in a steady car-following situation: the potential collision risk can be evaluated by THW when the FV approaches the PV with constant v_f, and a lower THW indicates a higher potential collision risk. The Modified Margin to Collision (MMTC). MTC indicates the final relative position of the PV and the FV if both vehicles decelerate abruptly, with a_f and a_p denoting the decelerations of the FV and the PV, respectively; both are usually defined as 0.7. A modified MTC (MMTC) is used in this paper to include the reaction time of the following vehicle when the PV abruptly decelerates: MMTC evaluates the minimum reaction time needed for the FV to avoid a collision when the PV abruptly decelerates at 0.7. The collision risk is higher with a lower MMTC value, since there is little time for the driver to react; MMTC can therefore evaluate the potential collision risk under abrupt deceleration of the PV. Driving Style Clustering. The threshold values of the surrogates are adopted to divide the trajectory of each driver into several collision risk levels. Then the K-means method is used to group the drivers into normal or aggressive driving styles based on the composition of their collision risk levels. The purpose of this method is to provide an objective and stable driving style label for each driver in the sample data, ready for use in supervised machine learning. Assume that there are n sets of driving data, each consisting of a v-dimensional feature vector x_i that belongs to a class c_k; the driving data of each driver can then be described as {x_i, c_k}. The K-means method finds the best class for each data point by minimizing the total within-class sum of squared errors J = Σ_{k=1}^{K} Σ_{x_i ∈ c_k} ||x_i − μ_k||², where K is the number of classes and μ_k is the mean vector of all points in class c_k. 
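As a concrete illustration of the two steps just described, the sketch below computes per-frame ITTC and THW for one leader-follower pair from the quantities defined above, and then groups drivers with K-means from their vectors of risk-level proportions (the proportions themselves come from the thresholds discussed in the results). This is a minimal sketch under those assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def surrogates(x_f, v_f, x_p, v_p, eps=1e-6):
    """Per-frame ITTC = (v_f - v_p)/(x_p - x_f) and THW = (x_p - x_f)/v_f."""
    x_r = np.asarray(x_p) - np.asarray(x_f)       # relative distance (rear of PV - front of FV)
    v_r = np.asarray(v_f) - np.asarray(v_p)       # closing speed
    ittc = v_r / np.maximum(x_r, eps)             # s^-1; larger value = higher risk
    thw = x_r / np.maximum(np.asarray(v_f), eps)  # s; smaller value = higher risk
    return ittc, thw

def cluster_driving_styles(risk_proportions, n_styles=2, seed=0):
    """Group drivers by K-means from their (n_drivers, 4) risk-level proportion vectors."""
    km = KMeans(n_clusters=n_styles, n_init=10, random_state=seed)
    labels = km.fit_predict(np.asarray(risk_proportions))
    return labels, km.cluster_centers_
```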
Trajectory Feature Extraction. In this paper, the vehicle acceleration a_f, relative distance x_r, and relative velocity v_r are adopted to recognize the driving style. The Discrete Fourier Transform (DFT), Discrete Wavelet Transform (DWT), and a statistical method are used to extract effective features from the vehicle acceleration a_f, relative distance x_r, and relative velocity v_r. Discrete Fourier Transform. The DFT has been applied to convert time series of trajectory data into signal amplitudes in the frequency domain [7]. The DFT of a given time series (x_1, x_2, ..., x_N) is defined as a sequence of N complex numbers (X_0, X_1, ..., X_{N−1}): X_k = Σ_{n=1}^{N} x_n e^{−i 2π k (n−1)/N}, k = 0, 1, ..., N−1, where i is the imaginary unit. The first 10 DFT coefficients of the trajectory data are used to recognize the driving style. Discrete Wavelet Transform. The DWT has been shown to be more suitable for analyzing and decomposing a given signal in some studies [54]. This paper follows the DWT method described in [54] and uses the energies of the approximation sub-time series and detail sub-time series decomposed from the vehicle acceleration a_f, relative distance x_r, and relative velocity v_r to recognize the driving style. Statistical Method. The key statistical parameters that capture most of the distribution information of the vehicle acceleration a_f, relative distance x_r, and relative velocity v_r are also selected for recognition. The statistical parameters are the maximum, minimum, mean, standard deviation, and 85th percentile, which were proved useful in a previous driving behavior study [20]. Feature Combinations. For each driver, the car-following process yields three time series: acceleration a_f, relative distance x_r, and relative velocity v_r. This paper tries 7 different feature combinations as the input of the driving style recognition model (a sketch of the extraction follows this list). Single-source features: use only one time series out of a_f, x_r, and v_r, and extract features from this time series. Two-source features: use two time series out of a_f, x_r, and v_r; there are three combinations (a_f + x_r, x_r + v_r, and v_r + a_f), and features are extracted from the two time series separately. Three-source features: use all three time series and extract features from the three time series separately. 
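The following is a minimal sketch of how the DFT and statistical extractors described above could be implemented and combined into single-, two-, or three-source feature vectors; the function names are illustrative, not from the paper.

```python
import numpy as np

def dft_features(x, n_coeffs=10):
    """Magnitudes of the first n DFT coefficients of one time series (the paper uses 10)."""
    X = np.fft.fft(np.asarray(x, dtype=float))
    return np.abs(X[:n_coeffs])

def stat_features(x):
    """Maximum, minimum, mean, standard deviation, and 85th percentile of one time series."""
    x = np.asarray(x, dtype=float)
    return np.array([x.max(), x.min(), x.mean(), x.std(), np.percentile(x, 85)])

def combine(series, sources=("a_f", "x_r", "v_r"), extractor=dft_features):
    """Concatenate features from the selected time series (single-, two-, or three-source)."""
    return np.concatenate([extractor(series[s]) for s in sources])

# Example: two-source DFT features from relative distance and relative velocity.
# feats = combine({"a_f": a_f, "x_r": x_r, "v_r": v_r}, sources=("x_r", "v_r"))
```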
Threshold Values of Collision Risk Surrogates. The correlation analysis among the three surrogates is shown in Table 2, which shows that the Pearson coefficient between THW and MMTC is 0.980, indicating a strong positive correlation, whereas ITTC and THW have a weak negative correlation. Therefore, ITTC and THW are selected to measure driving behavior risk. The classification result is not influenced by adopting THW instead of MMTC because of the strong correlation between the two surrogates. To make a reasonable judgment on collision risk along the car-following process, each surrogate needs a risk threshold, which can be obtained from the probability density distributions and fitting results of ITTC and THW shown in Figure 2. Figure 2(a) shows the fitting results for ITTC and THW using three distributions, i.e., the normal distribution, the logistic distribution, and the t distribution. The t distribution achieves a better fitting performance than the other two distributions on the probability density distributions of ITTC and THW; therefore, the t distribution is adopted to determine the threshold values. The percentile values of ITTC are shown in Figure 2(b). The 25%, 45%, 65%, 85%, and 95% percentile values of ITTC are 0.02, 0.08, 0.12, 0.19, and 0.28 s⁻¹, respectively. The 25%, 45%, 65%, and 85% percentile values of THW are 1.26, 1.71, 2.13, and 2.73 s, respectively. ITTC. The upper threshold of ITTC is 0.28 s⁻¹, which is equivalent to a TTC of about 3.5 s. Previous studies show that the desirable TTC is 4 s for urban roads [46] and 3.5 s for non-supported drivers [45], and the desirable TTC for signalized intersections and two-lane rural roads is 3 s [47]. Therefore, 3.5 s is adopted in this paper as the rear-end collision risk threshold: when TTC is lower than 3.5 s, the FV is labeled as having a higher collision risk. THW. Since a lower THW indicates a higher collision risk, the 25% percentile, which is 1.26 s, was considered first. However, many road administrations in European countries recommend a safe THW of 2 s [48], and a THW below 2 s may cause uncomfortable driving and potential risk for drivers. Therefore, 2 s is used as the threshold value for THW in this study. Trajectory Risk Level. The threshold values of ITTC and THW, i.e., 0.28 s⁻¹ and 2 s, are used to divide the driving trajectory into different risk levels. To be more specific, different combinations of ITTC and THW values correspond to different driving risk levels, and the driving trajectory of each driver can be divided into four risk levels: safe, low-risky, high-risky, and dangerous driving behavior, as shown in Figure 3. Safe Driving Behavior: the FV has THW above 2 s and ITTC below 0.28 s⁻¹, which indicates that the FV keeps a low velocity and a large gap with the PV in the car-following state. Low-Risky Driving Behavior: the FV has THW above 2 s and ITTC above 0.28 s⁻¹, which indicates that the FV keeps a low velocity and a small gap with the PV in the car-following state. High-Risky Driving Behavior: the FV has THW below 2 s and ITTC below 0.28 s⁻¹, which indicates that the FV maintains a high velocity and a large gap with the PV in the car-following state. Dangerous Driving Behavior: the FV has THW below 2 s and ITTC above 0.28 s⁻¹, which indicates that the FV maintains a high velocity and a small gap with the PV in the car-following state. The driving trajectory of each driver can thus be divided into several segments belonging to different driving risk levels. Two drivers are selected to show the trajectory segments according to the threshold values of ITTC and THW, shown in Figure 4. As Figure 4 shows, for most drivers, safe and high-risky driving behaviors account for over 80% of the driving trajectory, while the proportions of dangerous and low-risky driving behaviors are limited to 10% and 5%, respectively. The driving style of each driver can be determined by the proportions of trajectory segments with different risk levels. The 370 drivers are clustered into two groups in Section 5.1.3. 
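A minimal sketch of this labelling step is given below: a Student's t distribution is fitted to obtain a percentile-based threshold (illustrating how the 95th-percentile ITTC value could be derived), each frame is assigned to one of the four risk levels using the adopted thresholds of 0.28 s⁻¹ and 2 s, and the per-driver level proportions are computed. The helper names are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def percentile_threshold(samples, q):
    """Fit a Student's t distribution to the samples and return its q-th quantile."""
    df_, loc, scale = stats.t.fit(np.asarray(samples, dtype=float))
    return stats.t.ppf(q, df_, loc=loc, scale=scale)

def frame_risk_levels(ittc, thw, ittc_thr=0.28, thw_thr=2.0):
    """Per-frame levels: 0 = safe, 1 = low-risky, 2 = high-risky, 3 = dangerous."""
    ittc, thw = np.asarray(ittc), np.asarray(thw)
    levels = np.zeros(ittc.shape, dtype=int)           # default: safe (THW high, ITTC low)
    levels[(thw > thw_thr) & (ittc > ittc_thr)] = 1    # low-risky
    levels[(thw <= thw_thr) & (ittc <= ittc_thr)] = 2  # high-risky
    levels[(thw <= thw_thr) & (ittc > ittc_thr)] = 3   # dangerous
    return levels

def level_proportions(levels):
    """Fraction of frames a driver spends in each of the four risk levels."""
    return np.bincount(levels, minlength=4) / len(levels)

# Example: a threshold close to the adopted ITTC value (its 95th percentile).
# ittc_thr = percentile_threshold(all_ittc_values, 0.95)
```

The per-driver proportion vectors produced here are the inputs to the K-means grouping described in the clustering section.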
Driving Style Clustering. Based on the proportions of trajectory segments determined by the threshold values of ITTC and THW, the drivers can be grouped into two classes using the K-means algorithm. The results show that one class has 246 drivers and the other has 124 drivers. On average, drivers in the first class have 45.5% safe driving behavior, 37.5% high-risk driving behavior, and 11.4% dangerous driving behavior, and drivers in the second class have 7.4% safe driving behavior, 77.8% high-risk driving behavior, and 13.5% dangerous driving behavior. Therefore, drivers in the first class are labelled as normal drivers, while drivers in the second class are labelled as aggressive drivers. The driving style labels provided by K-means are used to train the SVM in Section 5.2.

Driving Style Recognition. The SVM method is adopted to recognize the driving style of the 370 drivers. In this paper, the trajectory data, including the vehicle acceleration a_f, relative distance x_r, and relative velocity v_r, are adopted to recognize the driving style. The DFT, DWT, and statistical methods are all applied to extract effective features from the trajectory data. Every single feature can also be combined with other features as multi-source features to recognize the driving style. The recognition accuracy rates are compared to find the best feature extraction method and the most important trajectory features. The z-score method is adopted to standardize the features before model training. In the study, the accuracy, precision, and recall rates are assessed to evaluate the model's ability to recognize aggressive drivers among all vehicles on the road. The performance of the recognition model is evaluated using the "leave-one-out" cross-validation method. Driving style recognition results based on the different feature extraction methods and SVM are shown in Tables 3-7. Unless otherwise mentioned, the SVM algorithm uses the linear kernel function.

Discrete Fourier Transform. As shown in Table 3, the recognition accuracy rate is 83.2% based on v_r and 88.9% based on x_r. The recognition accuracy rate is 88.9% based on x_r and a_f, and 87.8% based on x_r and v_r. In general, the features x_r and v_r are better than a_f in recognizing the driving style. A possible reason is that the driving style label is determined by the rear-end collision risk, and the feature a_f cannot accurately describe the relative motion between the two following vehicles. The accuracy rate based on all three features can achieve 87.6%. Surprisingly, using the DFT coefficients of x_r alone has the highest accuracy rate.

Discrete Wavelet Transform. For DWT, there are two parameters to be determined, which could affect the performance of the recognition model. One is an appropriate wavelet mother function; the other is the number of decomposition levels. This paper tried 15 different wavelet mother functions (listed in Table 4) and 5 decomposition levels (listed in Table 5). The results show that the Daubechies 4 mother function can generate the highest accuracy rate: 91.7%. The best decomposition level is 1, while decomposing the time series further does not help to improve the accuracy rate.
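Before examining the feature combinations reported next, the sketch below pulls the pipeline together on simulated data: K-means labelling of drivers from their risk-level proportions, single-level Daubechies 4 energy features, z-score standardisation, and a linear SVM evaluated with leave-one-out cross-validation. It is a toy reconstruction under stated assumptions, not the code used in the study.

```python
import numpy as np
import pywt  # PyWavelets
from sklearn.cluster import KMeans
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data for n drivers: risk-level proportions (safe, low-risky,
# high-risky, dangerous) used for labelling, and three car-following signals
# (a_f, x_r, v_r) per driver used for feature extraction.
n_drivers, n_samples = 60, 400
risk_props = rng.dirichlet(np.ones(4), size=n_drivers)
signals = rng.normal(size=(n_drivers, 3, n_samples))

# Step 1: unsupervised labelling into two classes (normal vs aggressive).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(risk_props)

# Step 2: DWT energy features -- single-level Daubechies 4 decomposition,
# energy of the approximation and detail sub-series of each signal.
def dwt_energy(series, wavelet="db4"):
    cA, cD = pywt.dwt(series, wavelet)
    return [np.sum(cA ** 2), np.sum(cD ** 2)]

features = np.array([sum((dwt_energy(s) for s in driver), [])
                     for driver in signals])

# Step 3: z-score standardisation + linear SVM, leave-one-out cross-validation.
model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accuracy = cross_val_score(model, features, labels, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy on toy data: {accuracy:.3f}")
```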
With the Daubechies 4 mother function and 1 decomposition level, SVM performance is assessed with different combinations of features. As shown in Table 6, the recognition accuracy rate is 83.8% based on v_r and 86.8% based on x_r. Therefore, when using x_r alone in the SVM, the DFT extraction method works better than DWT. The recognition accuracy rate is 88.7% based on x_r and a_f and 90.2% based on x_r and v_r. The accuracy rate based on all three features can achieve 91.7%. Compared with the DFT coefficients, the DWT method also obtains a higher precision rate (92.8%) and a higher recall rate (81.8%).

Conclusion

In this study, a novel driving style labelling method is proposed to assign normal and aggressive labels based on collision risk, which is critical for obtaining the labelled sample data needed in supervised machine learning. The method is based on the vehicle trajectory extracted from traffic video. Rear-end collision risk surrogates are adopted to evaluate the risk during the car-following process. The study also applies the SVM algorithm to recognize the driving style based on the trajectory features. Three feature extraction methods are tested. Other machine learning algorithms, including RF, MLP, and KNN, are also adopted to compare with the SVM. Several conclusions can be obtained from this study.

(1) Three effective rear-end collision risk surrogates, namely ITTC, THW, and MMTC, are selected to evaluate the collision risk in the car-following process. Since THW and MMTC show a strong positive correlation, only ITTC and THW are kept to evaluate the driving risk level. This paper gives threshold values of ITTC and THW based on their distributions and previous studies. Each driver's trajectory can be divided into four risk levels, and all drivers can be grouped into two classes using the K-means algorithm. Using the NGSIM dataset, this method labels 246 normal drivers and 124 aggressive drivers. On average, normal drivers have 45.5% safe driving behavior, 37.5% high-risk driving behavior, and 11.4% dangerous driving behavior, and aggressive drivers have 7.4% safe driving behavior, 77.8% high-risk driving behavior, and 13.5% dangerous driving behavior.

(2) DFT, DWT, and statistical methods are adopted to extract effective features from the trajectory data to facilitate the driving style recognition. Using the relative distance alone, the DFT method can convert the relative distance time series into coefficients in the frequency domain and help the SVM reach an accuracy rate of 88.9%, a precision rate of 86.3%, and a recall rate of 80.2%. However, when using multiple features, including acceleration, relative distance, and relative speed, the DWT method can improve the accuracy rate to 91.7%, the precision rate to 92.8%, and the recall rate to 81.8%. Among the 15 wavelet mother functions tested, the Daubechies 4 mother function provides the best results.

(3) The driving style can be accurately recognized by the proposed SVM model based on the trajectory features with a 91.7% accuracy rate. The recognition accuracy is superior to other well-known and frequently used classifiers: RF, MLP, and KNN. This result indicates that the SVM method is a more appropriate method for driving style recognition based on the trajectory features.
(4) The proposed method can be effectively used to label and recognize the driving style based on traffic video surveillance systems. The development of network-connected vehicles can help to collect the data more precisely. The model with the machine learning algorithm can then be trained to better recognize the driving style. It can help to evaluate the collision risk on the road network and also provide real-time decision support to drivers.

This study offers the possibility of developing more sophisticated driving style recognition methods. For further work, the proposed method can be extended by selecting other features that can reflect the driving style more accurately. As we know, the driving style is also influenced by the road conditions and the traffic flow level. Such results can also be used to improve the driving style recognition. It is possible to use semi-supervised and unsupervised methods to save labelling time in the future.

Figure 2: Fitting results and thresholds of surrogates.
Figure 3: The threshold values of surrogates indicating different driving risk.
Figure 4: Trajectory segments for four drivers based on threshold values of ITTC and THW.

This paper proposes: (1) The trajectory of each driver is divided into segments with different risk levels by the thresholds of rear-end collision surrogates. (2) The DFT, DWT, and statistical feature extraction methods are all applied to vehicle trajectory data, and their performance is compared. (3) This paper builds a driving style recognition model based on vehicle trajectory data with a 92.7% accuracy rate. The recognition results of SVM and other popular classification algorithms, including RF, MLP, and KNN, are compared.

Table 3: The evaluation results of driving style based on SVM using DFT.
Table 4: The accuracy of driving style recognition using DWT-SVM with different wavelet mother functions.
Table 5: The performance of DWT-SVM with different decomposition levels.
Table 7. With any combination of features, the accuracy rate of the statistical method is lower than that based on DFT and DWT. The highest accuracy rate in Table 7 is 85.7% when adopting three features.
Table 6: The evaluation results of driving style based on SVM using DWT. *: using the polynomial kernel function in SVM to produce better results.
Table 7: The evaluation results of driving style based on SVM using the statistical method. *: using the polynomial kernel function in SVM to produce better results.
Table 8: The performance comparison of multiple algorithms.
6,648.6
2019-01-23T00:00:00.000
[ "Computer Science" ]
Predictive Models for Optimisation of Acetone Mediated Extraction of Polyphenolic Compounds from By-Product of Cider Production Response surface methodology (RSM) was applied to provide predictive models for optimisation of extraction of selected polyphenolic compounds from cider apple pomace under aqueous acetone. The design of experiment (DoE) was conducted to evaluate the influence of acetone concentration % (v/v), solid-to solvent ratio % (w/v), temperature (˚C) and extraction time (min) and their interaction on phenolic contents, using the Central Composite Rotatable Design (CCRD). The experimental data were analysed to fit statistical models for recovery of phenolic compounds. The selected models were significant (P < 0.05) and insignificant lack of fits (P > 0.05), except for Chlorogenic acid and Quercetin 3-glucoside which had significant lack of fits (P < 0.05). All models had satisfactory level of adequacies with coefficients of regression R 2 > 0.9000 and adjusted 2 Adj R reasonable agrees with predicted 2 Pri R . Coefficient of variation < 5% for each determination at the 95% confidence interval. These models could be re-lied upon to achieve optimal concentrations of polyphenolic compounds for applications in nutraceutical, pharmaceutical and cosmetic industries. Introduction Mathematical modelling is an indispensable tool in many applications in science How to cite this paper: Ibrahim, S., Santos, R. and Bowra, S. (2020) Predictive Models for Optimisation of Acetone Mediated Extraction of Polyphenolic Compounds from By-Product of Cider Production. Advances in Chemical Engineering and Science, 10, 81-98. and engineering. It is the art of translating problems from an application area into tractable mathematical formulations whose theoretical and numerical analysis provides deep understanding to answers and guidance that are useful for originating applications [1]. Mathematical concepts and language are employed to facilitate proper explanation of the system and also explain the effects of different factors, and to make predictions of their behaviour [2]. Modelling based on mathematics provides thorough understanding of the system to be modelled and allows different applications of modern computing capabilities [3]. Models serve as tools for the understanding of very important and complex processes or systems [4]. Different types of models have been proposed and applied in chemical process for optimisation and for designing experiments to give better understanding of complex systems. Response surface methodology (RSM) is a multivariate statistical technique that evaluates the interrelationship between process parameters and responses [5] [6] [7]. Response Surface Methodology was set out by Box et al., 1950 [8] and was a collection of mathematical and statistical techniques used to improve the performance of systems for maximum benefits [9]. By fitting a polynomial equation to an observed data from within a designed of experiment (DoE), the technique was able to predict the behaviour of a response based on the set of independent variables [9]. Response surface methodology provides adequate information from a relatively fewer experimental runs compared to one factor at time procedure which involved plenty of time in experimental trials for model generation. The one factor at a time procedure requires more experiments to be able to explain the interaction of the independent variables on overall dependent quantity or response. 
Response surface methodology utilises three (3) levels of independent factors to produce experimental designs and employ polynomial models for analysis. RSM has important application in process development, formulation and design of contemporary products in addition to established ones. The technique is widely applicable in chemical and biochemical processes for varied objectives [10]. Comprehensive description of design of experiments by response surface methodology can be obtained from [11] [12] [13]. The current research seeks to demonstrate the possibility of developing predictive models that are reliable for optimisation of the recovery of polyphenolic compounds from cider apple pomace using aqueous acetone as a solvent. Apple pomace is the residue of apple juice and cider production and composed between 20% -35% by weight of the original production feedstock. The amount of the pomace generated and its composition will depend on the variety of the apple and the techniques used in extracting the juice [14]. Apple pomace is a potential source of carbohydrate, fibre, polyphenolics and pectin [15] [16] which find application in the food, feed, pharmaceutical, cosmetics, chemical, and biofuels sectors [17]. The major polyphenolic compounds found in apples include; Epicatechins, Procyanidins, Phloridzin, Quercetin conjugates and Chlorogenic acids. Advances in Chemical Engineering and Science Apple Pomace The apple pomace sample composed of 7 varieties of cider apples made of Harry Masters Jersey, Yarlinton Mill, Michelin, Dabinett, Brown Snout, Vilberie and Chisel Jersey, and were collected from Universal Beverages Limited (UBL), Ledbury owned by Heineken international. The apple pomace residues were mixed rigorously to obtain mixture characteristic of the original pomace sample and divided into parts and stored in freezer bags at −20˚C till further investigations. Chemical Reagents All chemical standards and solvents employed in this investigation were ordered at the highest grade of purity from suppliers indicated in the methodologies. Acetonenitrile, and glacial acetic acid were obtained from Fisher Scientific (UK). Dry Weight Content of Apple Pomace A bench top laboratory convention oven (103˚C ± 3˚C) from STATUS International, UK was used for dry weight content. The American Oil Chemist Society (AOCS) standard procedure was utilised to determine the dry matter content, and the results were expressed as the percentage of total fresh weight of the apple pomace as received. Apple Pomace Sample Preparation The apple pomace samples were freeze dried using a vacuum freeze dryer EQ03 (Vacuum and Industrial products). The dried pomace samples from the freeze dryer were placed in desiccator for 30 minutes for samples to return to ambient conditions. Freeze dried pomace residue was pulverised using a domestic Moulinex blender 530 (KEMAEU, France). The blending machine was stopped intermittently after every 20 seconds of milling and the pomace powder packed in dark plastic bags and stored in a cool dry place for subsequent use. Extraction of Polyphenolic Compounds from Freeze-Dried Apple Pomace Known weight of homogenised freeze dried apple pomace was weighed into 100 ml Duran bottles and acetone was added 1% -8% (w/v) solid-to-solvent ratio and the bottle tightly covered. Extractions were done in an incubator Max Q 4000 series benchtop shaker (Thermo Scientific). Extraction temperatures and time were set and shaking (150 rpm) and automatically stops when extraction time elapses. 
Extracts rich in polyphenolic compounds were transferred into 50 ml centrifuge tubes and centrifuged in Juan C4 I at 4000 g for 10 minutes. Supernatant volumes were recorded stored at −20˚C. Extractions at 60˚C and 85˚C were done within Grant OLS200 water bath. Experimental Design for Optimization of Acetone Mediated Extraction The design of the experiments was done similar to the procedure previously described in [18]. The design was composed of one factor at a time (OFAT) experiments and the overall design by response surface methodology (RSM). Solvent concentration (%, (v/v)), solid-to-solvent ratio (1% -8% (w/v)), temperature and extraction time influenced the recovery of polyphenolic compounds. Stat-Ease Design Expert software 7.0, was employed to set up experiments with varying independent variables, utilising the central composite rotatable design (CCRD). In all, thirty (30) experimental runs consisting of 16 trials for factorial points, 8 runs for axial points and 6 duplicates run around the central point (Table 1). Identification and Quantification of Polyphenolic Compounds by High Performance Liquid Chromatography (HPLC) High performance liquid chromatographic (HPLC) procedure in a reverse mode, previously published in literature was used to separate phenolic compounds [19]. Polyphenolic compounds in extracts were resolved using an Agilent 1100 series HPLC system with DAD-UV detector linked to a Chemstation Results and Discussion The mean dry matter content of the homogenized cider apple pomace under this investigation was 27.7 ± 0.3 g/100g fresh weight. Dry weight value reported for apple pomace in literature ranges, from 21.8 -33.6 g/100g [20] [21] [22]. Mean dry matter content of the freeze dried apple pomace was 28.3 ± 0.6 g/100g fresh weight. Identification and Quantification of Phenolic Compounds in Extracts The polyphenolic compounds in the extracts were identified by comparing retention times (t R ) and spectra data at maximum absorbance with known phenolic standards. Chlorogenic acid, Caffeic acid, Epicatechin, Procyanidin B, Quer-Advances in Chemical Engineering and Science cetin-3-galactoside and Quercetin-3-glucoside and Phloridzin, were found to be present in the extracts. These phenolic compounds were identified in industrial apple pomace and documented in literature [4] [19] [23] [24] [25]. The chromatogram of the aqueous acetone extract of the phenolic compounds at 320 nm is shown (Figure 1). The calibration equations, derived from the plots of concentrations of phenolic standard versus the chromatographic peak areas are shown in Table 2. Concentrations of phenolic compounds (mg/kg) dry weight of apple pomace of various design combinations were obtained from the regression equations of corresponding standards and reported (Table 3). Model Selection A number of modelling options were explored for possible selection, including two factor interactions, quadratic and cubic models. These were tested to select suitable model that best fits, and capable of depicting the real time response of the surface. For a given model to be appropriate, then it should be significant (P < 0.05) and an insignificant lack of fit (P > 0.05). Analysis of variance (ANOVA) at 95% confidence interval were performed utilising Stat-Ease software on the data shown in Table 3, to study the influence of the solvent concentration, solid to solvent ratio, temperature and extraction time on overall recovery of the responses. 
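For orientation, the coded layout of the central composite rotatable design described above (16 factorial, 8 axial, and 6 centre runs for the four factors) can be reproduced by hand, as in the sketch below. This is illustrative only, assuming the standard rotatability condition α = (2^k)^(1/4); it is not the Design-Expert output actually used in the study.

```python
import itertools
import numpy as np

def ccrd(n_factors=4, n_center=6):
    """Coded central composite rotatable design (CCRD), sketch only.

    2^k factorial points at +/-1, 2k axial points at +/-alpha with
    alpha = (2^k)**0.25 for rotatability (alpha = 2 when k = 4), plus
    replicated centre points: 16 + 8 + 6 = 30 runs for four factors.
    """
    alpha = (2 ** n_factors) ** 0.25
    factorial = np.array(list(itertools.product([-1, 1], repeat=n_factors)), float)
    axial = np.zeros((2 * n_factors, n_factors))
    for i in range(n_factors):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((n_center, n_factors))
    return np.vstack([factorial, axial, center])

design = ccrd()
print(design.shape)   # (30, 4) coded runs: acetone %, ratio, temperature, time
```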
The results obtained were fitted into a generalised second-order polynomial model as in (1):

Y = β0 + Σ βi xi + Σ βii xi² + ΣΣ βij xi xj (1)

where Y is the measured response; β0, βi, βii and βij are the regression coefficients for the intercept, linear, quadratic and interaction terms, respectively; and xi and xj are the coded design variables. Selected models for phenolic compounds were significant (P < 0.05) with insignificant lack of fit (P > 0.05), except for Chlorogenic acid and Quercetin-3-glucoside, which had significant lack of fit (P < 0.05). All models had a satisfactory level of adequacy with coefficients of regression R² > 0.9000, meaning that more than 90% of the data generated can be explained by the predictive models. The adjusted correlation coefficient R²Adj reasonably agrees with the predicted correlation coefficient R²Pred. Coefficients of variation were < 5% for each determination at the 95% confidence interval. The yields of polyphenolic compounds were significantly affected by the acetone concentration, solid-to-solvent ratio and temperature, in addition to their interactions. A summary of the analysis of variance (ANOVA) of the quantified phenolic compounds is shown in Table 4.

Chlorogenic acid is polar among the polyphenolic compounds, and its recovery from plant sources requires a reasonable level of polarity of the solvent. An increase in temperature from 10˚C to 60˚C at 1% solid-to-solvent ratio and an acetone concentration of 40% (v/v) caused the yield of Chlorogenic acid to increase by 14%, but it decreases by approximately 20% as the concentration of acetone approaches 80% (v/v). A concentration of acetone of 52% (v/v) at 40˚C was reported as good for recovering Chlorogenic acid from apple pomace [26]. The current investigation revealed 46% (v/v) acetone at 60˚C as good for extracting Chlorogenic acid from the cider apple pomace. Therefore, decreasing the concentration of acetone and increasing the temperature favours the yield of Chlorogenic acid. An optimal concentration (206.3 mg/kg dry weight of apple pomace) of Chlorogenic acid was recovered, which was within the range (30 - 1766 mg/kg) reported for selected cider apples [23]. The variation of the design parameters and Chlorogenic acid is shown in Figure 2.

Predictive Model for Extraction of Phloridzin. The Phloridzin concentration increased by 16% at 1% solid-to-solvent ratio as the acetone concentration was increased to 80% (v/v), and decreased by 24% as the solid-to-solvent ratio approached 8%. Temperature had a minimal effect on the recovery of the dihydrochalcone, as it only increased by about 1% when the temperature was increased from 10˚C to 60˚C, as shown in Figure 3. The optimal concentration of Phloridzin (858.92 mg/kg) was achieved using 73% (v/v) acetone at 60˚C for 60 minutes, as against 75% (v/v) at 40˚C for 60 minutes reported earlier [26].

Predictive Model for Extraction of Quercetin Glycosides. Quercetin-3-galactoside dominates among the quercetin glycosides in apple peels [27] and ranged in the extracts from 133.7 - 187.8 mg/kg dry weight of apple pomace with a mean concentration of 168.6 mg/kg. Quercetin-3-glucoside ranged in the extracts from 60 - 128.2 mg/kg. Both results agreed with previous reports of 50 - 520 mg/kg for Quercetin-3-galactoside and 9 - 152 mg/kg for Quercetin-3-glucoside in cider apples [23]. Transformed quadratic models excluding outliers were appropriate and described the behaviour of the Quercetin glycosides when the design factors were varied.
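As a hedged illustration of how a model of the form of Equation (1) can be fitted, the sketch below builds the quadratic design matrix and estimates the coefficients by ordinary least squares on simulated data. The actual fitting and ANOVA in this study were carried out in Stat-Ease Design Expert; the response values below are invented, not those of Table 3.

```python
import itertools
import numpy as np

def quadratic_design_matrix(X):
    """Columns for intercept, linear, squared and two-factor interaction terms."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                       # beta_i terms
    cols += [X[:, i] ** 2 for i in range(k)]                  # beta_ii terms
    cols += [X[:, i] * X[:, j] for i, j in itertools.combinations(range(k), 2)]
    return np.column_stack(cols)

# Simulated data: 30 coded runs and a hypothetical response (e.g. total
# phenolic content, mg/kg).
rng = np.random.default_rng(4)
X = rng.uniform(-2, 2, size=(30, 4))
y = 1000 + 50 * X[:, 0] - 30 * X[:, 1] ** 2 + rng.normal(0, 10, 30)

A = quadratic_design_matrix(X)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ beta
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 = {r2:.3f}")
```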
The model equations are shown in Equation (4) and Equation (5) Quercetin-3-galactoside concentration increased by 5.24% when concentration of acetone was raised to 80% (v/v) and slightly when temperature increases from 10˚C -60˚C with increasing solid-to-solvent ratio as reflected in Figure 4. Interaction between acetone concentration and solid-to solvent ratio (AC) was more significant than between temperature and time (BD) as revealed by their negative coefficient values which was higher in AC ( as their interaction suggested overtime with increasing temperature, less recovery of the glycoside could be recovered as shown in Figure 4. Decrease in concentration of the glycoside may be due to degradation or hydrolysis of the sugar moieties attached to the quercetin aglycone. Similar results were reported during ultra-sonication procedure of solvent extraction of Quercetin glycosides from "Idared'' apple peels [28]. Optimal acetone concentration of 76% (v/v) with 6% solid-to-solvent ratio was good for extracting quercetin-3-galactoside at 41˚C for 58 minutes extraction time. A predicted concentration of 189 mg/kg of quercetin-3-galactoside was suggested for best desirability at the optimal conditions. Quercentin-3-glucoside showed different behaviour with extraction parameters ( Figure 5 The optimal conditions for extracting Quercetin-3-glucoside using aqueous acetone from the apple pomace were 40% (v/v) acetone, 3.5% solid-to solvent ratio for 31 minutes at 23˚C extraction temperature. These optimal conditions were different for predicted conditions for recovering quercetin-3-galactoside from the apple peels. It is very important to emphasise that there are no data available in literature to the best of our knowledge as regards good extraction parameters for extracting Quercetin glycosides from cider apple pomace using acetone as an extraction solvent. Predictive Model for Extraction of Epicatechin Epicatechin, is a major flava-3-ol, in selected cider apples with concentration in extract ranging from 0 -193 mg/kg. Similarly, 46 mg/kg to 2225 mg/kg had been reported in fresh cider apples [23]. The regression analysis predicted model equation as shown in Equation (6) and the variation of design parameters with epicatechin concentration shown in Figure 6. Predictive Model for Extraction of Procyanidin B2 under Acetone Molecular and structural differences within Proanthocyannidins make their extraction and quantification very challenging. Their complexation with other non-soluble polymers underestimates their quantification due to incomplete extraction [29]. About 50% -93% of apple Procyanidins may be retained within cell wall material during processing of apple juice [30]. Procyanidin B2, is a major representative of the various groups of the proanthocyanidins in apple peels [19] and varied in the extract from 0 (not detectable) to 227.8 mg/kg with mean concentration of 137.68 mg/kg. Result was consistent with previous reports (56 mg/kg to 1362 mg/kg) of selected British cider apples [23]. Predicted model equation in terms of actual factors of Procyanidin B2 is shown in Equation (7). The variation Procyanidin B2 with experimental factors is shown in Figure 7. The concentration of Procyanidin B2 increases as solid-solvent ratio, temperature and acetone concentration increases and decreases significantly for further increase in these parameters. 
Optimal solvent concentration and solid-solvent ratio for extracting Procyanidin B2 from the apple pomace at 25˚C for 40 minutes were 54% (v/v) and 6% respectively. Effect of Design Variables on Total Phenolic Content by the HPLC Method Acetone concentration and solid-to solvent ratio and their interaction was the most significant factors in the recovery of the polyphenolic compounds. The model predicted total phenolic content in terms of actual design factors as in Equation (8 The case statistics report showing actual values versus those predicted using the model equation is shown in Table 5 The contour plot of the total phenolic content (mg/kg) quantified by HPLC method is shown in Figure 8. Acetone concentration and solid-to-solvent ratio significantly affected the overall yields of extraction of polyphenolic compounds. Optimised conditions of 65% (v/v) of acetone, 6% solid-to solvent ratio for 60 minutes at 60˚C were suggested using the statistical model equation with optimal total phenolic content of 1394.01 mg/kg. Validation of the regression model was conducted using the conditions above. The experimental value was determined to be 1392.20 ± 2.9 mg/kg which was in agreement with that predicted by the model. Higher amounts of phenolic compounds were mobilised around the optimised conditions. The chromatographic methods allowed quantification of individual phenolic compounds present in the extracts without any interference. The HPLC method may not well resolve all phenolic compounds in the extract. For instance, oligomeric flavanols which represent about 71% -90% of polyphenolic content in apples [31], were not observed in extracts under HPLC used because they might not be retained by the stationary phase. Conclusion The research demonstrated the application of statistical tools to design experiments for optimisation of recovery of polyphenolic compounds from cider apple pomace using aqueous acetone as solvent. Model equations were generated for selected phenolic compounds by studying the influence of acetone concentration, solid-solvent ratio, temperature and extraction time on extraction of polyphenolic compounds. The independent variables have shown selectivity towards efficient recovery of selected polyphenolic compounds. Improving the polarity of acetone by adding water sufficiently improved recovery of Chlorogenic acid and Procyanidin B2. Quercetin-3-glucoside and Quercetin-3-galactoside exhibited different relationship with temperature and solid-to-solvent ratio although both are classified under Quercetin glycosides. The experimental design predicted 65% (v/v) acetone, 6% (w/v) solid-to-solvent ratio, 60 minutes extraction time at 60˚C as optimum conditions for extracting polyphenolic compounds from the by-product of apple juice and cider production.
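The optimum conditions quoted above are obtained by maximising the fitted response surface within the experimental region. The sketch below shows this step on a purely illustrative quadratic surface, using a bounded numerical optimiser; the coefficients are assumptions for demonstration and are not the fitted ones reported in this work.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical fitted response surface for total phenolic content as a function
# of acetone concentration (% v/v), solid-to-solvent ratio (% w/v),
# temperature (deg C) and time (min); coefficients are illustrative only.
def predicted_yield(x):
    acetone, ratio, temp, time = x
    return (200 + 30 * acetone - 0.23 * acetone ** 2
            + 40 * ratio - 3.0 * ratio ** 2
            + 2.0 * temp + 0.5 * time)

bounds = [(10, 90), (1, 8), (10, 60), (10, 60)]
x0 = [50, 4, 35, 35]
res = minimize(lambda x: -predicted_yield(x), x0, bounds=bounds)
print("optimal conditions:", np.round(res.x, 1))
print("predicted yield (mg/kg):", round(predicted_yield(res.x), 1))
```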
4,128.6
2020-01-01T00:00:00.000
[ "Environmental Science", "Chemistry", "Agricultural and Food Sciences" ]
ONTOGENETIC VARIATION IN THE SAGITTA OTOLITH OF CENTROPOMUS UNDECIMALIS (ACTINOPTERYGII: PERCIFORMES: CENTROPOMIDAE) IN A TROPICAL ESTUARY Background. The presently reported study was initiated in order to increase the available information on this species of commercial and sporting importance, thus the study aimed to identify possible differences in the shape of the sagitta otolith during the ontogenetic development of the common snook, Centropomus undecimalis (Bloch, 1792), sampled between May 2017 and April 2018 at the mouth of the São Francisco River along its estuary stretch (approximately 10 km). Morphometric study of otoliths is important as a support for future studies on the trophic ecology of ichthyophagous fishes and studies on fishing stocks using the contour of otoliths of this species. Materials and methods. The fish were sampled monthly at five sampling sites distributed between the mouth of the São Francisco River and the municipality of Brejo Grande. For the collection, a beach seine (30 m long, 2.8 m high, and 5 mm mesh between opposite knots) was used. In the laboratory, the otoliths were extracted, photographed, described morphologically, and the possible differences in their contour were analyzed using the wavelets. Results. We analyzed 148 otoliths grouped into six class intervals. Otolith shape varied from rounded to trapezoidal during the ontogenetic growth and showed a gradual decrease in the percentage of presence of the excisura ostii (absent in the largest specimens). PERMANOVA evidenced significant differences in the contour between the smallest size class and the others. For wavelet 4, the LDA correctly reclassified 47.97% otoliths in the size classes, with the best reclassifications occurring in the 5.0–10.0 (43.33%) and 10.1–15.0 cm (65.52%) intervals. While for wavelet 5, the LDA correctly reclassified 59.46% otoliths according to the size class, with the best reclassifications occurring in the length classes 5.0–10.0 (46.67%), 10.1–15.0 (75.86%), 15.1–20.0 (66.67%), and 20.1–25.0 cm (59.38%). Conclusion. The ontogenetic differences found both in the shape and in the otolith structures are important for the enhancement of knowledge on fish biology and indicate the need for further studies. The lack of such information on estuarine species makes it difficult to conduct studies on the trophic ecology and the management of these species. INTRODUCTION Otoliths are mineralized structures formed by the deposition of calcium carbonate in a protein matrix. They are located in the inner ear of bony fishes and assist in the balance and hearing systems (Ladich and Schulz-Mirbach 2016). There are three pairs of otoliths (sagitta, lapillus, and asteriscus) representing different location, size, function, shape and structure (Thresher 1999). The otolith shape usually has an interspecific pattern among the species Echeverría 1999, Tuset et al. 2008), however, some internal (physiological) and external factors can modify the shape of otoliths in populations of the same species throughout the ontogenetic development. Several studies demonstrate how the shape of the otoliths can vary (Carvalho and Corrêa 2014, Maciel et al. 2019, Carvalho et al. 2020) and the ontogenetic variation influenced by growth has already been described for several species (Capoccioni 2011, Vignon 2012, Carvalho et al. 2015, Yan et al. 2017, Song et al. 2019). In addition to species physiology, environmental parameters influence the shape of otoliths. 
Due to hearing adaptation, depth proved to be a significant parameter in the shape of otoliths, as observed by Torres et al. (2000) and Cruz and Lombarte (2004). Changes in otolith shape caused by salinity were also observed (Capoccioni et al. 2011, Avigliano et al. 2012. It was also possible to detect the influence of temperature on otolith shape. The same fish-species populations living in bodies of water with wide temperature ranges distinctly differ in their otolith shape (Leguá et al. 2013). Recent studies have shown that environmental stress can cause morphological changes, even irregularities, in the deposition of crystals in otoliths (Carvalho et al. 2019, Holmberg et al. 2019. Several methods are implemented in the description of the morphology and contour of otoliths . Among them are: • polar coordinates , • landmarks (Monteiro et al. 2005, Carvalho et al. 2015, • Fourier harmonics (Libungan et al. 2015, Bose et al. 2017, and • wavelets (Sadighzadeh et al. 2014. Fourier harmonics yield better results with phylogenetically distant species, while wavelets provide better results both in distinguishing phylogenetically close species and in identifying intraspecific variations (Sadighzadeh et al. 2012). Fishes of the family Centropomidae are distributed in the tropical and subtropical regions of the Atlantic and Pacific oceans along the coasts of the American continent (Rivas 1986 (Junior et al. 2007, Ostini et al. 2007. Therefore, centropomids are the target of artisanal, commercial, and recreational fishing Taylor 2013, Muller et al. 2015). Even though they are euryhaline species, they are more frequently found in estuarine systems (Seaman and Collins 1983). The common snook, Centropomus undecimalis, popularly known as sea bass, is a protandrous hermaphrodite species, with euryhaline, diadromous, and demersal habits (Taylor et al. 2000, Perera-García et al. 2011. Its distribution extends from North America (Florida, USA) to South America (Rio de Janeiro, Brazil) and is widely distributed along the Brazilian coast (Figueiredo and Menezes 1980). The species is a predator, with primarily piscivorous feeding habit and occupies high levels in the trophic web (Figueiredo and Menezes 1980, Aliaume et al. 2005, Lira et al. 2017. Therefore, the objective of the presently reported study was to identify possible ontogenetic differences in the sagitta otolith of C. undecimalis, caught in a tropical estuary, as a support for future studies on the trophic ecology of ichthyophagous fishes in the region and studies on fishing stocks using the contour of otoliths of this species. MATERIALS AND METHODS Sample collection and processing. The specimens of Centropomus undecimalis were sampled monthly, between May 2017 and April 2018, at five sampling sites distributed between the mouth of the São Francisco River and the municipality of Brejo Grande (Fig. 1), in the lower São Francisco River (10°28′34.02′′S-36°24′27 .02′′W). For collection, a beach seine (30 m long, 2.8 m high, and 5 mm mesh between opposite knots) was used. Subsequently, the caught fish individuals were refrigerated, identified to the species taxonomic level using specialized literature (Figueiredo and Menezes 1980), measured (total length TL; 0.01 cm), weighed (total weight TW; 0.1 g), divided into six length classes ( here Ψ is the function with local support in an amplitude limited in the abscissa axis, φ is the lowest tone filter and s is the scale parameter (Mallat 1991). 
From the wavelets, 512 equidistant coordinates are distributed in each otolith starting from the rostrum (1) and ending at the same (512) (Fig. 2B). The acquisition of wavelets was carried out on the AFORO website * as described by Parisi-Baradad et al. (2010). Statistical analysis of otolith contour data. Data obtained for the wavelets did not meet the assumptions required for parametric tests (Shapiro-Wilk; P < 0.05 and Bartlett's test; P < 0.05). Thus, to identify variations in the otolith contour between the class intervals, a Permutational Analysis of Variance (PERMANOVA) was applied. If the test detects significant differences in the otolith shape between the size classes (P < 0.05), a Bonferroni test was used to identify between which intervals the significant interaction is. From the principal component analysis (PCA), using the variance-covariance matrix, the wavelet functions were * http://isis.cmima.csic.es summarized without losing information (Tuset et al. , 2016. The broken-stick method indicated the principal components (PC) to retain, which further explain the variability in the otolith contour (Gauldie and Crampton 2002). To exclude the effect of otolith allometry, a linear regression was run between the PC and the total length of the fish (TL); from the regressions between PC and TL that showed significance, the residuals were used for the linear discriminant analysis (LDA). Using the PCs and the class intervals, it was possible to employ an LDA to check the percentage of correct reclassification of otoliths within the class intervals. Otoliths of C. undecimalis presented some morphological variations throughout ontogeny, being rounded otoliths ( The results of PERMANOVA indicated significant differences in the contours between size class intervals (F = 9.583, P < 0.0001) and the Bonferroni test pointed out that these differences are caused by the first class interval, which differed from all others (Table 1). Figure 6 shows a high variability in the shape of otoliths of C. undecimalis along its ontogeny obtained by wavelet 4. Axis 1 explained 61.63% variability in the shape of otoliths. On the positive axis 1, the otoliths from For the wavelet 4, LDA presented 47.97% correct global reclassifications of otoliths between the defined class intervals; when considered the intervals, the best reclassifications were found in the intervals 5.0-10.0 cm with 43.33% and in the 10.1-15.0 cm, with 65.52% (Table 2). DISCUSSION The morphology of otoliths of Centropomus undecimalis in the presently reported study indicates as a diagnostic trait of this species the otolith shape (from elliptical to trapezoidal, varying ontogenetically), heterosulcoid sulcus acusticus and the presence of dorsal depression. Some characteristics varied a lot during the ontogenetic development, such as type of margins, excisura ostii, and stages of development of the rostrum. The absence of in-depth studies like this on the ontogenetic variation of C. undecimalis otoliths makes it difficult to compare and identify differences influenced by the environment in all life stages of this species. Otoliths from adult individuals of this species, however, caught both in Florida (USA) * and on the Brazilian coast (Brenha-Nunes et al. 2016) show morphological similarity with those observed in the presently reported study. 
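Returning to the contour-analysis pipeline described in the statistical methods above (PCA of the wavelet descriptors with broken-stick component selection, followed by LDA reclassification into length classes), the sketch below shows one possible implementation on simulated contour data. The allometry-correction step, which uses the residuals of the PC-versus-TL regressions, is omitted for brevity, and the variable names are ours.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)

# Hypothetical data: one wavelet-derived contour descriptor per otolith
# (512 coordinates) and the length class of each fish.
n_otoliths, n_points = 148, 512
contours = rng.normal(size=(n_otoliths, n_points))
length_class = rng.integers(0, 6, size=n_otoliths)

# PCA; keep components whose explained variance exceeds the broken-stick value.
pca = PCA()
scores = pca.fit_transform(contours)
p = pca.n_components_
broken_stick = np.array([np.sum(1.0 / np.arange(k, p + 1)) / p
                         for k in range(1, p + 1)])
keep = pca.explained_variance_ratio_ > broken_stick
if not keep.any():
    keep[:2] = True   # fallback so this toy example still runs
scores = scores[:, keep]

# LDA reclassification rate of otoliths into length classes.
lda = LinearDiscriminantAnalysis()
predicted = lda.fit(scores, length_class).predict(scores)
print(f"correct reclassification: {np.mean(predicted == length_class):.1%}")
```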
The otoliths analyzed in the present study presented a shape that varies from elliptical to trapezoidal, a characteristic diagnostic trait for the genus Centropomus, as already observed in previous studies (Lombarte et al. 2006 It is still possible to denote that the analyzed otoliths have the cauda curved towards the posterior ventral region of the otoliths, and this is considered a standard for the genus. It is also worth noting the presence of a heterosulcoid sulcus acusticus in this species, and this characteristic is common to the order Perciformes, however, it is also present in the orders Atheriniformes and Clupeiformes (see Carvalho and Corrêa 2014, Siliprandi et al. 2014, Carvalho et al. 2015. The presence of an elongated, rounded and pronounced rostrum are characteristics pointed out by Gallardo-Cabello et al. (2017) and Espino-Barr et al. (2019) for the genus Centropomus and are characteristics that were also observed in the presently reported study. Gallardo-Cabello et al. (2017) also point out that for otoliths of the species Centropomus nigrescens Günther, 1864 there is an absence of pronounced notches (excisura major and minor) causing the absence of both antirostrum and pararostrum, this seems to be a characteristic of the genus in larger individuals, since that the presently reported study detected that the prevalence of the excisura decreases along with the ontogenetic development of Centropomus undecimalis. The prevalence of crenulated borders was constant during the analyzed ontogenetic development, Ontogenetic changes in otolith contour have been widely observed in several species (Capoccioni et al. 2011, Vignon 2012). In C. undecimalis, variations in the contour throughout ontogeny were also found (Table 1 and Table 2), studies that describe the ontogenetic variations in the shape of otoliths are of paramount importance for the identification of prey of ichthyophagous fishes (Bugoni and Vooren 2004, Carvalho et al. 2019, Rodrigues et al. 2019). The lack of morphological studies on otoliths at different stages of life makes it difficult to identify species, causing an erroneous identification among ingested prey or an increase in the number of specimens in the "unidentified" category. For example, otoliths from C. undecimalis at the intermediate phases have characteristics very similar to other perciform fishes, such as Pomadasys corvinaeformis (Steindachner, 1868), otoliths from C. undecimalis at the adult phases are similar to otoliths from Lutjanus analis (Cuvier, 1828) (see Martínez et al. 2007, Brenha-Nunes et al. 2016. The results obtained in the presently reported study show changes in the shape of otoliths of C. undecimalis throughout the ontogenetic development (from elliptical to trapezoidal). Such an effect can be caused by the The number in parentheses corresponds to the frequency of reclassification. The information in bold print is the number and percentage of otoliths correctly reclassified when comparing the size class with itself. It is highlighted to show that the reclassifications have had a good success rate. change of habitat and by the exposure of individuals to different salinity throughout their development since the salinity is recognized for causing changes in the shape of otoliths (Capoccioni et al. 2011, Avigliano et al. 2012. As mentioned above, C. undecimalis had a diadromous habit, moving to places of higher salinity (close to the mouth of estuarine systems) in reproductive periods. 
After hatching, young individuals tend to migrate to more internal areas of the estuaries where they remain until the reproductive period (Perera-García et al. 2011). According to Avigliano et al. (2012), elliptical-shaped otoliths are associated with fish found in environments with higher salinity. Therefore, the change in otolith shape for C. undecimalis (from elliptical to trapezoidal) may be reflecting the species migrations in the estuarine environment throughout development. Nevertheless, other variables (such as diet and physiological stress) may also be influencing this change in the otolith shape, so further studies are still required to define which variables are actually causing this change. In addition to changes in shape, otoliths of C. undecimalis also showed morphological variations in the rostrum and in the excisura ostii during ontogenetic development. The results indicate a tendency to decrease the percentage of otoliths with the excisura ostii and a decrease in the development of rostrum with the growth of individuals. The development of the rostrum and the excisura ostii were considered by Volpedo and Echeverría (2003) as a diagnostic trait for the position in the water column and swimming ability. Otoliths that have a prominent rostrum and deep excisura ostii are characteristic of fish with a pelagic habit, while otoliths with little pronounced rostrum and a shallow or absent excisura ostii are characteristic of fish that have a demersal habit (Volpedo and Echeverría 2003). These changes in the rostrum and the excisura ostii along the ontogeny of C. undecimalis may be the result of the change in habitat caused by the diadromous habit, with smaller individuals more present in the water column and larger individuals more associated with substrate and rigid structures, according to Froese and Pauly (2019), the species is associated with rigid structures like rocks or tree branches. Studies using wavelets in otoliths usually employ this technique for the characterization of fish stocks (Wiff et al. 2019), characterization of populations (Libungan et al. 2015), or for ecomorphological studies (Sadighzadeh et al. 2014), however, the use of this technique to identify differences in the morphology of otoliths during ontogenetic development is still scarce. It is also worth mentioning that the presently reported study is a pioneer in testing separately wavelet 4 and 5 by class interval, in which wavelet 5 obtained a good rate of correct reclassification of otoliths between intervals; perhaps this wavelet is sensitive to ontogenetic differences, however further studies are required to confirm this. Conclusions and future perspectives. The ontogenetic differences found in the presently reported study highlight the importance of conducting further studies of this type, as for the majority of species (commercially important or not) information such as these is still lacking. The lack of this information makes it difficult to develop studies on the trophic ecology of ichthyophagous fishes, leading to the identification failure of many of the otoliths in the stomach contents or the confusion between close species that may present similarities in their otoliths. Moreover, this study indicated the possibility that the wavelet 5 is sensitive to ontogenetic variations, however, more studies are needed to confirm it.
3,659.6
2020-12-07T00:00:00.000
[ "Environmental Science", "Biology" ]
Assessing the Risks Associated with the Canadian Railway System Using a Safety Risk Model Approach Canada’s national rail network plays a vital role in moving goods and people, transferring $320 billion worth of goods and over 100 million passengers annually. Severe train occurrences are rare events. But they have the potential to cause fatalities and injuries, as well as environmental and property damage. Recent severe incidents, such as Burlington in 2012 and Lac-Mégantic in 2013, have shown that there is still a need for increased awareness and enhanced risk assessment. This work focuses on risk assessment on the Canadian railway system using the Safety Risk Model (SRM). The study applied a customized Canadian SRM (C-SRM) to two groups of hazardous events: main-track derailments and collisions with fatality and injury consequences, calibrated for data between 2007 and 2017. The model used Fault Tree Analysis (FTA) and Event Tree Analysis (ETA) to identify the risks of hazardous events. The individual risks of the hazardous events were then evaluated for three groups of people: passengers, employees, and members of the public (MOP). Finally, the effectiveness of introducing a new control measure, Enhanced Train Control (ETC), was assessed. The results of the study showed that the collective risk of main-track derailments is higher than main-track collisions. Moreover, the risk to MOP and employees form the most significant proportion of individual risk. Finally, risk reduction analysis of the ETC revealed that developing this system reduced the risk of main-track derailments and collisions. This new control measure thus has the potential to make Canadian railways safer. Canada's national rail transportation system is the third largest in the world, operating more than 40,000 km of track across the country (1).The rail network connects industries, consumers, and resource sectors to ports on the Atlantic and Pacific coasts, which has resulted in transporting $320 billion worth of goods by rail each year (2).Furthermore, each year, over 100 million passengers travel on Canada's railways (3).Severe freight and passenger train occurrences are rare events.However, when they happen, they have the potential to cause injuries and fatalities, along with environmental and property losses (1).The Lac-Me´gantic accident, where 47 people lost their lives after a freight train derailment in 2013 (4), and the Burlington accident in 2012, where a passenger train derailment resulted in three fatalities and 45 injuries of various degrees (5) are two examples of recent railway accidents showing the potential consequences of these kinds of event.These rail accidents demonstrate the dangerous nature of the railway industry and emphasize the need for increased awareness and continuous enhancement and updating of risk assessments to control existing residual risks. 
Understanding risk is essential for the safe management of any business, particularly those sectors called 'high-hazard' such as oil and gas extraction, mining, airlines, and railways (6).Applying an appropriate risk assessment tool will help to identify the current level of risk and develop a plan to mitigate those risks.Various risk assessment techniques are currently used in the railway industry (7)(8)(9).One popular and widely used model among those comprehensive and quantitative approaches is the Safety Risk Model (SRM).This model was developed in the UK for the first time to improve railway safety in an impartial and scientifically supportable manner.The SRM is developed in the form of a cause and consequence analysis using Fault Tree Analysis (FTA) and Event Tree Analysis (ETA).The model is focused on hazardous events which have the potential to lead directly to death or injury.Risk in the context of SRM is defined as the estimate of the potential for harm to passengers, staff, and members of the public (MOP) from the operation and maintenance of the railway.The results of SRM represent the level of residual risk.In other words, it shows the level of risk with the assumption that all current control measures are established with their current degree of effectiveness (6). SRM has been widely used in the UK (6,9) and the US (10).In the UK, the outcome of SRM is used to produce a regularly updated ''Risk Profile Bulletin,'' which is used by UK railways in the production of their statutory Safety Cases.The model is also helpful in testing the impact of proposed new controls on risk levels (6).The Safety Management Information System (SMIS) database that is used to populate the SRM for the UK is very large and contains more than 2 million records (11).Dealing with managing these complex rail assets, recent studies have focused on the big data risk analysis (BDRA) program (11,12).This program is investigating how big data processing techniques can support the current SRM of the Rail Safety and Standards Board (RSSB), whether they will change traditional risk analysis, and if so, how.Van Gulijk et al. (11) presented six BDRA projects, including an on-train data recorder (OTDR)-based signal passed at danger (SPAD)-safety indicator, red aspect approach to signals (RAATS), learning from text-based close call records, visual analytics (VA), ontology, and Safety Management Intelligence System (SMIS + ).These different projects provide a broad overview of the usefulness of big data to railway safety.Reviewing the literature also revealed that the UK further works toward its rail safety by developing geospatial models (GeoSRM) of rail safety hazards across its main-track rail network.The GeoSRM is webbased and shows how risk is distributed across the network.It provides the opportunity for users to submit queries on the risk level for specified regions, routes, hazard types, and so forth, and display the risk levels overlaid on a map (13,14).Sadler et al. 
(13,14) presented a technique for the development of a full-scale GeoSRM for a representative subset of the UK rail network.They also suggested an approach to make this accessible to a broad range of users.The UK is also working on developing a new SRM methodology.For this, users' requirements were first identified, collated, and prioritized.Then the current SRM was reviewed and evaluated against these requirements to identify the areas that have the potential to enhance the current methodology.Potential new methodologies were then assessed for feasibility and potential benefits to determine the best approach for building the new SRM.The study suggested that the new SRM should provide more accurate and easier-to-use starting points for risk assessments, especially when undertaking localized assessments and evaluating the impacts of design changes (15).As mentioned earlier, the SRM is also used in the US.The Federal Railroad Administration (FRA) has developed the SRM as a means of quantitative risk-ranking to facilitate project selection.The results of the SRM assist the FRA in focusing its R&D effort on the topics that cause the highest level of harm in the railroad industry.It is beneficial in making strategic project investments for maximum safety benefit.Employing SRM also allows for future assessment of risk reduction resulting from implementation of the mitigation strategies (10). Close collaborations between US and Canadian railways (1) motivated this research to investigate the application of the SRM in the Canadian rail network.Reviewing the literature revealed that a comprehensive SRM that aggregates Canadian railway operations (federally regulated) is not available.As a result, to address this gap and contribute to the continuous improvement of rail transportation safety, this research will focus on risk assessment of railway accidents by applying a customized Canadian SRM (C-SRM). The techniques used in the SRM are applicable to other railroad industries; however, the FTA and ETA must be amended for the particular configuration of the railway.C-SRM is a nationwide risk assessment model that reflects Canadian railway operations' characteristics.For example, the Arthur D. Little Inc. (ADL) cause classification was used to develop the FTA, which is commonly used in the Rail Occurrence Database System (RODS) for classifying the causes of Canadian rail occurrences.The contributing factors in the ETA analysis were based on the RODS database for Canadian rail events.Moreover, the main difference between SRM and C-SRM is that SRM considers different subdivisions for individual rail occurrence types.For example, the collision was divided into different consequence categories, such as a collision between two passenger trains (other than on the platform), a collision between a passenger train and a non-passenger train, and so on.In this study, given the limited database, all of the subdivisions related to one accident type (collision, derailment) were considered in one group, and no further subdivisions were investigated. 
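To make the fault-tree side of the C-SRM concrete, the sketch below shows how a top-event frequency could be aggregated from cause-group counts and normalised by traffic exposure. The cause groups, counts, and train-mile figure are invented for illustration, and the simple summation over rare, mutually exclusive cause groups is an assumption; the real model uses all 51 ADL groups populated from RODS.

```python
import numpy as np

# Hypothetical counts of main-track derailments attributed to a few ADL-style
# cause groups over an observation period, with the train-miles operated.
cause_counts = {"track geometry": 120, "broken rails or welds": 90,
                "bearing failure": 45, "human factors": 60, "other": 85}
train_miles = 8.0e8          # assumed exposure over the period
years = 11

# Fault-tree style aggregation: the hazardous-event frequency is approximated
# as the sum of the (rare) cause-group frequencies.
freq_per_mile = {c: n / train_miles for c, n in cause_counts.items()}
top_event_per_mile = sum(freq_per_mile.values())
top_event_per_year = sum(cause_counts.values()) / years

print(f"hazardous-event probability per train-mile: {top_event_per_mile:.2e}")
print(f"hazardous-event frequency per year: {top_event_per_year:.1f}")
```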
Developing C-SRM also provides an opportunity to apply risk reduction analysis to determine the effectiveness with the introduction of a new control measure, Enhanced Train Control (ETC).ETC technologies developed to increase awareness of the train operator in combination with fail-safe systems similar to the PTC system functionality implemented in the US (16).PTC is a wellknown control measure that has been mandated by the Rail Safety Improvement Act of 2008 (RSIA) for development on certain Class 1 railroad main lines in the US to improve rail transportation safety.In December 2020, FRA announced that PTC technology would be implemented on all required freight and passenger railroad route miles (10).While the railway industry supports this measure, railway operators in Canada have raised several major concerns with regard to their experience with PTC implementation in the US such as expensiveness, complexity, and the significant effort requirement to apply it (16).The US experiences revealed that the systems' cost and complexity greatly depend on the scope of the development and the amount of fail-safe functionality that is required.In reviewing PTC deployment in the US, in 2016, Canada's Advisory Council on Railway Safety recommended that the development of a targeted, riskbased, corridor-specific train control approach is the best option for the deployment of this technology in Canada since the ''one-size-fits-all'' approach is not the best solution for the Canadian environment.Any ETC initiative for Canadian railways would be scaled and based on well-established risk factors and a thorough cost-benefit assessment (16,17).This suggestion has been the ''working assumption'' for implementing ETC in Canada.ETC technologies help to prevent certain rail accidents caused by human error and, as a result, improve safety for passenger and freight trains.These technologies act as a driver-assist mechanism by alerting the train crew to danger and, at their highest functionality, applying train brakes to slow or stop a train to prevent a collision or derailment.The recent Notice of Intent published in the Canada Gazette on February 5, 2022, revealed that Transport Canada (TC) intends to implement ETC in Canada to make the country's rail transportation system even safer (1).Although a fail-safe system has already been implemented in the US, its functionality will differ for the Canadian network, as described in TC's Notice of Intent.Therefore, an empirical evaluation of ETC's effectiveness in risk reduction could not be completed and needs further investigation.A study conducted by the Canadian Rail Research Laboratory (CaRRL) showed that between 20.4% and 30.5% of main-track collisions and between 1.1% and 2.4% of main-track derailments could have been prevented with an ETC system (16).The current study is investigating if the result of risk reduction analysis of ETC through C-SRM is in line with the result of CaRRL research. In high-hazard industries like railways, developing a good SMS, sound engineering, and competent staff can decrease the probability of hazard occurrence to a very low level.It could be beneficial in improving the overall safety performance of these industries in comparison with less-controlled human activity such as road transportation (6).To implement an effective SMS, it is essential to prioritize risks and available controls.Demichela et al. 
(18) stated that the SMS is often formulated without a quantitative risk assessment as it is seen as too costly and time consuming.Moreover, the required data is often unavailable to conduct a quantitative analysis.Yet, without a quantitative risk assessment, defining the objective of SMS is difficult (18).When railway engineers, managers, and safety analysts start with an understanding of the risks, they can allocate the limited available resources most effectively to enhance safety (19). In the absence of a comprehensive SRM that aggregates (federally regulated) Canadian railway operations, the current research will focus on risk assessment of railway accidents by applying a C-SRM to quantify the level of risk for fatalities and injuries.Therefore, the risk is calculated as a collective risk (the average number of equivalent fatalities per year) and individual risk (the annual probability of equivalent fatality/year for a particular passenger or staff group using the railway).Then, the risk reduction analysis of ETC implementation is applied. Methods This study is focused on investigating the Transportation Safety Board of Canada (TSB) and Rail Occurrence Database System (RODS) databases for main-track train derailment and collision, Classes 1 to 5 occurrences (Appendix A-Definitions of classes of occurrences).The assessment is performed on the 1,085 reported maintrack derailments and collisions in the eleven-year period between January 1, 2007, and December 31, 2017.This timeframe was chosen given the available databases for part of the study analysis (FTA analysis).For accidents not reported by RODS, TSB reports were used to collect the required information. It is worth mentioning that trespassing and crossing accidents were not included in this research.These accidents are responsible for major fatalities and injuries every year.In 2020, 96.6% of fatalities and 81.6% of serious injuries in the rail industry resulted from trespassing and crossing occurrences (20).However, given the nature of these events, which are mainly related to the self-harm or reckless behavior of a third party, the estimation of the number of people exposed to such events was accompanied by difficulties and uncertainty (21) and outside the scope of this study. SRM is a form of cause and consequence analysis using FTA and ETA to represent each of the hazardous events.A hazardous event is taken to mean an event that has the potential to lead directly to fatalities or injuries.This event can be considered as a knot between FTA and ETA.It is important to note that rail occurrences with a larger amount of Dangerous Goods (DG), a high number of cars derailed, and so forth, but no fatality and injury outcomes, could have had severe consequences if they had happened in a more populated area.Developing potential loss scenarios will be the focus of our future work.The focus of this study is on two groups of hazardous events: main-track derailments and main-track collisions with death and serious or minor injury outcomes.The RODS-injury database was used to see if a maintrack derailment or collision could be taken into account as a hazardous event. To apply FTA, Arthur D. Little Inc. 
(ADL) cause classification was used. ADL divided similar accident causes into 51 unique groups. These groups were also separated into five main categories: mechanical, human, signal, track, and miscellaneous causes (22,23). The frequency of these cause groups for main-track derailments and collisions was obtained from the RODS database (Appendix B) (21). Then, the probability of a hazardous event was calculated according to the miles traveled by trains.

ETA was then implemented. After a hazardous event, several processes, such as DG release, fire, and explosion, can lead to fatalities or serious or minor injuries. The critical factors influencing the final outcome of each hazardous event, and their probabilities, were identified by investigating the RODS database. Finally, by combining the FTA and ETA, the risk of the hazardous event was calculated.

In the first step of the study, the collective risk of the hazardous events was evaluated. The collective risk was calculated using Equation 1:

CR = F × C  (1)

where F is the frequency (the average frequency at which the hazardous event occurs) in events per year, C is the consequence (the average consequence if a hazardous event occurs) in equivalent fatalities per event, and CR is the collective risk, the average number of equivalent fatalities per year.

As the next step, the individual risk of the hazardous events was calculated for three groups of people: passengers, staff, and members of the public (MOP). Individual risk is the total annual risk to passengers, staff, and MOP using the railway (6,24). It has a formula similar to the collective risk, but the risk is evaluated as the average number of passenger/staff/MOP equivalent fatalities per year.

To reflect the difference in severity between injuries and fatalities, different weights were assigned to fatalities and to major and minor injuries: 10 major injuries or 200 minor injuries are each equal to one equivalent fatality (24).
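To make the two risk measures concrete, the following minimal Python sketch walks through the calculation. All casualty counts and the event count are invented placeholders rather than C-SRM values; only the CR = F × C structure, the 2007-2017 study window, and the equivalence weights (1 fatality = 10 serious injuries = 200 minor injuries) follow the text above.

```python
# Minimal sketch of the collective- and individual-risk calculations.
# All counts are illustrative placeholders, not values from the C-SRM.

YEARS = 11  # 2007-2017 study window

def equivalent_fatalities(fatal, serious, minor):
    """Convert casualty counts into equivalent fatalities (1 = 10 serious = 200 minor)."""
    return fatal + serious / 10 + minor / 200

# Hypothetical casualty totals for one hazardous-event type, by group:
# (fatalities, serious injuries, minor injuries)
casualties = {
    "passenger": (0, 2, 14),
    "staff":     (1, 1, 3),
    "MOP":       (3, 0, 1),
}
n_events = 6                               # events behind these casualty counts
events_per_year = n_events / YEARS         # F: frequency of the hazardous event

# Collective risk (Equation 1): CR = F x C.
total_eq = sum(equivalent_fatalities(*c) for c in casualties.values())
C = total_eq / n_events                    # average consequence per event
CR = events_per_year * C
print(f"collective risk: {CR:.3f} equivalent fatalities/year")

# Individual risk: same structure, with casualties restricted to one group.
for group, c in casualties.items():
    IR = events_per_year * equivalent_fatalities(*c) / n_events
    print(f"individual risk ({group}): {IR:.2e} equivalent fatalities/year")
```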
During the third phase of the study, a risk reduction analysis was performed by using the C-SRM.This step assessed the impact of ETC implementation on rail transport risk.ETC system functionality falls into Driver Advisory Systems and Automatic Train Protection Systems.The Driver Advisory System provides various information and alerts for the train crew to enhance their situational awareness.In this category, train crews are still responsible for rule compliance.This system may, therefore, reduce the probability of human error but not eliminate it.The Automatic Train Protection System has the Driver Advisory System's capabilities.In addition, in a situation when safety risk is imminent, it provides automatic (''fail-safe'') enforcement by applying the train brakes to prevent derailments (1).In this study, to consider the greatest benefit of ETC implementation, its functionality was considered in the Automatic Train Protection System category.This functionality could eliminate the role of human error in train accidents.Recognizing this potential, a 100% effectiveness was considered for two accident causes, main-track authority and speed, which could be preventable by ETC implementation at the Automatic Train Protection System level.Therefore, zero failure annual probability was considered for these causes in the FTAs.The FTA and ETA calculations were then done for main-track derailments and collisions, and the risk of these hazardous events was assessed in both terms of collective and individual risk.Finally, the results of this phase were compared with the previous phase's results to assess the ETC system's impact on risk reduction.The research methods of this study are summarized in Figure 1. It is noteworthy that, at this time, the details of ETC capabilities are unknown.Therefore, preventing other accident causes, such as trains traveling on misaligned switches, was not considered.However, the consideration of movements exceeding the limits of authority and over speeding is a reasonable control provided by ETC. Collective Risk Investigating the RODS database revealed that 1,026 main-track derailments with class 1-5 occurrences have occurred between 2007 and 2017.Evaluating the RODSinjury database showed that 14 freight and passenger train derailments resulted in fatalities and injuries.Figure 2 represents the distribution of the fatalities and serious and minor injuries based on the year of occurrence. Since the data evaluation was limited to main-track derailments, the number of fatalities and serious or minor injuries has been zero for some of the years.A peak in fatalities in 2013 was related to the Lac-Me´gantic disaster, with 47 casualties (4).Moreover, the spike in 2012 corresponded to the Burlington passenger train derailment which caused three fatalities as well as 10 serious and 35 minor injuries (5).Despite the fluctuation in the recorded data, no fatality or serious injury from 2014 to 2017 revealed an improvement with a decrease in the number of occurrences with the potential for fatality or serious injury.The reason for this improvement could be related to the development of further preventive and mitigative strategies.For example, replacing TC/DOT-111 tank cars with TC-117 tank cars, which are more puncture resistant and equipped with a thermal protection system (25).Moreover, temporarily restricting the train speed in some areas based on the environmental conditions (26) is helpful in reducing rail occurrences or mitigating their consequences. 
To identify the collective risk of the main-track derailments, FTA was applied. The main groups and subgroups of causes reported for main-track derailments are shown in Figure 3. The FTA also contains data on the annual failure probability of each cause. As shown in Figure 3, the cause groups are led by track, roadbed, and structures, followed by mechanical and electrical failures. At the subgroup level, rail breaks, track geometry, and train handling show the highest probabilities, respectively. Based on the annual failure probabilities of the causes, the annual probability of a main-track derailment leading to fatality and injury was calculated. To eliminate the variability associated with train traffic fluctuation, the FTA was normalized per train mile: the total annual failure probability was divided by the total distance traveled by Canada's freight and passenger trains on the main track between 2007 and 2017 (858.8 million train miles) (27).

Following the FTA, an ETA was applied. Evaluating the RODS showed that a main-track train derailment might lead to a collision, DG release, fire, explosion, and evacuation. In an ETA calculation, the sum of the probability of failure and the probability of success for an intermediate event is equal to one; where one of these probabilities (failure or success) was equal to one, the probability of the other was equal to zero. For example, in the first branch of the ETA for main-track derailments, all of the main-track derailments with fatality, injury, and collision consequences resulted in DG release. As a result, the probability of success for DG release as an intermediate event was equal to one, and the probability of failure for this event was equal to zero. The same approach was applied to the other intermediate events. The results of the ETA are presented as the frequency of occurrence (number of events per year) and the risk (number of equivalent fatalities per year). The accident scenario leading to all of the mentioned outcomes was associated with the highest level of risk to life, 4.07 × 10⁻³ equivalent fatalities per year. It is noteworthy that the evacuation scenarios still carry risk, as fatalities and injuries may occur as a result of collision, DG release, fire, and explosion. The ETA outcomes and the collective risk of main-track derailments causing fatality and injury are shown in Figure 4. Small discrepancies in some of the numbers are caused by rounding during the calculation process.

The next stage of the study was to evaluate the collective risk of the main-track train collisions. The assessment of the RODS database showed that 59 main-track collisions (Class 1-5 occurrences) occurred between 2007 and 2017. Based on the RODS-injury database, six of the freight and passenger train collisions had fatality or injury consequences. The distribution of the fatalities and serious or minor injuries based on the year of occurrence is presented in Figure 5. Because the data assessment was restricted to main-track collisions, the number of fatalities and serious or minor injuries has been zero for some of the years. Main-track collision accidents mainly resulted in minor injuries rather than fatalities and serious injuries. Since 2014, there have been no occurrences with fatality or serious or minor injury outcomes, which indicates an improvement in reducing the number of such occurrences.
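The FTA/ETA pipeline just described can be summarized in a short, hedged sketch. The cause and branch probabilities below are invented placeholders, not the values in Figures 3 and 4; only the per-train-mile normalization (858.8 million train miles), the derailment count (1,026 over 11 years), and the overall structure follow the text. The final lines anticipate the ETC zero-failure-probability assumption from the methods section.

```python
# Hedged numerical sketch of the FTA/ETA calculation; all probabilities
# are placeholders, not C-SRM inputs.

TRAIN_MILES_2007_2017 = 858.8e6   # total main-track train miles, 2007-2017

# Fault tree: annual failure probabilities by cause subgroup (placeholders).
fta_causes = {
    "rail breaks": 0.40,
    "track geometry": 0.30,
    "train handling": 0.20,
    "speed": 0.05,
    "main-track authority": 0.05,
}

def top_event_probability(causes):
    """Annual probability of the hazardous event, normalized per train mile."""
    return sum(causes.values()) / TRAIN_MILES_2007_2017

def scenario_risk(freq_per_year, branch_probabilities, eq_fatalities_per_event):
    """Risk of one ETA scenario, in equivalent fatalities per year."""
    p_scenario = 1.0
    for p in branch_probabilities:
        p_scenario *= p                   # multiply along the event-tree path
    return freq_per_year * p_scenario * eq_fatalities_per_event

# One illustrative ETA branch: derailment -> DG release -> fire, no evacuation.
derailment_freq = 1026 / 11               # main-track derailments per year
risk = scenario_risk(derailment_freq, [0.05, 0.3, 0.9], 0.5)
print(f"scenario risk: {risk:.2e} equivalent fatalities/year")

# ETC adjustment: zero the failure probability of the two human-error causes
# assumed preventable by Automatic Train Protection, then recompute.
etc_causes = dict(fta_causes, **{"speed": 0.0, "main-track authority": 0.0})
baseline = top_event_probability(fta_causes)
with_etc = top_event_probability(etc_causes)
print(f"relative reduction with ETC: {100 * (1 - with_etc / baseline):.1f}%")
```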
A diagrammatic representation of the FTA for main-track collisions is presented in Figure 6. The results provided by the FTA illustrate that the train operation/human factors group is the leading cause group for these kinds of accident. This cause group also includes all the subgroups with the highest probability: speed, violations of authority, train handling/makeup, and the use of brakes. Similar to the previous stage, in the final step of the FTA, the annual probability of a main-track collision with fatality and injury consequences was calculated per train mile.

According to the RODS database, a main-track train collision might result in a derailment, DG release, fire, explosion, or evacuation. Figure 7 presents the probability of each consequence, the frequency of occurrence (number of events per year), and the risk (number of equivalent fatalities per year) for each accident scenario. As shown in Figure 7, the highest level of risk to life was 2.16 × 10⁻⁴ equivalent fatalities per year, which was related to a main-track collision followed by derailment with no DG release, fire, explosion, or evacuation outcomes. Finally, the risks of the different accident scenarios led to the collective risk of main-track collisions causing fatality and injury.

Table 1 shows the collective risk, the average number of equivalent fatalities per year, by accident category resulting from Figures 4 and 7. The total collective risk of main-track derailments and collisions is also presented.

Individual Risk

Individual risk is the probability of fatality per year for a particular group of people using railways, meaning passengers, workforce, and MOP (6,24). To identify the individual risk of the two hazardous events, main-track derailments and collisions, FTA and ETA were applied. The FTA was similar to that used for evaluating the collective risk. The ETA also kept its structure, with the difference that the number of fatalities and injuries considered for the ETA was restricted to the group of people for which the individual risk was being calculated. For example, to evaluate the risk of main-track derailments for passengers, only passengers who died or were injured as a result of such an accident were counted in the ETA. The ETAs for the individual risk assessment are provided in Appendix C. Table 2 presents the results of this phase.

ETC Implementation

Applying the C-SRM provided an opportunity to evaluate the effects of implementing ETC on Canadian railways. Two accident causes, main-track authority and speed, related to the train operation/human factor cause group of the ADL cause classification, may be preventable by applying ETC on the rail network. Considering a 100% effectiveness for those causes, the collective and individual risks of the main-track derailments and collisions were reassessed. The FTAs and ETAs after applying the ETC are presented in Appendix D. Table 3 shows the results of the risk assessment before (Tables 1 and 2) and after the ETC development.

Discussion

This research presented a Canadian customized risk assessment model (C-SRM) to improve the understanding of risk. The model is based on the quantification of the risk resulting from hazardous events that have the potential to lead to fatalities and serious and minor injuries.
The results derived from the C-SRM will enable the rail industry to understand the current level of residual risk, prioritize areas for safety improvements, and plan the development of additional control measures that would decrease the risk. Moreover, it allows as low as reasonably practicable (ALARP) assessments and cost-benefit analyses to be undertaken to assist the decision-making process for applying proposed changes and modifications. An SRM is also useful in identifying and prioritizing issues for audit. It provides a basis for evaluating the risk for a particular line of route or for a particular train company (6). Furthermore, this model enables sensitivity analyses to be carried out to evaluate the risk reduction from the introduction of new control measures (24).

Applying the C-SRM was useful in evaluating the collective risk, the average number of equivalent fatalities per year, for two groups of hazardous events: main-track derailments and collisions. The outcomes of the FTA revealed that the annual probability per train mile of a main-track collision leading to casualties and injuries is higher than that for a main-track derailment. In other words, when a train collision occurs, it is more likely that there will be injuries than if a derailment occurs. However, applying the ETA and identifying the collective risks of these two hazardous events demonstrated that the risk of main-track derailments is greater than the risk of main-track collisions for fatality and injury. Risk is affected by both frequency and consequence, and the higher risk of main-track derailments could be related to both factors. Considering frequency, the proportion of main-track derailment accidents in 2020 was 7%, while only 1% of rail occurrences were related to main-track collisions (20). Looking at the consequences, as shown in Figures 2 and 5, the main-track derailments resulted in fatalities and serious or minor injuries. In contrast, the main-track collisions mainly caused minor injuries with a few serious injuries; there were no fatality outcomes for this accident type within the study timeframe. Since the weights of fatality and serious injury are higher than the weight of a minor injury in calculating the equivalent fatality, this had a significant impact on increasing the consequences of the main-track derailments and, ultimately, the risk of this hazardous event.
Despite the higher collective risk identified for main-track derailments compared with main-track collisions, both risks are at a lower level than other countries' SRM results. As presented in Table 1, the risks of main-track derailment and collision are 4.65 × 10⁻³ and 2.26 × 10⁻⁴ equivalent fatalities per year, respectively. The corresponding risks for the same period are 3.69 and 0.68 for Slovakian railways (24), which are higher than for Canadian railways. To compare with the UK railways, the Hazardous Event Train (HET) accident category of the UK SRM, which includes collision and derailment accidents, was considered. The risk of the HET category is 7.8 equivalent fatalities per year for the period 1999 to September 2013 (28). According to Table 1, the total risk of main-track derailments and collisions is 4.88 × 10⁻³ equivalent fatalities per year for Canada, which is much lower than for the UK. It is noteworthy that the HET category of the UK SRM includes some other types of accident (e.g., structural collapse at a station, abnormal dynamic forces) in addition to derailments and collisions. Moreover, the UK SRM considers both reportable and non-reportable injuries in its assessment, whereas in the C-SRM, given limited available resources, only reportable injuries were included. Even considering those differences between the UK SRM and the C-SRM, the gap between the two countries' risks is still large enough to show that the Canadian railways are in the safe zone.

It is also worth mentioning that the ETA results (Figures 4 and 7) for both kinds of accident demonstrated the important role of evacuation in the risk of these occurrences. Fatalities and injuries were mainly observed in those branches where there was no evacuation after the accident. This shows that mitigation strategies related to evacuation and emergency response plans still need further assessment and improvement. The evacuation of more than 200,000 people after the 1979 train derailment in Mississauga, Ontario, Canada is a successful example of an evacuation that saved lives, with no fatalities after the accident. Fordham (29) investigated some of the reasons for the success of this evacuation.

The collective risks of the hazardous events provided an overall view of safety performance. However, it is also worth assessing this performance in relation to employees, passengers, and MOP. Developing the C-SRM was helpful in identifying the individual risks of main-track derailments and collisions. The risk under this approach is the annual estimate of the potential harm to employees, passengers, and MOP from the operation of the railways. Table 2 shows that the risk for MOP forms the greatest proportion of the individual risk of derailment accidents. A main-track derailment is one of the most serious types of rail occurrence when considering the potential risk to the public and financial damage, especially when it occurs in populated areas (20). The Lac-Mégantic accident with 47 fatalities (4) is an example of such a rail event and played an important role in increasing the individual risk of train derailments for MOP in this study. These low-frequency high-consequence rail occurrences cause an elevated level of risk. In the UK, approximately 63% of the overall risk from passenger train derailment was related to low-frequency high-consequence events (6). Sattari et al.
(21) further investigated low-frequency high-consequence rail occurrences by evaluating their societal risk. The potential for this type of occurrence shows that there is still an opportunity to optimize resource allocation for risk control and mitigation strategies in this regard.

For the main-track collisions, the highest individual risk was related to the employees. As shown in Figure 6, human error is the leading cause of main-track collision accidents, so there might be a relationship between this factor and the high risk to employees. A study by Kyriakidis et al. (30) identified distraction/loss of concentration, safety culture, proper communication between employees, workload, training, and stress as the most significant contributors to operators' performance in the rail industry. Esmaeeli et al. (31) discussed some of the strategies that can be implemented to reduce employees' errors, for instance, conducting regular team meetings to strengthen teamwork (32), providing practical training alongside online training (33), and regularly assessing employees' competencies to ensure that individuals are capable of properly applying their knowledge in practice. However, confirming the correlation between human error and risk for employees needs further investigation and is beyond the scope of this study. It is noteworthy that the absence of calculated risk for passengers and MOP might result from the limited database of main-track collisions leading to fatality and injuries (six occurrences); furthermore, most of the collision accidents within the study database involved freight trains with no passengers.

Considering the individual risks of other rail industries is useful for a better understanding of the level of risk on the Canadian railways. The individual risks of the HET category in the UK SRM for passengers, employees, and MOP are 2.8, 1, and 4 equivalent fatalities per year, respectively (28). As presented in Table 2, the results of the C-SRM showed that the total individual risks of main-track derailments and collisions for passengers, employees, and MOP are 6.25 × 10⁻⁵, 5.82 × 10⁻⁴, and 4.15 × 10⁻³ equivalent fatalities per year, respectively. As mentioned in the discussion above, there are differences between the UK SRM and the C-SRM. Taking those differences into account, the gap between the individual risk levels of the two countries is still significant enough to confirm that the risk on the Canadian railways is at a much lower level.

The results of employing the C-SRM showed that the risk of the Canadian railways in terms of fatalities and injuries is very low. However, applying new control measures can further reduce the risk and even approach the ultimate goal of zero fatalities. The performance of Japan's Shinkansen high-speed train system is an impressive example in this regard, with zero passenger fatalities in train collisions and derailments for more than 35 years (6).

Employing the C-SRM gave an opportunity to assess the effects of developing a new control measure, ETC, through a risk reduction analysis. ETC systems are fail-safe technologies developed to be similar to the PTC system functionality implemented in the US (16,17). The experience with Positive Train Control (PTC) on certain Class 1 railroad main lines in the US has caused railway operators in Canada to raise several major concerns, for example, its expense, complexity, and the significant effort required to apply it (16).
According to the Canada's Advisory Council Research on Railway Safety in 2016, developing a targeted, risk-based, corridor-specific train control approach is the best option for deployment of this technology in Canada.This recommendation has been the ''working assumption'' for developing ETC in Canada.ETC systems are intended to prevent certain rail occurrences caused by human error.The technologies provide a wide range of innovative safety solutions to support the train crew.This could range from assisting in recognizing and following the signals to automatically applying the train brakes to prevent a collision or derailment (1).The core functional objective of ETC system in Canada is to prevent train-to-train collisions, over speed derailments, train entering a foreman's work authority, and train occupying improperly aligned switches (16). The current approach of controlling train movements in Canada is ''rule-based.''Occupancy control system rules are applied on low density corridors, with verbal clearances and other instructions issued to train crews.Higher density corridors follow centralized traffic control, with wayside signals indicating speed limits and clearances.In both cases, it is the responsibility of the crews to comply with the rules.Currently, there is no regulatory requirement to install technologies on board locomotives to protect against excessive speed operation or not following a wayside signal indication.The reliance is solely placed on train crews.As a result, there have been occasions of unintended rule violations as a result of a loss of situational awareness resulting in derailments or collisions.It is this potential for human error that ETC technologies aim to address (1). Implementing ETC technologies on the Canadian rail network is a priority for TC which is reflected in the recent Notice of Intent published in the Canada Gazette on February 5, 2022.It revealed that TC intends to establish ETC system on Canadian railways to add an important layer of safety to its already safe rail transportation system and enhance passenger and freight train safety (1). 
In this study, to assess the impact of ETC on rail transport risk, a 100% effectiveness was considered for two accident causes, main-track authority and speed.As mentioned in the methods section, this level of effectiveness is achievable by ETC implementation at the Automatic Train Protection System level.The results of risk reduction analysis (Table 3) showed that developing an ETC system resulted in reductions of about 2.1% and 47.8% in the risk of main-track derailments and collisions, respectively.It is evident that the proportion of the risk reduction for collision accidents was much greater than that for derailment accidents.The reason for this can be related to the underlying cause of these accidents.The FTA of main-track collisions (Figure 6) demonstrates that human error is the primary cause of these accidents.It also shows speed and main-track authority are the most probable cause of main-track collisions in the subcategory level.However, the FTA for main-track derailments (Figure 3) reveals that human error was the thirdranked cause of these accidents.Moreover, there was no main-track authority subcategory in its FTA and the probability of the speed subcategory was also very low.Therefore, as ETC systems are intended to be effective on human errors (speed and main-track authority), their implementation was much more effective in reducing the risk of main-track collisions compared with the main-track derailments. The result of the risk reduction analysis in this study is in line with the result of the CaRRL report to the TC.CaRRL (16) analysis also showed that a much greater proportion (on a percentage basis) of main-track collisions are preventable compared with the main-track derailments by implementing ETC.The outcomes of their study revealed that between 20.4% and 30.5% of main-track collisions and between 1.1% and 2.4% of main-track derailments could have been prevented with an ETC system. Although the ETC technologies were more effective in decreasing the risk of main-track collisions, their impact on reducing the risk of main-track derailments is not negligible.Main-track derailments and collisions leading to fatality and injury are rare events with potentially high consequences.Decreasing the risk of these events even by a couple of percentage points would significantly improve the safety of passenger and freight trains.This is the reason why the ETC system is a useful control measure to prevent certain rare, but potentially highconsequence accidents. Conclusion This study contains the results of the risk assessment of the Canadian railways by developing a customized Canadian SRM called C-SRM.The risk in this context is defined in relation to rail occurrences leading to fatalities and injuries.The research is focused on two groups of hazardous events: main-track derailments and collision accidents.The outcomes of the employed model will increase the industry's knowledge of the risk and allow the identification of areas of railway operation that need further risk controls.It also enables sensitivity analyses to be carried out to determine the risk reduction from the introduction of new control measures. 
To implement the C-SRM, FTA and ETA were applied to the hazardous events. The outcomes of the study revealed that, when a train collision occurs, it is more likely that there will be injuries than if a derailment happens. However, the risk of main-track derailments was greater. Evaluating the individual risk of the hazardous events showed that the highest individual risk of main-track derailment was related to the MOP. Low-frequency high-consequence rail events like the Lac-Mégantic accident played a prominent role in elevating the risk for this group of people. Moreover, the risk for employees was the highest individual risk of main-track collisions. There might be a relationship between human error, which was the most frequent cause of collision accidents, and this level of risk for employees; however, it needs further investigation, which was beyond the scope of this study. Comparing the results of other countries' SRMs with the C-SRM demonstrated that Canada's rail transport risks are at a lower level. However, there are still opportunities for enhancing safety and decreasing the risk even further, for example, improving emergency response plans and applying control measures to mitigate low-frequency high-consequence accidents.

The last step of the study was the risk reduction assessment of developing ETC systems on Canadian railways. The result of the analysis showed that ETC technologies are useful control measures for preventing certain rare, but potentially high-consequence, accidents. However, decision making about the development of this system on Canadian railways needs further assessment, such as cost-benefit analysis.

Limitation of the Study

The limitation of the research is related to the scope of the study, which was focused on main-track derailments and collisions. Considering other types of hazardous events and increasing the level of detail used to categorize them would enhance the accuracy of the risk assessment. Furthermore, given the limited available resources, only reportable injuries were included in the analysis. Considering non-reportable injuries would change the risk profiles and improve the level of accuracy.

Future Work

The C-SRM presented in this study provides a nationwide risk estimate and does not show how the risk is distributed across the network. Future updates to the model will include means to assess the risk for a particular line of route. RSSB is an example in this regard, having developed geospatial models (GeoSRM) of rail safety hazards in Great Britain (GB) (13,14). Macciotta et al. (34) discussed some of the complexities of risk assessment for a particular corridor in Canada.

To assess the risk for a particular line, localized FTA and ETA are needed. This would greatly increase the number of calculations required for the C-SRM, and therefore new and more efficient ways of calculating will be needed (15). Two different methods are suggested: 1) converting the FTA and ETA into a Bayesian Network (BN) and undertaking calculations using algorithms in the SamIam software package (35); and 2) coding traditional FTA and ETA calculations in the R software package (36).
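As a rough illustration of option 1, the sketch below encodes a two-cause OR gate as a small Bayesian network. The paper's suggested tools are SamIam and R; pgmpy is used here only as a freely available stand-in, and the two causes and their probabilities are placeholders rather than C-SRM inputs.

```python
# Hedged sketch: a tiny fault-tree fragment (OR gate over two basic events)
# expressed as a Bayesian network. Probabilities are illustrative only.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("speed", "collision"), ("authority", "collision")])

# Prior probabilities of the two basic events (placeholder values).
cpd_speed = TabularCPD("speed", 2, [[0.98], [0.02]])
cpd_auth = TabularCPD("authority", 2, [[0.97], [0.03]])

# OR gate: the top event occurs if either basic event occurs.
cpd_coll = TabularCPD(
    "collision", 2,
    [[1, 0, 0, 0],   # P(collision = 0 | speed, authority)
     [0, 1, 1, 1]],  # P(collision = 1 | speed, authority)
    evidence=["speed", "authority"], evidence_card=[2, 2],
)

model.add_cpds(cpd_speed, cpd_auth, cpd_coll)
assert model.check_model()

posterior = VariableElimination(model).query(["collision"])
print(posterior)  # P(collision = 1) = 1 - 0.98 * 0.97 = 0.0494
```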
Figure 1. Research methodology for this study. Note: TSB = Transportation Safety Board of Canada; RODS = rail occurrence database system; ADL = Arthur D. Little Inc.; FTA = fault tree analysis; ETA = event tree analysis; MOP = members of the public; ETC = enhanced train control.
Figure 2. Number of fatalities/serious injuries/minor injuries per year for main-track derailments.
Figure 3. FTA for main-track train derailments leading to fatalities and injuries. Note: FTA = fault tree analysis.
Figure 4. ETA for main-track train derailments leading to fatalities and injuries. Note: ETA = event tree analysis.
Figure 5. Number of fatalities/serious injuries/minor injuries per year for main-track train collisions.
Figure 6. FTA for main-track train collisions leading to fatalities and injuries. Note: FTA = fault tree analysis.
Figure 7. ETA for main-track train collisions leading to fatalities and injuries. Note: ETA = event tree analysis.
Table 1. Collective Risk by Accident Category
Table 2. Individual Risk by Accident Category
Table 3. Risk Reduction Assessment of ETC Implementation
9,926
2023-06-05T00:00:00.000
[ "Engineering", "Environmental Science" ]
Implications of triple correlation uniqueness for texture statistics and the Julesz conjecture Triple correlation uniqueness refers to the fact that every monochromatic image of finite size is uniquely determined up to translation by its triple correlation function. Here that fact is used to prove that every finite image composed of discrete colors is determined up to translation by its third-order statistics. Consequently, if two texture samples have identical third-order statistics, they must be physically identical images and thus visually nondiscriminable by definition. It follows that a third-order version of Julesz's long-abandoned conjecture about spontaneous texture discrimination is necessarily true (notwithstanding such well-known counterexamples as the odd and even textures). The second-order (i.e., original) version of that conjecture is not necessarily true: physically distinct finite images with identical second-order statistics can be constructed, so counterexamples are possible. However, the counterexamples that one finds in the literature are either nonexact or difficult to reconstruct. A new principle is described that permits the easy construction of discriminable black and white texture samples that have strictly identical second-order statistics and thus provide exact counterexamples to the Julesz conjecture. A. The Julesz Conjecture In a seminal paper in 1962, Julesz' introduced a program of research on visual texture perception motivated by the observation that some pairs of distinct textures are instantly seen as different (spontaneously discriminated) while others can be distinguished only after careful scrutiny.He asked whether that perceptual dichotomy can be predicted on the basis of global image statistics corresponding to the joint probability distributions that characterize stationary stochastic processes.A decade later 2 he and co-workers proposed a specific hypothesis along those lines that came to be known as the Julesz conjecture: the hypothesis that textures cannot be spontaneously discriminated if they have the same first-order and secondorder statistics and differ only in their third-order or higher-order statistics.(Those statistics are defined here in Section 2.) For a period in the 1970's that conjecture survived many experimental tests and seemed to have captured a fundamental property of vision. 3By 1981, however, the Julesz conjecture appeared to be disproved by results from his own laboratory 4 6 and elsewhere, 7 and Julesz himself abandoned it in favor of a theoretical approach based on local image features called textons.' Today both the conjecture itself and the global statistical approach that it represented are generally regarded as a closed chapter in the history of texture discrimination research. 
9n the present paper that chapter is reexamined in light of recent mathematical discoveries about higher-order autocorrelation functions,' 0 "' which prove to impose strict constraints on image statistics.The strongest evidence against the Julesz conjecture took the form of a pair of easily discriminable black and white textures devised by Julesz et al., 6 the odd and even textures, which were said to have identical third-order statistics.(Figure 1 shows samples of odd and even texture.Section 4 explains their construction.)Identical third-order statistics imply identical second-order statistics, so the odd and even textures apparently provided a strong counterexample to Julesz's second-order conjecture.But they also had broader implications.Underlying that specific hypothesis were ideas about computational complexity and limited processing capacity.Higher-order statistics entail more calculations, and the visual mechanism responsible for spontaneous texture discrimination was thought to be limited to the computation of at most the second-order statistics of images.So the easy discriminability of the odd and even textures, which ostensibly differed only at the level of fourth-order statistics, seemed to discredit not only the second-order conjecture itself but also the general idea that spontaneous texture discrimination is based on a global statistical computation performed by a mechanism with limited capacity.That conclusion was reinforced by the fact that both the odd and the even textures can be easily discriminated from a purely random texture created by coloring all the squares of a checkerboard by independent fair-coin tosses-a texture that was said to have the same third-order statistics as those of the two other textures. 8(Figure 1 shows samples of coin-toss texture compared with samples of odd and even texture.) B. Triple Correlation and Third-Order Statistics What prompts a reexamination of that episode now is the realization that third-order image statistics have unexpectedly strong characterization properties.Using a fairly recent uniqueness theorem for triple correlations,'"'" one can show that the third-order statistics of any image of finite size uniquely determine that image up to a translation.(The proof is given in Section 3.) In other words, two pictures with identical third-order statistics must be physically identical.Consequently, odd and even texture (c) 50 50 samples of even (left side) versus coin-toss (right side) texture.samples (such as the images on the right-hand and lefthand sides, respectively, in Fig. 1) can never have identical third-order statistics, and the same is true of odd or even texture samples compared with physically distinct samples of the coin-toss texture.And, in general, visual discrimination of images with identical third-order statistics is impossible in principle because it would mean discrimination between physically identical objects.Thus a thirdorder version of the Julesz conjecture is necessarily true because it is impossible to construct a counterexample to the claim that texture samples cannot be discriminated if their third-order statistics are identical.(Subsection 1.D discusses possible ways in which one can reinterpret the Julesz conjecture to make its third-order version nontautological.The conclusion is that no logical way to do this exists.) C. Image Statistics Versus Ensemble Statistics If odd and even texture samples cannot have identical third-order statistics, how could Julesz et al. 
say that they had demonstrated "visual discrimination of textures with identical third-order statistics"? 6The answer is a matter of definitions, which revolves around the distinction between the ensemble statistics and the sample statistics of a stochastic process.The names "odd and even textures" refer not to specific images but rather to probabilistic algorithms for the construction of images-in other words, to stochastic processes whose samples are specific images.(The odd and even processes are described and analyzed in Section 4.) What Julesz et al. 6 actually proved is that the odd and even stochastic processes have identical thirdorder ensemble statistics: the expected values of the third-order statistics of sample images are the same for both processes.In other words, if one causes both processes to generate sequences of images, the averages of the third-order statistics of those images across the odd sequence and the averages of the same statistics across the even sequence should converge to common values as the sequences become infinitely long.That result does not imply that the third-order statistics of any specific pair of odd and even sample images will be identical (which the uniqueness theorem proved in Section 3 shows to be impossible), so there is no conflict between the mathematical results of Julesz et al. 6 and those reported here. D. What Was the Julesz Conjecture? What the odd and even textures actually demonstrate, then, is visual discrimination of texture samples drawn from stochastic processes with identical third-order ensemble statistics.(The same is true of the odd and even textures versus the coin-toss texture.)Consequently, the hypothesis that is actually refuted by the odd and even textures is not Hi: Texture images cannot be spontaneously discriminated if they have the same third-order statistics, i.e., a hypothesis that predicts whether two specific texture images will be discriminable on the basis of statistics computed from the images themselves.Instead, what is refuted is the hypothesis H2: Texture images cannot be spontaneously discriminated if they are samples from stochastic processes whose third-order ensemble statistics are identical, i.e., a hypothesis that makes the discriminability of texture images depend on the statistical properties of the ensembles to which they belong.Since those properties are not directly visible to a naive observer confronted with a specific pair of texture images, that idea clearly poses some conceptual difficulties.Nevertheless, in the paper of Julesz et al. on the odd and even textures, 6 hypothesis H2 was evidently the interpretation implicitly given to the Julesz conjecture (with "third-order" replaced by "secondorder" in statement H2 to form the original conjecture). 
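The ensemble-versus-sample distinction can also be illustrated numerically. The sketch below, with an illustrative image size and separation vector, generates coin-toss samples and shows that the average of a second-order statistic over many samples approaches its ensemble value of 1/4, while the statistic of any individual sample fluctuates around that value.

```python
# Hedged illustration of ensemble vs. sample statistics for the coin-toss
# process; image size, separation vector, and sample count are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def second_order_stat(p, n, m):
    """Renormalized second-order statistic of a binary array (cf. Section 2)."""
    C, R = p.shape
    window = p[:C - n, :R - m] * p[n:, m:]   # coincidences of pixel pairs
    return window.mean()

samples = [rng.integers(0, 2, size=(50, 50)) for _ in range(200)]
values = np.array([second_order_stat(p, 1, 2) for p in samples])

print("ensemble (expected) value: 0.25")
print(f"mean over 200 samples:     {values.mean():.4f}")   # close to 0.25
print(f"individual samples range:  {values.min():.4f} - {values.max():.4f}")
```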
Under the ensemble-statistics interpretation H2, a third-order version of the Julesz conjecture is not rendered tautological by the fact that every image is uniquely determined by its third-order statistics.But, without further amendment, hypothesis H2 is logically defective because its predictions about the discriminability of specific texture samples are intrinsically ambiguous: any pair of images A and B can always be construed as samples from two stochastic processes that have identical third-order (or nth-order, for any n) ensemble statistics, which implies nondiscriminability, and also as samples from two pro- To remove the ambiguity in hypothesis H2, a reviewer proposed that it should be regarded as applying to predefined stochastic processes in a form such as H3: If two stochastic image-generating processes S and S' have identical nth-order ensemble statistics, sample images from S will not be spontaneously discriminable from sample images from S'.But hypothesis H3 is trivially refutable by the construction of a 2-image stochastic process S whose samples are any two easily discriminable images A and B. Then S and S' = S form a counterexample.To overcome that difficulty, the reviewer suggested the addition of the proviso that S and S' must be ergodic processes (the intuition being that all the sample images of a process should be statistically similar).In that case the nth-order version of the conjecture would become H4: If S and S' are nth-order ergodic stochastic processes whose nth-order ensemble statistics are identical, sample images from S will not be spontaneously discriminable from sample images from S'. But nth-order ergodic literally means that, with probability 1, the nth-order statistics of every sample image generated by a stochastic process equal their expectations (i.e., the ensemble values).So, under this interpretation of an ensemble statistics version of the Julesz conjecture, the ensemble statistics themselves become irrelevant: the hypothesis H5: If S and S' are stochastic processes whose sample images all have identical nth-order statistics, images from S will not be spontaneously discriminable from images from S' is equivalent to the hypothesis H6: If two images A and B have the same nth-order statistics, they will not be spontaneously discriminable, which is the nth-order version of hypothesis H1. (Note too that for stochastic processes whose samples are images of finite size, the uniqueness theorem for thirdorder statistics implies that, if a process is nth-order ergodic for n 2 3, all its sample images must be physically identical.Since only finite images are possible in practice, it follows that two stochastic processes S and S' can satisfy the conditions of hypothesis H4 for n = 3 if and only if both always generate exactly one and the same image.So a third-order version of the Julesz conjecture is still tautological under this ensemble-statistics interpretation.) 
The conclusion of the preceding analysis is that it does not make sense conceptually to interpret the Julesz conjecture as a hypothesis linking the discriminability of texture samples directly to the ensemble statistics of stochastic algorithms that generate those samples.Instead, the conjecture's only logical interpretation would seem to be as a hypothesis that predicts the discriminability of specific texture images on the basis of the nth-order statistics of the images themselves, i.e., as hypothesis H6, with n = 2 for the original conjecture.In the remainder of this paper that interpretation is assumed.In that case the uniqueness theorem proved in Section 3 implies that for n Ž 3 the Julesz conjecture is irrefutable because an exact counterexample cannot be constructed.But the original n = 2 conjecture is potentially refutable, since it is possible to construct pairs of physically distinct texture images whose second-order statistics are exactly identical.(That point is pursued in Subsection 1.F) E. Third-Order Statistics of Odd, Even, and Coin-Toss Images The uniqueness theorem for third-order statistics proved in Section 3 prohibits only an exact identity between the third-order statistics of physically distinct images.It leaves open the possibility that the third-order statistics of odd, even, and coin-toss texture samples might be close enough to be regarded as equal for all practical purposes.However, a direct examination shows differently. Figure 2 shows scatterplots that compare the third-order statistics of the odd, even, and coin-toss texture samples from Fig. 1.In these graphs the x coordinate of each data point is the value of a given third-order statistic for one member of an image pair, and they coordinate is the value of the same third-order statistic for the other member of the pair.[For example, in Fig. 2 (a) the x coordinate is the value of a third-order statistic for the even member of the pair of images shown in Fig. 1(a) and the y coordinate is the value of the same statistic for the odd member.Thirdorder statistics were computed with Eq. (9') below.]If the texture pairs in Fig. 1 had identical third-order statistics, all data points in the graphs in Fig. 2 would lie on a 450 diagonal line through the origin; and if all the third-order statistics of the images equaled their corresponding ensemble values (i.e., their expectations), all the data points would lie at (1/2,1/2), (1/4,1/4), or (1/8,1/8).It can be seen that neither condition is close to being satisfied.These results are typical of what one always finds for odd, even, and coin-toss images of this size (50 X 50 pixels): overall, their third-order statistics are far from identical.examines that issue.Its conclusion is that the convergence is too slow to be perceptually meaningful; for the images in Fig. 1, even the small-triangle statistics show large differences by comparison with pairs of independent coin-toss images.Statistically, that result is not surprising, because the variance of the statistics of odd and even images decreases not in proportion to the number of pixels but more nearly in proportion to the square root of that number. 
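For readers who want to reproduce this kind of comparison, the following sketch computes renormalized third-order statistics (in the spirit of Eq. (9') defined in Section 2 below) for two binary arrays and pairs them as scatterplot coordinates. The image contents and the set of separation-vector pairs are illustrative, not the textures of Fig. 1.

```python
# Hedged sketch of the scatterplot comparison of third-order statistics.
import numpy as np

rng = np.random.default_rng(0)
img_a = rng.integers(0, 2, size=(50, 50))   # stand-ins for two texture samples
img_b = rng.integers(0, 2, size=(50, 50))

def third_order_stat(p, n1, m1, n2, m2):
    """Renormalized third-order statistic of a binary array p (cf. Eq. (9'))."""
    C, R = p.shape
    total = 0
    for c in range(C - max(n1, n2)):
        for r in range(R - max(m1, m2)):
            total += p[c, r] * p[c + n1, r + m1] * p[c + n2, r + m2]
    return total / ((C - max(n1, n2)) * (R - max(m1, m2)))

# A small set of triangle shapes (pairs of separation vectors).
triangles = [(1, 0, 0, 1), (2, 0, 0, 2), (1, 1, 2, 0), (3, 1, 1, 3)]

pairs = [(third_order_stat(img_a, *t), third_order_stat(img_b, *t)) for t in triangles]
for t, (xa, xb) in zip(triangles, pairs):
    print(t, f"{xa:.3f}", f"{xb:.3f}")   # identical statistics would lie on the 45° line
```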
(The analysis in Section 4 also considers various quirks of the odd and even textures, such as the fact that odd and even sample images can be either matched or unmatched pairs, which have quite different statistical properties.An interesting conclusion from that analysis is that the eye appears to be insensitive to those large statistical differences.)The fact that odd and even texture samples cannot have identical third-order statistics does not automatically rule out the possibility that they might have identical secondorder statistics and thus still provide a counterexample to Julesz's second-order conjecture.However, direct examination removes this possibility: the second-order statistics of odd and even texture samples are positively correlated (and contain a strong hidden correlation, described in Section 4) but clearly are not identical, as is demonstrated by the scatterplots in Fig. 3, which compare the second-order statistics of the texture samples of Fig. 1.Consequently, the odd and even textures do not provide counterexamples to the Julesz conjecture.An earlier analysis by Gagalowicz' 2 arrived at the same conclusion. That realization prompts a second look at all the original evidence against the Julesz conjecture.A decisive counterexample to the conjecture would take the form of a pair of texture samples that have identical second-order statistics and that are instantly discriminable.Most of the counterexamples found in the literature do not come close to meeting that description, because the texture samples involved are created by probabilistic algorithms that are guaranteed only to produce identical second-order statistics in the limit, as the images become infinitely large.That objection applies not only to the odd and even textures but also to other textures created by the floater technique of Gilbert"3 4 and by the +P-matrix technique of Diaconis and Freedman, 7 and also to the counterexamples created by Caelli and Julesz 4 "5 with the four-disk method.The only exceptions appear to be the images of Gagalowicz," who recognized the defects of previous -.I -f .. .I __ odd us.euen texture samples sample size sG x sG all second-order statistics method that achieves exact results.Section 5 describes an easily implemented principle for the construction of pairs of black and white texture images that appear to be spontaneously discriminable and whose second-order statistics are guaranteed to be exactly identical.Pairs of one-dimensional textures (bar codes) and of twodimensional textures created by that technique are shown in Figs. 4 and 5, respectively. 
This construction principle may prove useful in other contexts because it provides an easy way by which one can create pairs of black and white images whose Fourier power spectra are exactly identical (and of course remain identical if both images are blurred by a common pointspread function, like that of the eye).counterexamples and used linear programming methods in an attempt to create better ones.He succeeded in constructing quite easily discriminable texture samples whose second-order statistics differ by only approximately 2%-identical for all practical purposes and certainly close enough for one to justify the conclusion that the secondorder Julesz conjecture is false.However, Gagalowicz's texture construction algorithm is quite complicated and does not guarantee strict identity between second-order image statistics.It seems worthwhile to seek a simpler Similarity Julesz's second-order conjecture appears to be false empirically, and one can now see that a third-order version of the conjecture is irrefutable in principle and thus true but vacuous.However, there is a variant of the third-order conjecture that is not tautological and can be tested: the hypothesis that discrimination between textures becomes increasingly difficult as their third-order statistics become more similar.Offhand, that idea does not seem promising.One type of negative evidence is provided by the odd and coin-toss textures.If the odd and coin-toss algorithms are used in the generation of texture samples, the third-order statistics of those samples are not at all similar when the samples are small, but the same statistics become more nearly alike as the size of the images (the number of pixels) increases.Perceptually, however, things go the other way: small samples of odd and cointoss textures are weakly discriminable, and discrimination becomes easier as the samples become larger. Figure 6, in conjuction with Fig. 1, illustrates that point. Figure 6(a) shows small odd-versus-coin-toss images (10 X 10 pixels) and a scatterplot that compares their thirdorder statistics.For images as small as those, the odd and coin-toss textures are not spontaneously discriminable, but the third-order statistics of the images differ greatly.Figure 6(b) compares the same third-order statistics for the 50 x 50-pixel samples of odd and coin-toss texture shown in Fig. 1.For images of that size the odd and coin-toss textures are easily discriminable, but their thirdorder statistics are now much more similar.In other words, as their global third-order statistics become more nearly alike, odd and coin-toss texture samples look less alike.That result is the opposite of what one would expect if spontaneous discriminability depended on a comparison of global third-order image statistics.(Section 4 describes another counterargument based on matchedversus-nonmatched samples of odd and even texture.) H. Contents Section 2 defines the image statistics that figured in the Julesz conjecture.Section 3 describes triple correlation functions and their uniqueness properties and uses them to prove that every finite-sized image composed of discrete colors is uniquely determined by its third-order statistics, which is the main new result of this paper.Section 4 examines the statistical properties of odd, even, and cointoss texture samples.Section 5 deals with the problem of relation, and Ref. 
16 includes a broad spectrum of recent papers on higher-order autocorrelation functions.The first application of those functions to vision research is due to Klein and Tyler, 7 who referred to them as generalized autocorrelation functions. A. Black and White Images The statistics of various orders that figured in the Julesz conjecture were defined in terms of a hypothetical experiment performed on an image composed of discrete colors: "The n-gon statistic [or nth-order joint probability distribution] of an image can be obtained by randomly throwing n-gons of all possible shapes on the image and observing the probabilities that their n vertices fall on certain color combinations." 8The n-gons here are geometrical objects: points (1-gons), line segments (2-gons, or dipoles), triangles (3-gons), etc. Random throwings of these objects lead to the first-, second-, and third-order statistics of an image.Most research has focused on twocolor images, e.g., black and white.I begin there, taking black to be the foreground color and white to be the background color.Suppose that the function f : -' {0, 11 represents such an image, with f(x, y) = 1 if image point (x, y) is black and f (x, y) = 0 otherwise (f is assumed to be locally integrable).One can think of the image as occupy- ing some finite rectangular region of the plane that has area A and imagine that outside that region the rest of the plane is white [i.e., f (x, y) is defined to be zero outside the image region]."Randomly throwing" a single point onto the image means picking a point (x, y) at random in the image region, so the point is a random vector (x, y) with probability-density function 1/A throughout that region and with zero value outside it.The first-order statistic of image f, sj f, is the probability that the randomly thrown point lands on black, P[f(x, y) = 1], which is simply the proportion of the image region that is black, i.e., Sif = ff f(x,)dxdy. ( A 2 [The integration here could be carried out over just the image region instead of the entire plane, but because f m 0 outside the image region, Eq. ( 1) gives the same result.]The second-order statistics of the image are values of a function s 2 , f: R' -[0,1] whose arguments represent horizontal and vertical separations between pairs of image points.The second-order statistic for a given separation vector (hv) is the probability that when a point (x, y) is chosen at random within the image, both (x, y) and the The axes are the same as those in Fig. 2, with coin-toss statistic values on the y axis.point (x + h, y + v) are black.Thus S2,f(hv) = ff f(x,y)f(x + h,y + v)dxdy. (2)ote that even in an all-black image these probabilities will always be less than 1 (except when h = v = 0) because the point (x + h, y + v) sometimes falls outside the image region.]Third-order statistics are values of a function S3, f: R4-[0,1] whose arguments represent pairs of separation vectors, (h,,v,) and (h 2 ,v 2 ).For a given pair the third-order statistic is the probability that when a point (x, y) is chosen at random in the image, the three points (x, y), (x + hi, y + vl), and (x + h 2 ,y + v 2 ) are all black.Thus = A-f| f(x,y)f(x + hi,y + vl)f(x + h 2 ,y + V2)dxdy. 
The third-order statistics can be thought of as the probabilities, for all triangles, that when a given triangle is dropped at random onto the image (preserving its orientation), all three vertices land on black. Higher-order statistics are defined analogously: fourth-order statistics have arguments that are triples of separation vectors, and so on. In general, statistics of order n include all statistics of lower orders; e.g., the third-order statistics include the second-order statistics as the special case in which one of the two separation vectors is (0, 0).

In addition to nth-order statistics defined in terms of all-black coincidences, one could also define statistics based on white-white and black-white combinations. However, those statistics would be redundant because their values are already determined by the all-black statistics. For example, the black-white second-order statistic for argument (h, v) would be s1,f − s2,f(h, v).

B. Multicolored Images
For images composed of more than two colors, nth-order statistics are defined by a natural extension of the two-color definitions. Suppose that a finite-sized rectangular image f(x, y) with area A is composed of L discrete colors or gray levels, labeled 0, 1, ..., L − 1, where 0 is the background color. (Outside the image region the assumption is made that the plane has the background color.) The image can be thought of as the sum of L indicator functions fi: R² → {0, 1}, i = 0, ..., L − 1, where fi(x, y) = 1 if point (x, y) has color i and fi(x, y) = 0 otherwise. Then f(x, y) = Σi wi fi(x, y), where wi is the numerical value of color i (w0 = 0). There are L first-order statistics s1,i,f, i = 0, 1, ..., L − 1, which correspond to the probabilities that a point chosen at random within the image has each of the L possible colors. For the L − 1 foreground colors these probabilities are

s1,i,f = (1/A) ∫∫ fi(x, y) dx dy,  (4)

and s1,0,f is equal to 1 minus the sum of the rest. Second-order statistics must now be defined for all possible colorings of a pair of points, (x, y) and (x + h, y + v). For example, the second-order statistic s2,i,j,f(h, v) is the probability that when a point (x, y) is chosen at random in the image, (x, y) has color i and (x + h, y + v) has color j. Except for the case in which both i and j are the background color 0, that statistic is

s2,i,j,f(h, v) = (1/A) ∫∫ fi(x, y) fj(x + h, y + v) dx dy,  (5)

and s2,0,0,f(h, v) is defined to be 1 minus the sum of all the other s2,i,j,f(h, v). Third-order statistics s3,i,j,k,f are defined for all possible colorings of triples of points; they are the probabilities that when a point (x, y) is chosen at random within the image, (x, y), (x + h1, y + v1), and (x + h2, y + v2) have the colors i, j, and k, respectively. Except for the special case i = j = k = 0, those probabilities are given by

s3,i,j,k,f(h1, v1, h2, v2) = (1/A) ∫∫ fi(x, y) fj(x + h1, y + v1) fk(x + h2, y + v2) dx dy,  (6)

and s3,0,0,0,f(h1, v1, h2, v2) is equal to 1 minus the sum of all the other third-order statistics for the same set of arguments.
C. Discrete Image Statistics
Statistics of order n are defined above in terms of integrals of functions with continuous arguments, functions that describe colorings of a continuous surface. Such definitions are appropriate for physical images viewed by the eye. Computationally, images are represented by discrete arrays of pixel values: arrays of the form (p(c, r); c = 0, 1, ..., C − 1; r = 0, 1, ..., R − 1). The nth-order statistics of such arrays are defined by sums; e.g., for a C(olumn) × R(ow) array of binary pixels, with p(c, r) = 1 for the foreground color, the discrete first-, second-, and third-order statistics are defined as

s1,p = (1/CR) Σc Σr p(c, r),  (7)

s2,p(n, m) = (1/CR) Σc Σr p(c, r) p(c + n, r + m),  (8)

s3,p(n1, m1, n2, m2) = (1/CR) Σc Σr p(c, r) p(c + n1, r + m1) p(c + n2, r + m2),  (9)

respectively, where the arguments are all integers and the sums run over c = 0, ..., C − 1 and r = 0, ..., R − 1. [These definitions assume that p(c, r) = 0 unless 0 ≤ c ≤ C − 1 and 0 ≤ r ≤ R − 1. The upper limits of the sums in Eq. (8) could equivalently be set to C − 1 − n and R − 1 − m, and in Eq. (9) they could be set to C − 1 − max(n1, n2) and R − 1 − max(m1, m2).] Sometimes it is useful to renormalize the second- and third-order statistics so that their expected values for sample images of stochastic textures become independent of their arguments. For that purpose the statistics

s'2,p(n, m) = Σc Σr p(c, r) p(c + n, r + m) / {[C − n][R − m]},  (8')

s'3,p(n1, m1, n2, m2) = Σc Σr p(c, r) p(c + n1, r + m1) p(c + n2, r + m2) / {[C − max(n1, n2)] × [R − max(m1, m2)]}  (9')

are defined. The statistical comparisons in Figs. 2 and 3 are based on the renormalized statistics defined by Eqs. (8') and (9'), as are the comparisons made in Section 4. For odd, even, and coin-toss texture samples, the expected values of those statistics are always 1/2 (when all the arguments equal zero), 1/4 (for statistics based on pairs of distinct pixels), or 1/8 (for statistics based on triples of distinct pixels).

The relationship between the discrete-domain statistics of a binary pixel array (p(c, r)) and the continuous-domain statistics of a black and white physical image f(x, y) can be expressed by means of delta functions. Mathematically, a display device converts an array of binary pixel values (p(c, r)) into a black and white image f(x, y) by an operation of the form

f(x, y) = d(x, y) * Σc Σr p(c, r) δ(x − c, y − r),

where * denotes a convolution, δ is the Dirac delta, and d(x, y) is a 0-1 function representing a single screen pixel; e.g., d = 1 inside an open unit square centered at the origin (screen distances are measured in units of pixel width and height), and d = 0 otherwise. The discrete second- and third-order statistics can be translated into functions in continuous space by the operations

S2,p(h, v) = Σn Σm s2,p(n, m) δ(h − n, v − m),

S3,p(h1, v1, h2, v2) = Σ s3,p(n1, m1, n2, m2) δ(h1 − n1, v1 − m1, h2 − n2, v2 − m2),

respectively. The relationship between the discrete statistics of the pixel array and the continuous statistics of the image can now be stated concisely: s2,f = ad * S2,p, where ad(h, v) is the autocorrelation function of the pixel shape function d(x, y), and s3,f = td * S3,p, where td is the triple correlation function of d(x, y). (Section 3 defines the autocorrelation and triple correlation functions.) Thus, when two binary pixel-value arrays with the same second- or third-order discrete statistics are displayed as viewable images, those statistical identities are preserved. And the second- and third-order statistics of an image contain the discrete statistics of its pixel array (as their values for integer arguments), so identical continuous statistics for two images in the physical domain imply identical discrete statistics inside the computer.

Multicolored pixel arrays can be decomposed into binary arrays, one for each color value, and the relationship between their nth-order statistics and those of the corresponding multicolored physical image can be expressed in the same way: s2,i,j,f = ad * S2,i,j,p and s3,i,j,k,f = td * S3,i,j,k,p.
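To make the discrete definitions concrete, the sketch below computes the renormalized second- and third-order statistics of a binary pixel array along the lines of Eqs. (8') and (9'). It is an illustrative implementation only (nonnegative arguments assumed); the array, the argument values, and the function names are chosen here for the example and are not taken from the paper.

```python
import numpy as np

def s2_renorm(p, n, m):
    """Renormalized second-order statistic, Eq. (8'): fraction of in-range
    pixel pairs separated by (n, m) columns/rows that are both black."""
    C, R = p.shape                                  # p[c, r]: c = column, r = row
    if n >= C or m >= R:
        return 0.0
    pairs = p[:C - n, :R - m] * p[n:, m:]           # p(c, r) * p(c + n, r + m)
    return pairs.sum() / ((C - n) * (R - m))

def s3_renorm(p, n1, m1, n2, m2):
    """Renormalized third-order statistic, Eq. (9')."""
    C, R = p.shape
    nmax, mmax = max(n1, n2), max(m1, m2)
    if nmax >= C or mmax >= R:
        return 0.0
    c = np.arange(C - nmax)[:, None]
    r = np.arange(R - mmax)[None, :]
    triples = p[c, r] * p[c + n1, r + m1] * p[c + n2, r + m2]
    return triples.sum() / ((C - nmax) * (R - mmax))

# Example: a 10 x 10 coin-toss image; expected values are 1/2, 1/4, and 1/8.
rng = np.random.default_rng(0)
img = rng.integers(0, 2, size=(10, 10))
print(s2_renorm(img, 0, 0))        # first-order statistic, expected near 0.5
print(s2_renorm(img, 1, 0))        # expected near 0.25
print(s3_renorm(img, 1, 0, 0, 1))  # expected near 0.125
```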
A. Autocorrelation and Second-Order Statistics
It has always been recognized that there is a close connection between second-order image statistics and the ordinary autocorrelation function, the function af: R² → R defined for any (integrable) function f: R² → R by

af(h, v) = ∫∫ f(x, y) f(x + h, y + v) dx dy.  (10)

If one compares af in Eq. (10) with the function s2,f defined in Subsection 2.A by Eq. (2), it is apparent that the second-order statistic function of a black and white image is its autocorrelation function divided by its area. The area A can be recovered from the second-order statistics [A = Ŝ2,f(0, 0)/(s1,f)², where Ŝ2,f is the Fourier transform of s2,f], so the second-order statistics of a black and white image completely determine its autocorrelation function and thus its power spectrum [since the Fourier transform of af is |F(α, β)|², where F is the transform of f]. Conversely, if the autocorrelation function of a black and white image is known, its second-order statistics are completely determined up to the (somewhat arbitrary) area parameter A. In this sense the second-order statistics of a black and white image are equivalent to its autocorrelation function. For images containing more than two colors, that equivalence breaks down: in that case two images can have the same autocorrelation function but different second-order statistics. A simple example is that in which one image consists of three adjacent pixels with luminance levels 4, 4, and 1 and another consists of three pixels with luminances 2, 5, and 2. Calculation shows that the autocorrelation functions of these images are the same, but, since the images have no luminances in common, their second-order statistics are obviously different. {The Fourier transform F(α, β) of an integrable function f(x, y) is defined here as F(α, β) = ∫∫ f(x, y) exp[−i2π(αx + βy)] dx dy.}

B. Triple Correlation and Third-Order Statistics
In exactly the same sense, the third-order statistics of a black and white image are equivalent to a natural generalization of the ordinary autocorrelation function, called the triple correlation, which is not yet widely known in visual science but over the past decade and a half has attracted considerable attention in other fields. 15,16 The triple correlation function of an integrable function f: R² → R is the function tf: R⁴ → R defined by

tf(h1, v1, h2, v2) = ∫∫ f(x, y) f(x + h1, y + v1) f(x + h2, y + v2) dx dy.  (11)

A straightforward calculation shows that the Fourier transform Tf of the triple correlation tf (commonly known as the bispectrum) is related to the image transform F through the equation

Tf(α1, β1, α2, β2) = F(α1, β1) F(α2, β2) F(−α1 − α2, −β1 − β2),  (12)

which proves endlessly useful. For example, setting all arguments to zero in Eq. (12) gives [F(0, 0)]³, and setting α2 = β2 = 0 yields F(0, 0)|F(α1, β1)|². So for nonnegative functions, such as images [where F(0, 0) cannot vanish unless f = 0], the bispectrum determines the power spectrum, and thus the triple correlation determines the autocorrelation. As another example, Eq. (12) shows immediately that the triple correlation of the convolution f * g is the convolution tf * tg of the triple correlations. Thus, if two images have the same triple correlation, that identity is preserved when both are blurred by the same point-spread function.

Comparing Eqs. (3) and (11), one can see that for a black and white image [f(x, y) = 0 or 1] the third-order statistic function s3,f is the triple correlation function tf divided by the image area A. The area can be recovered from the third-order statistics by means of the facts that s3,f(0, 0, 0, 0) = (1/A)F(0, 0) (since f³ = f) and that the Fourier transform of s3,f at the origin equals (1/A)[F(0, 0)]³ [from Eq. (12)]. Consequently, the third-order statistics of such an image completely determine its triple correlation function, and its triple correlation determines its third-order statistics up to the factor 1/A. Thus two black and white images that have identical third-order statistics have identical triple correlation functions.
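Equation (12) is easy to check numerically on a small discrete signal: the Fourier transform of a brute-force triple correlation should factor into a product of signal transforms. The sketch below does this in one dimension so that the triple correlation is a two-argument array; the signal values and test frequencies are arbitrary choices made for the illustration.

```python
import numpy as np

def triple_correlation(f):
    """t_f(h1, h2) = sum_x f(x) f(x + h1) f(x + h2), with zero padding outside the support."""
    N = len(f)
    g = np.zeros(3 * N)
    g[N:2 * N] = f                       # embed so that all shifts stay in range
    t = np.zeros((2 * N - 1, 2 * N - 1))
    shifts = range(-(N - 1), N)
    for i, h1 in enumerate(shifts):
        for j, h2 in enumerate(shifts):
            t[i, j] = np.sum(g[N:2 * N] * g[N + h1:2 * N + h1] * g[N + h2:2 * N + h2])
    return t

f = np.array([0., 1., 1., 0., 1., 0., 0., 1.])    # a small binary "image"
t = triple_correlation(f)

# Bispectrum via Eq. (12): T_f(a, b) = F(a) F(b) F(-a - b); compare with the
# transform of t at an arbitrary pair of frequencies.
F = lambda a: np.sum(f * np.exp(-2j * np.pi * a * np.arange(len(f))))
a, b = 0.12, -0.07
M = t.shape[0]
T_direct = sum(t[i, j] * np.exp(-2j * np.pi * (a * h1 + b * h2))
               for i, h1 in enumerate(range(-(M // 2), M // 2 + 1))
               for j, h2 in enumerate(range(-(M // 2), M // 2 + 1)))
print(np.allclose(T_direct, F(a) * F(b) * F(-a - b)))   # expect True
```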
C. Uniqueness Properties of Triple Correlation Functions
The triple correlation function has aroused interest in large part because of its potential for the recovery of the phase spectrum of images. Ordinary autocorrelation is, of course, phase blind: the autocorrelation function of an image determines the amplitudes of all its Fourier components but none of their phases. A consequence is that the autocorrelation function of an image is unaffected by translations [i.e., f(x, y) → f(x + a, y + b)]. Triple correlations have the attractive property of also being invariant under image translations but otherwise not phase blind: the triple correlation function of any finite-size image completely determines both its amplitude spectrum and (except for an absolute location term) its phase spectrum. Consequently, the triple correlation function of a finite image uniquely determines that image up to a translation; that is, two finite-sized images have identical triple correlations if and only if they are identical images, which differ at most in their absolute location in the plane.

The uniqueness theorem for triple correlations is valid for all functions that one can reasonably regard as representing finite-sized monochromatic images. In Ref. 11 the theorem is proved (in two different ways) for the class of all image functions with bounded support, where an image function is any nonnegative locally integrable function f: R² → R. [Such a function is thought of as representing an image in the sense that its integral over any region of the plane specifies the total amount of light in that region (hence the requirement of local integrability). Two image functions are equal if their integrals agree for all regions, so that a photometer cannot distinguish them.] An image function has bounded support (represents an image of finite size) if it is identically zero outside some finite rectangle. Every image function f with bounded support has a well-defined triple correlation function given by Eq. (11), and using Eq. (12) one can prove the following theorem (Ref. 11, Theorem 1'):

Triple Correlation Uniqueness (TCU) Theorem. If f is an image function with bounded support, and another image function g has the same triple correlation function as that of f, then g(x, y) = f(x + a, y + b) for some pair of constants a and b.

The proof of this theorem seems too long to be given here in complete detail, but the basic ideas can be quickly sketched as an indication of what is involved. To simplify matters, one can consider one-dimensional image functions (the extension to two dimensions is straightforward). The proof makes use of well-known facts from probability theory. Suppose that two image functions f(x) and g(x) have the same triple correlation. Then they have the same bispectrum, and Eq. (12) implies that their Fourier transforms F and G satisfy the relationship

F(α)F(β)F(−α − β) = G(α)G(β)G(−α − β).  (13)

I show that Eq. (13) implies that G(α) = exp(i2πcα)F(α) for some real constant c, and thus g(x) = f(x + c). Note first that Eq. (13) implies that F(0) = G(0). If F(0) = 0, both f and g are the zero image (f = g = 0 a.e.); if not, there is no loss of generality in the assumption that both F(0) and G(0) equal 1.0. {One can always divide both sides of Eq. (13) by [F(0)]³ and show that G(α)/F(0) = exp(i2πcα)F(α)/F(0), which proves the claim.} Then the image functions f and g are probability-density functions, and F and G are their characteristic functions. All characteristic functions are continuous 9 and equal 1.0 at the origin, so there is a neighborhood of the origin in which both F and G are nonvanishing. Within that neighborhood the functions in Eq. (13) can be divided freely, and after appropriate divisions [and with the use of the fact that Eq. (13) implies that |F(α)| = |G(α)| for all α], Eq.
( 13) becomes G(a)G(P)/F(a)F(P3) = G(a + 3)/F(a + 3), (14) which is valid for all a and 3 in some neighborhood of the origin.With the substitution H = G/F, Eq. ( 14) is a complex functional equation: where the function H is continuous and equals 1.0 at the origin (so it is nonvanishing in a neighborhood of the origin).Using ideas from functional equation theory, 20 one can easily show that all solutions to Eq. ( 15) take the form H(a) = exp(i27rca), where c is a real constant.So, in a neighborhood of the origin, G(a) = exp(i2'mca)F(a); i.e., the characteristic function of g(x) agrees with that of f (x + c) for some c.But for every c, f(x + c) is a probability-density function with bounded support, and consequently its characteristic function is uniquely determined for all a by its values in a neighborhood of the origin (specifically, by its derivatives at the origin, which determine the function everywhere through a power se-ries 9 ).It follows that G(a) = exp(i2irca)F(a) for all a, which proves the theorem for one-dimensional image functions.The proof for two-dimensional images is exactly analogous.Two more proofs based on different arguments can be found in Refs. 10 and 11.One proof in Ref. 11 is constructive, in the sense that it shows how, in principle, any finite image can be reconstructed from its triple correlation function. The TCU theorem does not hold for images of infinite size, and counterexamples can be quite simple; e.g., the band-limited one-dimensional integrable functions sinc 2 (x)(1 ± cos 6x) have identical triple correlations" [sinc(x) = (sini x)/irx].(The critical difference between the finite and infinite cases is that the transforms of finite images have only isolated zeros, while the transforms of infinite images can vanish over intervals.Reference 11 discusses that difference in detail.)But for images of finite size the scope of the TCU theorem can be readily extended from the specific class of image functions defined above (functions proportional to probability-density functions on gR 2 ) to the more general class of all measures on 2 that are proportional to probability distributions with bounded support in the plane.(The characteristic function argument given above goes through in that case in exactly the same way.)That class would appear to be sufficiently general that it could describe all images of finite size, so it seems fair to say that all such images are uniquely determined up to translation by their triple correlation functions.(In particular, images defined on discrete domains, such as the lattice of two-dimensional integers, can be identified with probability distributions concentrated on discrete points in the plane, and the TCU theorem holds for all finite images of that sort.) D. Uniqueness Theorem for Third-Order Statistics The TCU theorem implies that two physically distinct black and white texture samples (for example, a pair of images generated by the odd and even texture algorithms, such as the left and right sides of Fig. 1) can never have identical third-order statistics: if two finite-size black and white images have identical third-order statistics, they must have identical triple correlation functions and thus be physically identical.Consequently, the odd and even textures cannot provide a demonstration of visual discrimination of textures with identical third-order statistics.And, in general, no such demonstration is possible in principle. 
The truth of the last assertion for black and white images is apparent from what has been said so far, but to prove it for images composed of more than two colors requires a bit more argument.Suppose that f and g are two finite images composed of L discrete colors labeled 0,1,... , L -1, where 0 is the background color.Each image is described by L indicator functions f and g, i = 0, 1, . ., L -1, where f (x, y) = 1 if the point (x, y) has color i in image f and f(x, y) = 0 otherwise, and g is defined analogously for image g.It is assumed that all these indicator functions are locally integrable.The third-order statistics of f and g are defined by Eq. (6).Suppose that all the third-order statistics of g are the same as those of Comparing Eqs. ( 6) and ( 11) (and recalling that image areas are uniquely determined by third-order statistics), one can see that the identities 3, i i i g = S3, i i, i f, i = 1, 2, ... , L -1, imply that each pair of foreground color indicator functions, gi and fi, has identical triple correlation functions, and the TCU theorem thus implies that for each i, gi(x,y) = fi(x + ai,y + bi) for some pair of constants ai, bi.In other words, each of the indicator functions gi for image g is identical to the corresponding indicator function fi for image t; except for a possible translation (a 1 , bi).It remains to show that the vectors (a 1 , bi) are all the same, so that the entire image g is a bodily translation of image f.The identity =3,iilg s3,iilfimplies that ff gi(xy)gi(x + h,,y + vl)gl(x + h 2 ,y + v 2 )dxdy =-.f f(Xy)fi(x + h,,y + vl)f,(x + h 2 ,y + v 2 )dxdy (16) for each i = 2, .. ., L -1.Taking Fourier transforms of both sides of Eq. ( 16) yields Gi(ai,, )Gi(a 2 ,13 2 )G(-a, -a,,-131 -132) = Fi(a,,G3,)Fi(a 2 ,1)F,(-a, -a 2 ,-16 -132), (17) where Gi and G, are the transforms of gi and g,, respectively, and Fi and F, are the transforms of fi and fl, respec- tively.Because the indicator functions fj are nonnegative, Fj (0,0) # 0 unless fj = 0 (in which case gj does also, and both can be ignored); and, because they are locally integrable functions with bounded support, their transforms are continuous and hence nonzero in some neighborhood of the origin.Since gj (x, y) = fj (x + aj, y + b ) for each j, Gj (a,,13) = exp[i2vr(aj a + bi,3)]Fj (a,13), and substituting those expressions for G, and Gi on the lefthand side of Eq. ( 17) and dividing out the common F, and Fi factors, one finds that, within a neighborhood of the origin, exp{i2ir[(ai -al)(al + a 2 ) + (bi -b,)(13, + 132)]} = 1 (18) for all values of a,, a 2 , 131, and 132.That relation is possible only if a 1 = a, and bi = bi.Consequently, all the translation vectors (ai,bi) are the same, and g(x,y) = f(x + a, y + b).Hence the following is proved: Uniqueness Theorem for Third-Order Statistics Two finite-sized images f(x,y) and g(x,y) composed of the same set of discrete colors have identical third-order statistics if and only if f and g are identical up to a translation [i.e., g(x, y) = f (x + a, y + b) for some pair of constants a, b]. 
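As a finite sanity check of the uniqueness theorem (not a proof, which concerns continuous images), one can verify numerically that the full set of discrete third-order statistics is unchanged when a small binary motif is translated inside a larger canvas, while a genuinely different motif on the same canvas gives different statistics. The helper below is written only for this illustration; the motifs, sizes, and names are arbitrary.

```python
import numpy as np

def third_order_stats(img, kmax):
    """Dictionary of discrete third-order statistics s3(n1, m1, n2, m2),
    normalized by the canvas size, for all arguments with |n|, |m| <= kmax."""
    C, R = img.shape
    pad = np.zeros((C + 2 * kmax, R + 2 * kmax))
    pad[kmax:kmax + C, kmax:kmax + R] = img
    stats, rng = {}, range(-kmax, kmax + 1)
    base = pad[kmax:kmax + C, kmax:kmax + R]
    for n1 in rng:
        for m1 in rng:
            for n2 in rng:
                for m2 in rng:
                    b = pad[kmax + n1:kmax + n1 + C, kmax + m1:kmax + m1 + R]
                    c = pad[kmax + n2:kmax + n2 + C, kmax + m2:kmax + m2 + R]
                    stats[(n1, m1, n2, m2)] = (base * b * c).sum() / (C * R)
    return stats

canvas = np.zeros((8, 8))
canvas[1:4, 1:4] = [[1, 0, 1], [0, 1, 0], [1, 1, 0]]    # a small motif
shifted = np.roll(canvas, (2, 3), axis=(0, 1))           # same motif, translated
other = np.zeros((8, 8))
other[1:4, 1:4] = [[1, 1, 0], [0, 1, 0], [1, 0, 1]]      # a different (mirrored) motif

s_a, s_b, s_c = (third_order_stats(x, 3) for x in (canvas, shifted, other))
print(all(np.isclose(s_a[k], s_b[k]) for k in s_a))      # True: translation invariance
print(all(np.isclose(s_a[k], s_c[k]) for k in s_a))      # False: different image
```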
STATISTICAL COMPARISONS OF STOCHASTIC TEXTURES The uniqueness theorem of Section 3 guarantees that physically different texture samples cannot have identical third-order statistics, but it says nothing about how close those statistics can be.The relationship between physical similarity and third-order statistical similarity (that is, triple correlation similarity) is not at all straightforward.For example, rather small finite segments of the one-dimensional image functions sinc 2 (x)(1 + cos 6rx) and sinc 2 (x)(1 -cos 6qrx) have triple correlation functions that differ by only a tiny amount, though the functions themselves are obviously quite different.So while we know that odd and even texture samples cannot have identical third-order statistics, it is still possible that their third-order statistics might be similar-close enough that one may justifiably call them identical on some scale.The same possibility exists for odd and even samples compared with samples of the coin-toss texture.Of course, the scatterplots in Fig. 2 show that when the complete set of third-order statistics of sample odd, even, and coin-toss images are compared, there is vast disagreement overall.But that comparison is imprecise because it lumps together large-triangle statistics, which depend on only a few pixels and consequently can be expected to have a large variance, and small-triangle statistics, which depend on many pixels and should provide a fairer picture of the actual structural similarity of sample images.This section takes a closer look at the statistics of sample images generated by the odd, even, and coin-toss processes.All these processes are probabilistic algorithms that color a rectangular array of square cells (pixels) with R rows r= ,1,...,R -1 and C columns c = 0,,...,C- 1 . Each pixel is black or white; the pixel value p(c, r) = 1 if (c, r) is black, and p(c, r) = 0 if (c, r) is white.A coloring algorithm, such as the odd texture, defines a stochastic process (p(c, r); c = 0,1,...,C -1; r = 0,1,..., R - whose random variables p(c, r) are 0-1.The natural sample space of such a process is the set of all 2 CR possible colorings of a C X R array; a given process is an assignment of probabilities to each of those colorings.A specific coloring will be called an image.As usual, an event such as {p(0,0) = 1} is the subset of all images in the sample space that have the property in question; the ensemble probability of that event, P{p(0, 0) = 1}, is the sum of the probabilities of the images in that subset. 
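For readers who want to reproduce the comparisons that follow, here is a sketch of generators for the three processes. The even and odd rules used are the 2 × 2 block-parity constraints written out as Eqs. (20) and (21) in Subsection 4.B below; the function names, array sizes, and random seed are incidental to the illustration.

```python
import numpy as np

def coin_toss(C, R, rng):
    """Each pixel independently black (1) or white (0) with probability 1/2."""
    return rng.integers(0, 2, size=(C, R))

def even_texture(C, R, rng):
    """Color row 0 and column 0 by coin tosses, then fill in so that every
    2 x 2 block contains an even number of blacks, Eq. (20):
    p(c, r) = p(c, 0) + p(0, r) + p(0, 0)  (mod 2)."""
    p = np.zeros((C, R), dtype=int)          # p[c, r]: c = column, r = row
    p[:, 0] = rng.integers(0, 2, size=C)
    p[0, :] = rng.integers(0, 2, size=R)
    p[1:, 1:] = (p[1:, :1] + p[:1, 1:] + p[0, 0]) % 2
    return p

def matched_odd_texture(p_even):
    """The odd sample matched to a given even sample, Eq. (21): reverse the
    color of every pixel whose column and row indices are both odd."""
    C, R = p_even.shape
    flip = (np.arange(C)[:, None] % 2) * (np.arange(R)[None, :] % 2)
    return (p_even + flip) % 2

rng = np.random.default_rng(1)
even = even_texture(50, 50, rng)
odd = matched_odd_texture(even)
coin = coin_toss(50, 50, rng)
# Check the defining parity constraints on all 2 x 2 blocks:
blocks_even = even[:-1, :-1] + even[1:, :-1] + even[:-1, 1:] + even[1:, 1:]
blocks_odd = odd[:-1, :-1] + odd[1:, :-1] + odd[:-1, 1:] + odd[1:, 1:]
print((blocks_even % 2 == 0).all(), (blocks_odd % 2 == 1).all())   # True True
```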
Our concern is the relationship between ensemble probabilities and the statistics of specific sample images generated by a process.First-, second-, and thirdorder statistics for discrete images were defined in Subsection 2.C.For a stochastic process (p(c, r)) those statistics are random variables whose values vary from image to image.Interest centers on their means (the ensemble statistics) and variances.Here it is natural to focus on the renormalized statistics defined by Eqs.(8') and (9'), whose means have a simple relationship to ensemble probabilities.The third-order statistic function S 3 ,p includes the first-order statistic slp (when all its arguments are zero) and the second-order statistic function S2,p [when one of the ni, mi) is (0,0)], so comparisons of the third-order statistics of sample images include the firstand second-order statistics as well.All the textures considered here have identical third-order ensemble probabilities, and all three are third-order ergodic in the limit, as image size becomes infinite.The question is, How close are their third-order sample statistics for images of visually meaningful sizes?For the coin-toss texture it is clear in advance what will happen, since the mathematics is straightforward.But it is still useful to take an empirical look at the statistics of coin-toss samples, as a baseline for the comparison of samples of the odd and even textures, where it is not so obvious what to expect. A. Coin-Toss Textures Here one creates an image by coloring each pixel independently black or white, each with probability 0.5.All 2 CR possible colorings can occur and all are equally likely, so each image in the sample space has probability 1/ 2 cR.The first-, second-, and third-order ensemble probabilities are, respectively, P[p(c, r) = 1] = 1/2 if (c, r) is inside the array and P[p(c, r)] = 0 otherwise; P[p(c, r) = i, p(c + n, r + m) = j ] = 1/4 for all possible combinations ij = 0, 1, as long as the pixels are distinct and inside the array, and that probability equals zero otherwise [or 1/2 if (n, m) = (0, 0) and (c, r) is in the array]; and P[p(c, r) = i, p(c + n, r + ml) = j, p(c + n 2 , r + M 2 ) = k] = 1/8 for all values of i, j, k = 0,1 when all three pixels are in the array and that probability equals zero otherwise (or 1/4 or 1/2 in the obvious special cases).Consequently, the expected value of the renormalized third-order statistic, E[S3,,(nl, ml, n 2 , M 2 )], is 1/2 (when all the arguments are zero), 1/4 [when one of the (ni, mi) is (0,0), or (n, ml) = (n 2 , 2 ) (0, 0)], or 1/8 (for all other arguments). The third-order statistics of specific images will vary around those expected values, but as array size increases, that variance should decrease for fixed values of the arguments n, mi because the coin-toss process is third-order ergodic in the limit: with probability 1, In other words, for every fixed set of arguments ni, mi, the variance ofthe sample statistic S 3 ,p(n,, ml, n 2 , M 2 ) shrinks to zero as sample images become infinitely large. (Diaconis and Freedman 7 show that all their -matrix textures are third-order ergodic; the coin-toss texture is one special case, and the even texture is another.) 
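The ergodicity statement can be illustrated with a quick Monte Carlo estimate: the sample variance of a fixed-argument renormalized third-order statistic of coin-toss images shrinks as the array grows, while its mean stays near 1/8. The statistic is recomputed inline here (the same definition as Eq. (9'), restated so that the snippet is self-contained); the sizes and number of trials are arbitrary.

```python
import numpy as np

def s3_stat(p, n1, m1, n2, m2):
    """Renormalized third-order statistic, Eq. (9'), for nonnegative arguments."""
    C, R = p.shape
    N, M = max(n1, n2), max(m1, m2)
    a = p[:C - N, :R - M]
    b = p[n1:n1 + C - N, m1:m1 + R - M]
    c = p[n2:n2 + C - N, m2:m2 + R - M]
    return (a * b * c).sum() / ((C - N) * (R - M))

rng = np.random.default_rng(2)
for size in (10, 25, 50, 100):
    vals = [s3_stat(rng.integers(0, 2, size=(size, size)), 1, 0, 0, 1)
            for _ in range(200)]
    # Mean stays near 1/8; the variance falls roughly like 1/(number of pixels).
    print(size, round(np.mean(vals), 4), np.var(vals))
```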
However, an image size increases, so does the number of possible third-order statistics, since S3,p can be nonzero for larger arguments.The variance of the newly arriving statistics is always large, since they depend on only a few pixels.Consequently, comparisons of the complete statistics of two sample images always show considerable disagreement.Figure 7 illustrates those points.It shows scatterplots that compare the third-order statistics of two sample coin-toss images.[If a given statistic has the same value for both images, its data point falls on the diagonal line.If all the statistics equaled their expected values, all the data points would fall at (1/2, 1/2), (1/4,1/4), or (1/8,1/8).]In Fig. 7(a) the array sizes are 10 10, and the graph compares all the third-order statistics of the two images.Figure 7(b) compares the same set of statistics (0 n, mi 9) for 50 50 arrays.The law of large numbers is clearly doing its job.Figure 7(c) compares the complete set of third-order statistics for the 50 x 50 images. B. Odd and Even Textures The odd and even algorithms 6 " 3 both begin by coloring all the pixels of row 0 and column 0 by independent tosses of a fair coin.For the creation of an odd image the remaining pixels are then colored recursively, under the rule that every 2 2 block of four adjacent pixels must contain an odd number of blacks.For the creation of an even image, the remaining pixels are colored so that every 2 x 2 block contains an even number of blacks.Equivalently, for the even texture, one could color each pixel (c, r) outside the zero row and column so that it satisfies the constraint p(c, r) = p(c, ) + p(0, r) + p(0, ) (mod 2), (20) and for the odd texture, one could use the constraint p(c, r) = p(c, ) + p(0, r) + p(O, ) + cr (mod 2).(21) Every initial coloring of row 0 and column 0 creates a unique odd and even texture sample.All 2 C+1-such colorings are equally probable, so every odd image and every even image has probability 1/ 2 c+R-.(When the initial coloring of row zero and column zero is all black or all white, the resulting even image is solid black or solid white.Intuitively, those two images are obviously nongeneric even images, but their ensemble probabilities are the same as all the others.) The definitions in Eqs. ( 20) and ( 21) can be restated verbally in a helpful way: to create an even texture, one should color row 0 and column 0 by coin tosses and then, for each row r > 0, color the entire row the same as row 0 or the opposite, according to whether the column 0 pixel in row r has the same color as the column 0 pixel in row 0 or the opposite.To create the odd texture for the same initial coloring, one should reverse the even image coloring in every pixel whose coordinates are both odd numbers.That description makes apparent the one-to-one correspondence between odd and even texture samples: each even image is matched to a unique odd image, which is created by a reversal of the coloring of all its odd-odd pixels. 2 ' [As a curiosity, the top of Fig. 8(a) shows the solid black even image and its odd match.]Statistical and visual comparisons between odd and even texture samples need to take that point into account; they can involve either matched or unmatched pairs. Visually, matched odd and even samples seem just as highly discriminable as unmatched samples.[The odd and even samples in Fig. 
1 are a matched pair.Figure 8(b) shows an unmatched pair.]That observation is interesting, because, in a matched pair, three fourths of the pixels are colored identically.Statistically, matched pairs have a strong odd-even correlation between statistics whose arguments are all even numbers [shown below in Fig. 9(c)]-a correlation that is absent for unmatched pairs (cf.Fig. 10 below).The presence versus absence of this correlation apparently goes unnoticed by the eye, thus providing another bit of evidence (in addition to the point about the odd and coin-toss textures mentioned in Section 1) that texture discriminability is not governed by global third-order statistical similarity. The strong correlation between even-argument statistics of matched odd and even texture samples can be understood from the fact that if (p 0 (c, r)) and (pe(C, r)) are a matched pair of odd and even images, then 2p,(c, r) - 1 = (-1)cr(2p, -1) for all pixels.If one sets o(c, r) = 2p 0 (c, r) -1 and e(c, r) = 2pe(c, r) -1, that relationship implies that for even-numbered arguments the secondorder statistics of the arrays (o) and (e) are perfectly correlated, since >, 2, (l1) 2 cr+ 2 cm+ 2 rne(c, r)e(c + 2n, r + 2m) c=O r=O and (_ 1 )2r+2cm+2rn e 1. Consequently, for even arguments the difference between the second-order statistics of matched odd and even images depends only on the difference between the total number of black pixels in the two images, which tends to be small, since three fourths of the pixels are always initially identical, and the law of large numbers averages out the rest.Similar considerations explain why the other (i.e., the truly third-order) statistics will be highly correlated for all-even arguments. Using a recursive argument based on Eqs. ( 20) and ( 21), Julesz et al. 6 proved that both the odd and even textures have the same third-order ensemble probabilities as those of coin-tossing. One could also reach that conclusion by using the verbal restatement of the even algorithm to show that its third-order probabilities are the same as those of coin tossing (by considering the possibilities for representative triples of pixels) and then appealing to the matched-pair property to show the same for the odd process.One could also use that same property to show that since the even texture process is third-order ergodic in the limit, as image size becomes infinite the odd process is also.(The argument is straightforward but long winded; it is omitted here.)Figure 9(a) compares the complete set of third-order statistics of a matched pair of small (10 X 10) odd and even texture samples (shown in the inset).Figure 9(b) compares the same statistics (0 5 ni, mi ' 9) for matched 50 X 50 odd and even images (the images in Fig. 1), and Fig. 9(c) compares their statistics for arguments in the same range but now for even-numbered arguments only (showing the strong hidden correlation mentioned above).Figure 10 shows the same comparisons for the pair of unmatched odd and even samples from main overall difference is that the variance of odd and even sample statistics is greater than that of coin-toss statistics.Direct comparisons of odd and even images with cointoss images show this point clearly.Figure 6 compared the small-triangle (O • n, mi-9) statistics of 50 50 odd and coin-toss sample images, and Fig. 11 does the same for 50 x 50 even-versus-coin-toss samples (the images are those in Fig. 2).The horizontal elongation of the data point clouds in Figs.all-even arguments [compare Fig. 10(c) with Fig. 
9(c)], the overall statistical similarity of unmatched odd-even pairs is nearly the same as that for matched pairs.In both cases, odd and even sample images are statistically quite a bit less similar than are independent coin-toss samples, though not in any qualitatively revealing way.The a coin-toss sample depend on CR independent random variables, so one can expect that their variance will decrease as 1/CR, while in odd and even samples there are only C + R -1 independent random variables (the pixels in row 0 and column 0), so the variance of their statistics will decrease at the much slower rate 1/(C + R -1).That point suggests that in scatterplots comparing the small-triangle statistics of coin-toss versus odd or even texture samples containing CR pixels, the data-point clouds should be roughly elliptical, with aspect ratios of the order of [CR/(C + R -1)] 1I2 As a rule of thumb, that is a fairly accurate description of what one finds (e.g., for 50 X 50 images that ratio would be 5:1). The overall conclusion of the preceding analysis is that, apart from the strong correlation between statistics with all-even arguments (shown in Fig. 9), the third-order statistics of sample odd, even, and coin-toss images as large as 50 X 50 pixels cannot be described as identical. (Extending the comparisons to 100 X 100 arrays still leads to the same conclusion.)If all the third-order statistics of matched odd and even texture samples showed the same strong agreement as those with all-even arguments, it might reasonably be argued that the odd and even algorithms create viewable images whose third-order statistics are essentially identical.But evidently that is not the case. COUNTEREXAMPLES TO THE JULESZ CONJECTURE A toss textures are +-matrix textures.)Consequently, Gagalowicz' 2 sought to create discriminable stochastic texture samples with second-order statistics constrained to be as close as possible, using linear programming.He succeeded in creating texture samples that are quite easily discriminable despite having second-order statistics that differ by only approximately 2%.That is not quite an identity, but it is surely close enough to dispose of the Julesz conjecture for all practical purposes.The only real drawback of Gagalowicz's counterexamples is the complexity of their construction: one would prefer images that are easy to create, and, of course, exact second-order statistical identity is better than near identity. The other class of counterexamples in the literature involves textures composed of micropatterns: a small pattern (e.g., a letter or an abstract shape) is replicated at numerous sites in the plane to create a texture sample.(Here again, all the examples involved black and white images.)The mathematics underlying this approach can be understood in terms of autocorrelation.Recall that if two black and white images have the same area and the same autocorrelation function (the same power spectrum), their second-order statistics are the same.Suppose that +(x, y) is a 0-1 function that describes a black (4) = 1) micropattern on a white (4) = 0) background.Replicating that pattern at N sites in the plane corresponds to convolving ¢> with a function of the form p(x,y) = Ei= 1 8(x -Xi)8(Yyi), where the locations (xi, yi) are chosen so that none of the replicas overlap.The result is a black and white image function f: a [0, 1] given by N f(x, y) = p(X, y) * 4)(X, y) = .4)(x-xi, y -yi). 
The autocorrelation function of f(x, y), af(h, v), is the convolution of the autocorrelation functions of the micropattern 4 and the replication function p: af(h,v) = ap(h,v) * a, (h,v).If another micropattern y(x,y) can be found whose area and autocorrelation function are the same as those of pattern +(x, y), and if one replicates that pattern at the same set of locations (xi, yi), creating the texture sample g(x, y) = p(x, y) * y(x, y), then, since a,(h,v) = ag(h,v), it follows that ag(h,v) = af(h,v); i.e., the two samples have the same autocorrelation function. Since their areas are also the same, the two texture samples have identical second-order statistics. The problem is to find micropatterns with identical areas and autocorrelation functions that yield spontaneously discriminable textures.The simplest way in which one can create such micropatterns is to take any pattern 4)(x,y) and rotate it 1800, turning it into 4)(-x,-y). Since 4)(-x,-y) and 4)(x,y) have the same power spectrum [i.e., cF(a,P3)cD(-a,-,3), where cD is the transform of 4], they have the same autocorrelation func- tion.So a texture sample composed of, say, inverted letters A has the same second-order statistics as those of a sample composed of upright letters A located at the same positions.One might expect that such textures would be easily discriminable for micropatterns that look quite different after a 1800 rotation.But typically that is not the case: one of the surprising facts about texture percep-tion that emerged from Julesz's research 2 "3 is that pairs of texture samples created in that way are usually not spontaneously discriminable.That fact.provided strong evidence in favor of the Julesz conjecture. In their search during the 1970's for counterexamples to this conjecture, Julesz and his co-workers did not succeed in finding pairs of micropatterns with identical autocorrelation functions that produced discriminable textures.However, they did present examples of discriminable textures based on micropatterns that do not themselves have identical autocorrelations but that would yield distinct textures with identical autocorrelation functions if they were replicated infinitely often throughout the plane, with each replica independently rotated by a random amount. 4' 5 ' 8 (Random rotation forces the power spectra of the textures to be circularly symmetric, erasing the angular differences that distinguish the spectra of the individual micropatterns.)That technique was referred to as the four-disk method (named after the prototypical micropatterns).Its drawback was pointed out by Gagalowicz 2: one cannot expect finite samples of texture that correspond to distinct micropatterns to have identical power spectra, because they do not contain enough randomly rotated micropatterns-the actual power spectra of samples do not achieve perfect circular symmetry, so, in practice, the spectral differences between samples composed of distinct micropatterns will not be erased. B. New Counterexamples to the Julesz Conjecture This subsection describes a simple technique for the creation of discriminable texture samples (e.g., Figs. 
4 and 5) whose second-order statistics are exactly identical. The textures are created by the replication of black and white micropatterns that themselves have identical second-order statistics, so they can be said to have identical second-order statistics locally as well as globally. The construction principle is the same in one and two dimensions.

In the one-dimensional case (for the creation of bar-code textures like those in Fig. 4) the idea is as follows. Suppose that p(x) and q(x) are arbitrary real functions on the line, with Fourier transforms P(α) and Q(α), respectively. The transforms of the convolutions p(x) * q(x) and p(x) * q(−x) are P(α)Q(α) and P(α)Q(−α), respectively, so the power spectra of p(x) * q(x) and p(x) * q(−x) both equal |P(α)|²|Q(α)|². Thus p(x) * q(x) and p(x) * q(−x) have the same autocorrelation function. Then, if h(y) is any third function, and p, q, and h are all nonnegative, the functions h(y)[p(x) * q(x)] and h(y)[p(x) * q(−x)] represent two-dimensional images with identical autocorrelation functions, since both have the power spectrum |H(β)|²|P(α)|²|Q(α)|². To use this principle to create black and white micropatterns, one supposes that p and q are both composed entirely of unit-amplitude delta functions located at selected integers, e.g.,

p(x) = Σ(n = 0 to 4) δ(x − 4n²),  (22)

q(x) = Σ(n = 0 to 4) δ(x − 4n² − n − 1).  (23)

In that case the convolutions p(x) * q(x) and p(x) * q(−x) both consist entirely of unit-amplitude deltas located at integers. (The choices 4n² and 4n² + n + 1, n = 0, 1, ..., 4, are motivated by the need to prevent the deltas from piling up in the convolutions. They are not unique in that respect.) If the function h(y) = 1 for |y| < some constant L (and 0 otherwise), then φ(x, y) = h(y)[p(x) * q(x)] and γ(x, y) = h(y)[p(x) * q(−x)] both describe bar-code micropatterns composed of 25 vertical black lines of length L, distributed over 133 spaces (so that their areas are equal), and those micropatterns have identical autocorrelation functions. Figure 12 shows those micropatterns and also the patterns created by Eqs. (22) and (23) when the sums run from n = 0 to 3 instead of 4. Texture samples that one constructs by locating replicas of micropatterns φ and γ at the same sites inherit that identity and consequently have identical second-order statistics.

To extend the principle to two-dimensional micropatterns, one supposes that p(x, y) and q(x, y) are arbitrary functions on the plane. Then p(x, y) * q(x, y) and p(x, y) * q(−x, −y) have the same autocorrelation function, since each has the power spectrum |P(u, v)|²|Q(u, v)|². Now let p(x, y) be a sum of two-dimensional delta functions that represent a set of locations in the plane, say (x1, y1), ..., (xn, yn), and let q(x, y) be a 0-1 function that represents a black and white pattern. The convolution p(x, y) * q(x, y) creates replicas of q(x, y) centered at the (xi, yi) locations, and p(x, y) * q(−x, −y) creates replicas of 180° rotations of q(x, y) centered at the same locations. If none of the replicas overlap, the convolution patterns are black and white and have identical autocorrelations and hence identical second-order statistics. That identity will then be inherited by textures formed by the replication of those patterns at the same sites in the plane. Victor 22 suggested the application of that principle illustrated in Fig. 5: take p(x, y) to be a set of two-dimensional delta functions arranged in a triangle (the micropatterns in Fig. 5 use six deltas), and let q(x, y) be some other shape that is altered by a 180° rotation, such as the letter T. Textures created in that way appear to be quite easily discriminable.
Fig. 1. (a) 50 × 50 pixel samples of odd (right side) and even (left side) texture. Tick marks indicate the boundary between the two. (These are matched samples, as explained in Section 4.) (b) 50 × 50 samples of odd (right side) versus coin-toss (left side) texture.

Fig. 2. (a) Scatterplot that compares all the third-order statistics [Eq. (9')] of the odd and even texture samples shown in Fig. 1. The x axis corresponds to even texture statistic values, and the y axis corresponds to odd texture values. Both axes run from 0 to 0.6, with tick marks at 0.25 intervals. If the odd and even samples had identical third-order statistics, all the points would lie on the diagonal line. If all the statistics equaled their expected values (i.e., if the sample statistics equaled the ensemble statistics), all the points would fall at either (1/2, 1/2), (1/4, 1/4), or (1/8, 1/8). The two other scatterplots compare the third-order statistics of the odd versus coin-toss texture samples (b) and the even versus coin-toss samples (c) from Fig. 1. In (b) and (c) the y axis corresponds to coin-toss statistic values.

But the matter does not quite end there. The graphs in Fig. 2 compare all the third-order statistics of the texture samples of Fig. 1. It can be argued on both statistical and perceptual grounds that a more meaningful comparison …

Fig. 3. Scatterplots that compare all the second-order statistics [Eq. (8')] of the texture samples from Fig. 1. The axes are the same as those in Fig. 2. (a) Odd versus even, (b) odd versus coin-toss, (c) even versus coin-toss.

Fig. 4. One-dimensional (bar-code) counterexample to the Julesz conjecture, based on the construction principle described in Section 5.

Fig. 5. Two-dimensional counterexample to the Julesz conjecture, based on the construction principle described in Section 5. The texture samples on the left and the right have identical second-order statistics.

Fig. 6. (a) Scatterplot that compares all third-order statistics of small (10 × 10) samples of odd-versus-coin-toss texture (shown in the inset). (b) Scatterplot that compares the same third-order statistics for 50 × 50 samples of odd-versus-coin-toss texture.

Fig. 7. (a) Scatterplot that compares all third-order statistics for two independent 10 × 10 samples of coin-toss texture. The axes are the same as those in Fig. 2. (b) Comparison of the same statistics for independent 50 × 50 coin-toss samples. (c) Comparison of all third-order statistics for the same 50 × 50 samples.

Fig. 8. Matched (a) and unmatched (b) pairs of odd and even texture samples, as discussed in Subsection 4.B.

Fig. 9. (a) Scatterplot that compares all third-order statistics of 10 × 10 matched samples of odd and even texture (shown in the inset). The axes are the same as those in Fig. 2. (b) Comparison of the same third-order statistics (0 ≤ ni, mi ≤ 9) for a matched pair of 50 × 50 odd and even samples (the samples in Fig. 1). (c) Comparison of third-order statistics in the same range as that in (b) but only for statistics whose arguments are all even numbers. For this subset of statistics, matched odd and even samples are strongly correlated.

Fig. 12. Pairs of bar-code micropatterns with identical second-order statistics. The numbers are the positions of the bars. Pair (a) and (b) are p(x) * q(x) and p(x) * q(−x), respectively, where p and q are defined by Eqs. (22) and (23). Pair (c) and (d) result from a change in the upper limits of the sums from 4 to 3. (The functions have been translated so that the first bars of all the patterns fall at position 0.)

…processes whose third-order ensemble statistics are different, which implies the opposite. (To satisfy the identical-statistics condition, one construes A and B as samples from a stochastic process whose sample images are only A and B, with arbitrary probabilities assigned to each. Then A and B are samples from the same process, so they share the same nth-order ensemble statistics for all n. To satisfy the different-statistics condition when the images A and B themselves have nonidentical nth-order statistics, one construes A as a sample from a process that generates only A, and B as a sample from a process that generates only B. If A and B have identical nth-order statistics, one construes A as before and B as a sample from a process whose sample images are B and any other image C whose nth-order statistics are different from those of B, where B and C each are assigned probability 0.5.)
16,581.6
1993-05-01T00:00:00.000
[ "Mathematics", "Physics" ]
On Cosmological Implications of Gravitational Trace Anomaly We study the infrared effective theory of gravity that stems from the quantum trace anomaly. Quantum fluctuations of the metric induce running of the cosmological constant and the Newton constant at cosmological scales. By imposing the generalized Bianchi identity we obtain a prediction for the scale dependence of the dark matter and dark energy densities in terms of the parameters of the underlying conformal theory. For certain values of the model parameters the dark energy equation of state and the observed spectral index of the primordial density fluctuations can be simultaneously reproduced. Introduction The evidence of the cosmic acceleration presumably driven by dark energy (DE) with negative pressure [1] and precise measurements of the cosmic microwave background radiation [2,3] have triggered a renewed interest in the cosmological constant [4]. Among different approaches to study the cosmological constant a convenient framework is based on the effective field theory [5,6]. This theory is a long distance realization of quantum gravity reduced to the general theory of relativity supplemented by the quantum field theory in curved space [7,8]. The cosmological constant Λ and the Newton constant G, being parameters in the Einstein-Hilbert action, receive contributions from quantum loops and become running constants, i.e., functions of the running scale [9][10][11]. Unfortunately, the running scale, intuitively expected to be of the order of typical momenta of the particles in loops, cannot be fixed unambiguously. It has been shown [12] that a consistent infrared modification of gravity due to the quantum trace anomaly implies the presence of additional terms in the low energy effective action. Quantum fluctuations of the metric modify the classical metric description of general relativity at cosmological scales and provide a mechanism for a screening of the cosmological constant and the inverse Newton constant. The effective theory of the conformal factor induced by the quantum trace anomaly has a non-trivial infra-red (IR) dynamics owing to the existence of a non-trivial IR stable fixed point [13]. A number of issues based on this idea were addressed in the literature. 1 A formulation if IR quantum gravity in curved space was suggested [15] and the logarithmic corrections to scaling relations in the IR regime were studied [16]. The IR dynamics of the conformal factor was also investigated in four-dimensional quantum gravity with torsion [17] and a possible curvature induced phase transitions in IR quantum gravity was suggested [18]. In this Letter we investigate the effective low energy gravity supplemented by the Bianchi identity constraint and its implications for cosmology. We obtain a model in which, besides a scale dependent cosmological constant, dark matter (DM) has a noncanonical cosmological scale dependence. We find that, under reasonable assumptions in the cosmological context, the running of the DM particle mass is phenomenologically acceptable. We demonstrate that a reasonable cosmology is obtained for a range of parameters required to fit the observed spectral index of the primordial density fluctuations. We organize the Letter as follows. In Section 2 we briefly discuss the large scale effects of the trace anomaly. In Section 3 we introduce a cosmological model based on a generalized Bianchi identity. Finally, in Section 4 we summarize our results and give concluding remarks. 
Trace anomaly and effective low energy gravity In this section we summarize the basic ideas and results of Antoniadis, Mazur, and Mottola [19] concerning the gravitational trace anomaly. The effects of the trace anomaly in the conformal sector of gravity may be studied by making use of the conformal parameterization , whereḡ μν is a fixed fiducial metric. The total low energy effective action is is the classical Einstein-Hilbert action, S matt is the action that contains the matter fields and S anom is the anomaly induced effective action [13,[20][21][22] (4) The differential operator 4 defined as is the unique conformally invariant 4th order operator. The parameters b and b are the coefficient that multiply respectively the square of the Weyl tensor and the Gauss-Bonnet term that appear in the most general expression for the 4-dimensional trace anomaly [23]. These parameters depend on the matter content of the theory coupled to σ . If only the contribution of free massless fields are taken into account, b and b take on the values [21] where N S , N F , and N V are the numbers of scalar, Weyl fermion and vector fields, respectively. The integers −8 and −28 come from the spin-0 metric and ghost fields while b grav and b grav are the contributions of the spin-2 metric fields. In the following we treat Q 2 as a free parameter since the contributions beyond the Standard Model are not well known and the spin-2 contributions in (8) and (9) depend on the model of quantum gravity and are still an open issue. The one loop calculations in the Einstein theory and in the Weyl-squared theory give similar results [21]. However, it is not obvious that the Weyl-squared Lagrangian is appropriate to account for the contribution of traceless graviton degrees of freedom and the Einstein theory is plagued with several well-known difficulties. Although it follows from (9) that Q 2 > 0 for all free matter fields, we allow Q 2 to take on negative values. Negative sign contributions can be obtained, e.g., in some extended models of conformal supergravity [24] (for additional references, see also [14]). The scale invariant effective theory that gives rise to (4) has a non-trivial IR dynamics owing to the existence of a non-trivial IR stable fixed point [13]. The scale invariance in this theory persists even at the quantum level. The sectors of a theory, the scale invariance of which persists at the quantum level, have recently been dubbed "unparticle stuff" [25]. These sectors, if coupled to the Standard Model sector, seem to cause novel observable effects which could perhaps be detected in the future experiments at TeV scale. In particular, one could also expect that the unparticle stuff gives additional contributions to Q 2 . The quantum fluctuations of the conformal factor are responsible for a screening of the cosmological and inverse Newton coupling constants [12,19]. The anomalous dimension of an operator with the classical conformal weight ω is given by the quadratic equation [19,26] (10) with the solution where p = 4 − ω is the classical conformal codimension. 
The full scaling dimension p is related to the classical dimension by [27] The operators S 0 and S 2 appearing in (3) acquire anomalous dimensions β 0 and β 2 , respectively, and scale with volume according to whereas the corresponding couplings scale inversely By similar considerations one finds the scaling laws for fermion and boson masses Another effect of the quantum fluctuations of the conformal factor concerns the departure of the fractal space-time dimension from its classical value [19]. It turns out that the volume V does not scale with geodesic distance l naively as V ∼ l 4 . Rather, it scales according to where d H is the Hausdorff dimension which classically equals 4. The calculation based on the quantum gravity distance defined by the heat kernel of the operator 4 yields the Hausdorff dimension expressed in terms of the parameter Q 2 [19] where β 8 and β 0 are given by (11). In the course of the cosmological evolution the physical geodesic distance l scales as l ∼ a and hence the volume scales with a as As has been emphasized in [19], the scaling relations (14) and (15) of dimensionful quantities are not directly physically relevant since the units in which the volume is measured have not been specified. A physically meaningful scaling relation is obtained when a product of powers of two quantities is formed so that its naive dimension is zero. In this way one of the quantities is measured in units of the other. Combining (14), (15), and (18), the net effect is a cosmological scale dependence of the dimensionless quantities [19] where the exponents μ and ν are given by In the limit of large Q 2 one finds Thus, for positive Q 2 the cosmological constant decreases, the fermion masses increase, and the boson masses remain constant with increasing cosmological scale a, when these quantities are measured in units of the Planck mass. Cosmology by the generalized Bianchi identity A scale-setting procedure based on the implementation of the generalized Bianchi identity was established and successively applied [28] both to the effective field theory of gravity and matter, and to the nonperturbative quantum gravity [29]. Besides, it was found [30,31] that both theories are consistent with holographic dark energy [32], provided the running scale was identified with an infrared cutoff roughly equal to the inverse size of the system. It seems reasonable to assume that the quantum effects of the conformal factor which we have discussed in Section 2 do not alter the Einstein field equations at large distances, apart from the gravitational dressing of G and Λ due to these effects. Then, the contracted Bianchi identity of the Einstein tensor yields the conservation law where the energy-momentum tensor takes the usual perfect fluid form T μν = (p + ρ)u μ u ν + pg μν . In the comoving frame with FRW metric, Eq. (24) becomes where the cosmological scale a satisfies the Friedmann equation in flat space In (25) we implicitly assume scale dependent ρ Λ and G and the scale dependence of the DM density ρ need not be canonical a −3−3w . Furthermore, we assume that matter is nonrelativistic, i.e., that the matter energy density can be written as ρ = i n i m i , where n i are the particle number densities and m i are the masses of the particle species. If nonrelativistic matter consists of l fermionic and k bosonic species, we have l + k + 2 equations (Eqs. (19)- (21) and (25)) for 2 + 2l + 2k quantities. 
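To see how the generalized Bianchi identity fixes the matter density once a scaling ansatz is chosen, the sketch below integrates Eq. (25) numerically for the simplest case treated below: G held fixed and the dark-energy density running as a power of the scale factor. The exponent is written here as eps and should be identified with ±μ according to the sign convention adopted in Eq. (19); the present-day density fractions, the exponent value, and the use of SciPy are illustrative assumptions, not parameters taken from the Letter.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.1                              # illustrative running exponent (relate to mu via Eq. (19))
Omega_L, Omega_m = 0.7, 0.3            # present-day fractions, in units of the critical density

def rho_L(a):
    return Omega_L * a ** (-eps)       # running dark-energy density with G fixed

def drho_dlna(lna, rho):
    # Eq. (25) with p = 0 and fixed G:  d(rho)/d ln a = -3 rho - d(rho_L)/d ln a
    return -3.0 * rho + eps * rho_L(np.exp(lna))

sol = solve_ivp(drho_dlna, [0.0, np.log(1000.0)], [Omega_m],
                dense_output=True, rtol=1e-8, atol=1e-10)

# The same equation solved analytically:
#   rho(a) = C a^-3 + [eps/(3 - eps)] * Omega_L * a^-eps, with C fixed by rho(1) = Omega_m.
C = Omega_m - eps / (3.0 - eps) * Omega_L
a = np.logspace(0, 3, 5)
rho_numeric = sol.sol(np.log(a))[0]
rho_analytic = C * a ** -3 + eps / (3.0 - eps) * Omega_L * a ** -eps
print(np.allclose(rho_numeric, rho_analytic))   # True

# Lumping the a^-eps piece together with rho_L gives an effective dark energy
# that scales as a^-eps, i.e. a constant equation of state close to -1 for
# small eps, which is how the model mimics an XCDM cosmology.
print(-1.0 + eps / 3.0)
```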
Although we do not expect k and l to be much larger than 1, additional assumptions are needed in order to solve the equations uniquely. Postulating (25), the scale behavior of ρ depends on the scaling of Λ and G. Eq. (19) gives the scaling of the dimensionless product ΛG and does not determine the scaling of the dimensionful parameters G and Λ separately. However, we are allowed to choose the units of measurement such that one chosen dimensionful parameter is fixed. We may, e.g., investigate three obvious choices in which one of the three quantities G, Λ, or ρ_Λ is fixed. A more general case that includes the above-mentioned choices is obtained if we allow G and Λ to vary as powers of a, restricted only by Eq. (19). Hence, we set G ∝ a^α and Λ ∝ a^{μ−α} (27), where α is an arbitrary parameter. With this ansatz, ρ may be expressed in terms of the fixed dimensionful parameter as in (28), where f(a) = 8π(G_0Λ_0)^{2α/μ−1} ρ_*(a) is a dimensionless function of a, ρ_c is the critical density at present, and the constant Ω_Λ, which may be fixed from observations, is of the order of the present fraction of the DE density. Plugging (27) and (28) into (25) and neglecting the DM pressure, we obtain a differential equation for f with the solution (30). The integration constant C is, for small μ and α, roughly C ≃ (1 − Ω_Λ)/Ω_Λ, so that ρ fits the present DM density. Although the parameter α is arbitrary, the small values |μ| ≪ 1 and |α| ≪ 1 are phenomenologically desirable, since the variation of Λ and G should not be so large as to spoil the observational constraints. Hence, Eq. (30) implies a slight modification of the DM density. However, it may be easily seen that the effective DM density in the Friedmann equations remains canonical. Using (27), (28), and (30), the Friedmann equations may be written in the form (31) and (32), where the effective DM and DE densities are given by (33) and (34) and the effective DE pressure by (35). Eqs. (31)-(35) show that the models satisfying (19) and (27) closely mimic the cosmology with standard cold DM and dark energy with a constant equation of state (XCDM cosmologies [33]), at least at the level of the global evolution of the universe. There is an obvious twofold implication of this result. First, in the analysis of the observational data, especially the SN Ia observations, one should bear in mind that a good fit to an XCDM cosmology may be interpreted as a signal for the dynamics given by (27). Second, the observational constraints on XCDM cosmologies can be used to estimate the parameters of the models defined by (19) and (27). The values of α equal to 0, μ, and μ/2 correspond to fixed G, Λ, and ρ_Λ, respectively. Although the simplest result is obtained for α = μ, the natural choice is α = 0, i.e., fixed G, since a variation of G is subject to the most restrictive observational constraints (for an extensive discussion of models with variable Λ and fixed G, see [34]). In this case we obtain the DE equation of state (37). The observational constraint on the equation of state [3], w = −0.97 ± 0.07, yields −0.3 ≲ μ ≲ 0.12 (38), which in turn constrains the parameter Q². Using (23) we obtain the corresponding bounds (39), where Q² < 0 would imply a phantom cosmology. Another constraint on Q² comes from the primordial density fluctuations [26]. It is usual to assume that the two-point correlation function of the primordial density fluctuations δρ behaves like a power law in Fourier space, ⟨δρ δρ⟩_k ∼ |k|^n, where the exponent n, called the spectral index, need not be constant over the entire range of wave numbers.
According to Harrison and Zel'dovich [35], the primordial density fluctuations should be characterized by a spectral index n = 1. In other words, the observable giving rise to these fluctuations has naive scaling dimension p = 2. This naive scaling dimension reflects the fact that the density fluctuations are related to the metric fluctuations by Einstein's equations, δR ∼ Gδρ, in which the scalar curvature is second order in derivatives of the metric. Hence, the two-point spatial correlations ⟨δR(x)δR(y)⟩ should behave as |x − y|^{−4}, or as |k|^{1} in Fourier space. More generally, as a consequence of conformal invariance, the two-point correlation function of an observable O with dimension Δ is given by ⟨O(x)O(y)⟩ ∼ |x − y|^{−2Δ} at equal times in three-dimensional flat space. In k-space this becomes ⟨O O⟩_k ∼ |k|^{2Δ−3}. Thus, the spectral index of an observable of dimension Δ is defined by n = 2Δ − 3 (43). Hence, the Harrison-Zel'dovich spectral index n = 1 corresponds to the classical dimension Δ = 2 of the primordial density fluctuation. If the conformal fixed-point behavior [13] described in Section 2 dominates at cosmological scales, then the scaling dimension Δ_p of an observable with classical dimension p is given by (12), as required by the conformally invariant fixed point for gravity. As a consequence of (43), with Δ_2 in place of Δ = 2, a deviation from the Harrison-Zel'dovich spectrum is obtained. For large Q² one finds the expression (44) [12,26]. A comparison of n thus calculated with recent observations yields a constraint |Q²| > 80. The favored WMAP value seems to be n = 0.95, which requires a large negative Q² ≈ −80. With this value, we obtain dark energy of the phantom type with w ≈ −1.02, which is consistent with SN Ia and WMAP observations. This result justifies a relaxation of the allowed range for the parameter Q² (a compact numerical cross-check of these numbers is sketched after the Conclusion). The above considerations show that, using a single value of the parameter Q² in our model, it is possible to satisfy the observational constraints for two essentially unrelated phenomena: the present accelerated expansion of the universe (37) and the spectral index of primordial density fluctuations (44). The required negative value of Q² cannot be easily accommodated within the framework of the present theory, but the phenomenological potential of negative Q² is a clear incentive to search for new mechanisms which could bring Q² into the negative realm. The ansatz (27) is not the most general and does not cover all interesting possibilities. For example, it does not include the natural starting assumption that the total energy density of nonrelativistic matter scales canonically with a. With this assumption, Eqs. (19) and (25) fully determine the evolution of Λ and G. However, the canonical scaling of the matter energy density, ρ ∼ a^{−3}, combined with the scalings (20) and (21), implies that the particle number densities n_i no longer vary as a^{−3} and hence the particle numbers of individual species are not conserved. Although the assumption of canonical scaling of the matter density yields a mathematically consistent model, we find a strong disagreement with observations in a wide parametric range. In particular, for negative μ we obtain a maximal allowed value of the redshift of the order of 1, which is clearly not acceptable. Another interesting model, not covered by (27), is obtained if one assumes that all relevant particle species are fermions and that the corresponding particle number densities scale canonically, i.e., n_i ∼ a^{−3}.
In this case one can express the mass in terms of G and a from (20), Λ in terms of G and a from (19), and, by solving (25), obtain an evolution equation for G as a function of a. However, numerical solutions of this equation show that the resulting scaling of G with a is too strong to satisfy observational bounds, even for small values of the parameters μ and ν. Conclusion We have studied some DE and DM aspects of the low energy effective theory of gravity. This theory is a modified general relativity in which the Einstein-Hilbert action is supplemented with the nonlocal terms induced by the trace anomaly of massless fields. These nonlocal terms do not decouple for scales E ≪ M_Pl and therefore become increasingly important for present and future cosmology. The part of the action that stems from the trace anomaly contains all infrared-relevant terms which are not contained in the local action. The testing of the theory against observational astrophysics and cosmology is a long-term project, which includes the use of T^(anom)_μν as a dynamical source for Einstein's equations. However, we believe that the effective theory with running G and Λ, supplemented with the generalized Bianchi identity, may be successfully confronted with cosmological observations. The effective low energy gravity in the conformal sector predicts a cosmological scale dependence of various dimensionless quantities (Eqs. (19)-(21)). The scale dependence of the DM and DE densities, being dimensionful quantities, depends on the choice of a fixed dimensionful parameter. In particular, fixing the Newton constant G yields a cosmological constant scaling with a as Λ ∼ a^μ and a noncanonical scaling of the DM energy density given by (30) with α = 0. The effective DM and DE densities yield a reasonable cosmology if the exponent μ is small and restricted by the constraint (38). This constraint in turn yields a constraint (39) on Q² consistent with the observational bounds on the spectral index of the primordial density fluctuations. In our approach the parameter Q² that appears in the effective action (4) induced by the gravitational anomaly is treated as a free parameter. This parameter could, in principle, be calculated if one had complete information about the conformally invariant sector. Unfortunately, this sector is not yet well known, so a precise theoretical prediction for Q² remains an open problem.
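The numbers quoted in the constraints above can be tied together in a short numerical check. The two closed-form relations used below, w = −1 − μ/3 and n ≈ 1 + 4/Q², are assumptions reverse-engineered from the quoted figures (they reproduce −0.3 ≲ μ ≲ 0.12 for w = −0.97 ± 0.07, Q² ≈ −80 for n = 0.95, and a phantom-side μ for w ≈ −1.02); they are stand-ins for the relations referred to as (23), (37) and (44), not the equations themselves.

```python
def mu_from_w(w):
    """Dark-energy exponent mu from the equation of state, assuming w = -1 - mu/3."""
    return -3.0 * (w + 1.0)

def q2_from_n(n):
    """Q^2 from the spectral index, assuming n ~ 1 + 4/Q^2 for large |Q^2|."""
    return 4.0 / (n - 1.0)

# Equation-of-state constraint w = -0.97 +/- 0.07  ->  bounds on mu
w_lo, w_hi = -0.97 - 0.07, -0.97 + 0.07
print("mu range:", mu_from_w(w_hi), "...", mu_from_w(w_lo))  # ~ -0.30 ... 0.12

# WMAP-preferred spectral index n = 0.95  ->  Q^2
print("Q^2 for n = 0.95:", q2_from_n(0.95))                  # ~ -80

# Phantom-type equation of state quoted in the text
print("mu for w = -1.02:", mu_from_w(-1.02))                 # ~ 0.06
```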
4,561.2
2007-07-26T00:00:00.000
[ "Physics" ]
Fast and Precise Soft-Field Electromagnetic Tomography Systems for Multiphase Flow Imaging : In the process industry, measurement systems are required for process development and optimization, as well as for monitoring and control. The processes often involve multiphase mixtures or flows that can be analyzed using tomography systems, which visualize the spatial material distribution within a certain measurement domain, e.g., a process pipe. In recent years, we studied the applicability of soft-field electromagnetic tomography methods for multiphase flow imaging, focusing on concepts for high-speed data acquisition and image reconstruction. Different non-intrusive electrical impedance and microwave tomography systems were developed at our institute, which are sensitive to the local contrasts of the electrical properties of the materials. These systems offer a very high measurement and image reconstruction rate of up to 1000 frames per second in conjunction with a dynamic range of up to 120 dB. This paper provides an overview of the underlying concepts and recent improvements in terms of sensor design, data acquisition and signal processing. We introduce a generalized description for modeling the electromagnetic behavior of the different sensors based on the finite element method (FEM) and for the reconstruction of the electrical property distribution using the Gauss–Newton method and Newton’s one-step error reconstructor (NOSER) algorithm. Finally, we exemplify the applicability of the systems for different measurement scenarios. They are suitable for the analysis of rapidly-changing inhomogeneous scenarios, where a relatively low spatial resolution is sufficient. Introduction In the process industry, the analysis of multiphase mixtures and flows is an important and challenging task for apparatus and process design, as well as for process monitoring and control. This is the basis for safe, efficient and economical production of, e.g., oil and gas, chemicals or pharmaceuticals. For accurate volume and mass flow measurements, as well as for apparatus and process design, the knowledge of the spatial material distribution of multiphase flows in a certain measurement domain is beneficial or even essential. Tomography systems enable one to reconstruct this distribution from which the flow regime and process parameters can be derived. Several different tomographic methods have been applied for multiphase flow analysis. These include electrical impedance, magnetic induction, microwave, ultrasound, optical, magnetic resonance, X-ray and gamma-ray tomography methods [1,2]. Hard-field electromagnetic methods-e.g., X-ray or gamma-ray tomography-are based on high-frequency electromagnetic waves, which propagate along straight lines through the material in the measurement domain independent of its distribution. In contrast, soft-field methods utilize low-frequency electromagnetic fields or waves, which strongly interact with the material. 
• A non-intrusive sensor that is robust against high pressure, corrosive chemicals and large variations in temperature and that allows for efficient signal coupling, high isolation between the sensor ports, as well as accurate and computationally-efficient modeling of the electromagnetic behavior; • Data acquisition with a high measurement rate in conjunction with a high measurement precision, accuracy and dynamic range; • Accurate and efficient modeling of the electromagnetic behavior of the sensor for a wide range of electrical properties; • A method for fast reconstruction of the material distribution, which is robust against stochastic and deterministic system errors. This paper provides an overview of the underlying concepts and presents recent improvements and results. In Section 2, we introduce a generalized description for modeling the electromagnetic behavior of EIT and MWT sensors based on the finite element method (FEM) and for the reconstruction of the electrical property distribution. To achieve fast reconstruction, we utilize Newton's one-step error reconstructor (NOSER) algorithm, which is derived from the Gauss-Newton method. In Section 3, the sensor design concepts, as well as the data acquisition and signal processing methods are described. The applicability of the developed systems for multiphase mixture and flow analysis is investigated in Section 4. Firstly, the phase distribution is reconstructed for different static dielectric phantoms modeling multiphase mixtures. Secondly, the galvanically-and capacitively-coupled EIT systems are utilized to monitor a mixing process of water and sodium chloride (NaCl) and an oil-gas two-phase flow, respectively. In Section 5, the main results, as well as the capabilities and limitations of the modalities are discussed, and further research topics are described. Imaging Method EIT and MWT are methods for imaging of the electrical property distribution, typically in a cross-sectional plane. Multiple measurement ports-e.g., electrodes, antennas or waveguides-are arranged at the surface of the measurement domain. Based on the measurement of the port quantities, which are voltages V and currents I (EIT) or equivalent wave quantities a and b of incident and reflected waves (MWT), the distribution of the electrical properties can be reconstructed. In the case of linear materials, the port quantities are linked by the admittance matrix Y and the scattering matrix S, respectively, The matrices depend on the geometry of the imaging domain and the measurement ports, as well as on the distribution of the total complex-valued conductivity: which describes the electrical properties of the materials combining the electrical conductivity σ el and permittivity ε. ε 0 and ε r denote the permittivity of a vacuum and the relative permittivity of the material, respectively. In MWT, the electrical properties of the materials are often described by the total relative permittivity, which includes the electrical conductivity: The determination of the matrices Y and S for a specified material distribution is denoted as the forward problem. This can be solved based on accurate modeling of the electromagnetic behavior of the measurement domain (forward model), which is described in Section 2.1. The utilized method to reconstruct the material distribution based on the measured port quantities and the forward model is presented in Section 2.2. 
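As a compact illustration of the two material descriptions just introduced, the snippet below uses one common convention for the total complex conductivity and the equivalent total relative permittivity; the exact sign and normalization conventions of Eqs. (3) and (4) may differ from this sketch, and the material values are only illustrative.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def total_conductivity(sigma_el, eps_r, f):
    """Total complex conductivity sigma_t = sigma_el + j*omega*eps0*eps_r (one common convention)."""
    omega = 2 * np.pi * f
    return sigma_el + 1j * omega * EPS0 * eps_r

def total_rel_permittivity(sigma_el, eps_r, f):
    """Equivalent total relative permittivity eps_t = eps_r - j*sigma_el/(omega*eps0)."""
    omega = 2 * np.pi * f
    return eps_r - 1j * sigma_el / (omega * EPS0)

# Illustrative values for a slightly conductive water-like material at 1 GHz
print(total_conductivity(sigma_el=0.05, eps_r=78, f=1e9))
print(total_rel_permittivity(sigma_el=0.05, eps_r=78, f=1e9))
```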
For the following explanations, we use the EIT-related notation of the electrical properties (total conductivity) and the port quantities (admittance matrix) from which the MWT-related notation can be derived. Modeling the Electromagnetic Behavior Assuming non-magnetic materials, the electromagnetic behavior of the imaging domain can be modeled by the following three-dimensional partial differential equation for the electrical field E: where the vector field J e describes the (externally) excited current density and µ 0 denotes the magnetic permeability of a vacuum. Figure 1 shows a typical measurement setup of the electromagnetic tomography for multiphase flow imaging in pipes. The measurement ports are evenly distributed along the circumference of the pipe. Assuming a uniform material distribution along the z-axis, this setup allows one to approximate the electromagnetic behavior in R 3 by a series of equations in R 2 and to reconstruct the two-dimensional distribution of the electrical properties in the main measurement plane (z = 0). The electric field can be assumed to be symmetric with the main measurement plane with a finite extension H and, thus, be described by a cosine series: Applying Equation (6) to Equation (5) leads to: where the series components of the excited current density J e,n are defined equivalent to E n . Microwave Tomography In the case of MWT, the electromagnetic behavior is usually described in terms of the wave number κ instead of the conductivity. Equation (7) can be rewritten as: where the series components of the wave number are defined as: Electrical Impedance Tomography Due to the low frequency of the electrical fields used in EIT, the electrostatic approximation is valid, and the electric field can be expressed by the scalar potential field Φ: Thus, Equation (5) can be rewritten as: where Φ n are the series components of the potential field. Solving the Forward Problem In order to solve Equations (8) and (12), respectively, we utilized the finite element method (FEM). The main advantages of the FEM over other numerical methods-e.g., finite difference method (FDM) or finite volume method (FVM)-are the ease and quality of approximation of complicated (round) structures. Furthermore, the discretization leads to sparse systems of linear equations, which can be efficiently solved using GPUs. Besides parallel computing using GPUs, there are other methods to accelerate the computation process, which are based on improvements in terms of either the problem formulation or the numerical solution process as described in [12] and the references therein. However, many of them are not applicable or advantageous in the presented case due to the utilized irregular mesh and the relatively small size of the problem. Using the FEM, a 2D model of the measurement domain (sensor) is generated by a discretization using triangular elements with piecewise constant electrical properties. The model includes the measurement domain, as well as the measurement ports and must consider the boundary conditions. Applying the FEM to Equations (8) and (12) leads to a linear system of equations for the discretized electric field and potential, respectively, which are solved iteratively using the biconjugate gradient stabilized (BiCGSTAB) method [13]. Further details are described in [14]. 
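A rough sketch of this final step is given below: a stand-in sparse system (a simple tridiagonal matrix rather than an actual FEM assembly of Equation (8) or (12)) is solved with SciPy's BiCGSTAB routine.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

# Toy surrogate for an assembled FEM system K(sigma_t) * phi = f.
# The tridiagonal matrix only stands in for the sparse, complex-valued system
# produced by the triangular-element discretization.
n = 200
main = 3.0 * np.ones(n, dtype=complex)
off = -1.0 * np.ones(n - 1, dtype=complex)
K = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")
f = np.zeros(n, dtype=complex)
f[0] = 1.0  # unit excitation at one boundary node

phi, info = bicgstab(K, f)            # info == 0 indicates convergence
print(info, np.max(np.abs(K @ phi - f)))  # residual check
```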
Image Reconstruction The reconstruction of the spatial conductivity distribution σ t,rec based on the measured port quantities represents a highly non-linear and ill-posed inverse problem: This problem can be solved using different algorithms, which can be divided into linear and non-linear algorithms. The latter includes direct methods as contrast source inversion [15] and iterative methods as Gauss-Newton [16], conjugate-gradients [17] or Landweber [18]. We utilized the Gauss-Newton method and the derived linear NOSER algorithm [19] due to its efficient implementation and improved stability. An overview of the available algorithms for soft-field tomography is given in [20]. The inverse problem can be expressed in terms of an optimization problem for the conductivity distribution: with the error function: Y meas is the measured admittance matrix and Y cal is calculated for a specified distribution using the forward model. Applying the Gauss-Newton method to Equation (14) leads to an iterative equation for the approximate solution of the conductivity distribution: where the current solution σ t,n is updated after each iteration. The step size ∆σ t is calculated based on the linearization of the error function F at the solution of the previous step: The calculation of the iterative solution is computationally demanding and time-consuming since the forward problem has to be solved for each iteration step. To achieve a sufficiently high image reconstruction rate for flow imaging, linear methods are often used to solve the inverse problem. The NOSER algorithm is a linear direct method, which is derived from the Gauss-Newton method and determines a first-order deviation from a reference distribution σ t,ref : As a second step, the admittance matrix Y cal calculated using the forward model is replaced by Y meas,ref , which represents the measured port quantities in the case of the reference distribution: This leads to a reduction of additive systematic errors, as well as increased reconstruction stability. The algorithm offers a high performance implementation since the forward problem has to be solved only once for the reference distribution. To improve the reconstruction stability, which is often insufficient due to the limited measurement precision and accuracy, a priori information about the material distribution has to be included. By applying the Tikhonov regularization [21] to Equation (14), the solution of the inverse problem depends on an additional constraint: with the penalty function: For the linear differential operator L, we use a Laplacian filter, the application of which leads to a smooth spatial conductivity distribution, i.e., high spatial frequencies are suppressed depending on the regularization factor λ. To achieve a high image reconstruction rate of up to 1000 frames per second, the implementation is optimized for parallel computing using a graphics processing unit (GPU) as described in [14]. Systems In this section, we describe the sensor and data acquisition concepts of the three soft-field tomography systems-galvanically-coupled EIT, capacitively-coupled EIT and MWT-which were developed to overcome the challenges related to real-time multiphase flow imaging. Designing soft-field tomography systems, it has to be taken into account that the reconstruction accuracy and stability depend on the number of independent measurements, as well as on the measurement precision and accuracy. 
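Before turning to the hardware, the linearized one-step reconstruction described in Section 2.2 can be summarized in a short sketch. The Jacobian, regularization operator and data vectors below are random stand-ins, and the dense solve is only meant to show the structure of the regularized update, not the authors' GPU implementation.

```python
import numpy as np

def one_step_update(J, y_meas, y_meas_ref, L, lam):
    """NOSER-style linearized update around a reference conductivity distribution.

    J          : (m, n) Jacobian of the port quantities w.r.t. element conductivities,
                 evaluated at the reference distribution (from the forward model)
    y_meas     : (m,)   measured port data for the current scene
    y_meas_ref : (m,)   measured port data for the reference scene (replaces modeled data)
    L          : (n, n) regularization operator (e.g. a discrete Laplacian filter)
    lam        : Tikhonov regularization factor
    """
    A = J.conj().T @ J + lam * (L.T @ L)
    b = J.conj().T @ (y_meas - y_meas_ref)
    return np.linalg.solve(A, b)   # delta_sigma, added to the reference distribution

# Toy dimensions: 16 port measurements, 64 image elements (illustrative only)
rng = np.random.default_rng(0)
J = rng.standard_normal((16, 64))
L = np.eye(64)   # identity in place of a Laplacian filter
delta = one_step_update(J, rng.standard_normal(16), rng.standard_normal(16), L, lam=1e-2)
print(delta.shape)
```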
On the one hand, an increased number of ports leads to an increased number of possible independent measurements and, thus, a potentially higher spatial resolution and reconstruction stability. On the other hand, it results in an increased complexity of the system and a decreased measurement rate. Furthermore, the measurement precision can decrease due to a reduced sensor sensitivity, which is related to the size of the ports. EIT The EIT is used to determine the spatial distribution of the complex-valued conductivity inside a given measurement domain. Electrodes are placed at discrete positions along the circumference of the imaging domain. The complex-valued conductivity distribution can be calculated from measured voltages and currents at all electrodes. Two main aspects of an EIT system for process visualization are the measurement rate and the dynamic range. To achieve good reconstruction results, a high dynamic range and minimization of the noise level are required, because geometrically small inhomogeneities have a low impact on the measured signals, due to the large wavelengths of the excitation signals in relation to the geometric dimensions of the environment. Additionally the measurement signals at electrodes with a large distance from the excitation electrodes are very small, and thus, the measurements are strongly affected by electronic noise. Brown and Seagar [22] found that the voltages between adjacent electrodes are typically within a dynamic range of 32 dB. Accordingly, a measurement dynamic range greater than 92 dB is required to resolve variations of 0.1% of the electrical conductivity. Furthermore, a large measurement rate is required to assure that no significant changes of the conductivity distribution occur during a single measurement. In common EIT systems, the excitation of different electrodes and the associated voltage or current measurements at all other electrodes are performed sequentially using a time-division multiplexing (TDM) procedure. To increase the measurement rate, the measurement time has to be decreased. Consequently, the dynamic range of these systems is decreased. To realize a high measurement rate in combination with a high dynamic range, we adapted the frequency-division multiplexing (FDM) technique for EIT [23]. Multiple excitation signals with orthogonal frequencies are used at different electrodes at the same time. The superimposed measurement signals can be separated and assigned to the different excitation signals. In this way, the measurement rate can be increased without decreasing the measurement time. Therefore, a data acquisition unit (DAU) is required, which allows for parallel excitation and measurement. We developed a galvanically-and a capacitively-coupled EIT system with a parallel hardware architecture intended for real-time analysis of multiphase mixtures and flows. The sensor design and modeling, as well as the data acquisition and signal processing will be explained for both systems in the following subsections. Sensor and Modeling For the EIT, there are generally two different methods for coupling and decoupling electrical signals to and from the measurement domain: galvanic and capacitive coupling. In the case of the former method, an electrode ring with equidistant electrodes is mounted at the circumference of the domain. 
Each electrode is screwed through a non-conductive wall; thus, the end faces of the electrodes are flush with the wall, and an electrically-conductive connection between the electrodes and the measurement domain is realized. For the capacitive coupling, metallic plates are circularly distributed on the outer surface of a non-conductive pipe. Electric fields with a sufficiently high frequency are coupled through the wall into the measurement domain. Horizontal cross-sections of both coupling variants are shown in Figure 2. Each variant has certain advantages and drawbacks and is suitable for different measurement scenarios. The electrodes of the galvanically-coupled sensor are usually significantly smaller compared to the metallic plates of the capacitively-coupled sensor, and thus, more electrodes can be mounted around the measurement domain. This leads to an increased spatial resolution and reconstruction stability. However, galvanically-coupled systems are only suitable for conductive background materials, and the electrodes are susceptible to contamination or abrasion due to the direct contact with the materials. The capacitive coupling method is more robust, since the metallic plates are separated from the measurement domain, and suitable for non-conductive materials. The developed galvanically-coupled sensor [24] consists of a polymethyl methacrylate (PMMA) pipe, with an inner diameter of 180 mm and a height of 500 mm, and an electrode ring with 36 gold-plated electrodes, which is mounted at the circumference. Eighteen electrodes are used for current excitation and the other 18 electrodes are for voltage measurement. Each electrode has a diameter of 5 mm. The capacitively-coupled sensor consists of a zirconium dioxide ceramic pipe, with an inner diameter of 53 mm and a height of 400 mm, and 16 metallic plates, which are 9 mm wide and 100 mm high. The ceramic pipe has a large relative permittivity of 28, which allows for effective signal coupling through the wall. Additionally, it is resistant to abrasive fluids, almost all acids, high pressures (up to 2000 MPa) and high temperatures (up to 900 • C in air). To minimize stray capacitances, a second layer of metallic plates is mounted on the first layer, separated by an acrylic glass pipe with a wall thickness of 3 mm. These electrodes are used in a driven shield configuration [25]. The electromagnetic behavior of both sensors is described by Equations (6) and (11). The effective height used in these equations is chosen as 300 mm and 100 mm for the galvanically-and capacitivelycoupled sensor, respectively. The number of used series components n was determined based on a comparison of simulation and measurement results. For the galvanically-coupled sensor, it has been shown that for n ≥ 20, the relative root mean square (RMS) error reached a constant minimum [14]. For the capacitively-coupled sensor, n = 1 was determined. Data Acquisition and Signal Processing Both EIT systems comprise parallel excitation, measurement and signal processing units in order to be able to apply the FDM measurement procedure. Thereby, the systems achieve measurement rates with up to 1000 complete measurement cycles per second in combination with a measurement dynamic range of 120 dB. In Figure 3, a simplified block diagram of the developed galvanically-coupled system is shown. 
The system consists of the sensor, nine parallel excitation sources, 18 parallel voltage measurement channels and a digital system control unit, as well as a personal computer (PC) with a GPU for fast image reconstruction. The time-harmonic carrier signals in the frequency range of 200 to 300 kHz are generated by using direct digital synthesizers (DDS). Controlled floating current sources have been realized to ensure constant excitation amplitudes independent of the changing load impedance. The parallel measurement unit is based on 18 synchronously sampled analog-to-digital converters (ADC) with an amplitude resolution of 24 bit and a spurious free dynamic range of 120 dB. Each ADC digitizes the preamplified differential voltage signal between two electrodes. The digital signal processing (DSP) is performed on field programmable gate arrays (FPGA). In contrast to the galvanically-coupled system, the excitation and measurement ports are not separated in the case of the capacitively-coupled system. Here, 16 identical signal processing units are used for all 16 measurement ports. A block diagram of one signal processing unit is shown in Figure 4. Each channel can be used in excitation or receive mode. In excitation mode, time-harmonic carrier signals in the frequency range of 450 to 550 kHz are generated using a DDS. The carrier signal is amplified to an amplitude of 50 V to compensate the attenuation of the pipe wall and thus to maximize the signal strength at the receive channels. In receive mode, the excitation signal is switched off, and the signal path is terminated by the resistance R 1 . The port current and voltage are measured by two ADC. An FPGA controls the DDS and performs the parallel DSP. MWT Microwave tomography is an imaging method reconstructing the distribution of the complex-valued relative permittivity based on measurements of the scattered electromagnetic fields at the boundary of the imaging domain. Although the utilized method determines the distribution-based single-frequency measurement data, a large frequency range of the sensor and the DAU is beneficial due to the following reasons. The selection of the reconstruction frequency can be adapted to the measurement scenario. In the case of low-contrast materials or small changes in the material distribution, the scattering parameters change only slightly compared to the reference scenario. Measurement data at high frequencies should be utilized for reconstruction since the sensitivity of the sensor often increases with increasing frequency. This leads to an improved signal-to-noise ratio assuming a limited, but frequency independent measurement precision. In the case of a large change in the material distribution or high-contrast materials, the reconstruction frequency should be significantly reduced to limit the deviation of scattering parameters and maintain reconstruction stability [26]. Furthermore, the reconstruction results can be improved by using measurement data at multiple frequencies as shown in [26]. The upper frequency limit is determined by the following issues. Firstly, the attenuation of microwaves in conductive materials, i.e., salt water, increases with increasing frequency. Secondly, the complexity of the forward model and the computational effort increase while the achievable accuracy decreases due to multimode wave propagation and a necessary reduction of the mesh element size at high frequencies. 
The lower frequency limit depends on the maximum acceptable size of the measurement ports and, thus, the minimum number of ports. The complexity of the data acquisition unit and system calibration increases with increasing bandwidth. The maximum system bandwidth is limited by the bandwidth of the measurement ports. The sensor design and the derived forward model, as well as the DAU are presented in the following subsections. Sensor and Modeling The application of MWT for multiphase flow imaging requires a special sensor design, which addresses the challenges related to multiphase flow analysis described in Section 1. Since most published MWT sensor concepts [5] are not suitable for measurements in metal or ceramic pipes, we developed a new MWT sensor concept [27]. The sensor consists of a metal pipe, with an inner diameter of 53 mm and a height of 250 mm, and wedge-shaped dielectric windows, which are made out of a technical plastic (PEEK). The microwaves are guided from the excitation port through a rectangular waveguide and a dielectric window into the measurement domain as depicted in Figure 5. The sensor is fed from coaxial lines using rectangular waveguides with a broadband transition [28]. The selection of the frequency range is a trade-off between microwave attenuation, geometrical dimensions of the sensor, the number of sensor ports and the complexity of the data acquisition unit. We realized an eight-port sensor that is usable in the frequency range 0.7 to 5.5 GHz. The width of the dielectric windows in the z-direction is w DW = 152 mm. The electromagnetic field distribution inside the waveguides and the dielectric windows is a superposition of the multiple waveguide modes. Due to the symmetry of the waveguides, only odd transverse electric modes TE 2m+1,0 are excited and propagable, and the corresponding electric fields have a maximum at the main measurement plane. Furthermore, it can be assumed that the fundamental mode (m = 0) is the dominating wave mode inside the windows and the measurement domain. The forward model includes the measurement domain, the dielectric windows and the metal frame of the sensor, as well as the wave ports for excitation and measurement, which are located at the interfaces between the dielectric windows and the rectangular waveguides. The electric fields at these interfaces can be calculated from the measured scattering parameters [27]. The electromagnetic behavior can be calculated using Equations (6), (8) and (9), where n = 2m + 1 = 1 and the height H depends on the position in the measurement plane. Inside the dielectric windows, the height is exactly known and equals the width of the windows (H = w DW ). Inside the measurement domain, the electromagnetic behavior can be accurately modeled using an effective height of H eff = 175 mm. Data Acquisition and Signal Processing The input parameters of the image reconstruction algorithm are derived from the measured scattering parameters. To achieve a sufficiently high measurement rate in combination with a high dynamic range and a high measurement bandwidth at reasonable costs, a parallel-detecting custom-design data acquisition unit (DAU) is required, because commercial vector network analyzers are expensive and bulky. Most reported MWT systems utilize commercial or custom-designed DAUs based on the continuous wave (CW) network analysis method and the heterodyne principle [29]. 
These systems offer a high measurement precision and dynamic range, but the system error reduction of multiport MWT systems is complicated and the achievable measurement rate is low in the case of a high number of required frequency samples. To overcome these drawbacks, we examined the applicability of the frequency-modulated continuous wave (FMCW) network analysis technique [30] for multiport MWT systems [31], which offers robust and computationally inexpensive system error reduction, resulting in an improved imaging accuracy and reconstruction stability. We recently designed an eight-port parallel-detecting FMCW DAU using the heterodyne principle, which allows for a higher flexibility in terms of measurement setup and parameters compared to our previously-developed homodyne prototype system presented in [31]. A block diagram of the MWT system including the DAU and a photograph of the eight-port sensor are shown in Figure 6. Figure 6. Block diagram of the MWT system including the data acquisition unit (DAU) and a photograph of the eight-port sensor. FMCW, frequency-modulated continuous wave. The synthesizer is one of the key components of the system, and its implementation is similar to that presented in [32]. It generates two signals x_I and x_II, whose frequencies are linearly varied from the start to the stop frequency, e.g., 0.7 to 5.5 GHz, and differ by a small offset, e.g., Δf = 200 kHz. The signal x_I is guided through an eight-port switch and a coupler to the current excitation port of the MWT sensor. The incident and reflected waves are coupled towards an eight-port receiver, where they are mixed down with the signals y_II,j into the intermediate frequency (IF) range (f_IF ≈ 200 kHz) and measured simultaneously. Each measurement port includes two measurement channels. To obtain the full scattering matrix, one measurement cycle includes eight steps during which the excitation port is varied. The DSP is performed on four FPGA, which allow for a parallelized computation of the wave quantities from which the scattering parameters are derived. A PC including a GPU is used for the reconstruction of the permittivity distribution. The developed DAU allows for broadband measurements in the entire frequency range of interest. In the case of a data acquisition time of 1 ms for a single excitation, which corresponds to a theoretical measurement rate of 125 cycles per second (neglecting the DSP time), and a bandwidth of 1.5 GHz, the developed DAU offers a relative measurement uncertainty and a dynamic range of approximately 10 −3 and 90 dB, respectively. This significantly exceeds the performance of previously-published FMCW DAUs [33,34]. All imaging examples presented in Section 4 are based on measurement data using a two-port prototype of the heterodyne DAU in conjunction with a 2 × 8 switching matrix. Imaging Results In this section, we examine the applicability of the developed systems for multiphase flow imaging, reconstructing the material distribution in the case of static phantoms (MWT), a mixing process (galvanically-coupled EIT) and a two-phase flow (capacitively-coupled EIT), respectively. For image reconstruction, we utilized the NOSER algorithm resulting in a first-order deviation of the electrical property distribution from a reference scenario. The electrical properties of the reference material are shown in green, whereas positive and negative contrasts are indicated in red and blue, respectively.
All images are scaled to the maximum absolute deviation from the reference value. Static Multiphase Mixture One possible application of the capacitively-coupled EIT and MWT systems is oil-gas-water flow imaging, e.g., for flow regime identification. In a first step, we tested our systems with different static phantoms, which have similar electrical properties. As materials for these phantoms, we utilized air, polypropylene (PP) and tap water (TW) with relative permittivities of ε r,air ≈ 1, ε r,PP ≈ 2.2 and ε r,TW ≈ 78 − j6, respectively. Figure 7 shows the actual material distribution for two static mixtures modeling oil/water-in-gas and gas/water-in-oil scenarios, respectively. The corresponding real part of the reconstructed permittivity distribution is shown in Figure 8. The linearization of the problem leads to a qualitative approximation of the electrical property distribution with a smooth transition from the maximum to the reference permittivity. The utilized regularization method results in a suppression of high spatial frequencies as described in Section 2.2. However, in both scenarios, the positions of the objects are accurately reconstructed, and the materials can be clearly identified due to their electrical properties, which indicates the applicability of the system for imaging of oil-gas-water mixtures. Experiments using the capacitively-coupled EIT system led to similar imaging results. The reconstruction of multiphase flows is described in Section 4.3. Mixing Process In this section, a visualization of a simple mixing process using the galvanically-coupled EIT system is presented. During this experiment, 2 g of sodium chloride were mixed into 13 L of tap water. The sodium chloride was poured into a concentrated spot in the measurement domain filled with tap water. The mixture was stirred in a clockwise direction by a rotating magnetic stirrer. Figure 9 shows a series of four reconstructed conductivity distributions based on measurements with a time offset of 2 s. An increase in conductivity based on the dissolving process of the sodium chloride and the movement of the mixture can be clearly observed. Two-Phase Flow For the evaluation of the capacitively-coupled EIT system, we installed it in our multiphase flow loop, as depicted in Figure 10, and studied different intermittent two-phase flows. As one example, the system was used to analyze the transient behavior of an oil-gas slug flow. A vertical cut (x = 0) of the imaginary part of the reconstructed conductivity distribution as a function of time is presented in Figure 11. The amount of gas bubbles, as well as their size, length and position along the vertical cut can be evaluated based on this measurement series. Discussion and Conclusions In this paper, we presented an overview of three different soft-field electromagnetic tomography systems-a galvanically-and a capacitively-coupled EIT, as well as an MWT system-intended for real-time imaging of multiphase mixtures and flows. The developed systems offer an as of yet unachieved combination of measurement precision, accuracy and dynamic range, as well as a measurement and image reconstruction rate of up to 1000 frames per second. The selection of the appropriate soft-field tomography modality for the analysis of a certain process depends on the requirements of the application in terms of process integration, electrical properties of the materials involved and costs. 
The spatial resolution is similar for each modality, depends on the number of measurement ports and is of the order of one tenth of the pipe diameter. MWT systems can be used for conductive and non-conductive materials, and their sensitivity range can be adapted by the selection of the measurement frequency. Furthermore, FMCW-based systems offer robust system error reduction. However, compared to EIT systems, the complexity of the data acquisition unit is significantly higher, and modeling the electromagnetic behavior is more complex. Galvanically-coupled EIT systems are suitable for applications where direct contact between the measurement ports and the materials is acceptable and the background material has a sufficiently high electrical conductivity. Compared to capacitively-coupled EIT and MWT systems, the ports are significantly smaller, which results in a smaller instrumentation size. Furthermore, the number of ports can be higher, which potentially leads to an increased spatial resolution. In contrast, capacitively-coupled EIT systems are suitable for applications involving conductive and non-conductive materials, as well as chemically-aggressive or abrasive materials. In our opinion, the presented systems are beneficial tools for the analysis and optimization of processes with rapidly-changing inhomogeneous material distributions, where a relatively low spatial resolution-compared to hard-field tomography modalities-is acceptable. In current and future research, we investigate sensor, modeling and data acquisition concepts to increase the flexibility of the systems and thus simplify their integration into the process. This will allow us to examine the suitability for an extended range of applications. Furthermore, we study methods to derive process parameters from the reconstructed distribution of the electrical properties (a minimal post-processing illustration is sketched below). Finally, a combination of two modalities may improve the achievable reconstruction quality, which will be investigated in future work.
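As a minimal illustration of such post-processing, the sketch below estimates a cross-sectional gas fraction from a reconstructed relative-permittivity map by simple thresholding; this is a hypothetical example, not the method used by the authors.

```python
import numpy as np

def gas_fraction(eps_map, eps_liquid, eps_gas, threshold=None):
    """Estimate the cross-sectional gas fraction from a reconstructed permittivity map.

    A threshold midway between the liquid and gas permittivities is used by default;
    this is only an illustrative post-processing step.
    """
    if threshold is None:
        threshold = 0.5 * (eps_liquid + eps_gas)
    valid = ~np.isnan(eps_map)
    return np.count_nonzero(eps_map[valid] < threshold) / np.count_nonzero(valid)

# Toy reconstructed map: oil background (eps_r ~ 2.2) with a square gas region (eps_r ~ 1)
rng = np.random.default_rng(1)
eps = 2.2 + 0.05 * rng.standard_normal((64, 64))
eps[20:40, 20:40] = 1.0
print(gas_fraction(eps, eps_liquid=2.2, eps_gas=1.0))  # ~ 0.10
```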
7,511
2018-05-09T00:00:00.000
[ "Mathematics" ]
Internet of Things: A Beginners’ Précis and Future Scope Background/Objectives: The chief objective of this paper is to enlighten the society, especially youth and beginners, about the Internet of Things paradigm and its applications in various scenarios. Methods/Statistical analysis: The methodology adapted to study the objective and the various incorporating technologies is through extensive literature survey. Several journals, research papers, handbooks and published articles had been referred to during the course of elaboration of this paper. The Internet has been a valuable asset for the study of the emerging terminologies. Findings: The major findings from this study are mentioned Applications of IoT: The Internet of Things finds its applications in various scenarios ranging from individual level, like as house security, utilities, and personalization to global level, like automated machines and systems, traffic monitoring system, smart rescue facilitation systems. Advantages and disadvantages of employment of IoT: The employment of IoT features the advantages like ubiquitous network, connected computing, reduction of efforts, and economy of energy and currency, whereas the disadvantages are privacy issues, compatibility issues, security threats and complexity. Challenges and Future scopes: The developments in IoT paradigm still face challenges like security of data, data management, and network architecture. The existence of these challenges breed the future scopes in the development of IoT based systems, like increased security, efficient data management, and enhanced architectures. The exposure to these findings shall spark new ideas towards overcoming the challenges faced in development and support the development of new IoT based systems. Application/Improvements: The purpose of this paper is also to cater these findings for future references for knowledge addition, and to give insights towards development of smart systems for a sustainable environment. Introduction "Necessity is the mother of invention" -Technology has been advancing 1 ever since the intelligence of the human race has achieved new levels. The evolution of mankind has witnessed a thirst of knowledge -curiosity. The world around us is getting smarter now. Providing more sophisticated information and handy features has been the matter of concern for the modern era. Each of the past three centuries has witnessed a novel technology. Information gathering, processing, and distribution were the key technologies 2 during the 20th century. The advancement in the exchange of information gave rise to the networking technology 3 . Human efforts fic, she calls home asking her mother to prepare a warm cup of coffee. And, by the time she has parked her car in the garage and steps into the home, she finds her coffee ready to be slurped… But, what would happen if she called her mother and found that her mother had went shopping with her mates. Who would prepare her coffee now? She now wishes if there was a way that she could tell the coffee machine at home to prepare her coffee and by the time she would reach home, she would find the coffee ready. Sounds funny, right? But, this is certainly possible! To refer such technology, the term Internet of Things 1,5 , is the call. 
The Internet of Things (abbreviated as IoT) is, actually, a network of physical objects or things, like devices or gadgets, automobiles, and even the big buildings that are embedded with electronic devices, guided and managed by an operating software, sensors to collect information about surroundings, and a network connectivity that enables these objects to collect and exchange data, and perform a set of activities or a task. Conceptualization Since the creation of first electronic telegraph in the 19 th century to the creation of first web page in 1991 until the year 1999, significant discoveries and inventions have been made in the era of computing. It was the year 1999, when the assistant Brand Manager of Procter and Gamble, Kevin Ashton, coined the term, "Internet of Things". The idea behind, emerged from the very technology of RFID that Kevin had been working with, at P&G. An object embedded with an RFID tag enables itself to be sensed and identified by an RFID reader 6 . Thus, the technology can be engaged to identify and group, similar type of objects, and hence this technology became very popular and earned wide acceptations in supply chain industry. Additionally, the fact that the cheaper prices and the range of detection of an object by the use of the RFID technology is broader as compared to the Bar-code and others, verified it as more preferable, and has found its applications in various business sectors like inventory management, logistics and transportation, electronic surveillance systems and many more 7 . However, the technology has its roots back since the World War II, in the form of radar systems. Consequently, it can be said that the trait of the technology -to identify an object, and a close acquaintance with the RFID systems at P&G would have sparked the idea of a network of things, which can be detected, in Kevin's mind. An article published in June, 2009, "That 'Internet of Things' Thing" 8 , clarified Kevin's vision and the idea behind the Internet of Things. It voiced -"The problem is, people have limited time, attention and accuracy-all of which means they are not very good at capturing data about things in the real world. And that's a big deal. We're physical, and so is our environment. Our economy, society and survival aren't based on ideas or information-they're based on things... If we had computers that knew everything there was to know about things-using data they gathered without any help from us-we would be able to track and count everything, and greatly reduce waste, loss and cost. " The inference deduced from Kevin's words reveal that he envisioned an environment with physical things that could collect data implicitly rather than ceaseless human interactions to provide every minute detail; as he stated that 'people have limited time, attention and accuracy'. He aspired of computer systems that could automatically and intelligently interpret data and produce services or perform certain tasks guided by minimum number of instructions that would solve our daily life problems, efficiently and economically, and would lead to the reduction of waste, loss and cost. Components of IoT In the coffee machine scenario illustrated in the beginning section, the user would notify the coffee machine by a message or a voice mail or any alternate means, so that by the time she had reached home, the coffee was ready. 
Now, to implement such a scenario in real life, the backend process must be broken down into a set of distinct events that would occur during the whole process, as followed -1. Generation of request by the user. 2 .Propagation of request through a channel, to the other end. 3. Interpretation of request and generation of response by the machine (or things). 4. Propagation of the generated response back to the user. 5. Notification to the user. As the things are said to exchange different kind of information they must possess the ability to sense and to Vol 9 (47) | December 2016 | www.indjst.org Amit Joshi, Gurpreet Singh and Gagandeep Singh be detected as well as must be connected to a channel or a medium to exchange the information with the user. Also, the things must be intelligent enough to interpret and process the information effectively, in real time, and generate a response or perform the task, with high efficiency 1 . So, after analyzing the set of events that occurred in the scenario, the main components, whose association will constitute to a system based on the Internet of Things, are -1. The 'things' 2. Sensors 3. Transmitter and receiver devices 4. Communication channel The 'things' As stated in the words of the father of IoT, Kevin Ashton, "The problem is, people have limited time, attention and accuracy-all of which means they are not very good at capturing data about things in the real world. " 8 He mentioned that people's time is limited and to capture details and explicitly provide them to the different things to accomplish a task is a significant problem of the real world, because it would consume time and the process would not be economical. In simple words, in the earlier scenario, if Stacy's machine did not have the capability to take instructions and interpret them, then she might not get her cup of coffee ready by the time she would reach home, rather she would have to drive all the way back home and prepare the coffee herself, which is very less desirable by an individual after a hectic day at work. Kevin's words reveal that in order to timely and quickly, economically, and the most prominent, smartly, accomplish our daily tasks, the things around us should be capable to gather information automatically, without our help, and plan the future tasks accordingly 5 . For an instance, suppose if your shower knew what music you like to listen to while showering yourself; if your refrigerator could estimate and prepare a list of the grocery items of requirement, the daily efforts of an individual could be reduced to an extent and the routine could be managed efficiently. The things can be anything -a washing machine or a microwave oven, a vault or a wristwatch, a car or a public transportation bus, or even a house too. Any physical object that can be connected to a network to exchange information can be regarded as the things in Internet of Things. Sensors Like the human eyes perceive the objects in the real world, the ears listen to the sound present in the environment, and the nose smells and make distinction between two different odors, similarly, the things in an IoT infrastructure shall require its sensory parts that enable the system or the things to sense the information from its surroundings, and to be sensed or detected. Sensors come into role here. The sensors are responsible to collect the different type of information 11 , required to perform a set of activities to accomplish the desired task. 
For example, Mark, a health-conscious person, goes for a morning walk every day. When he returns home from the walk, he wants the door to be unlocked and his glass of milk that he kept in the microwave oven before leaving, to be warm. A sensor present on the window will detect whether the person entering the premise is Mark, and thus the system shall immediately unlock the door and inform the microwave to turn on the heating. Such is the role played by a sensor -to detect the analog signals (physical parameters) from the surroundings and convert them to digital signals that can be measured electrically 12 . To sense the information present in different forms in the surroundings, following are the broad categories of sensors - Temperature Sensor As the name suggests, temperature sensor is a component of an electronic circuit that senses the information about the thermal energy of any physical object or a body. Thermal expansion stands out to be the most widely accepted and the simplest phenomenon to measure temperature of a body, such as in a mercury-thermometer 13 . The electrical sensing of temperature employs various methods like resistive sensing, thermoelectric, semi-conductive 14 , acoustic, and piezoelectric detection 15 . A small portion of the object's thermal energy is transmitted to the sensor, which in turn converts this energy into electrical signals and hence sensing of temperature is achieved. Although this data may be imperfect or erroneous due to the pragmatic conditions, it can be made faultless by employing two methods of signal processing: predictive and equilibrium. 13 The thermal energy or heat flows between two bodies by the means of conduction, convection and radiation. Based on these modes, temperature sensing is classified into following categories -1. Contact sensing -requires the physical object or the body being sensed to be in direct physical contact with the sensor, e.g. measuring body temperature by thermometer. 2 .Non-contact sensing -senses the radiant energy emitted by a heat source in the infrared portion of the electromagnetic spectrum. A temperature sensor can be employed in a case where the data regarding the thermal energy of a body or the surroundings is useful to trigger an action. For example, an RTD (Resistive Temperature Detector) can be used in a passive-open mode, in a smart cook-top appliance, to keep track of the temperature of the gravy being cooked. Hence the information collected can be further utilized to notify the end user when the gravy reaches an alarming temperature and is ready to be served. Pressure Sensor The pressure sensor is another electronic component that determines the amount of force exerted per unit area of application. The range of accurately measureable pressure extends from fraction of inch of water (low level pressure) to hundreds of thousands of PSI (extreme level pressure) 16 . Two types of pressure measuring techniques have been thus employed in order to measure the varying range of pressure, which are: direct and indirect pressure measurement. The indirect pressure measurement techniques comprise of pressure measurement using electrical sensors. Many pressure measuring principles are used today in electrical pressure measurement. Most of the methods use measurement of displacement or force to determine pressure. Piezoresistive pressure sensors are most widely accepted for the same 17 . The application of a pressure sensor can be found in a smart security system. 
A pressure sensor can be fitted simply on a door or a door-lock to monitor the pressure exerted on the door or the door-lock. In case of an extreme or intolerable measured value, signifying a break-in, the security service or nearby police department, as well as the guardian, can be notified of the collected information so as to take control of the situation and avoid worse circumstances. Position and Displacement Sensors Position is the location or the coordinates of an object with respect to a defined point of reference, while displacement is defined as the change in position of the object. A position sensor is an electronic component that determines the distance of a physical object from a point of reference, while a displacement sensor involves the detection of movement. Determination of the position and displacement of an object, or the distance between two objects, has many applications in different areas, such as robotics, traffic control, security systems and many others 18 . These sensory devices are classified into many categories (based on their working principles), of which the proximity sensor is currently in wide use in smartphones. The proximity sensor is essentially a threshold sensor that determines a limiting distance to trigger an action 19 . However, other position sensors exist too, varying in their modus operandi and classified broadly into contact and non-contact devices 20 . To realize the applications of a position or displacement sensor, one can envision an automated environment of automobiles where the auto-pilot motion of two automobiles is guided by the output signals generated by the proximity sensors and the other distance and position sensors embedded in them. The distance between two parallel moving automobiles determines each vehicle's next action, such as overtaking the other or slowing down to let the other pass by. Biosensor The biosensor is an integrated sensing device. A bioreceptor and a transducer together constitute a biosensor. The object to be sensed, or the target of a biosensor, is called an analyte. The bioreceptor is a bio-molecule that is responsible for the recognition of the analyte, while the transducer transforms the recognition event into a measurable signal 21 . This signal can be further used to perform various tasks, such as triggering an activity based on a threshold value of the signal. The biosensor has found applications in various areas of health care (e.g. blood sugar level tests, hemoglobin tests), industrial process management and home automation (face recognition or fingerprint based security systems) 22 . An IoT based smart door-lock system can be designed incorporating a biosensor which, when a person enters a house's premises or reaches the door and is about to knock, will recognize whether the individual is an authorized person or not, and in the former case will instruct the system to unlock the door. Acoustic Sensor An acoustic sensor is, fundamentally, a device similar to the pressure sensor in terms of working principle (i.e. both operate on mechanical pressure waves), but it differs in the way it collects information (an audio sensor does not detect a constant or very slowly changing pressure). Audio sensors are chiefly termed 'microphones', nonetheless a microphone is often used for infrasonic and ultrasonic sound too 23 .
Additionally, a microphone is a transducer that transforms the input sound waves into electrical signals. An acoustic sensor has a similar structure as that of a pressure sensor, since an acoustic wave is also a mechanical pressure wave. It is constituted by a moving diaphragm and a displacement transducer that converts the diaphragm's movement into an electrical signal. Microphones are further classified into different categories, as mentioned in 24 . The acoustic sensor plays a vital role in the construct of an IoT based system. One may find an acoustic sensor very useful in a smart voice-commands system, where the system would be solely based upon the recognition of voice commands. A user may ask the smart system to turn ON/OFF the lights or change the ambience of the room, or simply command the AC to stir up the cooling or ask the telly to turn up/down the volume. Now, having discussed some common varieties of the sensors to effectively construct an IoT based system (however, there are many other classifications of sensors like flow sensor, velocity sensor, level sensor) 25,26 , it is necessary to make the choice of the appropriate sensor. The selection of an aptly fitting sensor is based on the following parameters 27 Transmitter and Receiver Devices Once the information is collected by the sensor(s), the information is needed to be exchanged in order for the process to move to the next phase and perform the successor event. This exchange of information is carried out by transmitting and receiving devices. A transmitter is generally an electronic circuit that produces or emits the information into its surroundings, with the help of an antenna, while the receiver is an electronic circuit that converts the incoming digital signals to analog signals, such as sound or light that can be perceived by humans. The transmitter and receiver devices operate on radio frequency, generally, for long distance communication. The alternatives could be infrared frequency or Bluetooth based transmitters and receivers 28,29 . Bluetooth is powering different industries with quintessential low cost and flexible development architecture. It is highly expected for Bluetooth technology to see a growth in wireless medical devices, wearables, home automation and retail 10 . The Bluetooth based transmitting and receiving devices can operate within a range of 10 to 100 meters and the infrared transmitters and receivers are scalable within an even smaller range of 1 mm. Hence the radio frequency transmitters and receivers are more preferable. Communication Channel There can be different types of communication channels for the propagation of information; they are further broadly classified into the following two categories -1. Guided media (wired media): These media require a physical medium to transfer the information from one node/end to the other. Some popular guided media are coaxial cables, twisted pair cables and optical fiber cables. Unguided media (wireless media): These media do not require any physical medium for the propagation of the information from one end to the other; rather they operate on different frequency spectrums. Since, the key feature of the IoT technology is the exchange of data over a large network of things that are generally connected over wireless media. Hence, the communication channel that aptly fits into the scenario is unguided media, and the spectrum of frequency over which the information is propagated is the radio fre-quency spectrum. 
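To make the transmit/receive step over an unguided channel concrete, the short Python sketch below sends one sensor reading from a 'thing' to a receiver over the local wireless network using a UDP socket from the standard library. It is our own hedged example; the address, port and JSON payload format are arbitrary assumptions rather than choices made in the paper.

# Minimal sketch: a 'thing' transmits a sensor reading to a receiver over the
# local network using a UDP socket. Host, port and payload format are assumptions.
import json
import socket

ADDRESS = ("127.0.0.1", 9999)    # placeholder address of the receiving device

# Receiver side: bind to the chosen address before anything is transmitted.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(ADDRESS)

# Transmitter side ('thing'): encode a sensor reading and send it over the channel.
reading = {"device": "cooktop-rtd", "temperature_c": 96.5}
transmitter = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
transmitter.sendto(json.dumps(reading).encode("utf-8"), ADDRESS)

# Back on the receiver: decode the incoming datagram and hand it to the application.
data, _ = receiver.recvfrom(1024)
print("received:", json.loads(data.decode("utf-8")))
receiver.close()
transmitter.close()

In practice such a datagram would traverse the wireless link discussed next, and a production system would add acknowledgement and security layers on top of this bare exchange.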
The medium through which the data is transferred complies with IEEE standard 802.11 b/g/n. Further the exchange of data over the channel takes place with the help of a set of protocols and services enforced thusly 5 . Applications With the robust features of the technology, the Internet of Things exhibits a broad field of applicability. The technology can be employed in the field of environmental monitoring, infrastructure management, medical and healthcare systems, media, or home automation. The technology finds different use-cases in these different fields of application. Following are some fascinating applications of the IoT, categorized into three levels according to the scalability - Smart City The idea of smart city is not only confined to building attractive infrastructures and fancy technology equipped architectures; rather the main concern is to transform a city into more human friendly and eco friendly, more economic than before (in terms of energy consumption, time, costs and wastes), and more of a place with a sustainable environment. Various measures can be taken into account to transform a city into a smart city 5 . IoT finds applicability in many expanses of the real world to ease many lives as well as making them safer and optimum. To throw light at the applicability of IoT in developing a smart city, consider a scenario: Imagine Sky Civic, a developing new smart city. The mayor introduces a new development scheme for the city that is composed of - A Smart Rescue Utility (SMU) 2. Smart Power Readers (SPRs), and 3. Smart Traffic Optimizers (STOs) The Smart Rescue Utility (SMU) is a service that will be notified in case of an emergency, like a road accident or a fire. On the detection of an emergency, the SMU will direct a rescue wagon towards the rescue site, containing all the first-aids to relieve the victims at once. The Smart Power Readers (SPRs) constitute a system that records the monthly power consumption readings from every house of a society and dispenses electronic bills to the users. A Smart Power Reader, on completion of a month, automatically dispatches the records to Sky Civic's electricity control stations where the SPR systems generate the electricity bills which are further dispatched to the users. The Smart Traffic Optimizers (STOs) system is a traffic management system that analyses the traffic at various locations and manages the traffic by scheduling an optimum waiting time at a traffic light, and also guiding the different vehicles with an optimal path. A Smart Traffic Optimizer is a traffic light that analyses the traffic at the point and schedules an optimum waiting time and notifies the adjacent STOs of the scheduled time in order to brew optimization on a larger scale. Also, it guides the driver of a vehicle by showing on the map, a path with less traffic, an optimal path. Smart Home The concept of smart homes is an association of various home automation applications that together constitute a fully automated house. An individual can personalize her/his home and automate it accordingly based on her/ his daily activities 1,5 . Now, consider a scenario where Mark and Stacy move in together into their new 3BHK flat. One fine day, they are watching the Discovery channel. The show showcases modern technologies for transforming your home into a Future home. Mark and Stacy decide to turn their flat into a fully automated home. 
They set up a Smart Refrigerator that notifies them of any groceries that are running out, and a Smart fire alarm that notifies Mark and Stacy, as well as the fire department, in case of a fire emergency at home; the Smart Home Guard detects Mark's face and enables him to enter the house without Stacy even unlocking the door. Mark and Stacy are happy with their new IoT equipped Smart home. Smart Gadgets and Wearables Smart gadgets and wearables are the key tools to personalize one's life 1,9 . A smart wristband, a smart wearable, helps you track your fitness and activity, logs your heart rate, schedules alarms, acts as a pedometer and has many more interesting features. The following scenario will make the idea clearer. After finally moving into their new home, and after turning it into a Smart home, Mark and Stacy now also have their new Smart wristbands. Mark continues his regular morning exercise. He now wears his Smart wristband to track the distance he covers and his fitness level. But one day, on his way back home, Mark meets with an accident. Fortunately, he is wearing his wristband, which detects his lowered heart rate and notifies the nearby hospital to send an ambulance to the site of the accident, as well as informing Stacy about Mark's condition. The ambulance arrives in time and saves Mark from the worst consequences. There are numerous other applications of the IoT technology beyond those that can be demonstrated in this paper 1,5 . One direction for future work involves the Raspberry Pi, a rapidly growing technology in IoT. The Raspberry Pi is a very small computer (about the size of a credit card) and is a very cost-effective technology. One can plug it into a computer monitor or TV and use a standard keyboard and mouse to operate it. It enables people of any age to explore computing and learn high-level programming languages like Python and Scratch. Advantages and Disadvantages The features advocating the advantages of the Internet of Things are as follows: 1. Ubiquitous network: Personal wifi connectivity on every mobile and other portable devices results in a magnificent network around the globe. 2. Connected computing: It enables different devices like mobiles, tablets, televisions, music players and vehicles to keep track of our activities and helps these devices support our daily tasks. 3. Intelligence at the periphery of a network: According to Jim Gray (10 years ago), "Intelligence is moving to the periphery of the network. Each disk and each sensor will be a competent database machine.", a prediction that has proved largely accurate. 4. Reduction of human effort in daily life activities and contemporary life issues, to a great extent. 5. It is an economical technology. Adopting the IoT technology, one can reduce the costs of carrying out different activities and reduce waste to a great extent. 6. Reinforcement of this technology in the health care and medical fields can help save many lives. Although the Internet of Things offers many advantages in the real world, as it is said, "every coin has two faces", and there are several points leading to disadvantages of the technology too: 1. Privacy issues: With all the data being transmitted, the risk of losing privacy increases. For instance, do one's neighbors or colleagues need to know all the information of the individual? 2.
Compatibility issues: Apparently, there are no international standards defined for the tagging and monitoring devices, which may raise compatibility issues. 3. Security threats: Imagine if a hacker alters the medical prescription of an individual, in the database and the medical store ships false medicines to which the user is allergic, the consequences are unimaginable. 4. Complexity: The opportunities of failure rise with the complexity of the system. For instance, if you and your spouse, both receive a notification regarding a need of several grocery items or the expiry of milk. As a result, both may end up buying the listed items and there may be excess stock. Challenges and Future Scope In near future, billions of data spouting devices will be connected to the Internet. The applications of Internet of Things span a wide range of domains, like homes, cities, environment, retails, logistic, industry and health. But with increase in number of devices there are lots of challenges in IoT that require attention: 1. Security: Since a lot of data is generated and transferred through IoT devices, so security becomes the prime concern especially in wireless networks. 2. Data management: Managing data is extremely difficult task, especially in case of IoT where devices are heterogeneous and they have different way of data representation and semantics. Moreover larger IoT devices lead to explosion of full scale data collection. This will give birth to another problem, i.e. scalability problem. With increase in number of devices, indexing methods should be developed to find an item easily. 3. Network architecture: Finding a flexible, scalable and cross platform architecture is another major goal of IoT. Some architecture are proposed for application specific domains like meter reading, industry specific applications and healthcare but these areas require still more attention. Summary To draw the summary of the paper, following points can be taken into consideration -• The Internet of Things, on the very basic level, is a network of smart devices and gadgets (termed things). • The IoT technology enables the end devices or things to collect and exchange information to solve the real life problems. • The term Internet of Things was coined by Kevin Ashton in the year 1999 while working with the RFID technology at Procter and Gamble. • The key components that constitute a fully functional IoT based systems, are -1. The things 2. Sensors 3. Transmitter and receiver devices 4. Communication channel • The Internet of Things technology finds large number of applications in day-to-day life, varying according to the scalability of problems. • There are many advantages of the technology, such as ubiquity of the network, enhanced communications and connectivity, and intelligence of devices at the periphery. • Also, there are facts flashing the demerits of the technology, like privacy and compatibility issues, security threats, and complexity. But if one focuses on the positive aspects of the technology, i.e. focusing on the advantages and overcoming the disadvantages (considering that problems are a way of leaning), great milestones can be achieved.
6,967.8
2016-12-12T00:00:00.000
[ "Computer Science" ]
Quantifying cell cycle regulation by tissue crowding The spatiotemporal coordination and regulation of cell proliferation is fundamental in many aspects of development and tissue maintenance. Cells have the ability to adapt their division rates in response to mechanical constraints, yet we do not fully understand how cell proliferation regulation impacts cell migration phenomena. Here, we present a minimal continuum model of cell migration with cell cycle dynamics, which includes density-dependent effects and hence can account for cell proliferation regulation. By combining minimal mathematical modelling, Bayesian inference, and recent experimental data, we quantify the impact of tissue crowding across different cell cycle stages in epithelial tissue expansion experiments. Our model suggests that cells sense local density and adapt cell cycle progression in response, during G1 and the combined S/G2/M phases, providing an explicit relationship between each cell cycle stage duration and local tissue density, which is consistent with several experimental observations. Finally, we compare our mathematical model predictions to different experiments studying cell cycle regulation and present a quantitative analysis on the impact of density-dependent regulation on cell migration patterns. Our work presents a systematic approach for investigating and analysing cell cycle data, providing mechanistic insights into how individual cells regulate proliferation, based on population-based experimental measurements. Introduction The coordination of cell proliferation across space and time is crucial for the emergence of collective cell migration, which plays a fundamental role in development, including tissue formation and morphogenesis, and also at later stages for tissue regeneration and homeostasis.Cells adapt their division rates in response to mechanical constraints within tissues [1,2], allowing cell populations to self-organise and eventually form and maintain tissues and complex structures.Moreover, disruptions in the control of cell proliferation often result in tumour formation [3][4][5].Although significant experimental efforts have been devoted to understand the mechanical regulation of cell proliferation [6] and its interplay with collective cell migration, existing mathematical models have failed to describe these constraints and how they affect cell cycle progression [7][8][9]. In order to understand cell proliferation regulation, numerous experimental studies have explored how spatial and mechanical constraints within tissues affect different stages of the cell cycle.The cell cycle consists of four main stages, namely: the G1 phase, where cells grow and prepare for DNA replication; the S phase, during which DNA synthesis occurs; the G2 phase, characterised by further cell growth and preparation for mitosis; and finally, the M phase, where cell division takes place.Cells can also exit the cell cycle and enter G0, where they become quiescent.The experimental visualisation of cell cycle stages can be achieved via the widely used FUCCI cell-cycle marker [10], which consists of red and green fluorescent proteins that are fused to proteins Cdt1 and Geminin, respectively.Cdt1 exhibits elevated levels during the G0/G1 phase and decreased levels throughout the remaining cell cycle stages, whereas Geminin shows high expression during the S, G2, and M phases; allowing thus to distinguish between these different stages -see Fig. 
1. Several extensions of the FUCCI system exist now [11]; for instance FUCCI4 allows for the simultaneous visualisation of the G1, S, G2, and M phases [12]. Experimental studies of cell migration are often performed in epithelia due to their strong cell-cell adhesion, which gives rise to collective and cohesive motion. Moreover, they play a fundamental role in multicellular organisms as they serve as protective layers for various body surfaces and organs. Epithelial cell proliferation is regulated by mechanical forces, which can accelerate, delay, arrest, or re-activate the cell cycle. In particular, extensive research has focused on the G1-S boundary, revealing that intercellular tension can favour this transition [13], while tissue pressure can halt progression based on crowding [2]. More generally, the extracellular regulation of switches from G0 and G1, and within substages of G1, has been well known for many years [14]. However, and contrary to initial assumptions, cells also have the ability to regulate progression through stages of the cell cycle following the G1-S transition in response to external cues. These external signals might involve not only mechanical forces [15], but also nutrients and growth factors [16,17]. In epithelia, this question was explored recently by Donker et al. [18], revealing a mechanical checkpoint in G2 which controls cell division. In particular, this checkpoint allows cells to regulate progression through G2, via sensing of local density, explaining why dense regions in epithelia contain groups of cells that are temporarily halted in G2. Experimental studies employing FUCCI and variations of it have thus successfully linked mechanical constraints to cell cycle progression. These studies have employed qualitative analysis, direct measurements of cell cycle stage durations [18], or metrics associated with cell cycle progression, such as cell area [2], and Geminin/Cdt1 or EdU signals [20,21]. However, these approaches omit a quantitative comparison between model and data, hence limiting the depth of mechanistic insights that can be derived.
Here, we present a quantitative investigation into the mechanical regulation of cell cycle progression by sensing of local tissue density. First, we construct a mathematical model of cell cycle dynamics that accurately captures the impact of tissue crowding on cell cycle progression. By combining minimal mathematical modelling, Bayesian inference, and recent experimental data [19], we provide further evidence, consistent with previous experimental studies [2,18], that density-dependent effects operate throughout the cell cycle and together serve as a regulating mechanism for the growth of epithelial tissues. Our work thus constitutes a systematic approach towards the quantification of density-dependent effects regulating cell cycle progression. Moreover, the obtained parameter estimates reveal an explicit relation between the duration of different cell cycle stages and tissue density, which is consistent with the experimental measurements of Donker et al. [18]. Methods Mathematical models of cell cycle dynamics. We build on the model proposed by Vittadello et al. [7] to describe two cell populations, ρ1(x, t) and ρ2(x, t), in different stages of the cell cycle. We represent by ρ1 the density of cells that are in G0/G1, while ρ2 gives the density of cells in the S/G2/M phases of the cell cycle - see Fig. 1. In the model, cell motility is described via linear diffusion, with a diffusion constant D > 0 for both cell populations [22]. In order to effectively capture density-dependent effects controlling cell cycle progression, we assume that the transitions between different cell cycle stages are regulated by two crowding functions, f(ρ) and g(ρ), which depend on the total cell density ρ = ρ1 + ρ2. In particular, the transition rate from G1 to S is given by k1 f(ρ), while the division rate (from S/G2/M to G1) is given by k2 g(ρ), where k1, k2 > 0 are intrinsic rates of cell cycle progression. With this, the model reads ∂ρ1/∂t = D ∆ρ1 − k1 f(ρ) ρ1 + 2 k2 g(ρ) ρ2 and ∂ρ2/∂t = D ∆ρ2 + k1 f(ρ) ρ1 − k2 g(ρ) ρ2 (Eqs. (1)), where the factor of two in the equation for ρ1 represents cell division into two daughter cells, and ∆ = Σ_{i=1}^{d} ∂²/∂x_i² is the Laplacian operator in dimension d. These equations are solved first in polar coordinates (assuming radial symmetry in two spatial dimensions, d = 2) to describe epithelial tissue expansion experiments, and then in one spatial dimension (d = 1) to study travelling wave behaviour, and the impact of tissue crowding on cell migration phenomena. In order to accurately capture density-dependent effects regulating cell cycle progression, we assume that f and g are non-increasing functions of the total density ρ. Again, this is motivated by the experimental observations of Streichan et al. [2] and Donker et al. [18]. Furthermore, we assume f(0) = g(0) = 1, so that k1 and k2 represent density-independent transition rates. Note that setting f = g ≡ 1 gives rise to an exponential growth model (i.e. no dependence on density). On the other hand, choosing f ≡ 1 and g(ρ) = (1 − ρ/K)+ we recover the Vittadello et al. model [7].
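As an illustrative (and strictly unofficial) companion to Eqs. (1), the Python sketch below integrates the two-population model in one spatial dimension by the method of lines, with no-flux boundaries, the linear crowding functions introduced in the next paragraph as Eqs. (2), the posterior modes quoted in the Figure 2 caption, and the low-density initial condition used for Fig. 3. None of this code comes from the authors, and the discretisation choices (grid size, solver) are our own assumptions.

# Illustrative method-of-lines solver for the two-population model, Eqs. (1)-(2).
# Parameter values are the posterior modes reported in the Figure 2 caption.
import numpy as np
from scipy.integrate import solve_ivp

D, k1, k2 = 1300.0, 0.612, 0.457          # µm²/h, 1/h, 1/h
K1, K2 = 4965.0, 5435.0                   # cells/mm²
L, n = 3000.0, 300                        # domain length (µm) and grid points
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

f = lambda rho: np.clip(1.0 - rho / K1, 0.0, None)   # crowding at the G1-S transition
g = lambda rho: np.clip(1.0 - rho / K2, 0.0, None)   # crowding at division

def laplacian(u):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2.0 * (u[1] - u[0]) / dx**2              # no-flux boundary
    lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    return lap

def rhs(t, y):
    r1, r2 = y[:n], y[n:]
    rho = r1 + r2
    dr1 = D * laplacian(r1) - k1 * f(rho) * r1 + 2.0 * k2 * g(rho) * r2
    dr2 = D * laplacian(r2) + k1 * f(rho) * r1 - k2 * g(rho) * r2
    return np.concatenate([dr1, dr2])

# Initial condition: a confluent strip of cells split evenly between the two stages,
# matching the set-up quoted in the Fig. 3 caption.
r1_0 = np.where(x < 850.0, 500.0, 0.0)
r2_0 = r1_0.copy()
sol = solve_ivp(rhs, (0.0, 46.0), np.concatenate([r1_0, r2_0]), t_eval=[0.0, 20.0, 46.0])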
Here, we assume that f(ρ) and g(ρ) decrease linearly with the total cell density, so that f(ρ) = (1 − ρ/K1)+ and g(ρ) = (1 − ρ/K2)+ (Eqs. (2)), where K1, K2 > 0 are constants controlling the duration of G1 and the S/G2/M phases, respectively, and (z)+ = max(z, 0). The specific form of these crowding functions is chosen here for simplicity, although other functions sharing the same properties show similar qualitative behaviour. We follow a Bayesian approach [8,23-25] to calibrate the model given in Eqs. (1). In particular, given experimental measurements of the cell densities ρ1 and ρ2 at positions xi and times tj (denoted collectively ρD), and a vector of model parameters θ = (D, k1, k2, K1, K2), we estimate the posterior probability distribution p(θ|ρD), which gives the probability density for the model parameters taking specific values. The posterior distribution, thus, can be used to quantify the uncertainty associated with specific parameter values, given the experimental observations. We refer the reader to the Supplementary Information for more details on Bayesian inference. Results Tissue expansion experiments. We compare our model predictions to the experiments performed by Heinrich et al. [19] studying the expansion and growth dynamics of a single circular epithelial tissue - see Fig. 1B. In these experiments, MDCK cells expressing the FUCCI markers are cultured in a silicone stencil for 18 hours and, after stencil removal, the cell population is allowed to freely expand for 46 hours. Given that the average cell cycle duration for MDCK cells is around 16 hours, this enables each cell to potentially undergo 2-3 cell divisions during the experiment. Local densities are then quantified by segmenting the fluorescence images in ImageJ and counting the number of nucleus centroids - Fig. 1C. Note that post-mitotic cells do not fluoresce and appear dark, which makes the FUCCI system unreliable for cell counting. To quantify the density of post-mitotic cells, Heinrich et al. used a convolutional neural network to identify nuclei from phase contrast images [26] - see [19] for more details. Moreover, and in line with previous work [24], the model takes as initial condition the quantified density profile ten hours after stencil removal, so that the impact of the stencil on the dynamics is reduced. Note that after this time, cell densities near the tissue centre are relatively high (∼ 3500 cells/mm², which corresponds to around 50-70% of the maximum saturation density for MDCK [19,24]) and a fraction of cells in this region are likely to be found in a quiescent state due to contact inhibition of locomotion and proliferation [27]. The experiments by Heinrich et al. [19] reveal a higher density of cells in G0/G1 at the centre of the tissue, where the total cell density is also higher - see Fig. 1D-E.
The tissue edge, in contrast, is characterised by a larger number of cells which are preparing to divide (green) or are directly post-mitotic (gray). This agrees with previous observations of epithelial cells, which are known to control progression from G1 to S in response to spatial constraints [2]. Note, however, that the density of cells in S/G2/M in the tissue centre is low but non-zero, even at later times in the experiment - see Fig. 1D-F - as observed also by Donker et al. [18]. For the sake of simplicity, here we consider post-mitotic cells (gray in Fig. 1) and cells in G0/G1 as one single cell population. Quantifying post-mitotic cell density is crucial in order to estimate both K1 and K2 in Eqs. (2), given that these parameters are measures of contact inhibition of proliferation, typically associated with regions of higher cell density [28]. To calibrate the model, we fit to the estimated cell density obtained by averaging eleven experimental realisations. We show the univariate marginal posterior distributions corresponding to the model parameters in Fig. 2A, confirming that all model parameters are practically identifiable. In particular, all marginal posteriors show well-defined and unimodal distributions, with a relatively narrow variance. For more details on model calibration we refer to the Supplementary Information (Supplementary Fig. 1). The posterior distributions in Fig. 2A are not only useful to inform further model predictions, but also give insights into the fundamental mechanisms underlying cell proliferation. In particular, given the intrinsic transition rate from G1 to S, k1, and the constant K1 in Eqs. (2), we can estimate the average duration of the combined G1/post-M phase, for a given fixed density ρ, as 1/(k1 f(ρ)) = 1/(k1 (1 − ρ/K1)+). Analogously, the estimated average duration of the S/G2/M phases is given by 1/(k2 g(ρ)) = 1/(k2 (1 − ρ/K2)+). Note, however, that these are only estimates of the timescales associated with different cell cycle stages. Put together, these estimates predict, for a range of densities between 4000-4500 cells/mm², a population doubling time of 14-20 hours. In Fig. 2B we plot these timescales as a function of the density ρ, observing how the duration of the different cell cycle stages increases with density. These results confirm again, in line with previous experimental measurements [2,18], that cell cycle dynamics are tightly regulated by density-dependent effects. In particular, these estimates are consistent with the experimental measurements of Donker et al. [18], taking into account that the initial cell densities in our datasets are around ρ ∼ 3500 cells/mm². At very low densities, however, our estimates predict a relatively short cell cycle duration. This suggests that the shape of the crowding functions f and g might be closer to a constant function in this regime. In Fig.
2C we show numerical solutions of the model (Eqs.( 1) and ( 2)), taking the posterior modes as parameter values.These confirm that the model can describe cell cycle dynamics inside expanding epithelial tissues.Notably, the model captures the tissue expansion speed, as well as the S/G2/M density peak near the edge of the tissue, which results from density-dependent effects regulating the cell cycle.We also note that this type of density profile is possible in the model when crowding-dependent effects are stronger in the early stages of the cell cycle (G1/post M) and weaker in the latter ones (S/G2/M).In terms of Eqs.(2) this requires having K 1 < K 2 , which is correctly identified from the data. We observe that the model overestimates the experimental density for early times of the experiment and, as a result of the model fit, underestimates it at later times.This is likely due to the transient behaviour that cells exhibit immediately after stencil removal [29,30], which could have an impact on cell behaviour even after the first ten hours of expansion, as suggested also in previous studies [24].However, we emphasise that tissue edge motion can be well described by the model. A similar behaviour is reported when the model is compared to a second set of experiments performed by Heinrich et al. [19].In this case, we use the obtained parameter estimates to describe the expansion of initially smaller epithelial monolayers (initial diameter ∼ 1.7 mm).We highlight that the mathematical model can capture the expansion dynamics near the tissue edge as well as the expansion speed (see Supplementary Fig. 3), even though model parameters were inferred from the large tissue expansions. Tissue colonisation experiments.Our model, together with the experiments of Heinrich et al. [19], reveals the intrinsic connection between tissue crowding and cell cycle progression, showcasing how this interplay can give rise to spatiotemporal patterns of cell proliferation in growing tissues.Next, we show how the model can be used to study and describe similar patterns observed in several other experimental studies using FUCCI and variants of it. Streichan et al. [2] show, using a tissue barrier assay, how the cell cycle can be reactivated by allowing cells to migrate and colonise free space -see top row in Fig. 3A.These experiments are initialised by growing MDCK-2 FUCCI cells in the G0/G1 phase within a removable barrier.After barrier removal, the tissue quickly colonises the available space, and cells behind the barrier, which were initially in G0/G1, reactivate their cycle by entering S phase.On the other hand, cells located further behind the barrier remain at high density and do not progress through the cell cycle. By solving numerically Eqs. ( 1) on a one-dimensional domain -see bottom row in Fig. 3A -we immediately observe how a model accounting for density-dependent regulation predicts similar behaviour to that observed experimentally 1 .In particular, and as inferred from the experimental data of Heinrich et al. 
[19], the calibrated model predicts that crowding-dependent effects have a greater impact at the G1-S transition, compared to the S/G2/M phases. In terms of the model and the choice of crowding functions (Eqs. (2)), this once again requires K1 < K2. Cell cycle regulation and cell migration. Given that assuming K1 < K2 seems necessary in order to obtain biologically realistic model predictions, what role does tissue crowding play in shaping cell migration patterns? We explore this question by varying the values of K1 and K2 in Eqs. (2) - Fig. 3B (caption: numerical solutions of Eqs. (1) for different values of K1 and K2 at time points t = 50, 100, 150, 200 h; units of K1 and K2 are cells/mm²; initial conditions ρ1(x, 0) = ρ2(x, 0) = 500 cells/mm² for x < 850 µm and ρ1(x, 0) = ρ2(x, 0) = 0 cells/mm² otherwise; in all cases, all parameters except K1 and K2 are fixed at the posterior modes). First, we observe that the density of S/G2/M cells peaks near the tissue edge when K2 > K1 > 0 and remains low in the tissue bulk as long as K2 ≫ K1. However, for K2 ∼ K1, the height of this peak decreases and the fraction of S/G2/M cells in the tissue bulk increases. On the other hand, when we assume a higher influence of density during S/G2/M relative to G1/post-M (K2 < K1), we observe that the tissue centre shows a higher fraction of cells in S/G2/M, in contrast with previously reported observations of contact inhibition of proliferation [27]. The numerical solutions in Fig. 3B suggest that low-density initial conditions lead to travelling wave solutions in one spatial dimension: ρ1(x − ct), ρ2(x − ct), with c > 0 being the wave speed and x denoting the spatial coordinate. Standard arguments - see Supplementary Information - predict the existence of a minimum travelling wave speed in terms of only three model parameters, c min = √(2D[√((k1 + k2)² + 4k1k2) − (k1 + k2)]) (Eq. (3)). Interestingly, this suggests that the invasion speed is independent of cell cycle regulation, and only depends on cell motility (D) and the intrinsic, density-independent growth rates (k1 and k2). However, we highlight that, as shown in the figure, crowding constraints play an important role in shaping collective migration patterns. The expression for the minimum travelling wave speed facilitates a comparison between the two-stage model proposed here (Eqs. (1)) and conventional single-population models of cell migration of the form ∂ρ/∂t = D∆ρ + rρF(ρ), where F is a non-increasing crowding function with F(0) = 1. The intrinsic growth rate of the population, r, is related to the intrinsic rates of cell cycle progression, k1 and k2, and when 4r/(k1 + k2) is small, Eq. (3) can be approximated by an expression that agrees with the prediction of the well-known Fisher-Kolmogorov-Petrovsky-Piskunov (FKPP) equation (F(ρ) = 1 − ρ/K for a maximum cellular density K > 0) in one spatial dimension. Using the estimated parameter values we obtain 4r/(k1 + k2) ∼ 0.98, and in this case Eq. (3) predicts a minimum travelling wave speed of c min ∼ 33 µm/h, while the FKPP approximation yields c min ∼ 26 µm/h; both within the range of values measured by Heinrich et al. [19]. A better comparison with the two-population model can be obtained by setting rF(ρ) = λ(ρ), where λ(ρ) is the dominant eigenvalue of the growth matrix of Eqs. (1); in this case the resulting wave speed again agrees with the prediction from Eq. (3). More generally, we note that c min does not depend on the choice of crowding functions f and g; however, crowding constraints have an impact on the observed migration patterns (Fig. 3B).
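The quoted value of c min is easy to check numerically. The sketch below (our own illustration) builds the low-density growth matrix of Eqs. (1), takes its dominant eigenvalue λ, and evaluates both the linear-spreading estimate 2√(Dλ) and the closed form in Eq. (3) at the posterior modes; under our reading of the linearisation argument in the Supplementary Information the two coincide.

# Check of the minimum invasion speed using the posterior modes from the Figure 2 caption.
import numpy as np

D, k1, k2 = 1300.0, 0.612, 0.457          # µm²/h, 1/h, 1/h

# Dominant eigenvalue of the low-density growth matrix of Eqs. (1):
# d/dt (ρ1, ρ2) = A (ρ1, ρ2), with A = [[-k1, 2*k2], [k1, -k2]].
A = np.array([[-k1, 2.0 * k2], [k1, -k2]])
lam = np.max(np.linalg.eigvals(A).real)

c_min_eig = 2.0 * np.sqrt(D * lam)                     # linear-spreading speed 2*sqrt(D*λ)
c_min_closed = np.sqrt(2.0 * D * (np.sqrt((k1 + k2) ** 2 + 4.0 * k1 * k2) - (k1 + k2)))

print(f"dominant eigenvalue λ ≈ {lam:.3f} 1/h")
print(f"c_min ≈ {c_min_eig:.1f} µm/h (eigenvalue), {c_min_closed:.1f} µm/h (Eq. (3))")

Both evaluate to roughly 33-34 µm/h for the estimated parameters, matching the value reported above.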
To understand how growth and cell cycle regulation lead to the patterns observed experimentally, we investigate travelling wave solutions in a simplified version of our model, utilising the same parameters as in Eqs. (1) and (2) (see Supplementary Information). In particular, we set f(ρ) = H(K1 − ρ) and g(ρ) = H(K2 − ρ), where H(•) denotes the Heaviside function. This reduced model does not accurately approximate the model presented in Eqs. (1) and (2), but nonetheless it captures the same qualitative behaviour, and hence we expect that the relevant phenomena show similar dependence with respect to model parameters (see Supplementary Fig. 4). The analysis of travelling wave solutions for this simpler model suggests that the density of S/G2/M cells in the tissue bulk, ρ2^bulk, only depends on the ratio of cell cycle progression rates, κ = k1/k2, and on the ratio of densities associated with crowding constraints, K1/K2 (Fig. 4). In particular, we obtain an explicit expression for ρ2^bulk/K2 in terms of α(κ), where α(κ) = 2/(√(κ² + 6κ + 1) − κ − 1). A similar dependence with respect to the model parameters is observed numerically for the model given by Eqs. (1) and (2) (Supplementary Fig. 4). For our estimated parameters, the expression above predicts ρ2^bulk/K2 ∼ 0.3, which is consistent with experimental observations. We also highlight that, as long as K1 < K2, and k1 and k2 are of a similar order of magnitude, this expression predicts that the number of cells in S/G2/M in the tissue bulk will be small in comparison to the number of cells in G1/post-M (Fig. 4B). Interestingly, the travelling wave analysis also reveals that, when ρ2^bulk > 0, the difference in S/G2/M cell density between the tissue bulk, ρ2^bulk, and the tissue edge, ρ2^edge, depends only on the difference, K2 − K1, of the densities associated with crowding constraints at the G1-S and G2-M boundaries (Fig. 4C). For our estimated parameters, we obtain ρ2^edge − ρ2^bulk ∼ 500 cells/mm², again consistent with the experimental observations. These analytical expressions confirm the impact of density-dependent effects on cell migration and suggest that differences in the regulation of cell cycle stages contribute to the emergence of cell proliferation patterns. Density-dependent effects and experimental design. We have demonstrated that our model can be calibrated to experimental data collected by Heinrich et al. [19] to provide confident estimates of all parameters and, from there, used to extract and quantify crowding constraints regulating the cell cycle. An obvious question to ask is whether the model parameters could also be confidently estimated from other datasets, in particular where the cell density remains much lower and the impact of tissue crowding is reduced. To explore this question, we attempt to estimate the model parameters (including K1 and K2) using data from a low-density scratch assay with 1205Lu melanoma cells - see Fig. 5.
We highlight that melanoma cells are highly metastatic and often display uncontrolled and invasive migration, in contrast to the highly collective and regulated movement exhibited by epithelial cells. In this experiment, tissues are seeded at an initial density of ∼ 400 cells/mm² (5% of the theoretical maximum packing density [7]), and data is collected every 16 hours, over two full days, allowing cells to potentially undergo 1-2 cell cycles. The posterior distributions obtained for the different model parameters reveal estimates for D, k1 and k2 that are consistent with previous studies [8]. However, the low experimental densities do not allow for the quantification of density-dependent effects: the parameters K1 and K2 cannot be estimated with any degree of confidence (see Supplementary Fig. 5). This non-identifiability of K1 and K2 suggests the use of a simpler model, which assumes that cell cycle progression is independent of density-dependent effects (f(ρ) = g(ρ) = 1) and hence is only valid in the low-density regime. Indeed, when calibrated to data from the low-density scratch assay, it provides accurate parameter estimates (Supplementary Fig. 6) and an excellent agreement with the experimental data - see Fig. 5 (caption: Absence of density-dependent effects in a low-density scratch-assay experiment (1205Lu melanoma cells). In this case, an exponential growth model can reproduce the experimental data. Density is normalised using the theoretical maximum density corresponding to hexagonal close packing of cells [7]. Top row adapted from [8], with scale bars corresponding to 200 µm.). This result clearly illustrates both the key role that mathematical modelling can play in the experimental design process, and the importance of considering parameter identifiability in the process of model construction. Discussion In this work, we have presented a new mathematical model of cell migration with cell cycle dynamics which captures and quantifies cell cycle regulation by sensing levels of tissue crowding. In line with previous experimental studies, by combining minimal modelling and Bayesian inference, we confirm that cell cycle progression is monitored via crowding constraints [2,18], and present a systematic approach towards the quantification of interactions regulating cell proliferation. Our model is capable of quantifying cell cycle data from experiments using the FUCCI system, and enables the extraction of mechanistic insights into how individual cells regulate proliferation based on population-level measures.
The model presented here offers several applications to further our understanding of cell-cell interactions in cell proliferation.In particular, our model presents a systematic way to quantify the impact of drugs and gene knockouts/knockdowns interfering with cell proliferation.By using parameter estimation techniques, applied to different experimental datasets, we can gain insights into the regulatory roles of specific genes in the cell cycle.Another possible application concerns the study of cell migration in biomaterials incorporating cadherin proteins, which have recently been shown to slow down cell cycle dynamics [20].Furthermore, generalisations of the FUCCI system could allow for a finer representation of the different cell cycle stages -for instance, FUCCI4 [12] allows for the simultaneous visualisation of the four stages of the cell cycle.In line with these methodologies, extensions of our model (Eqs.( 1)) to multi-stage cell populations are straightforward, and could enable a more exhaustive explanation of the role of spatial constraints across all four cell cycle stages [18]. In the case of Heinrich et al.'s experiments [19], the excellent imaging quality allowed us to perform an accurate quantification of the cellular density profiles.This, in turn, facilitated model development and the subsequent inference of model parameters from the data, with the estimated parameters showing a low uncertainty.While the parameter identifiability of such mathematical models can be evaluated a priori under the assumption of infinite ideal data [31,32], biologically realistic datasets are finite, and often contain a significant level of noise, which can, in certain instances, constrain the ability to confidently estimate model parameters.More generally, and as we have illustrated, practical constraints in the experimental data often relate to the level of model complexity which can be inferred from experiments and the confidence in model parameter estimates. Continuum models are a widely adopted approach for describing cell migration.However, these models come with limitations: they tend to neglect local structure, especially in situations involving multiple cell populations.Such local structure can be observed in Fig. 1; (C) and (D) show some degree of local correlation in the cell phases, however this phenomenon is lost when averaging radially to obtain the density profiles in (E).Agent-based models [21,33,34] can help mitigate some of these issues, by providing more understanding of the generation and maintenance of spatial structure, but at the cost of increased computational times for simulation and inference, additional model parameters, and limited analytical tractability.We emphasise, however, that cell cycle dynamics appear to be globally desynchronised, as observed in previous studies [35], and so our differential equation-based model remains appropriate for this study, where the data is generated by averaging over a number of experimental replicates. 
The model presented here is minimal in the sense that it assumes that cell movement is random, and it ignores basic cell-cell interactions which are typical of epithelial cell migration such as cell-cell adhesion.While local cell density is likely to have an impact on cell motility [19], previous work shows that for individual expanding epithelial tissues, the linear diffusion model provides a good approximation [24].Note, however, that it is important to account for population pressure and its impact on cell movement when considering tissue-tissue interactions [36].Additional research is needed to determine whether more complicated models [37,38], incorporating cell-cell adhesion and other basic interactions offer deeper mechanistic understanding.Moreover, the model given in Eqs. ( 1) assumes that at low densities, the duration of each of the cell cycle stages follows an exponential distribution.While this assumption contradicts experimental observations [39,40] and can be mitigated by representing the cell cycle as a multi-stage process [9,41], such models break the cell cycle into a very large number of stages, limiting the potential for calibration to experimental data.Additional investigation is required to understand the extent to which more complicated models can provide further insights into how cells coordinate proliferation and migration to give rise to complex collective behaviours.For example, in the context of the cell cycle, an option is to explicitly incorporate cell cycle stage via the use of an age-structured model [42] that includes density-dependent regulation.Our results indicate that adopting a quantitative approach [43], that carefully examines quantitative data through the lens of mathematical modelling and Bayesian inference, can help provide answers to this question. S1 Bayesian parameter estimation All experimental datasets [2,8] consist of direct measurements of the density of cells in the G1/post-M, and S/G2/M phases of the cell cycle.We denote these measurements by {ρ D 1 (x i , t j ), ρ D 2 (x i , t j )} i,j .The next step in order to estimate the different model parameters is to assume a so-called error model, which relates experimental measurements with the model predictions given by the solutions of the model: ρ 1 (x, t), ρ 2 (x, t).For simplicity, here we assume that the residuals are independent and normally distributed where σ 1 and σ 2 are parameters to be estimated from the data.The white noise assumption has the advantage that simple likelihood-based methods can be used for inference.In particular, the log-likelihood of observing the data, given specific model parameters θ, can be written as While we assume a simplistic noise model to perform the parameter inference, model misspecification and the temporal resolution of the measured data are likely to introduce correlations between residuals [3].In particular, some degree of correlation might be expected given the model parameters are very well-determined with a relatively small variance -see Fig. 2 in the main text.More recently, a binomial measurement error model has been suggested in order to mitigate some of the inconsistencies of the white noise assumption [9].Other commonly used error models assume different forms of multiplicative noise, which preserve the positivity of the data [5,6].A more comprehensive study of the error model is left as a subject for future investigation. 
Maximising the log-likelihood function would give a set of parameters θ* that we could use to generate further model predictions. However, this approach does not give any information on the associated uncertainty, which here is of particular interest given that the parameters are estimated from noisy experimental data. To explore parameter identifiability, we follow a Bayesian approach, in which the uncertainty associated with the model parameters θ is quantified in a posterior distribution P(θ | ρD). This posterior distribution can be calculated from Bayes' theorem, P(θ | ρD) ∝ P(ρD | θ) π(θ), where P(ρD | θ) = exp ℓD(θ) is the likelihood of observing the measured data, with ℓD the log-likelihood introduced above, and π(θ) is the prior distribution of the parameter vector θ. For the tissue expansion experiments [2], we assume a log-uniform prior on D, k1, k2, with bounds 10¹ µm²/h < D < 10⁴ µm²/h, together with corresponding order-of-magnitude bounds on k1 and k2. This assumption allows us to consider a broad range of orders of magnitude for these parameters, although simpler uniform priors could also be used. For the parameters K1, K2, σ1, σ2, uniform priors with conservative bounds were used. For the scratch assay data, we follow [8] and assume σ1 = σ2 = σ, and a uniform prior in all model parameters with conservative bounds. We use a Metropolis-Hastings MCMC (Markov chain Monte Carlo) sampler with adaptive proposal covariance to infer the posterior distributions. This is implemented in the parameter estimation toolbox pyPESTO [7]. In the MCMC algorithm, a Markov chain starts at position θ and accepts a potential move to θ* with probability q = min{1, P(θ* | ρD)/P(θ | ρD)}. In this way, the Markov chain tends to move towards high values of the posterior distribution, while still allowing for transitions to regions of lower probability in order to move away from local maxima. Figures S1 and S4 show typical MCMC iterations for both sets of experimental data. We show the obtained stationary posterior distributions in the main text and in Fig. S4. S2 Outline of the numerical scheme We briefly explain the numerical scheme used to solve our model in polar coordinates. For the tissue expansion experiments we assume solutions with radial symmetry: ρ1(x, t) = ρ1(r, t), ρ2(x, t) = ρ2(r, t), where r denotes the distance from the tissue centre. Hence, we can write the Laplacian as ∆ρ = (1/r) ∂r(r ∂r ρ). We use a finite-volume scheme [2] and discretise the domain into a small circle C0 of radius r1/2 = δr/2, and concentric annuli Ci with inner radii ri−1/2 = (i − 1/2)δr, for i = 1, 2, . . ., N. If x ∈ Ci, we approximate ρ(x, t) by its average over Ci, where |Ci| denotes the volume of Ci. In particular, by integrating the equation for ρ1 over C0 we obtain an evolution equation for the averaged density, where |C0| = π r²1/2 denotes the area of C0. The first integral can be calculated exactly, and the last two integrals can be approximated, in terms of ρ0 = ρ0,1 + ρ0,2. Similarly, we integrate the equation for ρ1 over Ci, i ≥ 1; an analogous set of equations can be obtained for ρ2 following the same arguments. Finally, we approximate the derivatives ∂r ρ at the interfaces between neighbouring cells by finite differences. We solve the resulting differential equations using a fourth-order Runge-Kutta method implemented in the scipy.integrate.ode class in Python.
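For readers who want to see the moving parts, the following self-contained Python sketch implements the two ingredients described above, a Gaussian log-likelihood and a random-walk Metropolis step with acceptance probability min{1, P(θ*|ρD)/P(θ|ρD)}, on a deliberately simple placeholder forward model (logistic growth of the total density). It is illustrative only and is not the pyPESTO-based implementation used for the results in this paper.

# Illustrative sketch: Gaussian log-likelihood plus a random-walk Metropolis sampler.
import numpy as np

rng = np.random.default_rng(1)

def forward_model(theta, t):
    # Placeholder forward model standing in for a numerical solution of Eqs. (1):
    # logistic growth of the total density with rate r and carrying capacity K.
    r, K = theta
    rho0 = 500.0
    return K / (1.0 + (K / rho0 - 1.0) * np.exp(-r * t))

# Synthetic "measurements" with additive Gaussian noise, as in the error model above.
t_obs = np.linspace(0.0, 46.0, 10)
sigma = 200.0
data = forward_model((0.25, 5000.0), t_obs) + rng.normal(0.0, sigma, t_obs.size)

def log_likelihood(theta):
    resid = data - forward_model(theta, t_obs)
    return -0.5 * np.sum(np.log(2.0 * np.pi * sigma**2) + (resid / sigma) ** 2)

def log_prior(theta):
    r, K = theta
    return 0.0 if (0.0 < r < 2.0 and 1000.0 < K < 10000.0) else -np.inf

def metropolis(n_iter=5000, step=np.array([0.02, 100.0])):
    theta = np.array([0.5, 4000.0])
    log_post = log_likelihood(theta) + log_prior(theta)
    chain = []
    for _ in range(n_iter):
        proposal = theta + step * rng.normal(size=2)
        log_post_new = log_likelihood(proposal) + log_prior(proposal)
        # Accept with probability min{1, P(θ*|ρD)/P(θ|ρD)} (symmetric proposal).
        if np.log(rng.uniform()) < log_post_new - log_post:
            theta, log_post = proposal, log_post_new
        chain.append(theta.copy())
    return np.array(chain)

samples = metropolis()
print("posterior means:", samples[len(samples) // 2:].mean(axis=0))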
S3 Minimum travelling wave speed We look for travelling wave solutions of the model given by Eqs. (1) and (2) in the main text, in one spatial dimension. We assume that the crowding functions f(ρ) and g(ρ) are non-increasing with ρ, and non-negative. In the comoving reference frame, we can write ρ1(x, t) = U1(z), ρ2(x, t) = U2(z), where z = x − ct, and c ≥ 0 denotes the wave speed. By denoting V1 = U1′ and V2 = U2′, where the primes indicate differentiation with respect to z, the travelling wave equations can be written as a four-dimensional first-order system, which we refer to as system (1). The set of steady states of system (1) consists of the origin (U1, V1, U2, V2) = (0, 0, 0, 0) and any state of the form (α, 0, U* − α, 0), with f(U*) = g(U*) = 0 and 0 ≤ α ≤ U*. Note that whenever f(ρ), g(ρ) > 0 for all ρ ≥ 0, the latter does not exist. As usual with linear diffusion models, the stability of the origin gives a lower bound on the wave speed c. In particular, the eigenvalues λi of the system linearised about the origin satisfy a quartic polynomial equation, whose roots can be expressed in terms of two quantities γ±. We seek biologically realistic solutions with U1, U2 ≥ 0, and hence the eigenvalues must be real. In particular, this demands γ± ≥ 0, which establishes the minimum travelling wave speed found in [10] and quoted as Eq. (3) in the main text. We observe that when 4k1k2/(k1 + k2)² ≪ 1, the minimum travelling wave speed can be approximated by an expression which agrees with the minimum speed predicted by the Fisher-Kolmogorov-Petrovsky-Piskunov (FKPP) equation [4]. S4 Study of travelling wave solutions A commonly used approach to obtain approximate solutions for travelling waves is the so-called Canosa's method [1]. This procedure is a standard singular perturbation technique, and consists of a transformation y = −z/c, where D/c² := ε is treated as a small parameter. The first-order perturbation in ε approximates, within a small error, travelling solutions of the well-known FKPP equation, even though in this case ε is not necessarily small [4]. In our case, by using the estimated parameters and a wave speed of 30 µm/h, we obtain ε ∼ O(1). We highlight, however, that the lowest-order approximation in ε provides an excellent approximation of the travelling wave - see Fig. 1. By using the transformation y = −z/c, we can write system (1) as dU1/dy = ε d²U1/dy² − k1 f(U)U1 + 2k2 g(U)U2 (Eq. (3)) and dU2/dy = ε d²U2/dy² + k1 f(U)U1 − k2 g(U)U2 (Eq. (4)), where U = U1 + U2. Observe that, given the sign of the transformation y = −z/c, we need to impose the corresponding boundary conditions. (Supplementary figure caption fragment: leading-order approximation (dashed lines), obtained from solving the ordinary differential equations (5) and (6); model parameters are taken from posterior distribution modes.) Although the analysis as ε → 0 looks like a singular perturbation problem, setting ε = 0 gives a valid first-order approximation. This is due to the fact that the nonlinear terms in Eqs. (3) and (4) vanish at both boundaries [4]. Hence, we can look for a regular perturbation expansion in both U1 and U2. By denoting the order O(1) solutions as u1 and u2 we obtain du1/dy = −k1 f(u)u1 + 2k2 g(u)u2 (Eq. (5)) and du2/dy = k1 f(u)u1 − k2 g(u)u2 (Eq. (6)), where u = u1 + u2. In the figure above (Fig. 1), we compare the approximate solutions obtained by solving this system with the full travelling wave solutions. We highlight that the lowest-order approximation provides an excellent approximation of the travelling wave shape.
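Assuming the leading-order system takes the form written above as Eqs. (5) and (6), it can be integrated directly; the sketch below (our own illustration, using the linear crowding functions of Eqs. (2) and the posterior modes) marches the reduced system from a small seed ahead of the front towards the crowded bulk state.

# Illustrative integration of the leading-order travelling wave system (5)-(6),
# using the linear crowding functions of Eqs. (2) and the posterior modes.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, K1, K2 = 0.612, 0.457, 4965.0, 5435.0

f = lambda u: max(1.0 - u / K1, 0.0)
g = lambda u: max(1.0 - u / K2, 0.0)

def rhs(y, w):
    u1, u2 = w
    u = u1 + u2
    return [-k1 * f(u) * u1 + 2.0 * k2 * g(u) * u2,   # Eq. (5)
            k1 * f(u) * u1 - k2 * g(u) * u2]          # Eq. (6)

# Start from a small seed ahead of the front and integrate towards the bulk.
sol = solve_ivp(rhs, (0.0, 120.0), [0.5, 0.5], max_step=0.1)
u1, u2 = sol.y
print(f"bulk total density  ≈ {u1[-1] + u2[-1]:.0f} cells/mm^2")
print(f"bulk S/G2/M density ≈ {u2[-1]:.0f} cells/mm^2")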
S4.1 A simplified model

In order to make analytical progress we set f and g to be Heaviside functions: f(u) = H(K_1 − u) and g(u) = H(K_2 − u). This model is not an approximation of the model presented in the main text, but a simplification which preserves the same qualitative behaviour. Hence, we expect the observed phenomena to show a similar dependence on the model parameters; this will be numerically confirmed later. In particular, note that this simplified model also describes two density checkpoints, at the G1-S boundary and during the G2/M phases. The parameters K_1 and K_2, respectively, quantify the cell densities associated with these checkpoints. We rewrite Eqs. (5) and (6) in terms of the variables (u, u_2); in particular, du/dy = k_2 u_2 g(u). Depending on the relative values of the total cell density u and the density checkpoint parameters K_1 and K_2, we distinguish three possible cases. As inferred from the experimental data, we assume K_1 < K_2.
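As a small numerical illustration of this case analysis, the sketch below encodes the Heaviside crowding functions and reports which checkpoints are active at a given total density u. The checkpoint values are the posterior modes quoted in the Figure 2 caption below; the regime descriptions are an illustration, not the paper's wording.

import numpy as np

K1, K2 = 4965.0, 5435.0   # density checkpoints (cells/mm^2), posterior modes from Fig. 2

def f(u):
    # Crowding function associated with the G1-S checkpoint: f(u) = H(K1 - u)
    return np.heaviside(K1 - u, 0.0)

def g(u):
    # Crowding function associated with the G2/M checkpoint: g(u) = H(K2 - u)
    return np.heaviside(K2 - u, 0.0)

def active_checkpoints(u):
    # Three cases, assuming K1 < K2 as inferred from the data
    if u < K1:
        return "u < K1: both crowding functions equal 1"
    if u < K2:
        return "K1 <= u < K2: f = 0, g = 1"
    return "u >= K2: both crowding functions equal 0"

for u in (3000.0, 5200.0, 6000.0):
    print(u, f(u), g(u), active_checkpoints(u))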
Figure 1: (A) Schematics of the FUCCI cell cycle marker system and model conceptualisation. Transitions in the model given by Eqs. (1) are regulated by the crowding functions f(ρ) and g(ρ), dependent on the total cell density ρ = ρ_1 + ρ_2. (B) FUCCI fluorescence images from the experiments of Heinrich et al. [19] at different time points (adapted). Initial tissue diameter ∼ 3.4 mm. Scale bars correspond to 1 mm. (C) Segmented data showing G1 (red), S/G2/M (green), and post-mitotic (gray) cells. Note that in the model we combine post-mitotic cells and cells in G1. (D) Zoomed-in segmented data at the tissue edge and centre, corresponding to the black squares in (C). (E) Fraction of cells in each cell-cycle state in the tissue centre and at the tissue edge, defined as regions extending ∼ 200 µm from the tissue centre and tissue edge, respectively. (F) Density profiles in polar coordinates at t = 40 h, showing cells in G1, S/G2/M, and post-mitotic cells. (E) and (F) show the average of eleven independent tissue expansions with the same experimental initial condition, with shaded regions indicating one standard deviation with respect to the mean.

Figure 2: Density-dependent effects regulate cell cycle dynamics in epithelial tissue expansion experiments [19]. Parameter estimation and model-data comparison for the model given by Eqs. (1) and (2). (A) Univariate marginal posterior distributions for the model parameters. Posterior modes are given by (D, k_1, k_2, K_1, K_2) = (1300 ± 66 µm^2/h, 0.612 ± 0.015 h^-1, 0.457 ± 0.011 h^-1, 4965 ± 38 cells/mm^2, 5435 ± 45 cells/mm^2), where errors correspond to one standard deviation. (B) Estimated duration of the G0/G1/post-M (red) and S/G2/M (green) phases, as well as the whole cell cycle (blue), as a function of cell density. Solid lines correspond to posterior modes and shaded regions are obtained by sampling from the posterior distribution. (C) Comparison of data and model predictions. Squares represent the estimated cell density obtained by averaging eleven experimental realisations, which we use to calibrate the model. Shaded regions denote one standard deviation with respect to the mean; see Supplementary Fig. 2 for confidence intervals in the model predictions. Numerical simulations in polar coordinates were obtained using the posterior modes as parameter values and no-flux boundary conditions; for details on the numerical scheme we refer to the Supplementary Information. In order to minimise the effects of the stencil removal on cell behaviour, the initial condition corresponds to the experimental density profile ten hours after stencil removal.

Figure S1: MCMC iterations for the large tissue expansion experimental data. Parameters D, k_1, k_2, K_1, K_2 correspond to the model presented in the main text, and σ_1, σ_2 are error model parameters.
12,454.4
2024-01-16T00:00:00.000
[ "Biology", "Mathematics" ]
Systematic Analysis of Metabolic Bottlenecks in the Methylerythritol 4-Phosphate (MEP) Pathway of Zymomonas mobilis Engineered microorganisms have the potential to convert renewable substrates into biofuels and valuable bioproducts, which offers an environmentally sustainable alternative to fossil-fuel-derived products. Isoprenoids are a diverse class of biologically derived compounds that have commercial applications as various commodity chemicals, including biofuels and biofuel precursor molecules. ABSTRACT Zymomonas mobilis is an industrially relevant aerotolerant anaerobic bacterium that can convert up to 96% of consumed glucose to ethanol. This highly catabolic metabolism could be leveraged to produce isoprenoid-based bioproducts via the methylerythritol 4-phosphate (MEP) pathway, but we currently have limited knowledge concerning the metabolic constraints of this pathway in Z. mobilis. Here, we performed an initial investigation of the metabolic bottlenecks within the MEP pathway of Z. mobilis using enzyme overexpression strains and quantitative metabolomics. Our analysis revealed that 1-deoxy-d-xylulose 5-phosphate synthase (DXS) represents the first enzymatic bottleneck in the Z. mobilis MEP pathway. DXS overexpression triggered large increases in the intracellular levels of the first five MEP pathway intermediates, of which the buildup in 2-C-methyl-d-erythritol 2,4-cyclodiphosphate (MEcDP) was the most substantial. The combined overexpression of DXS, 4-hydroxy-3-methylbut-2-enyl diphosphate (HMBDP) synthase (IspG), and HMBDP reductase (IspH) mitigated the bottleneck at MEcDP and mobilized carbon to downstream MEP pathway intermediates, indicating that IspG and IspH activity become the primary pathway constraints during DXS overexpression. Finally, we overexpressed DXS with other native MEP enzymes and a heterologous isoprene synthase and showed that isoprene can be used as a carbon sink in the Z. mobilis MEP pathway. By revealing key bottlenecks within the MEP pathway of Z. mobilis, this study will aid future engineering efforts aimed at developing this bacterium for industrial isoprenoid production. IMPORTANCE Engineered microorganisms have the potential to convert renewable substrates into biofuels and valuable bioproducts, which offers an environmentally sustainable alternative to fossil-fuel-derived products. Isoprenoids are a diverse class of biologically derived compounds that have commercial applications as various commodity chemicals, including biofuels and biofuel precursor molecules. Thus, isoprenoids represent a desirable target for large-scale microbial generation. However, our ability to engineer microbes for the industrial production of isoprenoid-derived bioproducts is limited by an incomplete understanding of the bottlenecks in the biosynthetic pathway responsible for isoprenoid precursor generation. In this study, we combined genetic engineering with quantitative analyses of metabolism to examine the capabilities and constraints of the isoprenoid biosynthetic pathway in the industrially relevant microbe Zymomonas mobilis. Our integrated and systematic approach identified multiple enzymes whose overexpression in Z. mobilis results in an increased production of isoprenoid precursor molecules and mitigation of metabolic bottlenecks. first MEP pathway intermediate DXP, suggesting that the first reaction in the MEP pathway of Z. mobilis is highly forward driven. 
Finally, we observed that the combined pool of MEP intermediates (excluding cofactors and nucleotide triphosphates) is less than 1/50th the size of the pool of glycolytic intermediates (10). Individual overexpression of MEP pathway enzymes results in large changes to the levels of MEP pathway intermediates. To identify bottlenecks and investigate carbon flow through the MEP pathway in Z. mobilis, we overexpressed MEP pathway enzymes individually (i.e., DXS2, DXP synthase; DXR, DXP reductoisomerase; IspDF, bifunctional enzyme MEP cytidyl transferase/MEcDP synthase; IspE, CDP-ME kinase; IspG, HMBDP synthase; and IspH, HMBDP reductase) and monitored changes in the levels of metabolic intermediates from the MEP pathway and primary metabolism (e.g., ED glycolysis, nucleotides, and cofactors) via LC-MS metabolomics. To generate the individual overexpression strains, MEP genes cloned from Z. mobilis were inserted into the isopropyl b-D-1-thiogalactopyranoside (IPTG)-inducible plasmid pRL814 (32) and conjugated into Z. mobilis (see Materials and Methods; see Table S2 in the supplemental material). MEP enzyme overexpression strains were grown anaerobically in defined minimal media to an optical density at 600 nm (OD 600 ) of 0.2 before induction with IPTG. Intracellular metabolites and samples for proteomics analyses were collected at mid-log phase (OD 600 , 0.5), and changes in metabolite and protein levels were measured against Z. mobilis harboring green fluorescent protein (GFP) in pRL814. To confirm the overexpression of target proteins in the individual overexpression strains, we performed label-free quantitative (LFQ) proteomics analyses via LC-tandem MS (LC-MS/ MS). Protein levels for DXS2, DXR, IspDF, IspE, and IspG increased 32-to 92-fold over basal levels, while the overexpression of IspH led to a lesser increase of 5-fold (see Fig. S1A in the supplemental material). We observed no major changes to the expression of other cellular proteins in these strains, including enzymes in the MEP pathway or ED glycolysis. We observed that the individual overexpression of some MEP enzymes resulted in large changes to the levels of MEP pathway intermediates (Fig. 3). Most notably, the overexpression of DXS2 led to significant increases to all MEP pathway intermediates, with the most substantial increases occurring in DXP (9.3-fold), CDP-ME (9.1-fold), and MEcDP (102-fold) levels, which suggests that DXS2 is a rate-limiting step in the MEP pathway of Z. mobilis (Fig. 3A). Interestingly, the buildup of MEcDP translated only to relatively minor increases in the downstream MEP metabolites HMBDP and IDP/DMADP (5.5-and 2.5-fold increase, respectively), suggesting a bottleneck at IspG. In contrast, when DXR was overexpressed, we measured only a modest decrease in DXP levels (0. 46fold) and no significant changes to the levels of downstream intermediates (Fig. 3A). These moderate responses in metabolite levels suggest that DXR alone is not a major rate-limiting step in Z. mobilis. Similar to other bacteria, the IspD and IspF activities are performed by the single bifunctional enzyme IspDF in Z. mobilis (33)(34)(35). The overexpression of IspDF resulted in a 78-fold increase in CDP-ME levels and an unexpected 3.8-fold increase in MEP levels but no significant changes in other MEP metabolites (Fig. 3A). These results suggest that the IspD domain of the IspDF enzyme complex is potentially more active than the IspF component in Z. 
mobilis and that IspDF is potentially another rate-limiting step in the MEP pathway. When IspE was overexpressed in Z. mobilis, we observed a significant reduction in CDP-ME levels (0.19-fold) but no significant changes in any other MEP pathway intermediate (Fig. 3A). Continuing to the next enzyme in the pathway, the overexpression of IspG led to a significant decrease in the levels of its substrate MEcDP (0.24-fold). We also observed some notable trends in other metabolites, such as a small increase in the IspG product HMBDP (2.2-fold) and the downstream intermediates IDP/DMADP (1.9-fold) (Fig. 3A). Similarly, while the overexpression of IspH induced no significant metabolite changes, notable trends included a minor reduction of 0.64-fold in HMBDP levels and a slight increase of 1.8-fold in IDP/DMADP levels (Fig. 3A). IspDF overexpression can greatly increase carbon flow into the MEP pathway of Z. mobilis. Dynamic changes in the levels of MEP pathway intermediates during enzyme overexpression reveal metabolic bottlenecks. To continue our investigation of MEP pathway bottlenecks in Z. mobilis, we overexpressed DXS2 individually and in various combinations with other MEP enzymes (Table S2) and monitored dynamic changes in metabolism. Engineered strains were grown anaerobically in minimal media, and intracellular metabolites were extracted when cells reached an OD 600 of 0.35, which represented the initial time point (0 min). Cultures were then induced with IPTG, and metabolites were collected at 7.5, 15, 30, 45, 60, and 120 min postinduction (Fig. 4). DXS2 overexpression resulted in a rapid accumulation of MEP intermediates, from DXP to HMBDP, starting at 7.5 min postinduction (Fig. 4A). DXP, MEP, CDP-ME, and CDP-MEP reached maximum levels (32-, 23-, 21-, and 11-fold, respectively) between 30 and 45 min postinduction. By 120 min, these metabolites settled to levels similar to those observed during the single-time-point DXS2 overexpression experiment (Fig. 3A). However, MEcDP continued to accumulate up to 120 min postinduction, reaching a 69fold increase, while the accumulation in downstream MEP intermediates remained relatively minor (,4-fold). Overall, the overexpression of DXS2 led to large accumulations in the intracellular concentrations of DXP and MEcDP (Fig. 4A). Thus, in agreement with our results from the single-time-point experiment (Fig. 3A), these time course data support the conclusion that DXS2 is a rate-limiting step and the first major enzymatic bottleneck in the MEP pathway of Z. mobilis. In addition, these observations also suggest that with heightened DXS2 activity, IspG and/or IspH become major bottlenecks in the MEP pathway, while DXR represents a less severe enzymatic constraint. The combined overexpression of DXS2 and IspG triggered an accumulation of all MEP intermediates starting at 7.5 min postinduction (Fig. 4B). Critically, when IspG was overexpressed alongside DXS2, the large buildup of MEcDP observed during DXS2 overexpression did not occur, and instead, we observed a large increase in HMBDP, which accumulated by 140-fold at 60 min postinduction. This increase in HMBDP levels extended to the downstream intermediates IDP/DMADP and GPP, which accumulated by 28-and 33-fold, respectively, at 30 min postinduction. Despite the large fold change increase in HMBDP levels, the absolute intracellular concentration of this MEP pathway intermediate remained lower than that of DXP (Fig. 4B). 
Overall, the combinatorial overexpression of DXS2 and IspG effectively mobilized carbon from MEcDP to HMBDP and to a lesser extent IDP/DMADP, which indicates that IspG is a rate-limiting step in the MEP pathway during DXS2 overexpression. The accumulation in HMBDP levels observed during DXS2 and IspG activation also indicated that the next bottleneck in this strain was IspH. In alignment with our expectations, when we overexpressed IspH in combination with DXS2 and IspG, HMBDP levels accumulated only by 3.3-fold at 30 min postinduction (Fig. 4C). Interestingly, we observed an unexpected buildup of MEcDP (25-fold by 45 min) in this strain that was significantly higher than the one in the DXS2_IspG overexpression strain; however, this accumulation was transient and was lower than that during individual DXS2 activation (Fig. 3A). Notably, the absolute intracellular concentrations of DXP and MEcDP both decreased during the overexpression of DXS2, IspG, and IspH ( Fig. 4C), indicating an effective mitigation in the MEcDP bottleneck. Overall, these data suggest that during DXS2 overexpression, IspG and IspH become major rate-limiting enzymes and that their overexpression can mitigate the accumulation of MEcDP observed during DXS2 activation. Glycolytic intermediates, nucleotide triphosphates, and cofactors remain largely unaffected by MEP pathway enzyme overexpression. In addition to MEP pathway intermediates, we also monitored changes to central carbon metabolites, energy molecules, and redox cofactors during MEP enzyme overexpression (see Fig. S2 and S3 in the supplemental material). With few exceptions, ED glycolytic intermediates, nucleotide triphosphates (NTPs), and redox cofactors remained largely unaffected during MEP enzyme overexpression. For example, time course experiments revealed a slight reduction in 6-phosphogluconate (6PG) levels and a modest accumulation in phosphoenolpyruvate (PEP) during individual DXS2 overexpression (Fig. S3A). However, these trends were not maintained in the DXS2_IspG or DXS2_IspG_IspH overexpression strains ( Fig. S3B and C). Other strains overexpressing individual MEP pathway enzymes (i.e., DXR, IspE, IspDF, IspG, and IspH) also did not display consistent changes to glycolytic intermediates, NTPs, or cofactors (Fig. S2). A. Introducing a MEP pathway carbon sink via the heterologous expression of isoprene synthase. To enhance carbon flow through the MEP pathway and to reduce the accumulation of DXP and MEcDP observed in the DXS2 and DXS2_IspG_IspH strains, we investigated the overexpression of isoprene synthase (IspS) ( Table S2). Isoprene synthase dephosphorylates DMADP to generate pyrophosphate and isoprene (Fig. 1A), which is a highly volatile compound that readily diffuses out of the cell and growth media (36)(37)(38)(39). Thus, isoprene can potentially act as a carbon sink in the MEP pathway and mobilize carbon away from upstream metabolites. Similar to the previous time course experiments, we overexpressed DXS2 and IspS with a combination of MEP enzymes and tracked dynamic changes in MEP pathway intermediates, central carbon metabolites, and cofactors at various time points post-enzyme overexpression ( Fig. 5 and Fig. S3). The combined overexpression of DXS2 and IspS led to similar changes in the levels of MEP pathway intermediates to those observed during individual DXS2 overexpression. Namely, we observed a large accumulation of DXP and MEcDP, with lesser accumulations in the other intermediates (Fig. 5A). 
Interestingly, we observed an unexpected accumulation in IDP/DMADP, which increased by 71-fold at 120 min postoverexpression. Despite this large fold change increase, however, the absolute intracellular levels of DXP and MEcDP remained far greater than IDP/DMADP levels, indicating that the severe bottleneck at IspG in the DXS2 overexpression strain was still present in the DXS2_IspS strain. Using an alternative chromatography method (see Materials and Methods), we separated the IDP/DMADP isomers and determined that during DXS2 and IspS overexpression, there was 25-fold more IDP than DMADP (see Fig. S4 in the supplemental material). This finding suggested that Z. mobilis cannot efficiently isomerize IDP to DMADP, which is consistent with the lack of genetic evidence for an endogenous IDP/DMADP isomerase (IDI) in this organism (40). Thus, we hypothesized that the heterologous expression of IDI (i.e., from E. coli) ( Table S2) could mitigate the observed IDP accumulation. As expected, IDI overexpression with DXS2 and IspS did not result in an accumulation of IDP/DMADP, whose levels did not rise above 1.8-fold (Fig. 5B). Notably, the combined overexpression of DXS2, IspS, and IDI also substantially decreased the accumulation of DXP and MEcDP compared to combined DXS2 and IspS overexpression (Fig. 5B). While the reason for this result is unclear, it is plausible that IDI expression increases carbon flow toward DMADP and its downstream products, which in turn facilitates carbon flow from the upstream intermediates DXP and MEcDP. We also overexpressed DXS2 and IspS together with IspG and IspH and found that the accumulation in DXP, MEcDP, and IDP/DMADP was lower than that observed during DXS2 and IspS overexpression, suggesting once again that IspG and IspH overexpression can effectively mitigate these metabolic bottlenecks (Fig. 5C). Finally, we overexpressed a combination of DXS2, IspG, IspH, IspS, and IDI, which resulted in the smallest accumulations in the intracellular concentrations of DXP, MEcDP, and IDP/ DMADP across all four strains (Fig. 5D), suggesting that this combination of enzymes is capable of mitigating the bottlenecks common to all strains overexpressing DXS2. Across the four IspS overexpression strains, we noticed some consistent trends in changes to glycolytic intermediates (Fig. S3), such as reductions in glucose 6-phosphate and 2-keto-3-deoxy-6-phosphogluconate (KDPG) levels. Despite the decrease in KDPG levels, we observed only minor reductions in GAP and pyruvate for the strains expressing IDI (Fig. S3E and G). However, we did not detect any discernible trends in changes to NTPs or cofactors in the IspS overexpression strains (Fig. S3D to G). To investigate if the resolution of the metabolic bottlenecks at DXP and MEcDP observed in our combinatorial IspS overexpression strains translated to increased isoprene production, we monitored isoprene production using a fast isoprene sensor (FIS) (41)(42)(43). IspS overexpression strains were grown anaerobically in unsealed flasks and induced with IPTG at an OD 600 of 0.35. Cultures were grown for an additional 3 h, at which point aliquots were transferred to sealed vials. Following 10 min of growth, the headspace of the sealed vial was sampled and injected into the FIS to measure gaseous isoprene levels (see Materials and Methods). Background levels of isoprene production (5.6 nmol isoprene Á mmol glucose 21 ) in Z. 
mobilis overexpressing GFP were consistent with results from previous studies that established the spontaneous dephosphorylation of DMADP to isoprene ( Table 2) (44). The heterologous overexpression of IspS alone in Z. mobilis did not have a significant effect on isoprene production, indicating that the sole overexpression of IspS is insufficient to redirect MEP pathway intermediates toward isoprene production. However, the combined overexpression of DXS2 and IspS significantly increased isoprene production by 5.9fold compared to background levels to 33.3 nmol isoprene Á mmol glucose 21 . Surprisingly, the additional overexpression of IDI in the DXS2_IspS_IDI strain decreased the isoprene production to 16.3 nmol isoprene Á mmol glucose 21 . The combined overexpression of DXS2, IspG, IspH, a Isoprene production represents the average of at least three biological replicates per strain 6 standard error. b Statistically significant difference in isoprene production compared to ZM4_GFP (P , 0.05). and IspS or the combined overexpression of DXS2, IspG, IspH, IspS, and IDI resulted in only 18.1 and 20.1 nmol isoprene Á mmol glucose 21 isoprene production, respectively, which were both lower than the isoprene production in the DXS2_IspS overexpression strain (Table 2). DISCUSSION This study presents an initial investigation of metabolic bottlenecks within the MEP pathway of Z. mobilis using enzyme overexpression strains. Altogether, our data indicate that the reaction catalyzed by DXS represents the first metabolic bottleneck and rate-limiting enzyme in the Z. mobilis MEP pathway and that the overexpression of this enzyme is a viable approach to increase carbon flow into this pathway. These findings are consistent with observations in other bacteria. Previous studies in B. subtilis, E. coli, R. sphaeroides, and Corynebacterium glutamicum have shown that the overexpression of DXS alone or in combination with other MEP enzymes can enhance MEP pathway activity and improve downstream isoprenoid (i.e., isoprene or carotenoids) generation (20,27,30,38,39,(45)(46)(47). The overexpression of DXS2 in Z. mobilis led to a large and rapid accumulation in MEcDP ( Fig. 3A and Fig. 4A), suggesting a second major bottleneck at IspG and/or IspH. Previous research in plants and other bacteria have observed similar patterns in MEcDP accumulation resulting from DXS overexpression (29,48) and have shown that the additional overexpression of IspG mitigates this bottleneck and improves downstream isoprenoid (i.e., lycopene) production (49). Consistent with these prior studies, the dual overexpression of DXS2 and IspG in Z. mobilis effectively reduced the accumulation of MEcDP and successfully mobilized carbon to the downstream metabolites HMBDP and IDP/DMADP (Fig. 4B). When we overexpressed DXS2 in combination with IspG and IspH, we observed only a minor accumulation in HMBDP (Fig. 4C), suggesting an effective mitigation of the bottleneck at IspH observed during DXS2 and IspG overexpression. While DXS2, IspG, and IspH overexpression still displayed an accumulation in MEcDP, the overall reduction in the combined pool of MEP intermediates suggests enhanced carbon flow through the MEP pathway in this strain. Consistent with DXS2 being the first rate-limiting step in the MEP pathway, the individual overexpression of IspG and IspH in Z. mobilis was insufficient to increase carbon flow into the MEP pathway (Fig. 3). 
Although we measured robust protein overexpression for IspG and other MEP pathway enzymes, IspH overexpression was not as strong (Fig. S1). In line with these observations, a previous study tracked dynamic changes in IspG and IspH protein levels in Z. mobilis overexpression strains postinduction and found that while IspG protein levels increased by 38-fold (compared to wild-type Z. mobilis) by 120 min postinduction, IspH levels increased only by 4.6-fold (50). The reason for this disparity in protein overexpression between IspH and the other MEP enzymes in Z. mobilis is unknown and requires further investigation; however, it is possible that the iron-sulfur (4Fe-4S) cluster in the active site of IspH is less stable and more susceptible to oxidative damage than the one in IspG, which could lead to rapid turnover of IspH and a failure to attain high overexpression levels (9). Similar to DXS2, our analyses revealed that the individual overexpression of the bifunctional enzyme IspDF in Z. mobilis can increase carbon flow into the MEP pathway (Fig. 3). However, when we overexpressed DXS2 and IspDF in combination, we observed two apparent bottlenecks at MEcDP and CDP-ME, with only a minor accumulation (2-fold) in IDP/DMADP (see Fig. S5 in the supplemental material). These results suggest that IspE could be a minor enzymatic constraint, but the large accumulation in MEcDP still supports the finding that during DXS2 overexpression in Z. mobilis, IspG and IspH become rate-limiting steps. Although the combined overexpression of DXS2 and IspS, resulted in measurable isoprene production, the additional overexpression of other MEP enzymes (i.e., IDI, IspG, and IspH) together with DXS2 and IspS did not further improve isoprene production (Table 2). While the reason for this finding is presently unclear and requires additional investigation, it is plausible that competing native isoprenoid synthesis pathways (i.e., ubiquinol, menaquinone, squalene, and hopanoids), which previous studies indicate are highly active in Z. mobilis (7,26), siphon IDP and DMADP away from IspS. It is therefore possible that the reduction in MEP metabolite levels we observed in the strains overexpressing DXS2 and IspS in combination with other MEP pathway enzymes correlates with an increase in downstream isoprenoids we did not monitor and that increasing isoprene production beyond the levels we observed would require knocking out or downregulating these competing pathways. In addition, the DXS2_IspG_IspH_IspS and DXS2_IspG_IspH_IspS_IDI strains exhibited heavily reduced growth and glucose consumption rates (see Table S3 in the supplemental material), which could have also contributed to their reduced isoprene production. Thermodynamics dictate reaction reversibility and enzyme efficiency in metabolic pathways (51)(52)(53). Although experimentally derived Gibbs free energies (DG) for MEP pathway reactions are unavailable, standard DG estimates for DXS, DXR, IspD, IspE, and IspH can be obtained computationally using the component contribution method (CCM; see Materials and Methods) (54). Standard DG values using the CCM could not be estimated for the IspF and IspG reactions due to the high uncertainty of the computationally predicted standard free energy of formation for MEcDP (54). To provide insight into the in vivo thermodynamics of the MEP pathway in Z. mobilis, we combined these computational standard DG estimates with our MEP pathway metabolite concentration data. We found that the MEP pathway in wild-type Z. 
mobilis appears to be highly thermodynamically favorable (see Table S4 in the supplemental material). The estimated in vivo DG values for DXS, DXR, and IspH were all highly energetically favorable, with DG values of ,229 kJ/mol, while IspD was slightly less favorable (DG, 212 kJ/mol). The large changes in MEP pathway metabolite concentrations that we observed throughout our overexpression strains did not result in thermodynamic bottlenecks, with the sole exception of the IspDF overexpression strain, which appeared to be thermodynamically constrained at IspD (DG, 22.7 kJ/mol) due to the large accumulation in CDP-ME (Fig. 3). Although standard DG estimates using the CCM could not be obtained for IspF, the natively high intracellular concentration of MEcDP in wildtype Z. mobilis and its significant accumulation postoverexpression of DXS2 suggest that this reaction must be highly thermodynamically favorable. Based on the findings from this study, future engineering efforts to improve isoprenoid production in Z. mobilis can focus on optimizing expression levels of rate-limiting enzymes. Tuning MEP pathway enzyme expression will be critical to maximize carbon flow through the pathway while minimizing the burden on the cell (45,55,56). Additionally, maximizing MEP pathway activity in Z. mobilis will likely involve increased activity/expression of auxiliary enzymes/proteins to the MEP pathway. Specifically, IspG and IspH contain [4Fe-4S] clusters, which are synthesized and assembled via the suf operon (9,57,58). IspG and IspH rely on accessory proteins to resupply electrons to the [4Fe-4S] clusters after each catalytic cycle (59,60). In plants (A. thaliana) and bacteria (E. coli), electrons are resupplied to IspG and IspH [4Fe-4S] clusters via ferredoxin (FdxA) and flavodoxin I (FldA), respectively (61,62). Oxidized FdxA/FldA proteins are subsequently reduced by flavodoxin/ferredoxin NADP 1 reductase (Fpr), which uses NADPH as the preferred electron donor (59,62,63). Thus, in addition to overexpression, IspG and/or IspH activity in Z. mobilis could be enhanced by increasing [4Fe-4S] biogenesis activity via the overexpression of the suf operon, overexpressing electron transfer accessory proteins (i.e., FdxA, FldA, and/or Fpr) (64), and/or increasing the supply of NADPH (65,66). Moreover, future efforts to improve isoprenoid production in Z. mobilis will likely need to consider allosteric regulation or posttranslational modification of MEP pathway enzymes. Our time course experiments revealed complex dynamic changes in the levels of MEP pathway intermediates during enzyme overexpression ( Fig. 4 and 5). Although the rapid increases in MEP pathway intermediates can be attributed to increased enzyme activity postinduction, it remains unknown as to why the levels of some of the MEP intermediates gradually decreased over time after their initial spike. It is plausible that feedback regulation may be responsible for these interesting dynamics. For example, previous research in plants has established that DXS is a major metabolic control point in the MEP pathway (29,67) and that DXS is subject to feedback inhibition from IDP and DMADP (68). Additionally, an in vitro study showed that recombinant IspF from E. coli can be feedforward activated by MEP and that IspF is subject only to feedback inhibition from the downstream isoprenoid FPP when IspF is complexed with MEP (69). The results of this study represent an initial step toward establishing Z. 
mobilis as a viable candidate for the industrial production of isoprenoids. By monitoring acute responses in metabolism to individual and combinatorial MEP enzyme overexpression, we revealed valuable insights into the bottlenecks of the Z. mobilis MEP pathway. Ultimately, these insights can be leveraged to inform metabolic engineering efforts in Z. mobilis to generate commodity molecules. MATERIALS AND METHODS Strains and growth conditions. Wild-type and engineered strains of Z. mobilis ZM4 (ATCC 31821) were streaked onto Zymomonas rich-medium glucose (ZRMG) plates (10 g/L yeast extract, 2 g/L KH 2 PO 4 , 20 g/L glucose, and 20 g/L agar) from 25% glycerol stocks and incubated in an anaerobic (5% H 2 , 5% CO 2 , 90% N 2 atmosphere, and ,100 ppm O 2 ) chamber (Coy Laboratory) at 30°C for 3 to 4 days. Single colonies were used to inoculate liquid ZRMG (10 g/L yeast extract, 2 g/L KH 2 PO 4 , and 20 g/L glucose) containing 100 mg/mL spectinomycin. Strains were grown overnight and then subcultured into 2 to 5 mL of Zymomonas minimal media (ZMM) [1 g/L K 2 HPO 4 , 1 g/L KH 2 PO 4 , 0.5 g/L NaCl, 1 g/L (NH 4 ) 2 SO 4 , 0.2 g/L MgSO 4 Á7H 2 O, 0.025 g/L Na 2 MoO 4 Á2H 2 O, 0.025 g/L FeSO 4 Á7H 2 O, 0.02 g/L CaCl 2 Á2H 2 O, 1 mg/L calcium pantothenate, and 20 g/L glucose]. The subcultured growth was incubated overnight and used to inoculate experimental cultures. All experimental cultures were inoculated at an initial OD 600 of 0.02 to 0.05, and all medium was kept anaerobic for a minimum of 16 h prior to inoculation. For the single time point and time course experiments, strains were grown in either 25 mL or 100 mL of ZMM in 125-mL or 500-mL Erlenmeyer flasks, respectively, enclosed with foil, and with a stir bar set at 120 rpm. For the single-time-point experiments, cells were grown to an OD 600 of 0.2 before induction with 0.5 mM IPTG. Intracellular metabolites were collected for three biological replicates when cells reached an OD 600 of 0.5. For the time course experiments, cells were grown to an OD 600 of 0.35 before induction with 0.5 mM IPTG. Intracellular metabolites were collected for three biological replicates before induction (time point 0 min) and 7.5, 15, 30, 45, 60, and 120 min postinduction. Generation of plasmid constructs and engineered strains. All genes were cloned into the plasmid pRL814 (Table S2), which was provided courtesy of Robert Landick (Professor, UW-Madison, Department of Biochemistry, Great Lakes Bioenergy Research Center). The pRL814 plasmid was generated by assembling a fragment derived from pIND4 (70) containing the lacI q gene, P T7A1-O34 , and a pRH52 (13) fragment containing GFP, the pBBR-1 broad host origin of replication and aadA for spectinomycin resistance (32). PCR primers were designed using the New England BioLabs (NEB) assembly tool (https://nebuilderv1.neb .com/) to amplify gene fragments from template DNA (genomic or plasmid DNA) and to overlap the plasmid backbone. Synthetic ribosome binding sites (RBSs) of 20 to 30 bp in length were generated for each gene using the RBS library calculator (https://salislab.net/software/) (71,72) and introduced into pRL814 via overlapping PCR primers. The plasmid constructs were assembled via Gibson assembly (73) using NEB HiFi DNA assembly master mix reagents. Each 25-mL Gibson assembly reaction contained 0.015 pmol total of linearized vector backbone DNA and 0.03 to 0.09 pmol total of gene fragment DNA, and reaction mixtures were incubated for 1 h at 50°C. Constructed plasmids were then introduced into E. 
coli DH5a cells via transformation and grown on LB plates (10 g/L tryptone, 5 g/L yeast extract, 5 g/L NaCl, and 15 g/L agar) with 100 mg/mL spectinomycin. Extracted plasmids from single colonies were screened via Sanger sequencing (Functional Biosciences) to confirm the successful transformation of the plasmid construct. The constructs were then introduced into Z. mobilis ZM4 triple (DhsdS c , Dmrr, and Dcas3) or quadruple (DhsdS c , DhsdS p , Dmrr, and Dcas3) mutant background strains using a previously described conjugation method (74)(75)(76). Successful conjugation into Z. mobilis was confirmed via PCR, and 25% glycerol stocks were prepared of the engineered strains. Intracellular metabolite extractions. When bacterial cultures reached the appropriate OD 600 (0.35 or 0.5), 10 mL of liquid culture was extracted in the anaerobic chamber using a serological pipette. Cells were separated from the media by vacuum filtering the culture through a 0.45-mm-pore-size hydrophilic nylon filter (Millipore; catalog no. HNWP04700) applied to a sintered glass funnel. The nylon filter containing cells was immediately immersed cell-side down into a plastic petri dish (5.5-cm diameter) containing 1.5 mL cold (-20°C) extraction solvent (40:40:20 by % volume methanol-acetonitrile-water; all high-performance liquid chromatography [HPLC] grade) and kept on a chilled aluminum block. This process simultaneously lysed the cells, quenched metabolism, and dissolved intracellular metabolites. The petri dish was lightly swirled to ensure complete contact of solvent with the filter. Filters remained in the cold solvent for ;15 min before being repeatedly rinsed in the extraction solvent to collect any remaining cell debris and metabolites (77)(78)(79). The cell-solvent mixture was then transferred to a prechilled 1.5-mL microcentrifuge tube, removed from the anaerobic chamber, and centrifuged at 16,000 Â g for 10 min at 4°C, and the supernatant was collected for LC-MS analysis. Intracellular metabolite sample preparation for HPLC-MS. For the single-time-point experiments, 180 mL of extraction solvent containing intracellular metabolites was dried under N 2 gas and resuspended in 60 mL of solvent A (97:3 H 2 O:methanol with 10 mM tributylamine adjusted to pH 8.2 using 10 mM acetic acid). For the time course experiments, 90 mL of experimental sample was combined with 90 mL of intracellular metabolite extract (collected as described previously) from a reference sample of Z. mobilis ZM4 grown in ZMM containing universally labeled [U-13 C] glucose (Cambridge Isotope Laboratories; item no. CLM-1396-PK) as the sole carbon source. The same reference sample was used for all time course samples in a given experiment to correct for LC-MS variation between sample injections. The combined sample was then dried under N 2 gas and resuspended in 60 mL solvent A. Following resuspension, samples were briefly vortexed for 5 to 10 s and centrifuged at 16,000 Â g for 10 min at 4°C to remove any remaining cell debris. The supernatant was then transferred to an HPLC vial for LC-MS analysis. Metabolomics LC-MS methods. Metabolomics LC-MS analyses were conducted using a Vanquish ultra-high-performance liquid chromatography (UHPLC) system (Thermo Scientific) coupled to a hybrid quadrupole-Orbitrap mass spectrometer (Q Exactive; Thermo Scientific) equipped with electrospray ionization operating in negative-ion mode. 
The chromatography was performed at 25°C using a 2.1-by 100mm reverse-phase C 18 column with a 1.7-mm particle size (Water; Acquity UHPLC ethylene-bridged hybrid [BEH]). The chromatography gradient used solvent A and solvent B (100% methanol) and was as follows: 0 to 2.5 min, 5% B; 2.5 to 17 min, linear gradient from 5% B to 95% B; 17 to 19.5 min, 95% B; 19.5 to 20 min, linear gradient from 95% B to 5% B; and 20 to 25 min, 5% B. The flow rate was held constant at 0.2 mL/min. For the targeted metabolomics method, eluent from the column was injected into the MS for analysis until 18 min, at which point flow was redirected to waste for the remainder of the run. To increase the quality of signal for the MEP metabolites, an alternative method was employed in which only eluent from 7.5 to 18 min was injected into the MS, which permitted for higher injection volumes. The MS parameters included the following: full MS-single ion monitoring (SIM) scanning between 70 and 1,000 m/z and 160 and 815 m/z for the targeted metabolomics and MEP metabolite-specific methods, respectively; automatic gain control (AGC) target, 1e6; maximum injection time (IT), 40 ms; and a resolution of 70,000 full width at half maximum (FWHM). Quantification of intracellular metabolite concentrations. Intracellular metabolite concentrations were measured by growing six biological replicates of Z. mobilis ZM4 in ZMM containing [U-13 C] glucose as the sole carbon source and extracting intracellular metabolites using extraction solvent containing known concentrations of non-isotopically labeled standards. The solvent containing 13 C-labeled intracellular metabolites and 12 C-labeled standards was analyzed via LC-MS, and the ratio of labeled:unlabeled peak intensities was used to calculate the intracellular metabolite concentrations (31). Extracellular metabolites were collected and quantified to account for excreted pyruvate and MEcDP (9). At the time of extraction, 11 mL of bacterial culture was obtained via a serological pipette and centrifuged at 4,000 Â g for 10 min at 4°C. The supernatant (10 mL) was then subjected to filter sterilization and metabolite extraction, as described previously. Due to their chemical similarity (i.e., similar structure and the same molecular formula), we were unable to separate the isomers IDP and DMADP via the nonchiral column that we used for our LC-MS analyses. Thus, these isomers were quantified as a combined pool. Aniline derivatization and LC-MS separation of IDP and DMADP. To allow separate measurements of IDP and DMADP, we performed aniline derivatization (80) and used an alternative LC-MS method with a chiral column that allowed for the separation of pure standards of the isomers IDP and DMADP (Echelon Biosciences) and some cellular samples. Due to the larger particle size of the chiral column (5.0 mm), this method is less sensitive, and thus, the ability to separate IDP and DMADP was only successful in strains (i.e., DXS2_IspS) that produced high levels of these metabolites. Standards of IDP and DMADP were prepared at 35 mM in 100 mL of HPLC-grade H 2 O, and following intracellular metabolite extraction of cellular samples, 100 mL of extract was dried under N 2 gas and resuspended in 100 mL of HPLC-grade H 2 O. Following the addition of 10 mL of a 200-mg/mL solution of N-(3-dimethylaminopropyl)-N-ethylcarbodiimide hydrochloride (EDC) and 10 mL of a 6 M aniline solution, samples were vortexed for 2 h. The derivatization was stopped via the addition of 5 mL of triethylamine. 
Standards/samples were centrifuged at 16,000 Â g for 5 min at ambient temperature, and the supernatant was transferred to an HPLC vial for LC-MS analysis. The chromatography to separate the isomers IDP and DMADP was performed at 35°C using a 4.6-by 250-mm b-cyclodextrin chiral column with a 5.0-mm particle size (Astec Cyclobond I 2000). The chromatography gradient used solvent 1 (50 mM ammonium acetate, pH 4.5) and solvent 2 (90:10 acetonitrile: H 2 O) and was as follows: 0 to 2 min, 100% B; 2 to 30 min, linear gradient from 100% B to 40% B; 30 to 45 min, linear gradient from 40% B to 25% B; 45 to 70 min, 25% B; 70 to 71 min, linear gradient from 25% B to 100% B; and 71 to 81 min, 100% B. The flow rate was held constant at 0.6 mL/min. The MS parameters used were as described previously with a scanning range of 150 to 600 m/z. Metabolomics computational analysis. Data analysis was performed using the El-MAVEN (Elucidata) software (81). Compounds were identified based on retention times matched to pure standards. For the single-time-point experiments, fold changes in metabolite levels in the MEP enzyme overexpression strains were measured against Z. mobilis overexpressing GFP. Signal intensities were log 2 transformed and two-tailed t tests were performed assuming equal variance to measure P values. For the time course experiments, the 12 C parent signal (derived from the experimental sample) was divided by the fully labeled 13 C signal (derived from the spiked-in 13 C-labeled Z. mobilis reference sample) to normalize for LC-MS variation between injections. The 12 C-to-13 C ratio was then normalized to the OD 600 at which metabolites were extracted, and fold changes were measured against the preinduction (time point 0) state. Signal intensities were log 2 transformed, and paired two-tailed t tests were performed to measure P values. P values were then corrected for multiple-hypothesis testing using the Benjamini-Hochberg method using a false discovery rate (FDR) of 5% (82). Isoprene production measurements. To measure isoprene production, at least three biological replicates of Z. mobilis strains overexpressing GFP and isoprene synthase were grown anaerobically as described previously. Cultures were induced with 0.5 mM IPTG at an OD 600 of 0.35. At 3 h postinduction, 200 mL was transferred to a sealed vial and 2 mL was used for OD 600 measurements. Following 10 min of additional growth, 1 mL of headspace was sampled via a needle and syringe and injected into a fast isoprene sensor (Hills-Scientific) to calculate isoprene concentrations (43). Two-tailed t tests comparing isoprene production in the GFP overexpression strain to the strains overexpressing IspS were performed assuming equal variance to measure P values. Growth rates (h 21 ) and glucose consumption rates (mmol glucose grams dry cell weight 21 Á h 21 ) were obtained by anaerobically growing three biological replicates of Z. mobilis overexpression strains as described previously. OD 600 readings and glucose measurements were collected every hour until stationary phase was reached (see Fig. S6 in the supplemental material). Glucose measurements were performed by taking supernatants from bacterial growth cultures and diluting them 1/50 with HPLC-grade H 2 O, mixed 50:50 with 1 mM [U-13 C] glucose, and analyzed via LC-MS using a C 18 column (as described previously). 
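Before continuing with the chromatography details, the ratio-based normalization and statistics described under "Metabolomics computational analysis" above (12C/13C normalization, OD600 scaling, log2 fold changes against the preinduction time point, paired t tests with Benjamini-Hochberg correction) can be summarized in a short script. This is a minimal sketch with hypothetical array names and shapes, not the authors' analysis code.

import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def normalized_signal(signal_12c, signal_13c, od600):
    """Divide the 12C parent signal by the spiked-in 13C reference signal and
    by the OD600 at extraction. Arrays: (n_metabolites, n_replicates)."""
    return signal_12c / signal_13c / od600

def log2_fold_change(norm_t, norm_t0):
    """log2 fold change of a post-induction time point against t = 0 min."""
    return np.log2(norm_t / norm_t0)

def paired_tests_bh(norm_t, norm_t0, fdr=0.05):
    """Paired two-tailed t tests on log2-transformed signals per metabolite,
    corrected with the Benjamini-Hochberg procedure at the stated FDR."""
    pvals = np.array([
        stats.ttest_rel(np.log2(a), np.log2(b)).pvalue
        for a, b in zip(norm_t, norm_t0)
    ])
    reject, qvals, _, _ = multipletests(pvals, alpha=fdr, method="fdr_bh")
    return pvals, qvals, reject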
The chromatography gradient used solvent A and solvent B (100% methanol) and was as follows: 0 to 2.5 min, 5% B; 2.5 to 8 min, linear gradient from 5% B to 95% B; 8 to 10.5 min, 95% B; 10.5 to 11 min, linear gradient from 95% B to 5% B; and 11 to 15 min, 5% B. The MS parameters used were as described previously with a scanning range of 70 to 1,000 m/z. The ratio of labeled:unlabeled peak intensities was used to calculate glucose concentrations. Growth and glucose consumption rates were normalized to grams per dry cell weight (gDCW) (10). Protein extraction and sample preparation for proteomics. At the time of metabolite extraction, 10 mL of bacterial culture was collected for one or three biological replicates, and cells were centrifuged for 2.5 min at 4,000 Â g at 4°C. The supernatant was discarded, and the cell pellets were flash frozen in liquid nitrogen and stored at 280°C until further analysis. To prepare the samples for proteomics analysis, cell pellets were thawed and lysed by resuspension in 6 M guanidine hydrochloride. The samples were subjected to three rounds of heating to 100°C for 5 min and re-equilibration to ambient temperature for 5 min. The total protein concentration was quantified using a Pierce bicinchoninic acid (BCA) protein assay kit (Thermo Scientific), and 50 to 100 mg of protein was used for further processing. Methanol was added to a final concentration of 90%, followed by centrifugation at 15,000 Â g for 5 min. The supernatant was discarded, and the protein pellets were dried for 10 min. The pellets were then resuspended in 200 mL of lysis buffer [8 M urea, 100 mM Tris (pH 8.0), 10 mM tris(2-carboxyethyl)phosphine (TCEP) hydrochloride, and 40 mM chloroacetamide] to denature, reduce, and alkylate proteins. Resuspended proteins were diluted to 1.5 M urea in 100 mM Tris (pH 8.0). Trypsin was added at a ratio of 50:1 sample protein concentration to trypsin and incubated overnight (;12 h) at ambient temperature. The trypsinization reaction was stopped using 10% trifluoroacetic (TFA) acid. Following protein digestion, each sample was desalted using a Strata-X 33 mM polymeric reversed-phase styrene divinylbenzene solid-phase extraction cartridge and was dried. Before LC-MS/MS analysis, samples were reconstituted in 0.2% formic acid, and peptide concentrations were measured using a Pierce quantitative colorimetric peptide assay kit (Thermo Scientific). Proteomics LC-MS analysis. For each analysis, 2 mg of peptides was loaded onto a 75-mm-inside-diameter (i.d.), 30-cm-long capillary with an imbedded electrospray emitter and packed in a 1.7-mm-particle-size C 18 BEH column. The mobile phases used were as follows: phase A, 0.2% formic acid; and phase B, 0.2% formic acid-70% acetonitrile. Peptides were eluted with a gradient increasing from 0% to 75% B over 42 min followed by a 4-min 100% B wash and 10 min of equilibration in 100% A for a complete gradient of 60 min. The eluting peptides were analyzed with an Orbitrap Fusion Lumos (Thermo Scientific) mass spectrometer. Survey scans were performed at a resolution of 240,000 with an isolation analysis at 300 to 1,350 m/z and AGC target of 1e6. Data-dependent top-speed (1-s) tandem MS/MS sampling of peptide precursors was enabled with dynamic exclusion set to 10 s on precursors with charge states 2 to 4. MS/ MS sampling was performed with 0.7-Da quadrupole isolation and fragmentation by higher-energy collisional dissociation (HCD) with a collisional energy value of 25%. 
The mass analysis was performed in the ion trap using the "turbo" scan speed for a mass range of 200 to 1,200 m/z. The maximum injection time was set to 11 ms, and the AGC target was set to 20,000. Proteomics computational analysis. Raw LC-MS files were analyzed using the MaxQuant software (version 1.5.8.3) (83). Spectra were searched using the Andromeda search engine against a target decoy database. Label-free quantitation and match between runs were toggled on, MS/MS tolerance was set to 0.4 Da, and the number of measurements for each protein was set to 1. Default values were used for all other analysis parameters. The peptides were grouped into subsumable protein groups and filtered to reach 1% false discovery rate (FDR) based on the target decoy approach. Log 2 -transformed label-free quantitation intensities were further processed to obtain log 2 fold change values relative to Z. mobilis overexpressing GFP or background signal determined by randomly generated signal derived from a distribution in the noise range. Thermodynamic estimates of MEP pathway reactions. Standard free energy estimates were obtained using the component contribution method (CCM) via the Python package equilibrator-api (version 0.4.7) (57). The following chemical parameters were used for our free energy estimates: pMg, 3: ionic strength, 0.25 M; temperature, 30°C; and pH 6 (84). Standard free energy estimates using the CCM were unavailable for the MEP reactions IspF and IspG due to the high uncertainty of the computationally predicted standard free energy of formation for MEcDP. For the DXS reaction, the aqueous CO 2 concentration was estimated based on Henry's law and the gaseous CO 2 concentration in the anaerobic chamber. For the IspD reaction, an intracellular estimate of 100 mM was used for diphosphate. SUPPLEMENTAL MATERIAL Supplemental material is available online only.
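As an illustration of the thermodynamic estimates described above, the sketch below queries the equilibrator-api package named in the text for the DXS reaction (pyruvate + D-glyceraldehyde 3-phosphate -> DXP + CO2) under the stated conditions (pH 6, pMg 3, ionic strength 0.25 M, 30°C). The KEGG identifiers are our mapping of the named metabolites and should be verified, and call signatures may differ slightly between equilibrator-api versions.

from equilibrator_api import ComponentContribution, Q_

cc = ComponentContribution()
cc.p_h = Q_(6.0)                  # pH 6, as used in the paper
cc.p_mg = Q_(3.0)                 # pMg 3
cc.ionic_strength = Q_("0.25M")
cc.temperature = Q_("303.15K")    # 30 degrees C

# DXS reaction: pyruvate + GAP = DXP + CO2 (KEGG IDs assumed, please verify)
dxs = cc.parse_reaction_formula(
    "kegg:C00022 + kegg:C00118 = kegg:C11437 + kegg:C00011"
)
print("standard dG' (DXS):", cc.standard_dg_prime(dxs))

# In vivo estimates combine the standard dG' with RT ln Q using measured
# metabolite concentrations; physiological_dg_prime (1 mM defaults) is shown
# here only as a placeholder for that calculation (Table S4 in the paper).
print("physiological dG' (DXS):", cc.physiological_dg_prime(dxs))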
9,998.8
2023-03-30T00:00:00.000
[ "Biology", "Engineering", "Environmental Science" ]
Absolute frequency measurement of the $6s^2~^1S_0 \to 6s6p~^3P_1$ $F=3/2\to F'=5/2$ $^{\text{201}}$Hg transition with background-free saturation spectroscopy We report the development of a method for eliminating background-induced systematic shifts affecting precise measurements of saturation absorption signals. With this technique, we measured the absolute frequency of the $6s^2~^1\text{S}_0 \to 6s6p~^3\text{P}_1$ transition in $^{201a}\text{Hg}$ ($F=3/2\to F'=5/2$) to be $1181541111051(83)$~kHz. The measurement was referenced with an optical frequency comb synchronized to the frequency of the local representation of the UTC. This specific atomic line is situated on the steep slope of the Doppler background at room temperature, which results in frequency systematic shift. We determined the dependence of this shift on the properties of both the spectral line and the background of the measured signal. Introduction Saturation spectroscopy is a powerful Doppler-free technique providing detailed information about the electronic structure of atoms through very precise frequency measurements of the atomic transitions. The method is the basis for various laser frequency stabilization and locking techniques [1][2][3][4][5][6][7]. The saturation spectrum's high-frequency resolution is achieved thanks to a significant reduction of the Doppler broadening. However, the spectral lines obtained with the Doppler-free saturation spectroscopy are often situated on a non-flat Doppler background, which limits the precision of the measurements. This limitation is because the signal's maximum no longer corresponds to the atomic transition frequency. This frequency shift is usually estimated from the fit to the measured line and background profiles. Such treatment is sufficient enough for high-quality data and fitting models. However, the low signal-to-noise ratio measurements result in high uncertainties of the fitting parameters, which makes this method inefficient. In this paper, we present a novel method stemming from saturation spectroscopy but modified by employing a new way to deal with the undesirable background-induced systematic shift of the atomic transition frequency. The technique is fairly insensitive to temperature changes, which entail changes in the Doppler profile and, thus, in the shape of the measured signal's background. Using this method, we measured the absolute frequency of the 6 2 1 S 0 → 6 6 3 P 1 transition in 201 Hg ( = 3/2 → = 5/2) with a room-temperature mercury vapor cell. This line lies on the steep slope of the Doppler profile at room temperature ( Fig. 1), which made it possible to exploit the method's advantages fully. The uncertainty budget of the measurement includes all significant frequency shifting effects such as AC-Stark, Zeeman shift, and pressure-dependent collisional shift. The study of spectral lines of mercury has been a subject of interest for over a century [8][9][10][11][12][13][14][15][16]. Although the mercury's intercombination 1 S 0 → 3 P 1 line has been the subject of numerous spectroscopic studies (this applies mainly to isotope 198 Hg), the = 3/2 → = 5/2 201 Hg transition, to our knowledge, has not been measured directly. There were, however, reported results [11] of its isotope shift relative to 198 Hg (measured in [17]), from which one can Fig. 1. The saturation spectroscopy of the 1 S 0 → 3 P 1 transition in the room-temperature mercury vapor cell. Five Doppler-broadened profiles of the mercury isotopes with sub-Doppler Lamb dips are visible. 
The advent of high-power diode lasers and non-linear frequency conversion paved the way for more precise spectroscopic measurements of mercury [18] and enabled the exploration of new areas of fundamental research based on the mercury atom [19][20][21][22][23][24]. Recently, the isotopically resolved Hg spectrum of the ¹S₀ → ³P₁ intercombination transition was used to develop a new method for measuring the gaseous elemental mercury concentration in air [25]. Our measurement enriches this spectrum with a new value, which will increase the method's accuracy.

Experimental setup

The scheme of the experimental setup is shown in Fig. 2. The laser system (Toptica TA-FHG Pro) is based on a 1016 nm laser diode in an external-cavity configuration (ECDL). The power of the infrared light is amplified up to 2 W with a tapered amplifier (TA). The frequency is doubled twice, resulting in an ultraviolet 254 nm laser beam with a power of up to 120 mW. The typical laser beam intensity used in the experiment is 140 mW/cm².

To stabilize and measure the frequency of the ECDL, an optical frequency comb is used. After amplification and frequency shifting by an acousto-optic modulator (AOM1), a part of the infrared light is spatially superimposed with the comb output beam and sent to an avalanche photodiode (APD) for frequency beat detection. The beat-note signal is mixed with an RF signal from a generator (GEN) so that the frequency of the resulting signal is down-converted and adapted to the operating range of the RF counter. Its readings are used to determine the absolute frequency of the infrared, and thus ultraviolet, laser light. In addition, the mixed RF signal is provided to a digital phase detector, which outputs a DC signal proportional to the phase difference between the mixed RF signal and a reference 20.5 MHz signal. The DC output signal is provided to a PID controller (Toptica FALC110), which generates a signal modulating the ECDL's current and its piezo voltage. With this approach, the infrared laser light is frequency-narrowed to about 100 kHz and locked to an optical frequency comb mode. To determine the comb mode number, a small fraction of the infrared light is guided to a wavelength meter, whose uncertainty (60 MHz) is a few times smaller than the comb repetition frequency (250 MHz).

Fig. 2. Scheme of the experimental setup consisting of a laser source (Toptica TA-FHG Pro), an optical frequency comb, and a saturation spectroscopy setup. The infrared 1016 nm laser beam generated by the ECDL is referenced to the optical frequency comb. The frequency of the IR light is doubled twice to obtain UV light at a wavelength of 254 nm. The spectroscopy setup consists of a spectroscopic mercury cell, an AOM2 in double-pass configuration, and two independent power stabilization systems for the pumping and probing beams.
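For orientation, the bookkeeping that turns the counted beat note into an absolute optical frequency can be sketched as follows. The standard comb relation is assumed, and the numerical values below (mode number, offset frequency, beat frequency, and the sign choices) are illustrative placeholders rather than experimental parameters; only the 250 MHz repetition rate is quoted above, and the AOM frequency offsets are omitted for brevity.

```python
# Illustrative comb bookkeeping; only f_rep = 250 MHz is taken from the text,
# all other values and the sign choices are hypothetical.
f_rep  = 250e6        # comb repetition rate, Hz
f_ceo  = 20e6         # carrier-envelope offset frequency, Hz (hypothetical)
f_beat = 31e6         # counted beat note with the IR light, Hz (hypothetical)
n      = 1_181_540    # comb mode number, resolved with the 60 MHz wavemeter (hypothetical)

f_ir = n * f_rep + f_ceo + f_beat   # absolute frequency of the 1016 nm light
f_uv = 4 * f_ir                     # two doubling stages give the 254 nm light
print(f"IR: {f_ir / 1e12:.6f} THz  ->  UV: {f_uv / 1e12:.6f} THz")
```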
The ultraviolet light is sent to the spectroscopy system. Using the saturation effect of the atomic transition [26], a sub-Doppler Lamb dip is observed. While the spectral line is free from Doppler broadening, its width is a few times larger than the natural linewidth because of power broadening, transit-time broadening, and pressure broadening. The spectroscopy signal is used as an input of the frequency stabilization system, which tunes the UV frequency to the center of the atomic line. The frequency shifting is realized with an acousto-optic modulator (AOM2), whose RF driving signal is generated by a µC-operated direct digital synthesizer (DDS). The frequency-shifted UV beam is separated with a Glan-Taylor prism (GTP1) and split into another two beams with a second prism (GTP2). These two beams act as the pumping and probing beams in the spectroscopy system, and their power ratio can be chosen with a half-wave plate in front of GTP2. Both beams have independent power-lock systems based on photodiodes PD1 and PD2 to ensure good power stability; for typical experimental conditions, the relative power stability is at the level of 8.3 ppm and 6.5 ppm for the pumping and probing beams, respectively. To eliminate the influence of the Doppler background, we periodically block the pumping beam, which disables the saturation of the atomic transition, and measure the background level, which is then digitally subtracted.

The pumping and probing beams are sent through AR-coated wedged fused silica windows into a 1 mm thick cylindrical absorption cell containing Hg vapors. The cell is attached to a servo that allows its rotation, thus changing the length of the optical path of the probe beam inside the cell. The saturated-absorption spectroscopy signal is detected with a photodiode (PD3) and sent to the microcontroller, where the UV frequency correction is calculated (see section 3). To account for the influence of the Zeeman effect and collision-induced shifts, the spectroscopic cell was placed inside the spectroscopic chamber shown in Fig. 3.

Fig. 3. Quarter-section of the spectroscopic chamber design. Two aluminum chambers surround the spectroscopic cell. The temperature stabilization is realized with two Peltier modules below the inner chamber. The excess heat is removed from the system by coolant flowing through a copper plate. The cell is surrounded by eight offset-compensated 3-axis magnetometers enabling magnetic field and magnetic gradient measurements. The chamber is equipped with six coils in three independent Helmholtz arrangements to compensate for a stray magnetic field. UV light passes through the chamber via four AR-coated wedged fused silica windows.

The chamber enables thermal stabilization at the level of 0.1 °C. The temperature reading is based on a measurement of a Pt100 sensor's resistance with an analog-to-digital converter; the model used for translating the resistance into temperature extends the uncertainty to 0.5 °C. The chamber is equipped with three pairs of coils in Helmholtz configuration to compensate for a stray magnetic field. Eight 3-axis offset-compensated magnetometers distributed around the cell enable both magnetic field strength (with an uncertainty of a few µT) and magnetic gradient measurements. The whole spectroscopic chamber consists of two aluminum chambers: the inner and the outer one, depicted in Fig. 3 in grey and silver, respectively. The volume between the inner and outer chambers is pumped out to improve thermal insulation. The inner chamber is filled with nitrogen, preventing unwanted water condensation or freezing during chamber cooling.
The temperature stabilization is realized with two Peltier modules below the inner chamber, and the excess heat is removed through a water-cooled copper plate. The temperature of the inner chamber is measured with a Pt100 temperature sensor attached to its wall, which provides the feedback for the PI controller.

Method

To account for the background-induced frequency shift, we improved a digital lock method (DLM), which has proven to work very well for symmetric spectral lines [18,27,28]. The scheme of this method is shown in Fig. 4. The DLM is based on cyclical switching of the laser frequency between two values, ν_UV + ν_AOM − Δν and ν_UV + ν_AOM + Δν, which correspond to opposite slopes of the measured line. After each frequency jump, the spectroscopy signal is measured by a photodiode. The difference between two consecutive values, δS = S_1 − S_2 = S(ν_UV + ν_AOM − Δν) − S(ν_UV + ν_AOM + Δν), is used to calculate the frequency correction by which the AOM frequency is shifted after each cycle; the µC calculates this correction after every two frequency jumps. For a symmetric line, a laser frequency lying exactly at the center of the measured profile gives δS = 0 and thus a zero correction. With this loop, the laser frequency (ν_UV + ν_AOM) follows the position of the spectral line's maximum.

Fig. 4. Scheme of the DLM. The frequency of the laser light is shifted cyclically by Δν towards lower and higher values corresponding to the opposite slopes of the sub-Doppler spectral line obtained with the pumping beam enabled. After each frequency jump, the spectroscopic signal is measured, and the difference between two consecutive values, δS = S_1 − S_2, is used to calculate the frequency correction by which the AOM frequency is shifted after each cycle.

The DLM works very well for a symmetric profile. However, a spectral line may lie on a slope when it shares a Doppler-broadened absorption profile with other spectral lines, such as those from other isotopes. The steeper the slope, the larger the systematic error introduced by the DLM (we describe this effect in detail in section 5). To solve this issue, we developed the modified digital lock method (mDLM), whose scheme is depicted in Fig. 5. As a significant improvement, we introduced an on-the-fly background-eliminating scheme based on disabling the saturation of the atomic transition. It is realized by repeating the frequency-jumping cycle with the pumping beam blocked: once the pumping beam is disabled, the Lamb dip disappears and only the Doppler-broadened absorption profile remains. The cycle of two frequency jumps and signal measurements is repeated, but this time it yields only the background levels, which are subtracted from the previously obtained spectral line measurements. As a result, we obtain two background-free values, ΔS_1 and ΔS_2, related to the opposite slopes, and use them to calculate the frequency correction

δν = g (ΔS_1 − ΔS_2),  with  ΔS_i = S_line+bg,i − S_bg,i,

where g is the loop gain and the line+bg and bg values correspond to the signals measured in the presence and absence of the pumping beam, respectively. Switching off the pumping beam is accomplished with a mechanical shutter controlled by a TTL signal from the µC; the full closing time of the shutter is 5 ms. The time frame of the entire measurement cycle of the mDLM is shown in Fig. 6.

Fig. 5. Scheme of the mDLM. The background levels S_bg,1 and S_bg,2 are first measured at the two jump frequencies with the pumping beam blocked. Then, the pumping beam is switched on and the spectroscopic signal of the sub-Doppler spectral line (solid red line) is measured for the same frequencies, giving S_1 and S_2. The differences ΔS_1 = S_1 − S_bg,1 and ΔS_2 = S_2 − S_bg,2 are used to calculate the frequency correction by which the AOM frequency is shifted.
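The per-cycle logic of the mDLM can be illustrated with a short simulation. The sketch below is not the experiment's firmware (which runs in C on the STM32 µC); the hardware stubs, the toy signal model, and the gain and jump values are hypothetical, but the cycle structure follows the description above: background sampling with the pumping beam blocked, line sampling with the beam enabled, background subtraction, and a gain-scaled correction.

```python
import random

# Toy stand-ins for the DDS, shutter, and ADC; illustrative only, not the
# interfaces of the actual setup.
_state = {"nu": 0.0, "pump_on": True}

def set_aom_frequency(nu):        # in reality: program the DDS over SPI
    _state["nu"] = nu

def set_shutter(pump_on):         # in reality: TTL signal to the shutter driver
    _state["pump_on"] = pump_on

def read_photodiode():
    # Sloped Doppler background plus a narrow Lamb feature (modelled here as a
    # Lorentzian dip) that is present only while the pumping beam is on.
    nu = _state["nu"]
    background = 1.0 + 0.05 * nu
    feature = 0.2 / (1.0 + (nu / 0.5) ** 2) if _state["pump_on"] else 0.0
    return background - feature + random.gauss(0.0, 1e-4)

def mdlm_cycle(nu_lock, delta_nu, gain):
    """One mDLM cycle: sample both slopes with the pumping beam blocked
    (background only) and then enabled (line + background), subtract the
    background, and return the corrected lock frequency."""
    signal = {}
    for pump_on in (False, True):
        set_shutter(pump_on)
        for i, sign in enumerate((-1, +1), start=1):
            set_aom_frequency(nu_lock + sign * delta_nu)
            signal[(pump_on, i)] = read_photodiode()
    ds1 = signal[(True, 1)] - signal[(False, 1)]   # background-free value, lower slope
    ds2 = signal[(True, 2)] - signal[(False, 2)]   # background-free value, upper slope
    return nu_lock + gain * (ds1 - ds2)

# A few cycles pull the lock point towards the centre of the feature (nu = 0)
# despite the sloped background.
nu = 0.3
for _ in range(50):
    nu = mdlm_cycle(nu, delta_nu=0.15, gain=2.0)
print(round(nu, 4))
```

In this toy model the lock point converges to the centre of the Lamb feature regardless of the background slope, which is precisely the property the mDLM is designed to provide.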
Microcontroller and direct digital synthesizer

A home-made digital device (Fig. 7), consisting of a direct digital synthesizer (DDS, an Analog Devices AD9959 evaluation board) and a Kamami ZL26ARM evaluation board with an STM32F107 microcontroller (µC), was used to precisely control the frequency of the RF signal supplied to the acousto-optic modulator and to implement the frequency correction algorithm described in section 3. The reference clock for the DDS is a 100 MHz signal derived from UTC (AOS) [29,30]. The DDS includes four output RF channels digitally tunable from 0 MHz to 210 MHz with around 100 mHz resolution. All DDS parameters are set by the µC through the SPI interface, allowing the output frequency to be switched within a microsecond, which is negligible compared to the cycle time. The opening and closing of the shutter are controlled by a TTL signal from a digital output of the µC, sent to the TTL input of the shutter driver. The signal from the photodetector (PD3 in Fig. 2) is measured using a 12-bit ADC built into the µC; the photodiode reading is averaged over multiple measurements (about a thousand points per 1 ms) to improve the signal-to-noise ratio.

Fig. 7. Scheme of the digital device for background-free spectroscopy control. The microcontroller executes the frequency correction algorithm and communicates with the peripheral devices: the ADC (analog-to-digital converter) for measuring the signal from the photodetector, the DDS (direct digital synthesizer) for precise frequency control of the AOMs, and digital ports for sending TTL signals, e.g., to the shutter. Ethernet communication acts as the user interface of the device.

The digital background-free locking algorithm is implemented in a µC program written in the C language. The program can also be switched to a background-free or background-included scan mode to observe the line profile and determine the initial lock parameters. All parameters of the µC operation are set through commands sent via the Ethernet network; the µC can also save user-selected parameters and measurements and share them via Ethernet.

Background-induced systematic shift

If the separation of the spectral lines is small compared to their Doppler widths, the absorption profiles overlap, and the associated Lamb dips are no longer situated at the center of the profile but on its slope. Consequently, the peak of the Lamb dip is frequency-shifted with respect to the atomic resonance frequency. To analyze this shift, we model the measured atomic transition as a Lorentz profile with a linear background; the latter is justified because the width of the Doppler-broadened absorption profile is much larger than the linewidth of the atomic transition. The observed spectral line intensity profile can then be written as

S(ν) = A / {1 + [2(ν − ν₀)/Γ]²} + a (ν − ν₀) + b,

where A and Γ are the amplitude and the width of the Lorentz profile, respectively, ν − ν₀ is the detuning from the atomic resonance frequency, and a and b are the slope and offset of the linear background.

In the case of the DLM, the value ν_DLM at which the light frequency is stabilized does not correspond to the resonance frequency ν₀ of the atomic transition but is shifted. Moreover, this shift depends on the magnitude Δν of the jumps in the frequency correction cycles. To calculate the frequency shift (ν_DLM − ν₀) as a function of Δν, one solves the lock condition

S(ν_DLM − Δν) = S(ν_DLM + Δν),

which can be reduced to a polynomial equation in (ν_DLM − ν₀) whose coefficients are combinations of the profile and slope parameters A, Γ, a, and the jump Δν. The dependence of the solution ν_DLM on Δν is elaborate, but for Δν < Γ it can be approximated by a closed-form expression whose coefficients consist of the profile and slope parameters. The background therefore introduces a systematic error that cannot be avoided within the DLM, even with infinitesimally small frequency jumps Δν.
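As a rough numerical illustration of the effect, the script below solves the lock condition S(ν_DLM − Δν) = S(ν_DLM + Δν) for a Lorentzian on a linear background and compares the result with a first-order, weak-slope estimate, (aΓ²/8A)[1 + (2Δν/Γ)²]²; this estimate is our own expansion of the lock condition, not a formula quoted from the paper, and all parameter values are arbitrary rather than experimental.

```python
# Illustrative (non-experimental) parameters: Lorentzian amplitude, FWHM, and
# linear background slope; the offset b drops out of the lock condition.
A, GAMMA, SLOPE, NU0 = 1.0, 1.0, 0.05, 0.0

def profile(nu):
    """Lorentzian of amplitude A and FWHM GAMMA on a linear background."""
    return A / (1.0 + (2.0 * (nu - NU0) / GAMMA) ** 2) + SLOPE * (nu - NU0)

def dlm_lock_point(delta_nu, lo=-1.0, hi=1.0, iters=60):
    """Bisection on the lock condition S(nu - delta_nu) = S(nu + delta_nu)."""
    f = lambda nu: profile(nu - delta_nu) - profile(nu + delta_nu)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    return 0.5 * (lo + hi)

for delta_nu in (0.05, 0.2, 0.4):
    shift = dlm_lock_point(delta_nu) - NU0
    # First-order estimate for a weak slope (our own expansion, not a quoted formula).
    estimate = (SLOPE * GAMMA ** 2 / (8 * A)) * (1 + (2 * delta_nu / GAMMA) ** 2) ** 2
    print(f"delta_nu = {delta_nu:.2f}: shift = {shift:.4f}, estimate = {estimate:.4f}")
```

The shift stays finite even as the jump size goes to zero, in line with the statement above that the DLM cannot avoid the background-induced error.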
The mDLM solves this issue: thanks to the on-the-fly background subtraction, the spectral line that sits on a sloped background (Fig. 8) becomes symmetric (Fig. 9). To validate the approximation above, the dependence of the frequency at which the laser light was stabilized on the frequency jump Δν was measured for both the DLM and the mDLM (Fig. 10) using the ¹S₀ → ³P₁ transition in ²⁰¹Hg. For the DLM, unlike for the mDLM, the stabilized frequency shifts rapidly with the frequency jump. Moreover, even for a small Δν, a discrepancy Δν_DLM−mDLM between the results obtained with the two methods persists. In general, this deviation depends on the ratio a/A and the linewidth Γ. Typically, Δν_DLM−mDLM is much smaller than the linewidth, and for the Lorentzian profile it can be approximated by a simple expression in these parameters. For our specific experimental conditions, the discrepancy shown in Fig. 10 is about 280 kHz.

µC software parameters

In the mDLM, the frequency correction is calculated by the µC according to the background-subtracted correction formula given in section 3. The gain g has to be adjusted so that the frequency correction is fast enough to keep up with the frequency drifts of the spectral line and of the laser light. Fig. 11 shows the dependence of the frequency at which the laser light was stabilized on the gain. For too high a gain, the frequency correction is too large, which leads to high uncertainties; for too small a gain, the system cannot keep up with the frequency drifts, which scatters the results over a range of tens of kHz.

As shown in Fig. 10, in the mDLM the frequency at which the laser light stabilizes does not depend on the frequency jump Δν. Nevertheless, the parameter Δν should be adjusted according to the spectral linewidth Γ: the sensitivity of the mDLM is highest when the steepest parts of the slopes are probed, i.e., at the maximum of the first derivative of the spectral line profile. For the Lorentzian profile, the optimal jump is Δν = Γ/√12. For this specific value, a deviation of the laser frequency from the spectral line's resonance produces the largest response of the correction system, as shown in Fig. 12.

Absolute frequency

To validate the mDLM performance, we used it to determine the absolute frequency of the ¹S₀ → ³P₁ transition in ²⁰¹Hg (F = 3/2 → F' = 5/2), which is located on the steepest slope of the Doppler profile among all Hg isotopes at room temperature. The determined absolute frequency is 1181541111051(83) kHz. To obtain this value, the optical frequency comb was used and the error budget was evaluated. The total uncertainty combines the statistical uncertainty u_stat = 3.9 kHz of the line position, measured under known and controllable experimental conditions, with the uncertainties of the individual systematic shifts: the AC-Stark shift Δ_AC-Stark, the pressure shifts Δ_Hg-Hg and Δ_Hg-H2, and the Zeeman shifts Δ_Zeeman,x, Δ_Zeeman,y, and Δ_Zeeman,z along the three axes. All the components are described in the following subsections, and the values are summarized in Table 1.
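For illustration, the snippet below shows how such a budget is typically combined, assuming the conventional quadrature addition of independent contributions; this combination rule is our assumption, and only the statistical term of 3.9 kHz is taken from the text, the remaining numbers being placeholders standing in for the Table 1 entries.

```python
import math

# Placeholder uncertainty components in kHz; only the statistical term is taken
# from the text, the others are hypothetical stand-ins for the Table 1 entries.
components_khz = {
    "statistical": 3.9,
    "AC-Stark": 3.6,           # hypothetical
    "Hg-Hg collisions": 3.8,   # hypothetical
    "Hg-H2 collisions": 4.6,   # hypothetical
    "Zeeman x": 1.0,           # hypothetical
    "Zeeman y": 1.0,           # hypothetical
    "Zeeman z": 0.5,           # hypothetical
}

u_total = math.sqrt(sum(u ** 2 for u in components_khz.values()))
print(f"total uncertainty ~ {u_total:.1f} kHz (placeholder values)")
```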
AC-Stark shift

The energy levels of the atoms are shifted under the influence of the electric field of the laser beam. To estimate the contribution of this effect, we measured the position of the spectral line for different UV light powers (Fig. 13). A linear fit weighted by the points' uncertainties yields an AC-Stark shift correction factor of −148.6(9.4) kHz/mW. The uncertainties are mainly related to the accuracy of the power meter used (Thorlabs S130VC) and to the power-measuring method; the total power uncertainty is deduced to be 5.4%.

Pressure shift

The pressure-dependent collisions between Hg atoms and residual gases affect the atomic transition frequency. Since the vapor pressure of Hg depends on temperature, the temperature of the spectroscopic cell was actively stabilized. The average temperature of the cell during the measurements was 19.97(50) °C, which corresponds to a vapor pressure of 0.1708(75) Pa [31]. The temperature uncertainty is limited by the accuracy of the analog-to-digital converter used in the Pt100-based temperature measurement scheme. We used the same approach as presented in [18] to estimate the collision-induced shift: collisions between mercury atoms shift the transition frequency by −22(22) kHz/Pa. The contribution from collisions with residual gases was estimated assuming molecular hydrogen to be the dominant component, which is consistent with the residual-gas composition of fused silica cells [32]. The H₂ pressure at room temperature is less than 0.13 Pa, and the corresponding shift coefficient of the spectral line is −35(35) kHz/Pa.

Zeeman shift

The ²⁰¹Hg isotope (abundance of 13.18% [33]) is a fermion with non-zero nuclear spin (I = 3/2), which leads to hyperfine splitting of the ³P₁ state into three states (F = 5/2, 3/2, 1/2) [34]; the F = 3/2 → F' = 5/2 transition is the one measured here. The splitting of the ground state ¹S₀, F = 3/2 is negligible in a weak magnetic field, as the nuclear Landé factor is much smaller than the orbital and electron ones. The excited state ³P₁, F = 5/2 splits into six substates characterized by the magnetic quantum number m_F. To a first approximation, the linear Zeeman effect splits the sublevels symmetrically and therefore does not contribute to the absolute frequency shift. On the other hand, the quadratic Zeeman effect does not cause splitting but can produce a non-negligible shift of the hyperfine levels. To estimate the Zeeman shift, the spectral line position was measured independently for different magnetic fields along each axis; the external magnetic field of the Helmholtz coils was varied over a range that did not split the spectral line measurably. The results for the vertical (Fig. 14) and horizontal (Fig. 15) field components, both perpendicular to the UV beams, manifest significant quadratic and linear Zeeman effects. For the axial magnetic field component (Fig. 16), a very large shift, exhibiting only a linear dependence over the measured range, is observed.

The magnetic field was measured by eight 3-axis offset-compensated magnetometers located symmetrically around the spectroscopic cell. Each reading has an uncertainty related to the accuracy of the offset calibration. Each point in the graphs corresponds to the average of the magnetometers' readings for a given axis, and the variance of the readings specifies the uncertainty of each point. To account for the Zeeman effect, the magnetic field and its gradient were measured during the absolute frequency measurement. The uncertainty-weighted sensor readings were averaged, resulting in six magnetic field values, one for each side of the spectroscopic cell. The uncertainty of these results was deduced from the calculated internal and external variances, with the larger of the two chosen. The average of the pairs of points along the same axis determines the value of the magnetic field at the center of the spectroscopic cell, which was calculated to be 43.6(3.6) µT, −11.6(4.3) µT, and −7.46(0.77) µT for the x-, y-, and z-axes, respectively.
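The averaging rule described above (an uncertainty-weighted mean whose variance is the larger of the internal and external variance) can be written compactly; the readings in the sketch below are made-up values, not data from the experiment.

```python
import math

def weighted_mean_with_variances(values, sigmas):
    """Uncertainty-weighted mean; the larger of the internal and external
    variance is used as the variance of the result."""
    weights = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    var_internal = 1.0 / wsum
    var_external = sum(w * (v - mean) ** 2 for w, v in zip(weights, values)) / ((len(values) - 1) * wsum)
    return mean, max(var_internal, var_external)

# Hypothetical readings (in µT) of a few magnetometers along one axis.
readings = [43.1, 44.8, 42.9, 44.0]
sigmas   = [1.5, 1.5, 2.0, 2.0]
mean, var = weighted_mean_with_variances(readings, sigmas)
print(f"B = {mean:.1f}({math.sqrt(var):.1f}) µT")
```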
Table 1. Systematic shifts and their uncertainties determined for the experimental conditions, i.e., a total UV beam power of 0.440(24) mW, a spectroscopic cell temperature of 19.97(50) °C, and a magnetic field of 43.6(3.6) µT, −11.6(4.3) µT, and −7.46(0.77) µT for the x-, y-, and z-axes, respectively. All results are in kHz.

Conclusions

In summary, we demonstrated a novel method of stabilizing the frequency of laser light on the position of a spectral line. The method is especially beneficial for atomic lines located on the slope of an absorption profile, which induces a systematic frequency shift. We derived a formula for the background-induced shift, which was then compared with the experimental results, showing full agreement. To demonstrate the performance of our method, we measured the absolute frequency of the ¹S₀ → ³P₁ transition in ²⁰¹Hg, improving the accuracy by three orders of magnitude. Our result agrees with the previously measured value within the uncertainty.